
uqregressors.conformal.conformal_ens

This method employs normalized conformal prediction as described in Tibshirani (2023). The difficulty measure used for normalization is the standard deviation of the predictions of the models in the ensemble, while the ensemble mean is returned as the mean prediction.
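
A sketch of the resulting quantities (the notation here is chosen for illustration and mirrors what `fit` and `predict` compute below): writing $\hat{\mu}(x)$ and $\hat{\sigma}(x)$ for the ensemble mean and standard deviation and $\gamma$ for the stability constant, the conformity score of a calibration point $(x_i, y_i)$ and the prediction interval at a new input $x$ are

$$
s_i = \frac{\lvert y_i - \hat{\mu}(x_i) \rvert}{\hat{\sigma}(x_i) + \gamma},
\qquad
\hat{C}(x) = \left[\hat{\mu}(x) - \hat{q}\,\bigl(\hat{\sigma}(x) + \gamma\bigr),\ \hat{\mu}(x) + \hat{q}\,\bigl(\hat{\sigma}(x) + \gamma\bigr)\right],
$$

where $\hat{q}$ is an empirical $(1-\alpha)$ quantile of the calibration scores $s_i$, following the usual split conformal construction.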

Normalized Conformal Ensemble

This module implements normalized conformal ensemble prediction in a split conformal context for regression on a one-dimensional output. A short usage example follows the feature list below.

Key features are:
  • Customizable neural network architecture
  • Customizable dropout to increase ensemble diversity
  • Prediction intervals without distributional assumptions
  • Customizable optimizer and loss function
  • Optional Input/Output Normalization
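
A minimal usage sketch (assuming the class is importable from the module documented here and follows the scikit-learn-style `fit`/`predict` API described below; the data and hyperparameter values are illustrative):

```python
import numpy as np
from uqregressors.conformal.conformal_ens import ConformalEnsRegressor

# Illustrative 1-D regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

model = ConformalEnsRegressor(
    n_estimators=5,
    hidden_sizes=[64, 64],
    alpha=0.1,        # 90% prediction intervals
    cal_size=0.2,     # fraction of training data held out for calibration
    epochs=200,
    random_seed=42,
)
model.fit(X, y)

# predict returns (mean, lower, upper); NumPy arrays when requires_grad=False
mean, lower, upper = model.predict(X)
print(mean.shape, lower.shape, upper.shape)
```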

ConformalEnsRegressor

Bases: BaseEstimator, RegressorMixin

Conformal Ensemble Regressor for uncertainty estimation in regression tasks.

This class trains an ensemble of MLP models, and applies normalized conformal prediction on a split calibration set to calibrate prediction intervals.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | str | Name of the model. | `'Conformal_Ens_Regressor'` |
| `n_estimators` | int | Number of models to train. | `5` |
| `hidden_sizes` | list | Sizes of the hidden layers of each MLP in the ensemble. | `[64, 64]` |
| `alpha` | float | Miscoverage rate (1 - confidence level). | `0.1` |
| `requires_grad` | bool | Whether inputs should require gradient; determines output type. | `False` |
| `dropout` | float or None | Dropout rate for the neural network layers. | `None` |
| `pred_with_dropout` | bool | Whether dropout should be applied at test time; requires `dropout` to be non-None. | `False` |
| `activation_str` | str | String identifier of the activation function. | `'ReLU'` |
| `cal_size` | float | Proportion of training samples to use for calibration, between 0 and 1. | `0.2` |
| `gamma` | float | Stability constant added to the difficulty score. | `0` |
| `learning_rate` | float | Learning rate for training. | `0.001` |
| `epochs` | int | Number of training epochs. | `200` |
| `batch_size` | int | Batch size for training. | `32` |
| `optimizer_cls` | type | Optimizer class. | `Adam` |
| `optimizer_kwargs` | dict | Keyword arguments for the optimizer. | `None` |
| `scheduler_cls` | type or None | Learning rate scheduler class. | `None` |
| `scheduler_kwargs` | dict | Keyword arguments for the scheduler. | `None` |
| `loss_fn` | callable or None | Loss function; defaults to MSE loss. | `mse_loss` |
| `device` | str | Device to use for training and inference. | `'cpu'` |
| `use_wandb` | bool | Whether to log training with Weights & Biases. | `False` |
| `wandb_project` | str or None | wandb project name. | `None` |
| `wandb_run_name` | str or None | wandb run name. | `None` |
| `n_jobs` | int | Number of parallel jobs for training. | `1` |
| `random_seed` | int or None | Random seed for reproducibility. | `None` |
| `scale_data` | bool | Whether to normalize input/output data. | `True` |
| `input_scaler` | TorchStandardScaler | Scaler for input features. | `None` |
| `output_scaler` | TorchStandardScaler | Scaler for target outputs. | `None` |
| `tuning_loggers` | list | Optional list of loggers for tuning. | `[]` |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `models` | list[MLP] | A list of the models in the ensemble. |
| `residuals` | Tensor | The absolute residuals of the ensemble mean on the calibration set. |
| `conformity_scores` | Tensor | The normalized conformity scores on the calibration set. |
| `conformity_score` | Tensor | The calibrated conformity score quantile used to scale prediction intervals. |
| `_loggers` | list[Logger] | Training loggers for each ensemble member. |

Source code in uqregressors\conformal\conformal_ens.py
class ConformalEnsRegressor(BaseEstimator, RegressorMixin): 
    """
    Conformal Ensemble Regressor for uncertainty estimation in regression tasks. 

    This class trains an ensemble of MLP models, and applies normalized conformal prediction on a split
    calibration set to calibrate prediction intervals. 

    Args: 
        name (str): Name of the model. 
        n_estimators (int): Number of models to train. 
        hidden_sizes (list): Sizes of the hidden layers of each MLP in the ensemble.
        alpha (float): Miscoverage rate (1 - confidence level). 
        requires_grad (bool): Whether inputs should require gradient, determines output type.
        dropout (float or None): Dropout rate for the neural network layers. 
        pred_with_dropout (bool): Whether dropout should be applied at test time; requires dropout to be non-None.
        activation_str (str): String identifier of the activation function. 
        cal_size (float): Proportion of training samples to use for calibration, between 0 and 1.  
        gamma (float): Stability constant added to the difficulty score.
        learning_rate (float): Learning rate for training.
        epochs (int): Number of training epochs.
        batch_size (int): Batch size for training.
        optimizer_cls (type): Optimizer class.
        optimizer_kwargs (dict): Keyword arguments for optimizer.
        scheduler_cls (type or None): Learning rate scheduler class.
        scheduler_kwargs (dict): Keyword arguments for scheduler.
        loss_fn (callable or None): Loss function, defaults to MSE loss.
        device (str): Device to use for training and inference.
        use_wandb (bool): Whether to log training with Weights & Biases.
        wandb_project (str or None): wandb project name.
        wandb_run_name (str or None): wandb run name.
        n_jobs (int): Number of parallel jobs for training.
        random_seed (int or None): Random seed for reproducibility.
        scale_data (bool): Whether to normalize input/output data.
        input_scaler (TorchStandardScaler): Scaler for input features.
        output_scaler (TorchStandardScaler): Scaler for target outputs.
        tuning_loggers (list): Optional list of loggers for tuning.

    Attributes: 
        models (list[MLP]): A list of the models in the ensemble.
        residuals (Tensor): The absolute residuals of the ensemble mean on the calibration set.
        conformity_scores (Tensor): The normalized conformity scores on the calibration set.
        conformity_score (Tensor): The calibrated conformity score quantile used to scale prediction intervals.
        _loggers (list[Logger]): Training loggers for each ensemble member. 
    """
    def __init__(self, 
                 name="Conformal_Ens_Regressor",
                 n_estimators=5, 
                 hidden_sizes=[64, 64], 
                 alpha=0.1, 
                 requires_grad=False,
                 dropout=None,
                 pred_with_dropout=False,
                 activation_str="ReLU",
                 cal_size = 0.2, 
                 gamma = 0,
                 learning_rate=1e-3,
                 epochs=200,
                 batch_size=32,
                 optimizer_cls=torch.optim.Adam,
                 optimizer_kwargs=None,
                 scheduler_cls=None,
                 scheduler_kwargs=None,
                 loss_fn=nn.functional.mse_loss,
                 device="cpu", 
                 use_wandb=False, 
                 wandb_project=None, 
                 wandb_run_name=None, 
                 n_jobs=1, 
                 random_seed=None, 
                 scale_data=True, 
                 input_scaler=None,
                 output_scaler=None,
                 tuning_loggers = []
    ): 
        self.name = name
        self.n_estimators = n_estimators
        self.hidden_sizes = hidden_sizes
        self.alpha = alpha
        self.requires_grad = requires_grad
        self.dropout = dropout
        self.pred_with_dropout = pred_with_dropout
        self.activation_str = activation_str
        self.cal_size = cal_size
        self.gamma = gamma
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.batch_size = batch_size
        self.optimizer_cls = optimizer_cls
        self.optimizer_kwargs = optimizer_kwargs or {}
        self.scheduler_cls = scheduler_cls
        self.scheduler_kwargs = scheduler_kwargs or {}
        self.loss_fn = loss_fn
        self.device = device

        self.use_wandb = use_wandb
        self.wandb_project = wandb_project
        self.wandb_run_name = wandb_run_name

        self.n_jobs = n_jobs
        self.random_seed = random_seed

        self.scale_data = scale_data 
        self.input_scaler = input_scaler or TorchStandardScaler() 
        self.output_scaler = output_scaler or TorchStandardScaler()

        self.input_dim = None
        self.conformity_scores = None
        self.conformity_score = None
        self.models = []
        self.residuals = []

        self._loggers = []
        self.training_logs = None
        self.tuning_loggers = tuning_loggers
        self.tuning_logs = None

    def _train_single_model(self, X_tensor, y_tensor, input_dim, train_idx, cal_idx, model_idx): 
        if self.random_seed is not None: 
            torch.manual_seed(self.random_seed + model_idx)
            np.random.seed(self.random_seed + model_idx)

        activation = get_activation(self.activation_str)
        model = MLP(input_dim, self.hidden_sizes, self.dropout, activation).to(self.device)

        optimizer = self.optimizer_cls(
            model.parameters(), lr=self.learning_rate, **self.optimizer_kwargs
        )
        scheduler = None 
        if self.scheduler_cls: 
            scheduler = self.scheduler_cls(optimizer, **self.scheduler_kwargs)

        dataset = TensorDataset(X_tensor[train_idx], y_tensor[train_idx])
        dataloader = DataLoader(dataset, batch_size=self.batch_size, shuffle=True)

        logger = Logger(
            use_wandb=self.use_wandb,
            project_name=self.wandb_project,
            run_name=self.wandb_run_name + str(model_idx) if self.wandb_run_name is not None else None,
            config={"n_estimators": self.n_estimators, "learning_rate": self.learning_rate, "epochs": self.epochs},
            name=f"Estimator-{model_idx}"
        )

        for epoch in range(self.epochs): 
            model.train()
            epoch_loss = 0.0 
            for xb, yb in dataloader: 
                optimizer.zero_grad() 
                preds = model(xb)
                loss = self.loss_fn(preds, yb)
                loss.backward() 
                optimizer.step() 
                epoch_loss += loss.item()

            if epoch % (self.epochs / 20) == 0:
                logger.log({"epoch": epoch, "train_loss": epoch_loss})

            if scheduler: 
                scheduler.step()

        if self.pred_with_dropout: 
            model.train()
        else: 
            model.eval()

        test_X = X_tensor[cal_idx]
        cal_preds = model(test_X)

        logger.finish()
        return model, cal_preds, logger

    def fit(self, X, y): 
        """
        Fit the ensemble on training data.

        Args:
            X (array-like or torch.Tensor): Training inputs.
            y (array-like or torch.Tensor): Training targets.

        Returns:
            (ConformalEnsRegressor): Fitted estimator.
        """
        X_tensor, y_tensor = validate_and_prepare_inputs(X, y, device=self.device)
        input_dim = X_tensor.shape[1]
        self.input_dim = input_dim

        if self.scale_data: 
            X_tensor = self.input_scaler.fit_transform(X_tensor)
            y_tensor = self.output_scaler.fit_transform(y_tensor)

        train_idx, cal_idx = self._train_test_split(X_tensor, self.cal_size, self.random_seed)  # use the configured calibration fraction
        results = Parallel(n_jobs=self.n_jobs)(
            delayed(self._train_single_model)(X_tensor, y_tensor, input_dim, train_idx, cal_idx, i)
            for i in range(self.n_estimators)
        )

        self.models = [result[0] for result in results]
        cal_preds = torch.stack([result[1] for result in results]).squeeze()
        self._loggers = [result[2] for result in results]

        mean_cal_preds = torch.mean(cal_preds, dim=0).squeeze()
        var_cal_preds = torch.var(cal_preds, dim=0).squeeze()
        std_cal_preds = var_cal_preds.sqrt()
        self.residuals = torch.abs(mean_cal_preds - y_tensor[cal_idx].squeeze())

        self.conformity_scores = self.residuals / (std_cal_preds + self.gamma)

        return self 

    def predict(self, X): 
        """
        Predicts the target values with uncertainty estimates.

        Args:
            X (np.ndarray): Feature matrix of shape (n_samples, n_features).

        Returns:
            (Union[Tuple[np.ndarray, np.ndarray, np.ndarray], Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]): Tuple containing:
                mean predictions,
                lower bound of the prediction interval,
                upper bound of the prediction interval.

        !!! note
            If `requires_grad` is False, all returned arrays are NumPy arrays.
            Otherwise, they are PyTorch tensors with gradients.
        """        
        X_tensor = validate_X_input(X, input_dim=self.input_dim, device=self.device, requires_grad=self.requires_grad)
        if self.scale_data: 
            X_tensor = self.input_scaler.transform(X_tensor)
        n = len(self.residuals)
        q = int((1 - self.alpha) * (n+1)) 
        q = min(q, n-1) 

        res_quantile = n-q
        self.conformity_score = torch.topk(self.conformity_scores, res_quantile).values[-1]

        preds = []

        with torch.no_grad(): 
            for model in self.models: 
                if self.pred_with_dropout: 
                    model.train()
                else: 
                    model.eval()
                pred = model(X_tensor)
                preds.append(pred)

        preds = torch.stack(preds)[:, :, 0]
        mean = torch.mean(preds, dim=0)
        variances = torch.var(preds, dim=0)
        stds = variances.sqrt()
        conformal_widths = self.conformity_score * (stds + self.gamma) 
        lower = mean - conformal_widths 
        upper = mean + conformal_widths 

        if self.scale_data: 
            mean = self.output_scaler.inverse_transform(mean.view(-1, 1)).squeeze()
            lower = self.output_scaler.inverse_transform(lower.view(-1, 1)).squeeze()
            upper = self.output_scaler.inverse_transform(upper.view(-1, 1)).squeeze()

        if not self.requires_grad: 
            return mean.detach().cpu().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy()

        else: 
            return mean, lower, upper

    def save(self, path):
        """
        Save the trained model and associated configuration to disk.

        Args:
            path (str or Path): Directory to save model files.
        """
        path = Path(path)
        path.mkdir(parents=True, exist_ok=True)

        # Save config (exclude non-serializable or large objects)
        config = {
            k: v for k, v in self.__dict__.items()
            if k not in ["models", "residuals", "conformity_score", "conformity_scores", "optimizer_cls", "optimizer_kwargs", "scheduler_cls", "scheduler_kwargs", 
                         "input_scaler", "output_scaler", "_loggers", "training_logs", "tuning_loggers", "tuning_logs"]
            and not callable(v)
            and not isinstance(v, (torch.nn.Module,))
        }

        config["optimizer"] = self.optimizer_cls.__class__.__name__ if self.optimizer_cls is not None else None
        config["scheduler"] = self.scheduler_cls.__class__.__name__ if self.scheduler_cls is not None else None
        config["input_scaler"] = self.input_scaler.__class__.__name__ if self.input_scaler is not None else None 
        config["output_scaler"] = self.output_scaler.__class__.__name__ if self.output_scaler is not None else None

        with open(path / "config.json", "w") as f:
            json.dump(config, f, indent=4)

        # Save model weights
        for i, model in enumerate(self.models):
            torch.save(model.state_dict(), path / f"model_{i}.pt")

        # Save residuals and conformity score
        torch.save({
            "residuals": self.residuals.cpu(),
            "conformity_score": self.conformity_score, 
            "conformity_scores": self.conformity_scores
        }, path / "extras.pt")

        with open(path / "extras.pkl", 'wb') as f: 
            pickle.dump([self.optimizer_cls, 
                         self.optimizer_kwargs, self.scheduler_cls, self.scheduler_kwargs, self.input_scaler, self.output_scaler], f)

        for i, logger in enumerate(getattr(self, "_loggers", [])):
            logger.save_to_file(path, idx=i, name="estimator")

        for i, logger in enumerate(getattr(self, "tuning_loggers", [])): 
            logger.save_to_file(path, name="tuning", idx=i)


    @classmethod
    def load(cls, path, device="cpu", load_logs=False):
        """
        Load a saved ConformalEnsRegressor model from disk.

        Args:
            path (str or Path): Directory containing saved model files.
            device (str): Device to load the model on ("cpu" or "cuda").
            load_logs (bool): Whether to also load training logs.

        Returns:
            (ConformalEnsRegressor): The loaded model instance.
        """
        path = Path(path)

        # Load config
        with open(path / "config.json", "r") as f:
            config = json.load(f)
        config["device"] = device

        config.pop("optimizer", None)
        config.pop("scheduler", None)
        config.pop("input_scaler", None)
        config.pop("output_scaler", None)

        input_dim = config.pop("input_dim", None)
        model = cls(**config)

        with open(path / "extras.pkl", 'rb') as f: 
            optimizer_cls, optimizer_kwargs, scheduler_cls, scheduler_kwargs, input_scaler, output_scaler = pickle.load(f)

        # Recreate models
        model.input_dim = input_dim
        activation = get_activation(config["activation_str"])
        model.models = []
        for i in range(config["n_estimators"]):
            m = MLP(model.input_dim, config["hidden_sizes"], config["dropout"], activation).to(device)
            m.load_state_dict(torch.load(path / f"model_{i}.pt", map_location=device))
            model.models.append(m)

        # Load extras
        extras_path = path / "extras.pt"
        if extras_path.exists():
            extras = torch.load(extras_path, map_location=device, weights_only=False)
            model.residuals = extras.get("residuals", None)
            model.conformity_score = extras.get("conformity_score", None)
            model.conformity_scores = extras.get("conformity_scores", None)
        else:
            model.residuals = None
            model.conformity_score = None
            model.conformity_scores = None

        model.optimizer_cls = optimizer_cls 
        model.optimizer_kwargs = optimizer_kwargs 
        model.scheduler_cls = scheduler_cls 
        model.scheduler_kwargs = scheduler_kwargs
        model.input_scaler = input_scaler 
        model.output_scaler = output_scaler

        if load_logs: 
            logs_path = path / "logs"
            training_logs = [] 
            tuning_logs = []
            if logs_path.exists() and logs_path.is_dir(): 
                estimator_log_files = sorted(logs_path.glob("estimator_*.log"))
                for log_file in estimator_log_files:
                    with open(log_file, "r", encoding="utf-8") as f:
                        training_logs.append(f.read())

                tuning_log_files = sorted(logs_path.glob("tuning_*.log"))
                for log_file in tuning_log_files: 
                    with open(log_file, "r", encoding="utf-8") as f: 
                        tuning_logs.append(f.read())

            model.training_logs = training_logs
            model.tuning_logs = tuning_logs

        return model

    def _train_test_split(self, X, cal_size, seed=None):
        """
        For internal use in calibration splitting only, 
        see uqregressors/utils/torch_sklearn_utils for a global version
        """
        if seed is not None: 
            torch.manual_seed(seed)
            np.random.seed(seed)  # the split below uses NumPy's RNG

        n = len(X)
        n_cal = int(np.ceil(cal_size * n))
        all_idx = np.arange(n)
        cal_idx = np.random.choice(n, size=n_cal, replace=False)  # sample calibration indices without replacement
        mask = np.ones(n, dtype=bool)
        mask[cal_idx] = False 
        train_idx = all_idx[mask] 
        return train_idx, cal_idx

fit(X, y)

Fit the ensemble on training data.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `X` | array-like or Tensor | Training inputs. | required |
| `y` | array-like or Tensor | Training targets. | required |

Returns:

| Type | Description |
| --- | --- |
| `ConformalEnsRegressor` | Fitted estimator. |

Source code in uqregressors\conformal\conformal_ens.py
def fit(self, X, y): 
    """
    Fit the ensemble on training data.

    Args:
        X (array-like or torch.Tensor): Training inputs.
        y (array-like or torch.Tensor): Training targets.

    Returns:
        (ConformalEnsRegressor): Fitted estimator.
    """
    X_tensor, y_tensor = validate_and_prepare_inputs(X, y, device=self.device)
    input_dim = X_tensor.shape[1]
    self.input_dim = input_dim

    if self.scale_data: 
        X_tensor = self.input_scaler.fit_transform(X_tensor)
        y_tensor = self.output_scaler.fit_transform(y_tensor)

    train_idx, cal_idx = self._train_test_split(X_tensor, self.cal_size, self.random_seed)  # use the configured calibration fraction
    results = Parallel(n_jobs=self.n_jobs)(
        delayed(self._train_single_model)(X_tensor, y_tensor, input_dim, train_idx, cal_idx, i)
        for i in range(self.n_estimators)
    )

    self.models = [result[0] for result in results]
    cal_preds = torch.stack([result[1] for result in results]).squeeze()
    self._loggers = [result[2] for result in results]

    mean_cal_preds = torch.mean(cal_preds, dim=0).squeeze()
    var_cal_preds = torch.var(cal_preds, dim=0).squeeze()
    std_cal_preds = var_cal_preds.sqrt()
    self.residuals = torch.abs(mean_cal_preds - y_tensor[cal_idx].squeeze())

    self.conformity_scores = self.residuals / (std_cal_preds + self.gamma)

    return self 
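
After fitting, the calibration residuals and normalized conformity scores are stored on the estimator. A small sketch, continuing the usage example at the top of the page (attribute names follow the source listing above):

```python
# model is a fitted ConformalEnsRegressor
print(model.residuals.shape)          # |y_i - ensemble mean| on the calibration set
print(model.conformity_scores.shape)  # residuals / (ensemble std + gamma)
```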

load(path, device='cpu', load_logs=False) classmethod

Load a saved ConformalEnsRegressor model from disk.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | str or Path | Directory containing saved model files. | required |
| `device` | str | Device to load the model on ("cpu" or "cuda"). | `'cpu'` |
| `load_logs` | bool | Whether to also load training logs. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `ConformalEnsRegressor` | The loaded model instance. |

Source code in uqregressors\conformal\conformal_ens.py
@classmethod
def load(cls, path, device="cpu", load_logs=False):
    """
    Load a saved ConformalEnsRegressor model from disk.

    Args:
        path (str or Path): Directory containing saved model files.
        device (str): Device to load the model on ("cpu" or "cuda").
        load_logs (bool): Whether to also load training logs.

    Returns:
        (ConformalEnsRegressor): The loaded model instance.
    """
    path = Path(path)

    # Load config
    with open(path / "config.json", "r") as f:
        config = json.load(f)
    config["device"] = device

    config.pop("optimizer", None)
    config.pop("scheduler", None)
    config.pop("input_scaler", None)
    config.pop("output_scaler", None)

    input_dim = config.pop("input_dim", None)
    model = cls(**config)

    with open(path / "extras.pkl", 'rb') as f: 
        optimizer_cls, optimizer_kwargs, scheduler_cls, scheduler_kwargs, input_scaler, output_scaler = pickle.load(f)

    # Recreate models
    model.input_dim = input_dim
    activation = get_activation(config["activation_str"])
    model.models = []
    for i in range(config["n_estimators"]):
        m = MLP(model.input_dim, config["hidden_sizes"], config["dropout"], activation).to(device)
        m.load_state_dict(torch.load(path / f"model_{i}.pt", map_location=device))
        model.models.append(m)

    # Load extras
    extras_path = path / "extras.pt"
    if extras_path.exists():
        extras = torch.load(extras_path, map_location=device, weights_only=False)
        model.residuals = extras.get("residuals", None)
        model.conformity_score = extras.get("conformity_score", None)
        model.conformity_scores = extras.get("conformity_scores", None)
    else:
        model.residuals = None
        model.conformity_score = None
        model.conformity_scores = None

    model.optimizer_cls = optimizer_cls 
    model.optimizer_kwargs = optimizer_kwargs 
    model.scheduler_cls = scheduler_cls 
    model.scheduler_kwargs = scheduler_kwargs
    model.input_scaler = input_scaler 
    model.output_scaler = output_scaler

    if load_logs: 
        logs_path = path / "logs"
        training_logs = [] 
        tuning_logs = []
        if logs_path.exists() and logs_path.is_dir(): 
            estimator_log_files = sorted(logs_path.glob("estimator_*.log"))
            for log_file in estimator_log_files:
                with open(log_file, "r", encoding="utf-8") as f:
                    training_logs.append(f.read())

            tuning_log_files = sorted(logs_path.glob("tuning_*.log"))
            for log_file in tuning_log_files: 
                with open(log_file, "r", encoding="utf-8") as f: 
                    tuning_logs.append(f.read())

        model.training_logs = training_logs
        model.tuning_logs = tuning_logs

    return model

predict(X)

Predicts the target values with uncertainty estimates.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `X` | ndarray | Feature matrix of shape (n_samples, n_features). | required |

Returns:

| Type | Description |
| --- | --- |
| `Union[Tuple[ndarray, ndarray, ndarray], Tuple[Tensor, Tensor, Tensor]]` | Tuple containing: mean predictions, lower bound of the prediction interval, upper bound of the prediction interval. |

Note

If requires_grad is False, all returned arrays are NumPy arrays. Otherwise, they are PyTorch tensors with gradients.

Source code in uqregressors\conformal\conformal_ens.py
def predict(self, X): 
    """
    Predicts the target values with uncertainty estimates.

    Args:
        X (np.ndarray): Feature matrix of shape (n_samples, n_features).

    Returns:
        (Union[Tuple[np.ndarray, np.ndarray, np.ndarray], Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]): Tuple containing:
            mean predictions,
            lower bound of the prediction interval,
            upper bound of the prediction interval.

    !!! note
        If `requires_grad` is False, all returned arrays are NumPy arrays.
        Otherwise, they are PyTorch tensors with gradients.
    """        
    X_tensor = validate_X_input(X, input_dim=self.input_dim, device=self.device, requires_grad=self.requires_grad)
    if self.scale_data: 
        X_tensor = self.input_scaler.transform(X_tensor)
    n = len(self.residuals)
    q = int((1 - self.alpha) * (n+1)) 
    q = min(q, n-1) 

    res_quantile = n-q
    self.conformity_score = torch.topk(self.conformity_scores, res_quantile).values[-1]

    preds = []

    with torch.no_grad(): 
        for model in self.models: 
            if self.pred_with_dropout: 
                model.train()
            else: 
                model.eval()
            pred = model(X_tensor)
            preds.append(pred)

    preds = torch.stack(preds)[:, :, 0]
    mean = torch.mean(preds, dim=0)
    variances = torch.var(preds, dim=0)
    stds = variances.sqrt()
    conformal_widths = self.conformity_score * (stds + self.gamma) 
    lower = mean - conformal_widths 
    upper = mean + conformal_widths 

    if self.scale_data: 
        mean = self.output_scaler.inverse_transform(mean.view(-1, 1)).squeeze()
        lower = self.output_scaler.inverse_transform(lower.view(-1, 1)).squeeze()
        upper = self.output_scaler.inverse_transform(upper.view(-1, 1)).squeeze()

    if not self.requires_grad: 
        return mean.detach().cpu().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy()

    else: 
        return mean, lower, upper
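
A quick empirical check of interval coverage (a sketch; `X_test` and `y_test` are assumed to be held-out arrays of shape `(n, n_features)` and `(n,)`):

```python
import numpy as np

mean, lower, upper = model.predict(X_test)
covered = (y_test >= lower) & (y_test <= upper)
print(f"Empirical coverage: {covered.mean():.3f} (target: {1 - model.alpha:.2f})")
print(f"Mean interval width: {np.mean(upper - lower):.3f}")
```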

save(path)

Save the trained model and associated configuration to disk.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | str or Path | Directory to save model files. | required |
Source code in uqregressors\conformal\conformal_ens.py
def save(self, path):
    """
    Save the trained model and associated configuration to disk.

    Args:
        path (str or Path): Directory to save model files.
    """
    path = Path(path)
    path.mkdir(parents=True, exist_ok=True)

    # Save config (exclude non-serializable or large objects)
    config = {
        k: v for k, v in self.__dict__.items()
        if k not in ["models", "residuals", "conformity_score", "conformity_scores", "optimizer_cls", "optimizer_kwargs", "scheduler_cls", "scheduler_kwargs", 
                     "input_scaler", "output_scaler", "_loggers", "training_logs", "tuning_loggers", "tuning_logs"]
        and not callable(v)
        and not isinstance(v, (torch.nn.Module,))
    }

    config["optimizer"] = self.optimizer_cls.__class__.__name__ if self.optimizer_cls is not None else None
    config["scheduler"] = self.scheduler_cls.__class__.__name__ if self.scheduler_cls is not None else None
    config["input_scaler"] = self.input_scaler.__class__.__name__ if self.input_scaler is not None else None 
    config["output_scaler"] = self.output_scaler.__class__.__name__ if self.output_scaler is not None else None

    with open(path / "config.json", "w") as f:
        json.dump(config, f, indent=4)

    # Save model weights
    for i, model in enumerate(self.models):
        torch.save(model.state_dict(), path / f"model_{i}.pt")

    # Save residuals and conformity score
    torch.save({
        "residuals": self.residuals.cpu(),
        "conformity_score": self.conformity_score, 
        "conformity_scores": self.conformity_scores
    }, path / "extras.pt")

    with open(path / "extras.pkl", 'wb') as f: 
        pickle.dump([self.optimizer_cls, 
                     self.optimizer_kwargs, self.scheduler_cls, self.scheduler_kwargs, self.input_scaler, self.output_scaler], f)

    for i, logger in enumerate(getattr(self, "_loggers", [])):
        logger.save_to_file(path, idx=i, name="estimator")

    for i, logger in enumerate(getattr(self, "tuning_loggers", [])): 
        logger.save_to_file(path, name="tuning", idx=i)
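
A save/load round trip might look like the following sketch (the directory path and `X_test` are illustrative):

```python
from pathlib import Path
from uqregressors.conformal.conformal_ens import ConformalEnsRegressor

save_dir = Path("models/conformal_ens")  # illustrative location
model.save(save_dir)                     # writes config.json, model_*.pt, extras.pt, extras.pkl

# Later: restore the estimator and predict without retraining
restored = ConformalEnsRegressor.load(save_dir, device="cpu")
mean, lower, upper = restored.predict(X_test)
```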

MLP

Bases: Module

A simple feedforward neural network with dropout for regression.

This MLP supports customizable hidden layer sizes, activation functions, and dropout. It outputs a single scalar per input — the predictive mean.

Parameters:

Name Type Description Default
input_dim int

Number of input features.

required
hidden_sizes list of int

Sizes of the hidden layers.

required
dropout float

Dropout rate (applied after each activation).

required
activation callable

Activation function (e.g., nn.ReLU).

required
Source code in uqregressors\conformal\conformal_ens.py
class MLP(nn.Module): 
    """
    A simple feedforward neural network with dropout for regression.

    This MLP supports customizable hidden layer sizes, activation functions,
    and dropout. It outputs a single scalar per input — the predictive mean.

    Args:
        input_dim (int): Number of input features.
        hidden_sizes (list of int): Sizes of the hidden layers.
        dropout (float): Dropout rate (applied after each activation).
        activation (callable): Activation function (e.g., nn.ReLU).
    """
    def __init__(self, input_dim, hidden_sizes, dropout, activation): 
        super().__init__()
        layers = []
        for h in hidden_sizes: 
            layers.append(nn.Linear(input_dim, h))
            layers.append(activation())
            if dropout is not None: 
                layers.append(nn.Dropout(dropout))
            input_dim=h 
        output_layer = nn.Linear(hidden_sizes[-1], 1)
        layers.append(output_layer)
        self.model = nn.Sequential(*layers)

    def forward(self, x): 
        return self.model(x)
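
For reference, the network can be instantiated on its own (a sketch assuming `MLP` is importable from this module; the sizes are illustrative):

```python
import torch
import torch.nn as nn
from uqregressors.conformal.conformal_ens import MLP

net = MLP(input_dim=8, hidden_sizes=[64, 64], dropout=0.1, activation=nn.ReLU)
x = torch.randn(4, 8)
print(net(x).shape)  # torch.Size([4, 1]) -- one scalar prediction per input
```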