conformal.evaluation

The evaluation module contains methods for evaluating conformal predictors.

The run() function produces Results of the appropriate type by using a Sampler to split a given data set into training and testing sets.

Structure:

class conformal.evaluation.Sampler(data)[source]

Bases: object

Base class for various data sampling/splitting methods.

data

Data set for sampling.

Type: Table
n

Size of the data set.

Type: int

Examples

>>> s = CrossSampler(Table('iris'), 4)
>>> for train, test in s.repeat(3):
...     print(train)
__init__(data)[source]

Initialize the data set.

__iter__()[source]
__next__()[source]

Subclasses extending Sampler should implement the __next__ method to return the selected and remaining parts of the data.
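
A minimal sketch of an extending sampler, assuming only the documented attributes data and n (the class name and the fixed half/half split are purely illustrative):

    class HalfSampler(Sampler):
        """Hypothetical sampler returning a fixed half/half split."""
        def __next__(self):
            mid = self.n // 2
            # selected part first, remaining part second
            return self.data[:mid], self.data[mid:]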

repeat(rep=1)[source]

Repeat the sampling procedure rep times.

class conformal.evaluation.RandomSampler(data, a, b)[source]

Bases: conformal.evaluation.Sampler

Randomly splits the data into two subsets in proportion a:b.

k

Relative size (fraction) of the selected subset.

Type: float

Examples

>>> s = RandomSampler(Table('iris'), 3, 2)
>>> train, test = next(s)
__init__(data, a, b)[source]

Initialize the data set and the size of the desired selection.

__iter__()[source]

Return an iterator yielding a single split of the data.

__next__()[source]

Splits the data based on a random permutation.
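
The idea behind the split can be illustrated with a short stand-alone sketch (a hypothetical helper, not the module's internal code):

    import numpy as np

    def random_split(n, a, b):
        # permute the indices and take the first n*a/(a+b) as the selection
        perm = np.random.permutation(n)
        k = round(n * a / (a + b))
        return perm[:k], perm[k:]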

class conformal.evaluation.CrossSampler(data, k)[source]

Bases: conformal.evaluation.Sampler

Sample the data in k folds. Shuffle the data before determining the folds.

k

Number of folds.

Type: int

Examples

>>> s = CrossSampler(Table('iris'), 4)
>>> for train, test in s:
...     print(train)
__init__(data, k)[source]

Initialize the data set.

__next__()[source]

Compute the next fold. A new k-fold split is initialized on each repetition of the entire sampling procedure.

class conformal.evaluation.LOOSampler(data)[source]

Bases: conformal.evaluation.CrossSampler

The leave-one-out sampler is a cross sampler with the number of folds equal to the size of the data set.

Examples

>>> s = LOOSampler(Table('iris'))
>>> for train, test in s:
...     print(len(test))
__init__(data)[source]

Initialize the data set.

class conformal.evaluation.Results[source]

Bases: object

Contains the results of evaluating a conformal predictor, as returned by the run() function.

Examples

>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 5)
>>> r = run(cp, 0.1, RandomSampler(Table('iris'), 2, 1))
>>> print(r.accuracy())
__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

add(pred, ref)[source]

Add a new predicted and corresponding reference value.

concatenate(r)[source]

Concatenate another set of results.
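
A usage sketch in the style of the example above, assuming concatenate extends the receiving Results object in place:

>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 5)
>>> r1 = run(cp, 0.1, RandomSampler(Table('iris'), 2, 1))
>>> r2 = run(cp, 0.1, RandomSampler(Table('iris'), 2, 1))
>>> r1.concatenate(r2)  # r1 now aggregates predictions from both runs
>>> print(r1.accuracy())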

accuracy()[source]

Compute the accuracy of the predictor by averaging the verdicts of individual predictions. For classification, this is the fraction of instances whose set of predicted classes contains the actual/reference class; for regression, it is the fraction of instances whose predicted range contains the actual value.
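
The averaging can be written out as a short sketch (a hypothetical helper mirroring the description above, not the module's code):

    def accuracy(verdicts):
        # verdicts[i] is True when prediction i contained the reference value:
        # classification: reference class in the set of predicted classes
        # regression:     lo <= reference value <= hi of the predicted range
        return sum(verdicts) / len(verdicts)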

time()[source]
class conformal.evaluation.ResultsClass[source]

Bases: conformal.evaluation.Results

Results of evaluating a conformal classifier. Provides classification-specific efficiency measures.

Examples

>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 5)
>>> r = run(cp, 0.1, RandomSampler(Table('iris'), 2, 1))
>>> print(r.singleton_criterion())
accuracy(class_value=None, eps=None)[source]

Compute the accuracy for test instances with the given class value. If class_value is not given, compute the accuracy over all instances, regardless of their class.

confidence()[source]

Average confidence of predictions.

credibility()[source]

Average credibility of predictions.

confusion(actual, predicted)[source]

Compute the number of singleton predictions of class predicted among instances whose actual class is actual.

Examples

Drawing a confusion matrix.

>>> data = Table('iris')
>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 3)
>>> r = run(cp, 0.1, RandomSampler(data, 2, 1))
>>> values = data.domain.class_var.values
>>> form = '{: >20}'*(len(values)+1)
>>> print(form.format(r'actual\predicted', *values))
>>> for a in values:
...     c = [r.confusion(a, p) for p in values]
...     print(('{: >20}'*(len(c)+1)).format(a, *c))
    actual\predicted         Iris-setosa     Iris-versicolor      Iris-virginica
         Iris-setosa                  18                   0                   0
     Iris-versicolor                   0                  14                   4
      Iris-virginica                   0                   0                  12
multiple_criterion()[source]

Number of cases with multiple predicted classes.

singleton_criterion()[source]

Number of cases with a single predicted class.

empty_criterion()[source]

Number of cases with no predicted classes.

singleton_correct()[source]

Fraction of singleton predictions that are correct.
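
The efficiency measures above can be reported together; a usage sketch following the earlier examples:

>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 5)
>>> r = run(cp, 0.1, RandomSampler(Table('iris'), 2, 1))
>>> print(r.multiple_criterion(), r.singleton_criterion(), r.empty_criterion())
>>> print(r.singleton_correct())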

class conformal.evaluation.ResultsRegr[source]

Bases: conformal.evaluation.Results

Results of evaluating a conformal regressor. Provides regression-specific efficiency measures.

Examples

>>> ir = InductiveRegressor(AbsErrorKNN(Euclidean(), 10, average=True))
>>> r = run(ir, 0.1, RandomSampler(Table('housing'), 2, 1))
>>> print(r.interdecile_range())
widths()[source]

Return the widths of the predicted ranges.

median_range()[source]

Median width of predicted ranges.

mean_range()[source]

Mean width of predicted ranges.

std_dev()[source]

Standard deviation of widths of predicted ranges.

interdecile_range()[source]

Difference between the first and ninth decile of widths of predicted ranges.

interdecile_mean()[source]

Mean width discarding the smallest and largest 10% of widths of predicted ranges.
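
The two interdecile measures can be illustrated with hypothetical helpers mirroring the definitions above (not the module's code):

    import numpy as np

    def interdecile_range(widths):
        # difference between the ninth and first decile of the widths
        return np.percentile(widths, 90) - np.percentile(widths, 10)

    def interdecile_mean(widths):
        # mean of the widths between the first and ninth decile
        lo, hi = np.percentile(widths, [10, 90])
        w = np.asarray(widths)
        return w[(w >= lo) & (w <= hi)].mean()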

conformal.evaluation.run(cp, eps, sampler, rep=1)[source]

The run() function is used to repeat an experiment one or more times with different splits of the dataset into a training and a testing set. The splits are defined by the provided sampler. The conformal predictor itself might further split the training set internally for its computations (e.g. inductive or cross predictors).

Run the conformal predictor cp on the datasets defined by the provided sampler and number of repetitions and construct the results. Fit the conformal predictor on each training set returned by the sampler and evaluate it on the corresponding test set. Inductive conformal predictors use one third of the training set (random subset) for calibration.

For more control over the exact datasets used for training, testing and calibration see run_train_test().

Returns: ResultsClass or ResultsRegr

Examples

>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 5)
>>> r = run(cp, 0.1, CrossSampler(Table('iris'), 4), rep=3)
>>> print(r.accuracy(), r.empty_criterion())

The above example uses a CrossSampler to define the training and testing datasets. Each fold is used once as the test set, with the remaining folds serving as the training set. The entire process is repeated three times with different fold splits, resulting in 3*n predictions, where n is the size of the dataset.

conformal.evaluation.run_train_test(cp, eps, train, test, calibrate=None)[source]

Fits the conformal predictor cp on the training dataset and evaluates it on the testing set. Inductive conformal predictors use the provided calibration set or default to extracting one third of the training set (random subset) for calibration.

Returns: ResultsClass or ResultsRegr

Examples

>>> tab = Table('iris')
>>> cp = CrossClassifier(InverseProbability(LogisticRegressionLearner()), 4)
>>> r = run_train_test(cp, 0.1, tab[:100], tab[100:])
>>> print(r.accuracy(), r.singleton_criterion())
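
To control the calibration set explicitly, it can be passed as the calibrate argument; a sketch assuming an InductiveClassifier counterpart to the InductiveRegressor used earlier:

>>> tab = Table('iris')
>>> train, calibrate, test = tab[:60], tab[60:100], tab[100:]
>>> icp = InductiveClassifier(InverseProbability(LogisticRegressionLearner()))
>>> r = run_train_test(icp, 0.1, train, test, calibrate=calibrate)
>>> print(r.accuracy())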