Model validation.
Bootstrap validation on a generic regression model.
data samples.
response variable.
k-round bootstrap estimation.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Bootstrap validation on a generic classifier. The bootstrap is a general tool for assessing statistical accuracy. The basic idea is to randomly draw datasets with replacement from the training data, each sample the same size as the original training set. This is done many times (say k = 100), producing k bootstrap datasets. Then we refit the model to each of the bootstrap datasets and examine the behavior of the fits over the k replications.
data samples.
sample labels.
k-round bootstrap estimation.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
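The bootstrap procedure described above can be sketched generically. This is a minimal Python sketch, not the library's implementation; `trainer` and `measure` are hypothetical callables standing in for the code block and validation measures above, and this variant scores each fit on its out-of-bag points (one common choice; other variants score on the full original sample).

```python
import random

def bootstrap(x, y, k, measure, trainer):
    """k-round bootstrap validation: draw n samples with replacement,
    refit on each resample, and score the fit on its out-of-bag points
    (the observations the resample missed)."""
    n = len(x)
    scores = []
    for _ in range(k):
        # Resample indices uniformly with replacement, same size as x.
        idx = [random.randrange(n) for _ in range(n)]
        taken = set(idx)
        oob = [i for i in range(n) if i not in taken]
        model = trainer([x[i] for i in idx], [y[i] for i in idx])
        scores.append(measure([y[i] for i in oob],
                              [model(x[i]) for i in oob]))
    return scores
```

The k per-round scores are returned unaggregated, so the caller can examine their spread across replications as well as their mean.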
Cross validation on a generic regression model.
data samples.
response variable.
k-fold cross validation.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Cross validation on a generic classifier. Cross-validation is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
data samples.
sample labels.
k-fold cross validation.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
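The k-fold procedure described above can be sketched generically. This is a minimal Python sketch, not the library's implementation; `trainer` and `measure` are hypothetical callables standing in for the code block and validation measures above, and the folds are formed by a simple stride partition rather than shuffling.

```python
def cross_validate(x, y, k, measure, trainer):
    """k-fold cross validation: partition the indices into k folds;
    each fold serves once as the validation set while the remaining
    samples train the model. Scores are averaged over the k rounds."""
    n = len(x)
    folds = [list(range(i, n, k)) for i in range(k)]
    scores = []
    for fold in folds:
        held = set(fold)
        train = [i for i in range(n) if i not in held]
        model = trainer([x[i] for i in train], [y[i] for i in train])
        scores.append(measure([y[i] for i in fold],
                              [model(x[i]) for i in fold]))
    return sum(scores) / k
```

Averaging over the folds is what reduces the variability mentioned above: every sample is used for validation exactly once, and for training k - 1 times.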
Leave-one-out cross validation on a generic regression model.
data samples.
response variable.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Leave-one-out cross validation on a generic classifier. LOOCV uses a single observation from the original sample as the validation data, and the remaining observations as the training data. This is repeated so that each observation in the sample is used exactly once as the validation data. It is equivalent to k-fold cross-validation with k equal to the number of observations in the original sample. Leave-one-out cross-validation is usually computationally expensive because the training process is repeated once per observation.
data samples.
sample labels.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
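The leave-one-out procedure described above can be sketched generically. This is a minimal Python sketch, not the library's implementation; `trainer` and `measure` are hypothetical callables standing in for the code block and validation measures above.

```python
def loocv(x, y, measure, trainer):
    """Leave-one-out cross validation: n rounds, each holding out
    exactly one sample, i.e. k-fold CV with k = n. Requires n model
    fits, which is what makes it expensive."""
    n = len(x)
    preds = []
    for i in range(n):
        train = [j for j in range(n) if j != i]
        model = trainer([x[j] for j in train], [y[j] for j in train])
        preds.append(model(x[i]))
    # Every sample received exactly one held-out prediction,
    # so the measure can be computed over the whole sample at once.
    return measure(y, preds)
```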
Test a generic classifier. The accuracy will be measured and printed to standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
if true, run the test in parallel.
a code block to return a classifier trained on the given data.
the trained classifier.
Test a binary classifier. The accuracy, sensitivity, specificity, precision, F-1 score, F-2 score, and F-0.5 score will be measured and printed to standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
if true, run the test in parallel.
a code block to return a binary classifier trained on the given data.
the trained classifier.
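The binary measures listed above all derive from the four confusion counts. A minimal Python sketch, assuming labels 0/1 with 1 as the positive class (`binary_metrics` is a hypothetical helper, not part of the library):

```python
def binary_metrics(truth, preds):
    """Compute the standard binary-classification measures from the
    confusion counts (labels 0/1, positive class = 1)."""
    tp = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity

    def f(beta):
        # F-beta: harmonic mean of precision and recall, with recall
        # weighted beta times as heavily as precision.
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": recall,
        "specificity": tn / (tn + fp),
        "precision": precision,
        "f1": f(1.0), "f2": f(2.0), "f0.5": f(0.5),
    }
```

F-2 weights recall more heavily than precision, F-0.5 the reverse; F-1 weights them equally.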
Test a binary soft classifier. The accuracy, sensitivity, specificity, precision, F-1 score, F-2 score, F-0.5 score, and AUC will be measured and printed to standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
if true, run the test in parallel.
a code block to return a binary classifier trained on the given data.
the trained classifier.
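AUC is the measure unique to the soft classifier: it is computed from the classifier's scores rather than its hard labels. A minimal Python sketch using the Mann-Whitney formulation (not necessarily how the library computes it):

```python
def auc(truth, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly chosen positive sample receives a
    higher score than a randomly chosen negative one; ties count 1/2."""
    pos = [s for t, s in zip(truth, scores) if t == 1]
    neg = [s for t, s in zip(truth, scores) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

A score of 1.0 means the scores rank every positive above every negative; 0.5 is chance-level ranking.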