public class MLP extends MultilayerPerceptron implements OnlineClassifier<double[]>, SoftClassifier<double[]>, java.io.Serializable
The representational capabilities of an MLP are determined by the range of mappings it can implement through weight variation. A single-layer perceptron can solve only linearly separable problems. With the sigmoid as activation function, the single-layer network is identical to the logistic regression model.
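As a minimal illustration of that equivalence (plain Java, not part of this class), a single-layer network with one sigmoid output unit computes exactly the logistic regression posterior p(y = 1 | x) = 1 / (1 + exp(-(w·x + b))); the weights and bias here are placeholders:

class SingleLayerSigmoid {
    // Output of a single-layer network with a sigmoid unit, i.e. the logistic
    // regression probability of the positive class.
    static double output(double[] w, double b, double[] x) {
        double z = b;
        for (int i = 0; i < x.length; i++) z += w[i] * x[i];  // linear combination w·x + b
        return 1.0 / (1.0 + Math.exp(-z));                    // logistic (sigmoid) link
    }
}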
The universal approximation theorem for neural networks states that every continuous function that maps intervals of real numbers to some output interval of real numbers can be approximated arbitrarily closely by a multilayer perceptron with just one hidden layer. This result holds only for restricted classes of activation functions, which are extremely complex and not smooth for subtle mathematical reasons, whereas smoothness is important for gradient descent learning. Moreover, the proof is not constructive regarding the number of neurons required or the settings of the weights. Therefore, in practice, complex systems are modeled with networks that have more layers of neurons, rather than a single very wide hidden layer.
The most popular algorithm to train MLPs is backpropagation, a gradient descent method. Based on the chain rule, the algorithm propagates the error backward through the network and adjusts the weight of each connection so as to reduce the value of the error function by some small amount. For this reason, backpropagation can only be applied to networks with differentiable activation functions.
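As a concrete sketch of one such step (illustrative plain Java, not the implementation used by this class), the following performs a single backpropagation update for a tiny two-input, two-hidden-unit, one-output network with sigmoid activations and squared error; all sizes, weights, and the learning rate are made-up values:

class BackpropStep {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    public static void main(String[] args) {
        double eta = 0.1;                        // learning rate (illustrative)
        double[] x = {0.5, -1.2};                // one training instance
        double target = 1.0;                     // desired output

        // Weights: 2 inputs -> 2 hidden units -> 1 output (biases omitted for brevity).
        double[][] w1 = {{0.1, -0.3}, {0.2, 0.4}};
        double[] w2 = {0.5, -0.6};

        // Forward pass.
        double[] h = new double[2];
        for (int j = 0; j < 2; j++) {
            double z = 0.0;
            for (int i = 0; i < 2; i++) z += w1[j][i] * x[i];
            h[j] = sigmoid(z);
        }
        double out = 0.0;
        for (int j = 0; j < 2; j++) out += w2[j] * h[j];
        out = sigmoid(out);

        // Backward pass: propagate the error back through the network via the chain rule.
        double deltaOut = (out - target) * out * (1.0 - out);         // dE/dz at the output
        double[] deltaHidden = new double[2];
        for (int j = 0; j < 2; j++) {
            deltaHidden[j] = deltaOut * w2[j] * h[j] * (1.0 - h[j]);  // dE/dz at each hidden unit
        }

        // Gradient descent: adjust each weight to reduce the error by a small amount.
        for (int j = 0; j < 2; j++) {
            w2[j] -= eta * deltaOut * h[j];
            for (int i = 0; i < 2; i++) w1[j][i] -= eta * deltaHidden[j] * x[i];
        }
        System.out.printf("output %.4f, updated w2[0] %.4f%n", out, w2[0]);
    }
}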
During error backpropagation, the gradient is usually multiplied by a small number η, called the learning rate, which is carefully selected to ensure that the network converges to a local minimum of the error function quickly enough without producing oscillations. One way to avoid oscillation at large η is to make the change in weight dependent on the past weight change by adding a momentum term.
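The momentum-augmented update keeps a running weight change and blends it with the new gradient, roughly Δw(t) = α Δw(t-1) - η ∂E/∂w. A minimal sketch, with illustrative values for η and α:

class MomentumStep {
    double eta = 0.01;       // learning rate η (illustrative)
    double alpha = 0.9;      // momentum coefficient α (illustrative)
    double velocity = 0.0;   // previous weight change, carried across iterations

    double step(double weight, double gradient) {
        velocity = alpha * velocity - eta * gradient;  // blend past change with new gradient
        return weight + velocity;                      // apply the combined change
    }
}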
Although the backpropagation algorithm may perform gradient descent on the total error of all instances in a batch way, the learning rule is often applied to each instance separately, in an online or stochastic way. There is empirical evidence that the stochastic way results in faster convergence.
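Both modes are exposed by this class through the two update overloads listed below: update(double[], int) applies the learning rule to a single instance, while update(double[][], int[]) processes a mini-batch. A short sketch (the model and data are assumed to exist already):

import smile.classification.MLP;

class TrainingModes {
    // Stochastic / online training: apply the learning rule to each instance separately.
    static void online(MLP model, double[][] x, int[] y) {
        for (int i = 0; i < x.length; i++) {
            model.update(x[i], y[i]);
        }
    }

    // Mini-batch training: process the whole batch in a single call.
    static void miniBatch(MLP model, double[][] x, int[] y) {
        model.update(x, y);
    }
}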
In practice, the problem of overfitting arises in convoluted or over-specified networks, where the capacity (the number of free parameters) significantly exceeds what the problem needs. There are two general approaches for avoiding this problem. The first is to use cross-validation and similar techniques to check for the presence of overfitting and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization, which emerges naturally in a Bayesian framework, where regularization is performed by placing a larger prior probability over simpler models, but also in statistical learning theory, where the goal is to minimize both the "empirical risk" and the "structural risk".
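In this class, a simple form of regularization is available through the weight-decay setter inherited from MultilayerPerceptron (see the inherited methods below). A sketch, assuming setWeightDecay takes the decay coefficient as a double (the exact signature may vary between versions); the value itself would normally be chosen by cross-validation:

import smile.classification.MLP;

class RegularizationSketch {
    static void configure(MLP model) {
        // L2-style weight decay: shrinks weights toward zero at every update.
        // 1e-4 is only a starting guess; tune it on held-out data.
        model.setWeightDecay(1e-4);
    }
}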
For neural networks, the input patterns should usually be scaled/standardized. Commonly, each input variable is either scaled into the interval [0, 1] or standardized to have mean 0 and standard deviation 1.
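A plain-Java sketch of the second option, standardizing each input variable in place to mean 0 and standard deviation 1 (no Smile preprocessing classes are assumed):

class Standardizer {
    static void standardize(double[][] x) {
        int p = x[0].length;
        for (int j = 0; j < p; j++) {
            double mean = 0.0;
            for (double[] row : x) mean += row[j];
            mean /= x.length;

            double sq = 0.0;
            for (double[] row : x) sq += (row[j] - mean) * (row[j] - mean);
            double sd = Math.sqrt(sq / x.length);

            // Center and scale the j-th variable; leave constant columns at zero.
            for (double[] row : x) row[j] = sd > 0 ? (row[j] - mean) / sd : 0.0;
        }
    }
}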
For penalty functions and output units, the following natural pairings are recommended: linear output units with a squared-error (least squares) penalty; a logistic (sigmoid) output unit with a two-class cross-entropy penalty; and softmax output units with a multi-class cross-entropy penalty.
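In recent Smile releases these pairings map onto output-layer builders such as Layer.mse and Layer.mle combined with an OutputFunction; the factory names below are assumptions about that API (they live in smile.base.mlp and may differ between versions), not something stated on this page:

import smile.base.mlp.Layer;
import smile.base.mlp.LayerBuilder;
import smile.base.mlp.OutputFunction;

class OutputPairings {
    // Squared-error penalty with a linear output unit (regression-style output).
    static LayerBuilder squaredErrorLinear()       { return Layer.mse(1, OutputFunction.LINEAR); }

    // Two-class cross-entropy penalty with a logistic (sigmoid) output unit.
    static LayerBuilder crossEntropyLogistic()     { return Layer.mle(1, OutputFunction.SIGMOID); }

    // Multi-class cross-entropy penalty with softmax output units for k classes.
    static LayerBuilder crossEntropySoftmax(int k) { return Layer.mle(k, OutputFunction.SOFTMAX); }
}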
Constructor and Description

MLP(int p, LayerBuilder... builders)
    Constructor.

MLP(IntSet labels, int p, LayerBuilder... builders)
    Constructor.

Modifier and Type    Method and Description

int     predict(double[] x)
        Predicts the class label of an instance.

int     predict(double[] x, double[] posteriori)
        Predicts the class label of an instance and also calculates the a posteriori probabilities.

void    update(double[][] x, int[] y)
        Updates the model with a mini-batch.

void    update(double[] x, int y)
        Updates the model with a single sample.

Methods inherited from class MultilayerPerceptron:
backpropagate, getLearningRate, getMomentum, getWeightDecay, propagate, setLearningRate, setMomentum, setRMSProp, setWeightDecay, toString, update

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from implemented interfaces:
applyAsDouble, applyAsInt, f, predict
public MLP(int p, LayerBuilder... builders)

Parameters:
    p - the number of variables in input layer.
    builders - the builders of layers from bottom to top.

public MLP(IntSet labels, int p, LayerBuilder... builders)

Parameters:
    p - the number of variables in input layer.
    builders - the builders of layers from bottom to top.

public int predict(double[] x, double[] posteriori)

Description copied from interface: SoftClassifier
Specified by:
    predict in interface SoftClassifier<double[]>
Parameters:
    x - an instance to be classified.
    posteriori - the array to store a posteriori probabilities on output.

public int predict(double[] x)

Description copied from interface: Classifier
Specified by:
    predict in interface Classifier<double[]>
Parameters:
    x - the instance to be classified.

public void update(double[] x, int y)

Specified by:
    update in interface OnlineClassifier<double[]>
Parameters:
    x - training instance.
    y - training label.

public void update(double[][] x, int[] y)

Specified by:
    update in interface OnlineClassifier<double[]>
Parameters:
    x - the training instances.
    y - the target values.
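Putting the pieces together, a usage sketch that constructs a small network, trains it online, and queries both the class label and the posterior probabilities. The layer builders are the same assumed smile.base.mlp factories mentioned above, and the data is a made-up toy set:

import smile.base.mlp.Layer;
import smile.base.mlp.OutputFunction;
import smile.classification.MLP;

public class MLPExample {
    public static void main(String[] args) {
        double[][] x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};  // toy training instances
        int[] y = {0, 1, 1, 0};                           // toy class labels

        // Two input variables, one sigmoid hidden layer, softmax output paired
        // with a multi-class cross-entropy penalty (see the pairings above).
        MLP model = new MLP(2,
                Layer.sigmoid(10),
                Layer.mle(2, OutputFunction.SOFTMAX));

        // Online training: apply the learning rule to each instance separately.
        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < x.length; i++) {
                model.update(x[i], y[i]);
            }
        }

        // Predict the class label and the a posteriori probabilities of one instance.
        double[] posteriori = new double[2];
        int label = model.predict(x[0], posteriori);
        System.out.println(label + " " + java.util.Arrays.toString(posteriori));
    }
}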