Package smile.base.mlp
Class MultilayerPerceptron
java.lang.Object
smile.base.mlp.MultilayerPerceptron
- All Implemented Interfaces:
Serializable
Fully connected multilayer perceptron neural network.
An MLP consists of at least three layers of nodes: an input layer,
a hidden layer and an output layer. The nodes are interconnected
through weighted acyclic arcs from each preceding layer to the
following, without lateral or feedback connections. Each node
calculates a transformed weighted linear combination of its inputs
(output activations from the preceding layer), with one of the weights
acting as a trainable bias connected to a constant input. The
transformation, called the activation function, is a bounded,
non-decreasing (non-linear) function.
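For intuition, the computation at a single node can be sketched in plain Java (an illustration of the description above, not Smile's internal code; the sigmoid is just one example of a bounded non-decreasing activation):

    // Sketch of one node's computation (illustration only, not Smile internals).
    // x holds the output activations of the preceding layer; w holds the weights,
    // with the extra entry w[x.length] acting as the trainable bias tied to a
    // constant input of 1.
    static double nodeOutput(double[] x, double[] w) {
        double z = w[x.length];                // trainable bias
        for (int i = 0; i < x.length; i++) {
            z += w[i] * x[i];                  // weighted linear combination
        }
        return 1.0 / (1.0 + Math.exp(-z));     // sigmoid activation
    }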
Field Summary
Modifier and Type               Field          Description
protected double                clipNorm       The gradient clipping norm.
protected double                clipValue      The gradient clipping value.
protected double                epsilon        A small constant for numerical stability in RMSProp.
protected double                lambda         The L2 regularization factor, which is also the weight decay factor.
protected TimeFunction          learningRate   The learning rate.
protected TimeFunction          momentum       The momentum factor.
protected Layer[]               net            The input and hidden layers.
protected OutputLayer           output         The output layer.
protected int                   p              The dimensionality of input data.
protected double                rho            The discounting factor for the history/coming gradient in RMSProp.
protected int                   t              The training iterations.
protected ThreadLocal<double[]> target         The buffer to store the desired target value of a training instance.
-
Constructor Summary
Constructor                           Description
MultilayerPerceptron(Layer... net)    Constructor.
-
Method Summary
Modifier and Type   Method                                     Description
protected void      backpropagate(boolean update)              Propagates the errors back through the network.
double              getClipNorm()                              Returns the gradient clipping norm.
double              getClipValue()                             Returns the gradient clipping value.
double              getLearningRate()                          Returns the learning rate.
double              getMomentum()                              Returns the momentum factor.
double              getWeightDecay()                           Returns the weight decay factor.
protected void      propagate(double[] x, boolean training)   Propagates the signals through the neural network.
void                setClipNorm(double clipNorm)               Sets the gradient clipping norm.
void                setClipValue(double clipValue)             Sets the gradient clipping value.
void                setLearningRate(TimeFunction rate)         Sets the learning rate.
void                setMomentum(TimeFunction momentum)         Sets the momentum factor.
void                setParameters(Properties params)           Sets MLP hyper-parameters such as learning rate, weight decay, momentum, RMSProp, etc.
void                setRMSProp(double rho, double epsilon)     Sets RMSProp parameters.
void                setWeightDecay(double lambda)              Sets the weight decay factor.
String              toString()
protected void      update(int m)                              Updates the weights for mini-batch training.
-
-
Field Details
-
p
protected int p
The dimensionality of input data.
-
output
protected OutputLayer output
The output layer.
-
net
protected Layer[] net
The input and hidden layers.
-
target
protected ThreadLocal<double[]> target
The buffer to store the desired target value of a training instance.
-
learningRate
protected TimeFunction learningRate
The learning rate.
-
momentum
protected TimeFunction momentum
The momentum factor.
-
rho
protected double rho
The discounting factor for the history/coming gradient in RMSProp.
-
epsilon
protected double epsilon
A small constant for numerical stability in RMSProp.
-
lambda
protected double lambda
The L2 regularization factor, which is also the weight decay factor.
-
clipValue
protected double clipValue
The gradient clipping value.
-
clipNorm
protected double clipNorm
The gradient clipping norm.
-
t
protected int t
The training iterations.
-
-
Constructor Details
-
MultilayerPerceptron
public MultilayerPerceptron(Layer... net)
Constructor.
- Parameters:
net - the input layer, hidden layers, and output layer in order.
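A minimal sketch of invoking this constructor from a subclass (the class below is hypothetical and for illustration only; in practice the concrete subclasses smile.classification.MLP and smile.regression.MLP are used, and the layers themselves are built with the smile.base.mlp.Layer API):

    import smile.base.mlp.Layer;
    import smile.base.mlp.MultilayerPerceptron;

    // Hypothetical subclass for illustration only.
    public class MyNetwork extends MultilayerPerceptron {
        public MyNetwork(Layer... net) {
            super(net);  // input layer, hidden layers, and output layer, in order
        }
    }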
-
-
Method Details
-
toString
public String toString()
- Overrides:
toString in class Object
-
setLearningRate
public void setLearningRate(TimeFunction rate)
Sets the learning rate. See the usage sketch after setMomentum below.
- Parameters:
rate - the learning rate.
-
setMomentum
public void setMomentum(TimeFunction momentum)
Sets the momentum factor. A momentum of 0.0 means no momentum.
- Parameters:
momentum - the momentum factor.
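For example, a sketch covering both setters (assuming the TimeFunction.linear and TimeFunction.constant factory methods of smile.math.TimeFunction; verify the exact builders in your Smile version, and mlp stands for any MultilayerPerceptron instance):

    import smile.math.TimeFunction;

    // Learning rate decaying linearly from 0.1 to 0.01 over 10000 steps,
    // plus a constant momentum factor of 0.9. The factory method names are
    // assumptions; check smile.math.TimeFunction for your version.
    mlp.setLearningRate(TimeFunction.linear(0.1, 10000, 0.01));
    mlp.setMomentum(TimeFunction.constant(0.9));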
-
setRMSProp
public void setRMSProp(double rho, double epsilon)
Sets RMSProp parameters.
- Parameters:
rho - the discounting factor for the history/coming gradient.
epsilon - a small constant for numerical stability.
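The roles of rho and epsilon are those of the textbook RMSProp rule, sketched below for a single weight (a plain-Java illustration, not Smile's internal implementation):

    // Textbook RMSProp step for a single weight (illustration only).
    // meanSq carries a discounted history of squared gradients (factor rho);
    // epsilon keeps the divisor away from zero.
    static double[] rmsPropStep(double weight, double meanSq, double grad,
                                double rho, double epsilon, double learningRate) {
        meanSq = rho * meanSq + (1.0 - rho) * grad * grad;
        weight -= learningRate * grad / (Math.sqrt(meanSq) + epsilon);
        return new double[] { weight, meanSq };
    }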
-
setWeightDecay
public void setWeightDecay(double lambda)
Sets the weight decay factor. After each weight update, every weight is simply "decayed" or shrunk according to w = w * (1 - 2 * eta * lambda).
- Parameters:
lambda - the weight decay factor.
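For instance, with a learning rate eta = 0.1 and lambda = 0.01, each update shrinks every weight by the factor 1 - 2 * 0.1 * 0.01 = 0.998:

    // Worked example of the decay formula w = w * (1 - 2 * eta * lambda).
    double eta = 0.1, lambda = 0.01, w = 0.5;
    w *= 1.0 - 2.0 * eta * lambda;   // w is now 0.5 * 0.998 = 0.499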
-
setClipValue
public void setClipValue(double clipValue)
Sets the gradient clipping value. If the clip value is set, the gradient of each weight is clipped to be no higher than this value. See the sketch after setClipNorm below.
- Parameters:
clipValue - the gradient clipping value.
-
setClipNorm
public void setClipNorm(double clipNorm)
Sets the gradient clipping norm. If the clip norm is set, the gradient of each weight is individually clipped so that its norm is no higher than this value.
- Parameters:
clipNorm - the gradient clipping norm.
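The two clipping modes correspond to the standard definitions sketched below (a plain-Java illustration, not Smile's internal code):

    // Clip-by-value: limit each gradient component to [-clipValue, clipValue].
    static void clipByValue(double[] grad, double clipValue) {
        for (int i = 0; i < grad.length; i++) {
            grad[i] = Math.max(-clipValue, Math.min(clipValue, grad[i]));
        }
    }

    // Clip-by-norm: if the gradient's L2 norm exceeds clipNorm, rescale it so
    // that the norm equals clipNorm while the direction is preserved.
    static void clipByNorm(double[] grad, double clipNorm) {
        double norm = 0.0;
        for (double g : grad) norm += g * g;
        norm = Math.sqrt(norm);
        if (norm > clipNorm) {
            for (int i = 0; i < grad.length; i++) {
                grad[i] *= clipNorm / norm;
            }
        }
    }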
-
getLearningRate
public double getLearningRate()
Returns the learning rate.
- Returns:
the learning rate.
-
getMomentum
public double getMomentum()
Returns the momentum factor.
- Returns:
the momentum factor.
-
getWeightDecay
public double getWeightDecay()
Returns the weight decay factor.
- Returns:
the weight decay factor.
-
getClipValue
public double getClipValue()
Returns the gradient clipping value.
- Returns:
the gradient clipping value.
-
getClipNorm
public double getClipNorm()
Returns the gradient clipping norm.
- Returns:
the gradient clipping norm.
-
propagate
protected void propagate(double[] x, boolean training)
Propagates the signals through the neural network. See the training-step sketch after update below.
- Parameters:
x - the input signal.
training - true if this is a training pass.
-
backpropagate
protected void backpropagate(boolean update)
Propagates the errors back through the network.
- Parameters:
update - if true, the weights are updated directly during backpropagation. It should be false for (mini-)batch training.
-
update
protected void update(int m)
Updates the weights for mini-batch training.
- Parameters:
m - the mini-batch size.
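Together, propagate, backpropagate, and update form one mini-batch training step. A sketch of the call pattern as it might appear inside a subclass (a hypothetical method; the real subclasses also fill the thread-local target buffer between the forward and backward passes):

    // Hypothetical mini-batch step inside a MultilayerPerceptron subclass
    // (call-pattern illustration only).
    void miniBatchStep(double[][] batch) {
        for (double[] x : batch) {
            propagate(x, true);    // forward pass in training mode
            // ... write the desired output for x into the target buffer ...
            backpropagate(false);  // accumulate gradients; no immediate update
        }
        update(batch.length);      // apply the accumulated gradients once
    }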
-
setParameters
public void setParameters(Properties params)
Sets MLP hyper-parameters such as learning rate, weight decay, momentum, RMSProp, etc.
- Parameters:
params - the MLP hyper-parameters.
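A sketch of passing hyper-parameters as Properties; the property keys below are assumptions made for illustration and should be checked against the Smile documentation for your version (mlp stands for any MultilayerPerceptron instance):

    import java.util.Properties;

    // Hypothetical property keys for illustration only; consult the Smile
    // documentation for the keys actually recognized by setParameters.
    Properties params = new Properties();
    params.setProperty("smile.mlp.learning_rate", "0.01");
    params.setProperty("smile.mlp.momentum", "0.9");
    params.setProperty("smile.mlp.weight_decay", "0.0001");
    mlp.setParameters(params);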