Class GHA
- All Implemented Interfaces:
  Serializable, Function<Tuple,Tuple>, Transform
The generalized Hebbian algorithm (GHA) is a single-layer linear feedforward neural network trained with a Hebbian-type learning rule, used primarily for principal component analysis. The convergence theorem guarantees that GHA finds the first k eigenvectors of the covariance matrix, assuming that the associated eigenvalues are distinct. The theorem is formulated in terms of a time-varying learning rate η. In practice, the learning rate η is chosen to be a small constant, in which case convergence is guaranteed with mean-squared error in synaptic weights of order η.
The algorithm also offers a simple, predictable trade-off between learning speed and accuracy of convergence, set by the learning rate parameter η: a larger learning rate η leads to faster convergence but a larger asymptotic mean-square error, which is intuitively satisfying.
Compared to the regular batch PCA algorithm based on eigendecomposition, GHA is an adaptive method that works with an arbitrarily large sample size, and its storage requirement is modest. Another attractive feature is that, in a nonstationary environment, it has an inherent ability to track gradual changes in the optimal solution inexpensively.
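The online update at the heart of GHA can be sketched as follows. This is an illustrative implementation of Sanger's rule, not Smile's actual GHA class; the class name GhaSketch and the constant learning rate eta are assumptions for the example.

```java
import java.util.Random;

// Illustrative sketch of the generalized Hebbian algorithm (Sanger's rule).
// Not Smile's GHA class; a constant learning rate eta is assumed.
class GhaSketch {
    final double[][] w; // p x n projection matrix, rows are component estimates
    final double eta;   // constant learning rate

    GhaSketch(int n, int p, double eta, long seed) {
        this.w = new double[p][n];
        this.eta = eta;
        Random rng = new Random(seed);
        for (double[] row : w)
            for (int j = 0; j < n; j++)
                row[j] = 0.1 * rng.nextGaussian();
    }

    // One online step with a centered sample x; returns the approximation error.
    double update(double[] x) {
        int p = w.length, n = x.length;

        // Outputs y = W x.
        double[] y = new double[p];
        for (int i = 0; i < p; i++)
            for (int j = 0; j < n; j++)
                y[i] += w[i][j] * x[j];

        // Sanger's rule: dw[i][j] = eta * y[i] * (x[j] - sum_{k<=i} w[k][j] * y[k]).
        // The lower-triangular sum deflates what earlier outputs already explain,
        // so row i converges to the i-th eigenvector of the covariance matrix.
        double[][] dw = new double[p][n];
        double[] back = new double[n]; // running sum_{k<=i} w[k][j] * y[k]
        for (int i = 0; i < p; i++) {
            for (int j = 0; j < n; j++) back[j] += w[i][j] * y[i];
            for (int j = 0; j < n; j++) dw[i][j] = eta * y[i] * (x[j] - back[j]);
        }
        for (int i = 0; i < p; i++)
            for (int j = 0; j < n; j++)
                w[i][j] += dw[i][j];

        // Approximation error ||x - W'y|| for the input sample.
        double err = 0.0;
        for (int j = 0; j < n; j++) {
            double xhat = 0.0;
            for (int i = 0; i < p; i++) xhat += w[i][j] * y[i];
            err += (x[j] - xhat) * (x[j] - xhat);
        }
        return Math.sqrt(err);
    }
}
```

With p = 1 this reduces to Oja's rule, and the single weight row converges to the unit-norm principal eigenvector of the covariance matrix.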
References
- Terence D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 2(6):459-473, 1989.
- Simon Haykin. Neural Networks: A Comprehensive Foundation (2nd ed.). 1998.
Field Summary
Fields inherited from class smile.feature.extraction.Projection
columns, projection, schema
-
Constructor Summary
GHA(double[][] w, TimeFunction r, String... columns)
    Constructor.
GHA(int n, int p, TimeFunction r, String... columns)
    Constructor.
-
Method Summary
Methods inherited from class smile.feature.extraction.Projection
apply, apply, apply, apply, postprocess, preprocess
-
Field Details
-
t
protected int t
    The training iterations.
-
-
Constructor Details
-
GHA
public GHA(int n, int p, TimeFunction r, String... columns)
Constructor.
- Parameters:
  n - the dimension of input space.
  p - the dimension of feature space.
  r - the learning rate.
  columns - the columns to transform when applied on Tuple/DataFrame.
-
GHA
public GHA(double[][] w, TimeFunction r, String... columns)
Constructor.
- Parameters:
  w - the initial projection matrix. When GHA converges, the columns of the projection matrix are the first p eigenvectors of the covariance matrix, ordered by decreasing eigenvalues.
  r - the learning rate.
  columns - the columns to transform when applied on Tuple/DataFrame.
-
-
Method Details
-
update
public double update(double[] x)
Update the model with a new sample.
- Parameters:
  x - the centered learning sample whose E(x) = 0.
- Returns:
  the approximation error for the input sample.
-
update
Update the model with a new sample.
- Parameters:
  x - the centered learning sample whose E(x) = 0.
- Returns:
  the approximation error for the input sample.
-
update
public void update(double[][] data)
Update the model with a set of samples.
- Parameters:
  data - the centered learning samples whose E(x) = 0.
-
update
Update the model with a new data frame.
- Parameters:
  data - the centered learning samples whose E(x) = 0.
-
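Every update overload expects centered input with E(x) = 0, so the column means must be subtracted before streaming samples into the model. A minimal preprocessing sketch, where the class and method names are assumptions for the example:

```java
// Helper sketch for centering data before calling update; the class and
// method names are illustrative, not part of Smile's API.
class Centering {
    // Subtracts the column means from data in place and returns the means,
    // so new samples can be centered with the same means before update().
    static double[] center(double[][] data) {
        int n = data.length, d = data[0].length;
        double[] mean = new double[d];
        for (double[] x : data)
            for (int j = 0; j < d; j++)
                mean[j] += x[j] / n;
        for (double[] x : data)
            for (int j = 0; j < d; j++)
                x[j] -= mean[j];
        return mean;
    }
}
```

Keeping the returned means around matters in the online setting: each future sample must be centered with the same means before it is passed to update.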