Class GHA

All Implemented Interfaces:
Serializable, Function<Tuple,Tuple>, Transform

public class GHA extends Projection
Generalized Hebbian Algorithm. GHA is a linear feed-forward neural network model for unsupervised learning, applied primarily to principal component analysis. It is a single-layer process -- that is, the adjustment of a synaptic weight depends only on the inputs and outputs of that layer.

The convergence theorem guarantees that GHA finds the first k eigenvectors of the covariance matrix, assuming that the associated eigenvalues are distinct. The theorem is formulated in terms of a time-varying learning rate η. In practice, the learning rate η is chosen to be a small constant, in which case convergence is guaranteed with a mean-squared error in the synaptic weights of order η.
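For reference, the GHA weight update from Sanger's paper (reference 1 below) can be written in matrix form; the notation here is a sketch, with W(t) the p×n synaptic weight matrix at step t:

```latex
y(t) = W(t)\,x(t), \qquad
\Delta W(t) = \eta(t)\left( y(t)\,x(t)^{\mathsf T} - \mathrm{LT}\!\left[\, y(t)\,y(t)^{\mathsf T} \right] W(t) \right)
```

where LT[·] zeroes all entries above the diagonal. The lower-triangular operator is what distinguishes GHA from plain Oja's rule: each output neuron i learns only the variance left unexplained by neurons 1..i, which is why the rows converge to successive eigenvectors.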

It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by the learning rate parameter η. It was shown that a larger learning rate η leads to faster convergence and larger asymptotic mean-square error, which is intuitively satisfying.

Compared to the regular batch PCA algorithm based on eigendecomposition, GHA is an adaptive method that works with an arbitrarily large sample size. The storage requirement is modest. Another attractive feature is that, in a non-stationary environment, it has an inherent ability to track gradual changes in the optimal solution inexpensively.
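The single-layer update can be sketched from scratch in a few lines of plain Java. This is an illustrative toy, not the GHA class's actual implementation: it trains a p×n weight matrix with Sanger's rule on synthetic centered 2-D data whose first principal direction is (1, 1)/√2 (standard deviations 3 and 1 along the two principal axes), using a small constant learning rate as discussed above.

```java
import java.util.Random;

// From-scratch sketch of Sanger's rule (generalized Hebbian learning).
public class SangerSketch {
    /** Trains a p x n weight matrix online and returns it. */
    static double[][] train(long seed, int iterations) {
        Random rng = new Random(seed);
        int n = 2, p = 1;
        double eta = 0.01;                  // small constant learning rate
        double[][] w = new double[p][n];
        for (int i = 0; i < p; i++)
            for (int j = 0; j < n; j++)
                w[i][j] = 0.1 * rng.nextGaussian();   // small random init

        for (int t = 0; t < iterations; t++) {
            // Centered sample: std 3 along (1,1)/sqrt(2), std 1 along (1,-1)/sqrt(2).
            double a = 3.0 * rng.nextGaussian();
            double b = rng.nextGaussian();
            double[] x = {(a + b) / Math.sqrt(2), (a - b) / Math.sqrt(2)};

            // Feed-forward response y = W x.
            double[] y = new double[p];
            for (int i = 0; i < p; i++)
                for (int j = 0; j < n; j++)
                    y[i] += w[i][j] * x[j];

            // Sanger's rule: dw_ij = eta * y_i * (x_j - sum_{k<=i} y_k * w_kj).
            for (int i = 0; i < p; i++)
                for (int j = 0; j < n; j++) {
                    double back = 0.0;
                    for (int k = 0; k <= i; k++) back += y[k] * w[k][j];
                    w[i][j] += eta * y[i] * (x[j] - back);
                }
        }
        return w;
    }

    public static void main(String[] args) {
        double[][] w = train(42L, 20000);
        System.out.printf("w0 = (%.3f, %.3f)%n", w[0][0], w[0][1]);
    }
}
```

After training, the single weight row has near-unit norm and points (up to sign) along (1, 1)/√2, the dominant eigenvector of the data's covariance matrix, mirroring the convergence behavior described above.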

References

  1. Terence D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 2(6):459-473, 1989.
  2. Simon Haykin. Neural Networks: A Comprehensive Foundation (2 ed.). 1998.
  • Field Details

    • t

      protected int t
      The training iterations.
  • Constructor Details

    • GHA

      public GHA(int n, int p, TimeFunction r, String... columns)
      Constructor.
      Parameters:
      n - the dimension of input space.
      p - the dimension of feature space.
      r - the learning rate.
      columns - the columns to transform when applied on Tuple/DataFrame.
    • GHA

      public GHA(double[][] w, TimeFunction r, String... columns)
      Constructor.
      Parameters:
      w - the initial projection matrix. When GHA converges, the columns of the projection matrix are the first p eigenvectors of the covariance matrix, ordered by decreasing eigenvalues.
      r - the learning rate.
      columns - the columns to transform when applied on Tuple/DataFrame.
  • Method Details

    • update

      public double update(double[] x)
      Update the model with a new sample.
      Parameters:
      x - the centered learning sample, i.e., E(x) = 0.
      Returns:
      the approximation error for the input sample.
    • update

      public double update(Tuple x)
      Update the model with a new sample.
      Parameters:
      x - the centered learning sample, i.e., E(x) = 0.
      Returns:
      the approximation error for the input sample.
    • update

      public void update(double[][] data)
      Update the model with a set of samples.
      Parameters:
      data - the centered learning samples, i.e., E(x) = 0.
    • update

      public void update(DataFrame data)
      Update the model with a new data frame.
      Parameters:
      data - the centered learning samples, i.e., E(x) = 0.
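The single-sample update methods return an approximation error. One plausible reading of such an error (an assumption for illustration, not the class's documented formula) is the reconstruction residual ‖x − Wᵀy‖ with y = Wx, which measures how much of a sample is lost when it is projected into the feature space and back:

```java
// Hedged sketch: reconstruction residual as an approximation error.
// The exact error formula used by GHA's update methods is an assumption here.
public class ReconstructionError {
    /** Feed-forward response y = W x, for a p x n projection matrix W. */
    static double[] project(double[][] w, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += w[i][j] * x[j];
        return y;
    }

    /** Residual norm ||x - W^T y|| of reconstructing x from its projection. */
    static double error(double[][] w, double[] x) {
        double[] y = project(w, x);
        double sum = 0.0;
        for (int j = 0; j < x.length; j++) {
            double xhat = 0.0;
            for (int i = 0; i < w.length; i++) xhat += w[i][j] * y[i];
            double d = x[j] - xhat;
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // W keeps only the first coordinate, so x = (3, 4) loses its second one.
        double[][] w = {{1.0, 0.0}};
        System.out.println(error(w, new double[]{3.0, 4.0})); // prints 4.0
    }
}
```

As training progresses and the rows of W approach the leading eigenvectors, this residual shrinks toward the variance left in the discarded components, which makes the returned error a convenient convergence monitor for the streaming update loop.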