Class RBFNetwork<T>

java.lang.Object
smile.regression.RBFNetwork<T>
Type Parameters:
T - the data type of samples.
All Implemented Interfaces:
Serializable, ToDoubleFunction<T>, Regression<T>

public class RBFNetwork<T> extends Object implements Regression<T>
Radial basis function network. A radial basis function network is an artificial neural network that uses radial basis functions as activation functions. Its output is a linear combination of radial basis functions of the input. Such networks are used in function approximation, time series prediction, and control.

In its basic form, a radial basis function network takes the form

y(x) = Σ wi φ(||x-ci||)

where the approximating function y(x) is represented as a sum of N radial basis functions φ, each associated with a different center ci and weighted by a coefficient wi. The norm ||·|| is usually the Euclidean distance. Because the approximating function is linear in the weights wi, the weights can be estimated by the matrix methods of linear least squares.
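A minimal, self-contained sketch of this formulation (plain Java, not the Smile API; the class name, the Gaussian choice of φ, and the two-center 1-D setup are illustrative). With as many training points as centers, the least-squares design matrix Φ with entries Φ[i][j] = φ(|xi − cj|) is square, so fitting the weights reduces to solving Φw = y exactly:

```java
public class RBFSketch {
    // Gaussian radial basis function φ(r) = exp(-r²).
    static double phi(double r) { return Math.exp(-r * r); }

    // Evaluates y(x) = Σ wi φ(|x - ci|) for 1-D inputs.
    static double predict(double x, double[] centers, double[] w) {
        double y = 0.0;
        for (int i = 0; i < centers.length; i++) {
            y += w[i] * phi(Math.abs(x - centers[i]));
        }
        return y;
    }

    // Solves the square 2x2 interpolation system Φw = y by Cramer's rule.
    static double[] fit2(double[] x, double[] y, double[] c) {
        double a = phi(Math.abs(x[0] - c[0])), b = phi(Math.abs(x[0] - c[1]));
        double d = phi(Math.abs(x[1] - c[0])), e = phi(Math.abs(x[1] - c[1]));
        double det = a * e - b * d;
        return new double[] {(y[0] * e - b * y[1]) / det,
                             (a * y[1] - y[0] * d) / det};
    }

    public static void main(String[] args) {
        double[] c = {0.0, 1.0};
        double[] w = fit2(new double[]{0.0, 1.0}, new double[]{1.0, 2.0}, c);
        // The fitted network interpolates the training data.
        System.out.println(predict(0.0, c, w)); // ≈ 1.0
        System.out.println(predict(1.0, c, w)); // ≈ 2.0
    }
}
```

With more training points than centers, the system is overdetermined and the weights are the least-squares solution instead of an exact one.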

The points ci are often called the centers of the RBF network. They can be randomly selected from the training data, learned by a clustering method (e.g. k-means), or learned together with the weights via a supervised learning process (e.g. error-correction learning).

Popular choices for φ include the Gaussian function and the so-called thin plate splines. The advantage of thin plate splines is that their conditioning is invariant under scaling. The Gaussian, multiquadric, and inverse multiquadric functions are infinitely smooth and involve a scale or shape parameter r0 > 0. Decreasing r0 tends to flatten the basis function. For a given function, the quality of approximation may strongly depend on this parameter. In particular, increasing r0 has the effect of better conditioning (the separation distance of the scaled points increases).

A variant on RBF networks is the normalized radial basis function (NRBF) network, in which the basis functions are normalized to sum to unity. NRBF networks arise more naturally from a Bayesian statistical perspective. However, there is no evidence that either method is consistently superior to the other.
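A minimal sketch of the normalized variant (again plain Java, not the Smile API; names and values are illustrative). Each basis response is divided by the sum of all responses, so the normalized basis functions form a partition of unity; one observable consequence is that equal weights reproduce a constant function exactly, which the plain RBF sum does not:

```java
public class NRBFSketch {
    // Gaussian radial basis function φ(r) = exp(-r²).
    static double phi(double r) { return Math.exp(-r * r); }

    // Normalized RBF output: y(x) = Σ wi φ(|x - ci|) / Σ φ(|x - cj|).
    static double predict(double x, double[] centers, double[] w) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < centers.length; i++) {
            double p = phi(Math.abs(x - centers[i]));
            num += w[i] * p;
            den += p;
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[] c = {0.0, 1.0, 2.0};
        double[] w = {1.0, 1.0, 1.0};
        // A partition of unity reproduces constants exactly.
        System.out.println(predict(0.3, c, w)); // 1.0
        System.out.println(predict(1.7, c, w)); // 1.0
    }
}
```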

References

  1. Simon Haykin. Neural Networks: A Comprehensive Foundation (2nd edition). 1999.
  2. T. Poggio and F. Girosi. Networks for approximation and learning. Proc. IEEE 78(9):1481-1497, 1990.
  3. Nabil Benoudjit and Michel Verleysen. On the kernel widths in radial-basis function networks. Neural Processing Letters, 2003.
  • Constructor Details

    • RBFNetwork

      public RBFNetwork(RBF<T>[] rbf, double[] w, boolean normalized)
      Constructor.
      Parameters:
      rbf - the radial basis functions.
      w - the weights of RBFs.
      normalized - true if this is a normalized RBF network.
  • Method Details

    • fit

      public static <T> RBFNetwork<T> fit(T[] x, double[] y, RBF<T>[] rbf)
      Fits an RBF network.
      Type Parameters:
      T - the data type of samples.
      Parameters:
      x - the training dataset.
      y - the response variable.
      rbf - the radial basis functions.
      Returns:
      the model.
    • fit

      public static <T> RBFNetwork<T> fit(T[] x, double[] y, RBF<T>[] rbf, boolean normalized)
      Fits an RBF network.
      Type Parameters:
      T - the data type of samples.
      Parameters:
      x - the training dataset.
      y - the response variable.
      rbf - the radial basis functions.
      normalized - true for the normalized RBF network.
      Returns:
      the model.
    • fit

      public static RBFNetwork<double[]> fit(double[][] x, double[] y, Properties params)
      Fits an RBF network.
      Parameters:
      x - training samples.
      y - the response variable.
      params - the hyper-parameters.
      Returns:
      the model.
    • isNormalized

      public boolean isNormalized()
      Returns true if the model is normalized.
      Returns:
      true if the model is normalized.
    • predict

      public double predict(T x)
      Description copied from interface: Regression
      Predicts the dependent variable of an instance.
      Specified by:
      predict in interface Regression<T>
      Parameters:
      x - an instance.
      Returns:
      the predicted value of dependent variable.