trait Operators extends AnyRef
High-level projection operators for feature extraction.
Value Members

def gha(data: Array[Array[Double]], k: Int, r: Double): GHA
Generalized Hebbian Algorithm with random initial projection matrix.
 data
training data.
 k
the dimension of the feature space.
 r
the learning rate.

def gha(data: Array[Array[Double]], w: Array[Array[Double]], r: Double): GHA
Generalized Hebbian Algorithm. GHA is a linear feedforward neural network model for unsupervised learning with applications primarily in principal component analysis. It is a single-layer process; that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer.
It guarantees that GHA finds the first k eigenvectors of the covariance matrix, assuming that the associated eigenvalues are distinct. The convergence theorem is formulated in terms of a time-varying learning rate η. In practice, the learning rate η is chosen to be a small constant, in which case convergence is guaranteed with mean-squared error in synaptic weights of order η.
It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by the learning rate parameter η. It was shown that a larger learning rate η leads to faster convergence and larger asymptotic mean-square error, which is intuitively satisfying.
Compared to the regular batch PCA algorithm based on eigendecomposition, GHA is an adaptive method and works with an arbitrarily large sample size. The storage requirement is modest. Another attractive feature is that, in a non-stationary environment, it has an inherent ability to track gradual changes in the optimal solution in an inexpensive way.
References:
 Terence D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 2(6):459-473, 1989.
 Simon Haykin. Neural Networks: A Comprehensive Foundation (2 ed.). 1998.
 data
training data.
 w
the initial projection matrix.
 r
the learning rate.
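The heart of GHA is Sanger's rule: for each sample x, compute the response y = W x and update each weight by the input minus a lower-triangular reconstruction term, which drives row i of W toward the i-th principal eigenvector. The following plain-Scala sketch (illustrative only, not Smile's implementation; `GhaSketch` and `step` are hypothetical names) shows a single stochastic update:

```scala
object GhaSketch {
  // One Sanger's-rule update for a single sample x:
  //   y = W x
  //   dW(i)(j) = r * y(i) * (x(j) - sum_{l <= i} W(l)(j) * y(l))
  // The sum over l <= i (rather than over all rows, as in plain Oja/Hebbian
  // learning) is what decorrelates the rows into distinct eigenvectors.
  def step(w: Array[Array[Double]], x: Array[Double], r: Double): Array[Array[Double]] = {
    val y = w.map(row => row.indices.map(j => row(j) * x(j)).sum)
    Array.tabulate(w.length, x.length) { (i, j) =>
      val reconstruction = (0 to i).map(l => w(l)(j) * y(l)).sum
      w(i)(j) + r * y(i) * (x(j) - reconstruction)
    }
  }
}
```

In practice the step is applied repeatedly over the training samples, with the learning rate r kept small (or decayed) as the convergence discussion above suggests.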

def kpca[T <: AnyRef](data: Array[T], kernel: MercerKernel[T], k: Int, threshold: Double = 0.0001): KPCA[T]
Kernel principal component analysis. Kernel PCA is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are done in a reproducing kernel Hilbert space with a nonlinear mapping.
In practice, a large data set leads to a large Kernel/Gram matrix K, and storing K may become a problem. One way to deal with this is to perform clustering on your large dataset, and populate the kernel with the means of those clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and eigenvectors of K.
Kernel PCA with an isotropic kernel function is closely related to metric MDS. Carrying out metric MDS on the kernel matrix K produces an equivalent configuration of points as the distance (2(1 - K(x_{i}, x_{j})))^{1/2} computed in feature space.
Kernel PCA also has close connections with Isomap, LLE, and Laplacian eigenmaps.
References:
 Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 1998.
 data
training data.
 kernel
Mercer kernel to compute kernel matrix.
 k
the top k principal components to use for projection.
 threshold
only principal components with eigenvalues larger than the given threshold will be kept.
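Before the eigendecomposition, kernel PCA centers the Gram matrix in feature space: K~ = K - 1_n K - K 1_n + 1_n K 1_n, where 1_n is the n x n matrix with every entry 1/n. A minimal plain-Scala sketch of this step, assuming a Gaussian (RBF) kernel rather than Smile's MercerKernel API (`KpcaSketch`, `rbf`, and `centeredGram` are hypothetical names):

```scala
object KpcaSketch {
  // Gaussian (RBF) kernel, a common Mercer kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
  def rbf(x: Array[Double], y: Array[Double], sigma: Double): Double = {
    val d2 = x.zip(y).map { case (a, b) => (a - b) * (a - b) }.sum
    math.exp(-d2 / (2 * sigma * sigma))
  }

  // Build K, then center it in feature space. The identity
  // K~ = K - 1n K - K 1n + 1n K 1n reduces to subtracting row/column means
  // and adding back the grand mean, so every row of K~ sums to zero.
  def centeredGram(data: Array[Array[Double]], sigma: Double): Array[Array[Double]] = {
    val n = data.length
    val k = Array.tabulate(n, n)((i, j) => rbf(data(i), data(j), sigma))
    val rowMean = k.map(_.sum / n)
    val grandMean = rowMean.sum / n
    Array.tabulate(n, n)((i, j) => k(i)(j) - rowMean(i) - rowMean(j) + grandMean)
  }
}
```

The top eigenvectors of this centered matrix (scaled by the inverse square roots of their eigenvalues) then give the kernel principal component projections.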

def pca(data: Array[Array[Double]], cor: Boolean = false): PCA
Principal component analysis. PCA is an orthogonal linear transformation that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. PCA is theoretically the optimum transform for given data in least-squares terms. PCA can be thought of as revealing the internal structure of the data in a way which best explains the variance in the data. If a multivariate dataset is visualized as a set of coordinates in a high-dimensional data space, PCA supplies the user with a lower-dimensional picture when viewed from its (in some sense) most informative viewpoint.
PCA is mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix or singular value decomposition of a data matrix, usually after mean centering the data for each attribute. The results of a PCA are usually discussed in terms of component scores and loadings.
As a linear technique, PCA is built for several purposes: first, it enables us to decorrelate the original variables; second, to carry out data compression, where we pay decreasing attention to the numerical accuracy by which we encode the sequence of principal components; third, to reconstruct the original input data using a reduced number of variables according to a least-squares criterion; and fourth, to identify potential clusters in the data.
In certain applications, PCA can be misleading. PCA is heavily influenced when there are outliers in the data. In other situations, the linearity of PCA may be an obstacle to successful data reduction and compression.
 data
training data. If the sample size is larger than the data dimension and cor = false, SVD is employed for efficiency. Otherwise, eigen decomposition on covariance or correlation matrix is performed.
 cor
true to use the correlation matrix instead of the covariance matrix.
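The first steps of PCA, mean centering each attribute and forming the sample covariance matrix whose eigenvectors are the principal components, can be sketched in plain Scala (illustrative only; the eigendecomposition or SVD itself is performed inside Smile, and `PcaSketch`/`covariance` are hypothetical names):

```scala
object PcaSketch {
  // Mean-center each column of the n x p data matrix, then form the
  // sample covariance matrix S = X'X / (n - 1). The principal components
  // are the eigenvectors of S, ordered by decreasing eigenvalue.
  def covariance(data: Array[Array[Double]]): Array[Array[Double]] = {
    val n = data.length
    val p = data(0).length
    val mean = Array.tabulate(p)(j => data.map(_(j)).sum / n)
    val centered = data.map(row => row.zip(mean).map { case (v, m) => v - m })
    Array.tabulate(p, p) { (a, b) =>
      centered.map(row => row(a) * row(b)).sum / (n - 1)
    }
  }
}
```

Using the correlation matrix (cor = true) amounts to additionally dividing each centered column by its standard deviation before this step, which puts attributes with different scales on an equal footing.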

def ppca(data: Array[Array[Double]], k: Int): PPCA
Probabilistic principal component analysis. PPCA is a simplified factor analysis that employs a latent variable model with linear relationship:
y ∼ W * x + μ + ε
where the latent variables x ∼ N(0, I), the error (or noise) ε ∼ N(0, Ψ), and μ is the location term (mean). In PPCA, an isotropic noise model is used, i.e., the noise variances are constrained to be equal (Ψ_{i} = σ^{2}). Closed-form estimates of the above parameters can be obtained by the maximum likelihood method.
References:
 Michael E. Tipping and Christopher M. Bishop. Probabilistic Principal Component Analysis. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 61(3):611-622, 1999.
 data
training data.
 k
the number of principal components to learn.
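The generative model y ∼ W x + μ + ε above can be written out directly. A plain-Scala sketch (hypothetical helper, not part of Smile's API) that produces one observation from a latent point and a noise draw:

```scala
object PpcaSketch {
  // PPCA generative model: y = W * x + mu + eps, where x ~ N(0, I) is the
  // k-dimensional latent variable and eps ~ N(0, sigma^2 I) is isotropic noise.
  // w has one row per observed dimension; x, mu, eps are plain vectors.
  def generate(w: Array[Array[Double]], x: Array[Double],
               mu: Array[Double], eps: Array[Double]): Array[Double] =
    w.zip(mu).zip(eps).map { case ((row, m), e) =>
      row.zip(x).map { case (wi, xi) => wi * xi }.sum + m + e
    }
}
```

Fitting PPCA inverts this picture: given observed y's, maximum likelihood recovers W, μ, and σ^{2} in closed form from the sample mean and the top-k eigenstructure of the sample covariance.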

High-level Smile operators in Scala.