Packages

  • package root

    Smile (Statistical Machine Intelligence and Learning Engine) is a fast and comprehensive machine learning, NLP, linear algebra, graph, interpolation, and visualization system in Java and Scala. With advanced data structures and algorithms, Smile delivers state-of-art performance.

    Smile covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc.

    Definition Classes
    root
  • package smile
    Definition Classes
    root
  • package association

    Frequent item set mining and association rule mining. Association rule learning is a popular and well-researched method for discovering interesting relations between variables in large databases. Let I = {i1, i2,..., in} be a set of n binary attributes called items. Let D = {t1, t2,..., tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. An association rule is defined as an implication of the form X ⇒ Y where X, Y ⊆ I and X ∩ Y = Ø. The item sets X and Y are called antecedent (left-hand-side or LHS) and consequent (right-hand-side or RHS) of the rule, respectively. The support supp(X) of an item set X is defined as the proportion of transactions in the database which contain the item set. Note that the support of an association rule X ⇒ Y is supp(X ∪ Y). The confidence of a rule is defined as conf(X ⇒ Y) = supp(X ∪ Y) / supp(X). Confidence can be interpreted as an estimate of the probability P(Y | X), the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.

    For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy a burger. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements.
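
    To make the support and confidence definitions above concrete, here is a minimal plain-Scala sketch of supp and conf for the supermarket example; the transactions are made up purely for illustration and the snippet does not use the Smile association API.

      // Hypothetical toy database of transactions (sets of items).
      val transactions: Seq[Set[String]] = Seq(
        Set("onions", "potatoes", "burger"),
        Set("onions", "potatoes", "burger", "beer"),
        Set("onions", "beer"),
        Set("potatoes", "burger")
      )

      // supp(X): the proportion of transactions that contain the item set X.
      def supp(x: Set[String]): Double =
        transactions.count(t => x.subsetOf(t)).toDouble / transactions.size

      // conf(X => Y) = supp(X union Y) / supp(X), an estimate of P(Y | X).
      def conf(x: Set[String], y: Set[String]): Double =
        supp(x union y) / supp(x)

      println(supp(Set("onions", "potatoes")))                // 0.5
      println(conf(Set("onions", "potatoes"), Set("burger"))) // 1.0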

    Association rules are usually required to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time. Association rule generation is usually split up into two separate steps:

    • First, minimum support is applied to find all frequent item sets in a database (i.e. frequent item set mining).
    • Second, these frequent item sets and the minimum confidence constraint are used to form rules.

    Finding all frequent item sets in a database is difficult since it involves searching all possible item sets (item combinations). The set of possible item sets is the power set over I (the set of items) and has size 2^n - 1 (excluding the empty set, which is not a valid item set). Although the size of the power set grows exponentially in the number of items n in I, efficient search is possible using the downward-closure property of support (also called anti-monotonicity), which guarantees that all subsets of a frequent item set are also frequent, and thus that all supersets of an infrequent item set must be infrequent.

    In practice, we may consider only the frequent item sets that have the maximum number of items, bypassing all their sub item sets. An item set is maximal frequent if none of its immediate supersets is frequent.

    For a maximal frequent item set, even though we know that all of its sub item sets are frequent, we don't know their actual support, which is very important for finding the association rules within the item sets. If the final goal is association rule mining, we would like to discover closed frequent item sets. An item set is closed if none of its immediate supersets has the same support as the item set.

    Some well-known algorithms of frequent item set mining are Apriori, Eclat and FP-Growth. Apriori is the best-known algorithm to mine association rules. It uses a breadth-first search strategy to count the support of item sets and uses a candidate generation function which exploits the downward closure property of support. Eclat is a depth-first search algorithm using set intersection.

    FP-growth (frequent pattern growth) uses an extended prefix-tree (FP-tree) structure to store the database in a compressed form. FP-growth adopts a divide-and-conquer approach to decompose both the mining tasks and the databases. It uses a pattern fragment growth method to avoid the costly process of candidate generation and testing used by Apriori.

    References:
    • R. Agrawal, T. Imielinski and A. Swami. Mining Association Rules Between Sets of Items in Large Databases, SIGMOD, 207-216, 1993.
    • Rakesh Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules in large databases. VLDB, 487-499, 1994.
    • Mohammed J. Zaki. Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12(3):372-390, 2000.
    • Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. Mining frequent patterns without candidate generation. Data Mining and Knowledge Discovery 8:53-87, 2004.
    Definition Classes
    smile
  • package cas

    Computer algebra system. A computer algebra system (CAS) has the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists.

    The symbolic manipulations supported include:

    • simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints
    • substitution of symbols or numeric values for certain expressions
    • change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc.
    • partial and total differentiation
    • matrix operations including products, inverses, etc.
    Definition Classes
    smile
  • package classification

    Classification algorithms. In machine learning and pattern recognition, classification refers to an algorithmic procedure for assigning a given input object into one of a given number of categories. The input object is formally termed an instance, and the categories are termed classes.

    The instance is usually described by a vector of features, which together constitute a description of all known characteristics of the instance. Typically, features are either categorical (also known as nominal, i.e. consisting of one of a set of unordered items, such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or "O"), ordinal (consisting of one of a set of ordered items, e.g. "large", "medium" or "small"), integer-valued (e.g. a count of the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure).

    Classification normally refers to a supervised procedure, i.e. a procedure that produces an inferred function to predict the output value of new instances based on a training set of pairs consisting of an input object and a desired output value. The inferred function is called a classifier if the output is discrete or a regression function if the output is continuous.

    The inferred function should predict the correct output value for any valid input object. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way.

    A wide range of supervised learning algorithms is available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems. The most widely used learning algorithms are AdaBoost and gradient boosting, support vector machines, linear regression, linear discriminant analysis, logistic regression, naive Bayes, decision trees, k-nearest neighbor algorithm, and neural networks (multilayer perceptron).

    If the feature vectors include features of many different kinds (discrete, discrete ordered, counts, continuous values), some algorithms cannot be easily applied. Many algorithms, including linear regression, logistic regression, neural networks, and nearest neighbor methods, require that the input features be numerical and scaled to similar ranges (e.g., to the [-1,1] interval). Methods that employ a distance function, such as nearest neighbor methods and support vector machines with Gaussian kernels, are particularly sensitive to this. An advantage of decision trees (and boosting algorithms based on decision trees) is that they easily handle heterogeneous data.

    If the input features contain redundant information (e.g., highly correlated features), some learning algorithms (e.g., linear regression, logistic regression, and distance based methods) will perform poorly because of numerical instabilities. These problems can often be solved by imposing some form of regularization.

    If each of the features makes an independent contribution to the output, then algorithms based on linear functions (e.g., linear regression, logistic regression, linear support vector machines, naive Bayes) generally perform well. However, if there are complex interactions among features, then algorithms such as nonlinear support vector machines, decision trees and neural networks work better. Linear methods can also be applied, but the engineer must manually specify the interactions when using them.

    There are several major issues to consider in supervised learning:

    • Features: The accuracy of the inferred function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality; but should contain enough information to accurately predict the output. There are many algorithms for feature selection that seek to identify the relevant features and discard the irrelevant ones. More generally, dimensionality reduction may seek to map the input data into a lower dimensional space prior to running the supervised learning algorithm.
    • Overfitting: Overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model which has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data. The potential for overfitting depends not only on the number of parameters and data but also the conformability of the model structure with the data shape, and the magnitude of model error compared to the expected level of noise or error in the data. In order to avoid overfitting, it is necessary to use additional techniques (e.g. cross-validation, regularization, early stopping, pruning, Bayesian priors on parameters or model comparison), that can indicate when further training is not resulting in better generalization. The basis of some techniques is either (1) to explicitly penalize overly complex models, or (2) to test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
    • Regularization: Regularization involves introducing additional information in order to solve an ill-posed problem or to prevent over-fitting. This information is usually of the form of a penalty for complexity, such as restrictions for smoothness or bounds on the vector space norm. A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.
    • Bias-variance tradeoff: Mean squared error (MSE) can be broken down into two components: variance and squared bias, known as the bias-variance decomposition. Thus in order to minimize the MSE, we need to minimize both the bias and the variance. However, this is not trivial. Therefore, there is a tradeoff between bias and variance.
    Definition Classes
    smile
  • package clustering

    Clustering analysis. Clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields.

    Hierarchical algorithms find successive clusters using previously established clusters. These algorithms usually are either agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.

    Partitional algorithms typically determine all clusters at once, but can also be used as divisive algorithms in the hierarchical clustering. Many partitional clustering algorithms require the specification of the number of clusters to produce in the input data set, prior to execution of the algorithm. Barring knowledge of the proper value beforehand, the appropriate value must be determined, a problem on its own for which a number of techniques have been developed.

    Density-based clustering algorithms are devised to discover arbitrary-shaped clusters. In this approach, a cluster is regarded as a region in which the density of data objects exceeds a threshold.

    Subspace clustering methods look for clusters that can only be seen in a particular projection (subspace, manifold) of the data. These methods can thus ignore irrelevant attributes. The general problem is also known as correlation clustering, while the special case of axis-parallel subspaces is also known as two-way clustering, co-clustering or biclustering in bioinformatics: in these methods not only the objects but also the features of the objects are clustered, i.e., if the data is represented in a data matrix, the rows and columns are clustered simultaneously. They do not, however, usually work with arbitrary feature combinations as general subspace methods do.

    Definition Classes
    smile
  • package data

    Data manipulation functions.

    Definition Classes
    smile
  • package feature
    Definition Classes
    smile
  • package manifold

    Manifold learning finds a low-dimensional basis for describing high-dimensional data. Manifold learning is a popular approach to nonlinear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high; though each data point consists of perhaps thousands of features, it may be described as a function of only a few underlying parameters. That is, the data points are actually samples from a low-dimensional manifold that is embedded in a high-dimensional space. Manifold learning algorithms attempt to uncover these parameters in order to find a low-dimensional representation of the data.

    Some prominent approaches are locally linear embedding (LLE), Hessian LLE, Laplacian eigenmaps, and LTSA. These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA. More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel using semidefinite programming. The most prominent example of such a technique is maximum variance unfolding (MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space), while maximizing the distances between points that are not nearest neighbors.

    An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include classical multidimensional scaling (which is identical to PCA), Isomap (which uses geodesic distances in the data space), diffusion maps (which uses diffusion distances in the data space), t-SNE (which minimizes the divergence between distributions over pairs of points), and curvilinear component analysis.

    Definition Classes
    smile
  • package math

    Mathematical and statistical functions.

    Definition Classes
    smile
  • package nlp

    Natural language processing.

    Definition Classes
    smile
  • package plot

    Data visualization.

    Definition Classes
    smile
  • package regression

    Regression analysis. Regression analysis includes any techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables. Therefore, the estimation target is a function of the independent variables called the regression function. Regression analysis is widely used for prediction and forecasting.

    Definition Classes
    smile
  • package sequence

    Sequence labeling algorithms.

    Definition Classes
    smile
  • package util

    Utility functions.

    Definition Classes
    smile
  • package validation

    Model validation.

    Definition Classes
    smile
  • package wavelet

    A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. Like the fast Fourier transform (FFT), the discrete wavelet transform (DWT) is a fast, linear operation that operates on a data vector whose length is an integer power of 2, transforming it into a numerically different vector of the same length. The wavelet transform is invertible and in fact orthogonal. Both FFT and DWT can be viewed as a rotation in function space.
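
    As a concrete illustration of the averaging-and-differencing idea behind the DWT, here is a minimal single-level Haar step in plain Scala; it is an illustrative sketch, not the Smile wavelet API.

      // A single-level Haar step: pairwise sums and differences scaled by 1/sqrt(2),
      // giving the approximation and detail coefficients. Input length must be even.
      def haarStep(x: Array[Double]): (Array[Double], Array[Double]) = {
        val s = math.sqrt(2.0)
        val approx = x.grouped(2).map(p => (p(0) + p(1)) / s).toArray
        val detail = x.grouped(2).map(p => (p(0) - p(1)) / s).toArray
        (approx, detail)
      }

      // The full DWT of a length-2^n vector recursively applies haarStep to the
      // approximation part; the transform preserves length and is orthogonal.
      val (approx, detail) = haarStep(Array(4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0))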

    Definition Classes
    smile

package manifold

Manifold learning finds a low-dimensional basis for describing high-dimensional data. Manifold learning is a popular approach to nonlinear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high; though each data point consists of perhaps thousands of features, it may be described as a function of only a few underlying parameters. That is, the data points are actually samples from a low-dimensional manifold that is embedded in a high-dimensional space. Manifold learning algorithms attempt to uncover these parameters in order to find a low-dimensional representation of the data.

Some prominent approaches are locally linear embedding (LLE), Hessian LLE, Laplacian eigenmaps, and LTSA. These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA. More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel using semidefinite programming. The most prominent example of such a technique is maximum variance unfolding (MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space), while maximizing the distances between points that are not nearest neighbors.

An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include classical multidimensional scaling (which is identical to PCA), Isomap (which uses geodesic distances in the data space), diffusion maps (which uses diffusion distances in the data space), t-SNE (which minimizes the divergence between distributions over pairs of points), and curvilinear component analysis.


Value Members

  1. def isomap(data: Array[Array[Double]], k: Int, d: Int = 2, CIsomap: Boolean = true): IsoMap

    Isometric feature mapping. Isomap is a widely used low-dimensional embedding method, where geodesic distances on a weighted graph are incorporated with the classical multidimensional scaling. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points. Isomap is highly efficient and generally applicable to a broad range of data sources and dimensionalities.

    To be specific, the classical MDS performs low-dimensional embedding based on the pairwise distance between data points, which is generally measured using straight-line Euclidean distance. Isomap is distinguished by its use of the geodesic distance induced by a neighborhood graph embedded in the classical scaling. This is done to incorporate manifold structure in the resulting embedding. Isomap defines the geodesic distance to be the sum of edge weights along the shortest path between two nodes. The top n eigenvectors of the geodesic distance matrix represent the coordinates in the new n-dimensional Euclidean space.

    The connectivity of each data point in the neighborhood graph is defined as its nearest k Euclidean neighbors in the high-dimensional space. This step is vulnerable to "short-circuit errors" if k is too large with respect to the manifold structure or if noise in the data moves the points slightly off the manifold. Even a single short-circuit error can alter many entries in the geodesic distance matrix, which in turn can lead to a drastically different (and incorrect) low-dimensional embedding. Conversely, if k is too small, the neighborhood graph may become too sparse to approximate geodesic paths accurately.

    This function also implements C-Isomap, which magnifies the regions of high density and shrinks the regions of low density of data points in the manifold. The edge weights that are fed into multidimensional scaling (MDS) are modified accordingly, with everything else remaining unaffected.

    References:
    • J. B. Tenenbaum, V. de Silva and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science 290(5500):2319-2323, 2000.
    • data: the data set.
    • k: k-nearest neighbor.
    • d: the dimension of the manifold.
    • CIsomap: C-Isomap algorithm if true, otherwise standard algorithm.
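
    A minimal usage sketch, assuming these package-object functions are brought into scope with import smile.manifold._; the noisy helix data below is made up purely for illustration.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Toy data: 200 points along a noisy helix (a 1-D manifold) embedded in 3-D space.
      val data: Array[Array[Double]] = Array.tabulate(200) { i =>
        val t = i / 200.0 * 4 * math.Pi
        Array(math.cos(t) + 0.05 * rng.nextGaussian(),
              math.sin(t) + 0.05 * rng.nextGaussian(),
              0.1 * t     + 0.05 * rng.nextGaussian())
      }

      // Standard Isomap over a 7-nearest-neighbor graph, embedding into 2 dimensions.
      val embedding = isomap(data, k = 7, d = 2, CIsomap = false)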

  2. def isomds(proximity: Array[Array[Double]], k: Int, tol: Double = 0.0001, maxIter: Int = 200): IsotonicMDS

    Kruskal's nonmetric MDS. In non-metric MDS, only the rank order of entries in the proximity matrix (not the actual dissimilarities) is assumed to contain the significant information. Hence, the distances of the final configuration should as far as possible be in the same rank order as the original data. Note that a perfect ordinal re-scaling of the data into distances is usually not possible. The relationship is typically found using isotonic regression.

    • proximity: the non-negative proximity matrix of dissimilarities. The diagonal should be zero and all other elements should be positive and symmetric.
    • k: the dimension of the projection.
    • tol: tolerance for stopping iterations.
    • maxIter: maximum number of iterations.
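
    A minimal usage sketch under the same assumption (import smile.manifold._); the tiny dissimilarity matrix is made up purely for illustration.

      import smile.manifold._

      // A made-up symmetric dissimilarity matrix over 4 objects with a zero diagonal.
      val proximity: Array[Array[Double]] = Array(
        Array(0.0, 1.0, 2.0, 3.0),
        Array(1.0, 0.0, 1.5, 2.5),
        Array(2.0, 1.5, 0.0, 1.0),
        Array(3.0, 2.5, 1.0, 0.0)
      )

      // Non-metric MDS into 2 dimensions; only the rank order of the entries matters.
      val nmds = isomds(proximity, k = 2, tol = 1e-4, maxIter = 200)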

  3. def laplacian(data: Array[Array[Double]], k: Int, d: Int = 2, t: Double = -1): LaplacianEigenmap

    Laplacian Eigenmap. Using the notion of the Laplacian of the nearest neighbor adjacency graph, Laplacian Eigenmap computes a low-dimensional representation of the dataset that optimally preserves local neighborhood information in a certain sense. The representation map generated by the algorithm may be viewed as a discrete approximation to a continuous map that naturally arises from the geometry of the manifold.

    The locality preserving character of the Laplacian Eigenmap algorithm makes it relatively insensitive to outliers and noise. It is also not prone to "short circuiting" as only the local distances are used.

    References:
    • Mikhail Belkin and Partha Niyogi. Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. NIPS, 2001.
    • data: the data set.
    • k: k-nearest neighbor.
    • d: the dimension of the manifold.
    • t: the smooth/width parameter of the heat kernel e^(-||x-y||^2 / t). Non-positive value means discrete weights.
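
    A minimal usage sketch, again assuming import smile.manifold._; the random data is illustrative only and has no real manifold structure.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data: 100 random points in 5 dimensions.
      val data = Array.fill(100)(Array.fill(5)(rng.nextDouble()))

      // 2-D Laplacian Eigenmap over a 10-nearest-neighbor graph with heat kernel width t = 25.0.
      // Leaving t at its default (-1) uses simple discrete (0/1) edge weights instead.
      val eigenmap = laplacian(data, k = 10, d = 2, t = 25.0)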

  4. def lle(data: Array[Array[Double]], k: Int, d: Int = 2): LLE

    Locally Linear Embedding. It has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms, and better results with many problems. LLE also begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describe the point as a linear combination of its neighbors. Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such that each point is still described with the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as various regions differ in sample densities.

    References:
    • Sam T. Roweis and Lawrence K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 290(5500):2323-2326, 2000.
    • data: the data set.
    • k: k-nearest neighbor.
    • d: the dimension of the manifold.
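
    A minimal usage sketch, assuming import smile.manifold._; the "swiss roll" data (a 2-D sheet rolled up in 3-D space) is made up for illustration.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data: 500 points sampled from a rolled-up 2-D sheet in 3-D space.
      val data = Array.tabulate(500) { i =>
        val t = 1.5 * math.Pi * (1 + 2.0 * i / 500)
        val h = 10 * rng.nextDouble()
        Array(t * math.cos(t), h, t * math.sin(t))
      }

      // Unroll the sheet into 2 dimensions using 12 nearest neighbors per point.
      val embedding = lle(data, k = 12, d = 2)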

  5. def mds(proximity: Array[Array[Double]], k: Int, positive: Boolean = false): MDS

    Classical multidimensional scaling, also known as principal coordinates analysis. Given a matrix of dissimilarities (e.g. pairwise distances), MDS finds a set of points in a low-dimensional space that well approximates the dissimilarities in the proximity matrix. We are not restricted to using a Euclidean distance metric. However, when Euclidean distances are used, MDS is equivalent to PCA.

    • proximity: the non-negative proximity matrix of dissimilarities. The diagonal should be zero and all other elements should be positive and symmetric. For a pairwise distance matrix, the entries should be the plain distances, not squared.
    • k: the dimension of the projection.
    • positive: if true, estimate an appropriate constant to be added to all the dissimilarities, apart from the self-dissimilarities, that makes the learning matrix positive semi-definite. The other formulation of the additive constant problem is as follows: if the proximity is measured on an interval scale, where there is no natural origin, then the dissimilarities do not correspond directly to distances in the Euclidean space used to represent the objects. In this case, we can estimate a constant c such that proximity + c may be taken as ratio data, and also possibly minimize the dimensionality of the Euclidean space required for representing the objects.
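
    A minimal usage sketch, assuming import smile.manifold._; it shows how a plain (not squared) pairwise Euclidean distance matrix might be built from made-up data before calling mds.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data: 50 random points in 4 dimensions.
      val data = Array.fill(50)(Array.fill(4)(rng.nextDouble()))

      // Plain Euclidean distance between two points.
      def dist(a: Array[Double], b: Array[Double]): Double =
        math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

      // Symmetric, zero-diagonal dissimilarity matrix.
      val proximity = Array.tabulate(50, 50)((i, j) => dist(data(i), data(j)))

      // Classical MDS (principal coordinates analysis) into 2 dimensions.
      val pcoa = mds(proximity, k = 2)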

  6. def sammon(proximity: Array[Array[Double]], k: Int, lambda: Double = 0.2, tol: Double = 0.0001, stepTol: Double = 0.001, maxIter: Int = 100): SammonMapping

    Sammon's mapping is an iterative technique for making interpoint distances in the low-dimensional projection as close as possible to the interpoint distances in the high-dimensional object. Two points close together in the high-dimensional space should appear close together in the projection, while two points far apart in the high-dimensional space should appear far apart in the projection. Sammon's mapping is a special case of metric least-squares multidimensional scaling.

    Ideally, when we project from a high-dimensional space to a low-dimensional space, the image would be geometrically congruent to the original figure. This is called an isometric projection. Unfortunately, it is rarely possible to isometrically project objects down into lower-dimensional spaces. Instead of trying to achieve equality between corresponding inter-point distances, we can minimize the difference between corresponding inter-point distances. This is one goal of Sammon's mapping. A second goal is to preserve the topology as well as possible by giving greater emphasis to smaller interpoint distances. Sammon's mapping has the advantage that whenever it is possible to isometrically project an object into a lower-dimensional space, it will be isometrically projected. But whenever an object cannot be projected down isometrically, Sammon's mapping projects it down so as to reduce the distortion in interpoint distances and to limit the change in the topology of the object.

    The projection cannot be solved in a closed form and may be found by an iterative algorithm such as gradient descent suggested by Sammon. Kohonen also provides a heuristic that is simple and works reasonably well.

    • proximity: the non-negative proximity matrix of dissimilarities. The diagonal should be zero and all other elements should be positive and symmetric.
    • k: the dimension of the projection.
    • lambda: initial value of the step size constant in diagonal Newton method.
    • tol: tolerance for stopping iterations.
    • stepTol: tolerance on step size.
    • maxIter: maximum number of iterations.
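
    A minimal usage sketch, assuming import smile.manifold._; it takes the same kind of symmetric, zero-diagonal dissimilarity matrix as mds and isomds, built here from made-up points.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data and the plain pairwise Euclidean distance matrix over it.
      val data = Array.fill(50)(Array.fill(4)(rng.nextDouble()))
      def dist(a: Array[Double], b: Array[Double]): Double =
        math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)
      val proximity = Array.tabulate(50, 50)((i, j) => dist(data(i), data(j)))

      // Sammon's mapping into 2 dimensions, keeping the default step size and tolerances.
      val projection = sammon(proximity, k = 2)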

  7. def tsne(X: Array[Array[Double]], d: Int = 2, perplexity: Double = 20.0, eta: Double = 200.0, iterations: Int = 1000): TSNE

    t-distributed stochastic neighbor embedding. t-SNE is a nonlinear dimensionality reduction technique that is particularly well suited for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points.

    References:
    • L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15(Oct):3221-3245, 2014.
    • L.J.P. van der Maaten and G.E. Hinton. Visualizing Non-Metric Similarities in Multiple Maps. Machine Learning 87(1):33-55, 2012.
    • L.J.P. van der Maaten. Learning a Parametric Embedding by Preserving Local Structure. In Proceedings of the Twelfth International Conference on Artificial Intelligence & Statistics (AI-STATS), JMLR W&CP 5:384-391, 2009.
    • L.J.P. van der Maaten and G.E. Hinton. Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9(Nov):2579-2605, 2008.
    • X: input data. If X is a square matrix, it is assumed to be the squared distance/dissimilarity matrix.
    • d: the dimension of the manifold.
    • perplexity: the perplexity of the conditional distribution.
    • eta: the learning rate.
    • iterations: the number of iterations.
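
    A minimal usage sketch, assuming import smile.manifold._; the data matrix is made up for illustration and, not being square, is treated as raw data rather than as a squared-distance matrix.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data matrix: 1000 samples with 50 features each.
      val x = Array.fill(1000)(Array.fill(50)(rng.nextDouble()))

      // 2-D t-SNE with the default perplexity (20), learning rate (200) and 1000 iterations.
      val embedding = tsne(x, d = 2, perplexity = 20.0, eta = 200.0, iterations = 1000)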

  8. def umap(data: Array[Array[Double]], k: Int = 15, d: Int = 2, iterations: Int = 0, learningRate: Double = 1.0, minDist: Double = 0.1, spread: Double = 1.0, negativeSamples: Int = 5, repulsionStrength: Double = 1.0): UMAP

    Uniform Manifold Approximation and Projection.

    UMAP is a dimension reduction technique that can be used for visualization similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data:

    • The data is uniformly distributed on a Riemannian manifold;
    • The Riemannian metric is locally constant (or can be approximated as such);
    • The manifold is locally connected.

    From these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure.

    • data: the input data.
    • k: k-nearest neighbors. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. Generally in the range 2 to 100.
    • d: the target embedding dimension. Defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
    • iterations: the number of iterations to optimize the low-dimensional representation. Larger values result in a more accurate embedding. Must be greater than 10. Choose a value based on the size of the input data, e.g. 200 for large data sets (1000+ samples) and 500 for small ones.
    • learningRate: the initial learning rate for the embedding optimization. Default 1.0.
    • minDist: the desired separation between close points in the embedding space. Smaller values result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values result in a more even dispersal of points. The value should be set no greater than, and relative to, the spread value, which determines the scale at which embedded points will be spread out. Default 0.1.
    • spread: the effective scale of embedded points. In combination with minDist, this determines how clustered/clumped the embedded points are. Default 1.0.
    • negativeSamples: the number of negative samples to select per positive sample in the optimization process. Increasing this value results in greater repulsive force being applied, greater optimization cost, but slightly better accuracy. Default 5.
    • repulsionStrength: the weight applied to negative samples in the low-dimensional embedding optimization. Values higher than one result in greater weight being given to negative samples. Default 1.0.
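
    A minimal usage sketch, assuming import smile.manifold._; the data is made up for illustration.

      import smile.manifold._
      import scala.util.Random

      val rng = new Random(0)
      // Made-up data: 2000 samples with 30 features each.
      val data = Array.fill(2000)(Array.fill(30)(rng.nextDouble()))

      // 2-D UMAP: k balances local versus global structure of the neighborhood graph,
      // while minDist and spread control how tightly the embedded points are packed.
      val embedding = umap(data, k = 15, d = 2, minDist = 0.1, spread = 1.0)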
