Quick Start
Smile is a fast and comprehensive machine learning system. With advanced data structures and algorithms, Smile delivers state-of-the-art performance. Smile is self-contained and requires only the Java standard library. Since v1.4, Smile may optionally leverage native BLAS/LAPACK libraries as well. It also provides high-level operators in Scala and an interactive shell. In practice, data scientists usually build models with high-level tools such as R, Matlab, SAS, etc. However, developers then have to spend a lot of time and energy incorporating these models into production systems, which are often implemented in general-purpose programming languages such as Java and Scala. With Smile, data scientists and developers can work in the same environment to build machine learning applications quickly!
Download
Get Smile from the releases page of the project website. The universal tarball is also available and can be used on Mac, Linux and Windows.
If you would like to build Smile from source, please first install Java 17, Scala 2.13 and SBT 1.0+. Then clone the repo and build the package:
$ git clone https://github.com/haifengl/smile.git
$ cd smile
$ sbt package
To build with Scala 3, run
$ sbt ++3.3.3 scala/package
To test the latest code, run the following
$ git pull
$ bin/smile.sh
which will build the system and enter the Smile shell in Scala. If you prefer Java, you may run
$ sbt shell/stage
$ cd shell/target/universal/stage
$ bin/jshell.sh
Shell
Smile comes with an interactive shell for Scala. In the home directory of Smile, type
$ bin/smile
to enter the shell, which is based on the Scala REPL.
If you prefer the Ammonite REPL, copy its jar to Smile's lib directory. Smile Shell will switch to Ammonite once restarted.
In the shell, you can run any valid Scala expression. Besides, all high-level Smile operators are predefined in the shell. By default, the shell uses up to 75% of the memory. If you need more memory to handle large data, use the option -J-Xmx or -XX:MaxRAMPercentage.
For example,
$ bin/smile -J-Xmx30G
You can also modify the configuration file ./conf/smile.ini
for the memory and other JVM settings.
Basics
When the shell starts, we should see something like the following:
..::''''::..
.;'' ``;.
.... :: :: :: ::
,;' .;: () ..: :: :: :: ::
::. ..:,:;.,:;. . :: .::::. :: .:' :: :: `:. ::
'''::, :: :: :: `:: :: ;: .:: :: : : ::
,:'; ::; :: :: :: :: :: ::,::''. :: `:. .:' ::
`:,,,,;;' ,;; ,;;, ;;, ,;;, ,;;, `:,,,,:' `;..``::::''..;'
``::,,,,::''
Welcome to Smile Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the Smile Shell
Version 4.0.0, Scala 2.13.14, SBT 1.9.9 built at 2024-07-03 09:52:24.404-0400
===============================================================================
smile>
The smile> prompt indicates that the shell is waiting for you to enter expressions. To get help on Smile's high-level operators, type help. You can also get detailed information on each operator by typing help("command"), e.g. help("svm"). To exit the shell, type exit.
In the shell, type demo to bring up the demo window, which showcases Smile's various machine learning capabilities. You can also type benchmark() to see Smile's performance on a couple of test datasets. You can run a particular benchmark with benchmark("test name"), where the test name could be "airline", "usps", etc.
On startup, the shell analyzes the classpath and creates a database of every visible package and path. This is available via tab-completion analogous to the path-completion available in most shells. If you type a partial path, tab will complete as far as it can and show you your options if there is more than one.
smile> smile.classification.r
randomForest rbfnet rda
Calculator
We can run any valid Scala expression in the shell. In the simplest case, you can use it as a calculator.
smile> "Hello, World"
res0: String = Hello, World
smile> 2
res1: Int = 2
smile> 2+3
res2: Int = 5
We can also define variables and reuse them.
smile> val x = 2 + 3
x: Int = 5
smile> print(x)
5
smile> val y = 2 * (x + 1)
y: Int = 12
Functions can be defined too. As Scala is a functional language, functions are first-class citizens, just like other values.
smile> def sq(x: Double) = x * x
sq: (x: Double)Double
smile> sq(y)
res4: Double = 144.0
Scala is a powerful language that fuses object-oriented and functional programming. Although you don't need to know all the bells and whistles of Scala to use Smile, we strongly recommend that you learn some basics.
Script
We may also run Smile code in a script. The script examples/iris.sc contains the following Smile code:
val data = read.arff(Paths.getTestData("weka/iris.arff"))
println(data)
val formula = "class" ~ "."
val rf = smile.classification.randomForest(formula, data)
println(s"OOB error = %.2f%%" format 100 * rf.error)
It can be run directly from the shell:
$ bin/smile examples/iris.sc
In this example, we use Fisher's Iris data in the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function read.arff returns a DataFrame object. The formula "class" ~ "." specifies that the column "class" is the class label while the remaining columns are predictors. Finally, we train a random forest with default parameters and print out its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.
Smile provides an integration with JShell, which is available since Java 9. In the home directory of Smile, type
$ bin/jshell.sh
to enter JShell with the Smile libraries on the class path. In the shell, you can run any valid Java expression. In the simplest case, you can use it as a calculator. If you need more memory to handle large data, use the option -R-Xmx. For example,
$ bin/jshell.sh -R-Xmx30G
Basics
When the shell starts, we should see something like the following:
..::''''::..
.;'' ``;.
.... :: :: :: ::
,;' .;: () ..: :: :: :: ::
::. ..:,:;.,:;. . :: .::::. :: .:' :: :: `:. ::
'''::, :: :: :: `:: :: ;: .:: :: : : ::
,:'; ::; :: :: :: :: :: ::,::''. :: `:. .:' ::
`:,,,,;;' ,;; ,;;, ;;, ,;;, ,;;, `:,,,,:' `;..``::::''..;'
``::,,,,::''
| Welcome to Smile -- Version 4.0.0
===============================================================================
| Welcome to JShell -- Version 21.0.3
| For an introduction type: /help intro
smile>
We pre-import Smile's definitions in JShell. To exit the shell, type /exit.
Calculator
With local variable type inference, it is easy to use JShell as a calculator.
smile> "Hello, World"
$2 ==> "Hello, World"
smile> 2
$3 ==> 2
smile> 2+3
$4 ==> 5
We can also define variables and reuse them.
smile> var x = 2 + 3
x ==> 5
smile> var y = 2 * (x + 1)
y ==> 12
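Methods can be defined as well, similar to the functions in the Scala shell. For instance, a quick sketch (the exact feedback and result numbering in your session may differ):
smile> double sq(double x) { return x * x; }
|  created method sq(double)
smile> sq(y)
$7 ==> 144.0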
Script
We may also run Smile code in a script. The script examples/iris.jsh contains the following Smile code:
import smile.classification.RandomForest;
import smile.data.formula.Formula;
import smile.io.Read;
import smile.util.Paths;
var data = Read.arff(Paths.getTestData("weka/iris.arff"));
System.out.println(data);
var formula = Formula.lhs("class");
var rf = RandomForest.fit(formula, data);
System.out.println(rf.metrics());
It can be run directly from the shell:
$ bin/jshell.sh examples/iris.jsh
In this example, we use Fisher's Iris data in the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function Read.arff returns a DataFrame object. The formula Formula.lhs("class") specifies that the column "class" is the class label while the remaining columns are predictors. Finally, we train a random forest with default parameters and print out its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.
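For example, a minimal sketch that reuses the rf and data variables from the script above to score the first sample and obtain its posterior probabilities:
var posteriori = new double[3];                     // the iris data has 3 classes
var label = rf.predict(data.get(0), posteriori);    // predicted class of the first sample
System.out.println(label);
System.out.println(java.util.Arrays.toString(posteriori));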
Smile provides an integration with Kotlin REPL. In the home directory of Smile, type
$ bin/kshell.sh
to enter the Kotlin REPL with the Smile libraries on the class path. In the shell, you can run any valid Kotlin expression. In the simplest case, you can use it as a calculator. If you need more memory to handle large data, use the option -J-Xmx. For example,
$ bin/kshell.sh -J-Xmx30G
Basics
When the shell starts, we should see something like the following:
Welcome to Kotlin version 2.0.0 (JRE 21.0.3+7-LTS-152)
Type :help for help, :quit for quit
>>>
To exit the REPL, type :quit. Unlike the Smile shell, the Kotlin REPL does not pre-import any Smile definitions.
Calculator
With local variable type inference, it is easy to use the Kotlin REPL as a calculator.
>>> "Hello, World"
res0: kotlin.String = Hello, World
>>> 2
res1: kotlin.Int = 2
>>> 2+3
res2: kotlin.Int = 5
We can also define variables and reuse them.
>>> var x = 2 + 3
>>> var y = 2 * (x + 1)
>>> y
res13: kotlin.Int = 12
Script
We may also run Smile code in a script. The script examples/iris.kts contains the following Smile code:
import smile.*
import smile.classification.*
import smile.data.formula.Formula
import smile.util.Paths
val data = read.arff(Paths.getTestData("weka/iris.arff"))
println(data)
val formula = Formula.lhs("class")
val rf = randomForest(formula, data)
println(rf.metrics())
It can be run directly from the shell:
$ bin/kshell.sh -Xuse-fir-lt=false -script examples/iris.kts
In this example, we use Fisher's Iris data in the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function read.arff returns a DataFrame object. The formula Formula.lhs("class") specifies that the column "class" is the class label while the remaining columns are predictors. Finally, we train a random forest with default parameters and print out its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.
Training and Inference CLI
A lesser-known feature of Smile Shell is that it can be used for training and inference from the command line (CLI).
$ bin/smile train
Smile 4.0.0
Usage: smile train [random_forest|gradient_boost|adaboost|cart|logistic|fisher|lda|qda|rda|mlp|svm|rbf|ols|lasso|ridge|elastic_net|gaussian_process] [options]
--formula <class ~ .> The model formula
--data <file> The training data file
--test <file> The optional test data file
--model <file> The model file to save
--format <csv,header=true,delimiter=\t,comment=#,escape=\,quote=">
The data file format
--kfold <value> The k-fold cross validation
--round <value> The number of rounds of repeated cross validation
--ensemble Ensemble cross validation models
--seed <value> The random number generator seed
Command: random_forest [options]
Random Forest
--regression To train a regression model
--trees <value> The number of trees
--mtry <value> The number of features to train node split
--split <GINI, ENTROPY, CLASSIFICATION_ERROR>
The split rule
--max_depth <value> The maximum tree depth
--max_nodes <value> The maximum number of leaf nodes
--node_size <value> The minimum leaf node size
--sampling <value> The sampling rate
--class_weight <value> The class weights
Command: gradient_boost [options]
Gradient Boosting
--regression To train a regression model
--trees <value> The number of trees
--shrinkage <value> The shrinkage parameter in (0, 1] controls the learning rate
--max_depth <value> The maximum tree depth
--max_nodes <value> The maximum number of leaf nodes
--node_size <value> The minimum leaf node size
--sampling <value> The sampling rate
Command: adaboost [options]
AdaBoost
--trees <value> The number of trees
--max_depth <value> The maximum tree depth
--max_nodes <value> The maximum number of leaf nodes
--node_size <value> The minimum leaf node size
Command: cart [options]
Classification and Regression Tree
--regression To train a regression model
--split <GINI, ENTROPY, CLASSIFICATION_ERROR>
The split rule
--max_depth <value> The maximum tree depth
--max_nodes <value> The maximum number of leaf nodes
--node_size <value> The minimum leaf node size
Command: logistic [options]
Logistic Regression
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--lambda <value> The regularization on linear weights
--iterations <value> The maximum number of iterations
--tolerance <value> The tolerance to stop iterations
Command: fisher [options]
Fisher Linear Discriminant
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--dimension <value> The dimensionality of mapped space
--tolerance <value> The tolerance if a covariance matrix is singular
Command: lda [options]
Linear Discriminant Analysis
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--priori <value> The priori probability of each class
--tolerance <value> The tolerance if a covariance matrix is singular
Command: qda [options]
Quadratic Discriminant Analysis
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--priori <value> The priori probability of each class
--tolerance <value> The tolerance if a covariance matrix is singular
Command: rda [options]
Regularized Discriminant Analysis
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--alpha <value> The regularization factor in [0, 1] allows a continuum of models between LDA and QDA
--priori <value> The priori probability of each class
--tolerance <value> The tolerance if a covariance matrix is singular
Command: mlp [options]
Multilayer Perceptron
--regression To train a regression model
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--layers <ReLU(100)|Sigmoid(30)>
The neural network layers
--epochs <value> The number of training epochs
--mini_batch <value> The mini-batch size
--learning_rate <0.01, linear(0.01, 10000, 0.001), piecewise(...), polynomial(...), inverse(...), exp(...)>
The learning rate schedule
--momentum <value> The momentum schedule
--weight_decay <value> The weight decay
--clip_norm <value> The gradient clipping norm
--clip_value <value> The gradient clipping value
--rho <value> RMSProp rho
--epsilon <value> RMSProp epsilon
Command: svm [options]
Support Vector Machine
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--kernel <value> The kernel function
--C <value> The soft margin penalty parameter
--epsilon <value> The parameter of epsilon-insensitive hinge loss
--ovr One vs Rest strategy for multiclass classification
--ovo One vs One strategy for multiclass classification
--tolerance <value> The tolerance of convergence test
Command: rbf [options]
Radial Basis Function Network
--regression To train a regression model
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--neurons <value> The number of neurons (radial basis functions)
--normalize Normalized RBF network
Command: ols [options]
Ordinary Least Squares
--method <qr, svd> The fitting method
--stderr Compute the standard errors of the estimate of parameters.
--recursive Recursive least squares
Command: lasso [options]
LASSO - Least Absolute Shrinkage and Selection Operator
--lambda <value> The regularization on linear weights
--iterations <value> The maximum number of iterations
--tolerance <value> The tolerance to stop iterations (relative target duality gap)
Command: ridge [options]
Ridge Regression
--lambda <value> The regularization on linear weights
Command: elastic_net [options]
Elastic Net
--lambda1 <value> The L1 regularization on linear weights
--lambda2 <value> The L2 regularization on linear weights
--iterations <value> The maximum number of iterations
--tolerance <value> The tolerance to stop iterations (relative target duality gap)
Command: gaussian_process [options]
Gaussian Process Regression
--transform <standardizer, winsor(0.01,0.99), minmax, MaxAbs, L1, L2, Linf>
The feature transformation
--kernel <value> The kernel function
--noise <value> The noise variance
--normalize Normalize the response variable
--iterations <value> The maximum number of HPO iterations
--tolerance <value> The stopping tolerance for HPO
$ bin/smile predict
Smile 4.0.0
Usage: smile predict [options]
--model <value> The model file
--data <value> The data file
--format <value> The data file format/schema
--probability Output the posteriori probabilities for soft classifier
$ bin/smile serve
Smile 4.0.0
Usage: smile serve [options]
--model <value> The model file
--probability Output the posteriori probabilities for soft classifier
To train a model, one should specify the data file, the output model file, the machine learning algorithm and its hyperparameters, and the model formula. Once training is done, the model is saved to the specified path and the training metrics are printed on the console. If the optional test data is provided, the validation metrics are computed and displayed as well.
$ bin/smile train random_forest --data data/weka/iris.arff --formula "class ~ ." --model iris_random_forest.model
[main] INFO smile.io.Arff - Read ARFF relation iris
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 88.89%
[ForkJoinPool.commonPool-worker-2] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 95.35%
[ForkJoinPool.commonPool-worker-1] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 96.67%
...
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 92.73%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 94.44%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 92.98%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 92.31%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 95.00%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 96.30%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 89.47%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 92.98%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 92.45%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 96.30%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 89.83%
[ForkJoinPool.commonPool-worker-3] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 90.57%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 94.64%
[main] INFO smile.classification.RandomForest - Decision tree OOB accuracy: 97.92%
Training metrics: {
fit time: 191.678 ms,
score time: 17.059 ms,
validation data size: 150,
error: 6,
accuracy: 96.00%,
cross entropy: 0.1316
}
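The global options and the algorithm-specific options shown in the usage above can be combined on the same command line. For instance, a sketch that trains 500 trees with 5-fold cross validation:
$ bin/smile train random_forest --data data/weka/iris.arff --formula "class ~ ." \
    --trees 500 --kfold 5 --model iris_random_forest.model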
To run batch inference on a file, run the smile predict command with the model file and data file path. In this example, we also specify the optional flag --probability to compute the posterior probabilities. If you don't need them, simply omit this option.
$ bin/smile predict --model iris_random_forest.model --data data/weka/iris.arff --probability
0 [0.9599005925578108, 0.0205362892965633, 0.01956311814562593]
0 [0.9591845549270528, 0.020962057229598156, 0.01985338784334913]
0 [0.959796981740008, 0.02072034636162096, 0.019482671898371082]
0 [0.959796981740008, 0.02072034636162096, 0.019482671898371082]
0 [0.9598769149423004, 0.020548128104318476, 0.0195749569533811]
0 [0.9583053192555514, 0.021164037082801724, 0.020530643661646933]
0 [0.9598769149423004, 0.020548128104318476, 0.0195749569533811]
0 [0.9598769149423004, 0.020548128104318476, 0.0195749569533811]
0 [0.9591845549270528, 0.020962057229598156, 0.01985338784334913]
0 [0.959796981740008, 0.02072034636162096, 0.019482671898371082]
0 [0.9583053192555514, 0.021164037082801724, 0.020530643661646933]
0 [0.9598769149423004, 0.020548128104318476, 0.0195749569533811]
0 [0.9591845549270528, 0.020962057229598156, 0.01985338784334913]
0 [0.9591845549270528, 0.020962057229598156, 0.01985338784334913]
0 [0.9027507265438325, 0.07089506779135216, 0.026354205664815347]
0 [0.911215084168623, 0.0632270620116508, 0.025557853819726185]
0 [0.9583053192555514, 0.021164037082801724, 0.020530643661646933]
0 [0.9599005925578108, 0.0205362892965633, 0.01956311814562593]
0 [0.911215084168623, 0.0632270620116508, 0.025557853819726185]
...
It is also easy to create an endpoint to serve online requests.
$ bin/smile serve --model iris_random_forest.model --probability
[smile-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Smile online at http://localhost:8080/v1/infer
Press RETURN to stop...
The endpoint is at /v1/infer. Here is an example of how to make an inference request.
$ curl -X POST http://localhost:8080/v1/infer -H "Content-Type: application/json" \
-d '{
"sepallength": 5.1,
"sepalwidth": 3.5,
"petallength": 1.4,
"petalwidth": 0.2
}'
[{"class":0,"probability":[0.9599005925578108,0.0205362892965633,0.01956311814562593]}]
To infer on multiple samples, simply provide a JSON array or JSON Lines (JSONL) in the request body. CSV is also supported.
$ curl -X POST http://localhost:8080/v1/infer -H "Content-Type: application/json" \
-d '{"sepallength": 5.1, "sepalwidth": 3.5, "petallength": 1.4,"petalwidth": 0.2}
{"sepallength": 6.3, "sepalwidth": 3.3, "petallength": 6.0,"petalwidth": 2.5}'
[{"class":0,"probability":[0.9599005925578108,0.0205362892965633,0.01956311814562593]},
{"class":2,"probability":[0.023727657781681327,0.051035220743102516,0.9252371214752161]}]
In fact, the Smile serving endpoint is an end-to-end streaming API that applies back pressure throughout the entire stack. It can process the request body (e.g., a JSON array or CSV stream) element by element and render the response immediately, without waiting for the rest of the inference to complete. Therefore, it is safe to send very large requests (multi-GB) to the endpoint!
$ for i in {1..1000}; do tail -n 153 data/weka/iris.arff | head -n 150 >> iris.txt; done
$ cat iris.txt | curl -H "Content-Type: text/csv" -X POST --data-binary @- http://localhost:8080/v1/infer?format=csv
By default, SmileServe binds to localhost:8080. If you prefer a different port and/or want to expose the server to other hosts, you may set the binding interface and port with -J-Dakka.http.server.interface=0.0.0.0 and -J-Dakka.http.server.port=8000, for example.
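Putting it together, a sketch of exposing the endpoint on all interfaces at port 8000:
$ bin/smile serve --model iris_random_forest.model \
    -J-Dakka.http.server.interface=0.0.0.0 -J-Dakka.http.server.port=8000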
Notebooks
You can also use Smile in your favorite notebook. We recommend JupyterLab and provide jupyterlab.sh to set up a conda environment of JupyterLab for Smile with kernels for Scala and Kotlin. When you run jupyterlab.sh for the first time, it will set up the environment automatically. You can update the environment later with the --update option when needed.
In Scala notebooks, it is helpful to add the following code to the notebook. We provide many notebook examples in the notebooks directory.
import $ivy.`com.github.haifengl::smile-scala:4.0.0`
import scala.language.postfixOps
import org.apache.commons.csv.CSVFormat
import smile._
import smile.util._
import smile.math._
import smile.math.MathEx.{log2, logistic, factorial, lfactorial, choose, lchoose, random, randomInt, permutate, c, cbind, rbind, sum, mean, median, q1, q3, `var` => variance, sd, mad, min, max, whichMin, whichMax, unique, dot, distance, pdist, KullbackLeiblerDivergence => kld, JensenShannonDivergence => jsd, cov, cor, spearman, kendall, norm, norm1, norm2, normInf, standardize, normalize, scale, unitize, unitize1, unitize2, root}
import smile.math.distance._
import smile.math.kernel._
import smile.math.matrix._
import smile.math.matrix.Matrix._
import smile.math.rbf._
import smile.stat.distribution._
import smile.data._
import smile.data.formula._
import smile.data.measure._
import smile.data.`type`._
import smile.json._
import smile.interpolation._
import smile.validation._
import smile.association._
import smile.base.cart.SplitRule
import smile.base.mlp._
import smile.base.rbf.RBF
import smile.classification._
import smile.regression.{ols, ridge, lasso, svr, gpr}
import smile.feature._
import smile.clustering._
import smile.vq._
import smile.manifold._
import smile.mds._
import smile.sequence._
import smile.projection._
import smile.nlp._
import smile.wavelet._
To plot data with the Swing-based functions in a notebook, run the code below first.
import smile.plot.swing._
import smile.plot.show
import smile.plot.Render._
To use the Vega-based plot functions in a notebook, run the code below instead.
import smile.plot.vega._
import smile.plot.show
import smile.plot.Render._
A Gentle Example
This example shows how to use Smile for predictive modeling from Java and Scala code. First, let's load the data. Smile provides parsers for popular data formats, such as Parquet, Avro, Arrow, SAS7BDAT, Weka's ARFF files, LibSVM's file format, delimited text files, JSON, and binary sparse data. These classes are in the package smile.io. In the following example, we use the ARFF parser to load the weather dataset:
import smile.io.*;
var weather = Read.arff("data/weka/weather.nominal.arff");
import smile.io._
val weather = read.arff("data/weka/weather.nominal.arff")
Most Smile data parsers return a DataFrame object, which is immutable and contains a fixed number of named columns. We can also parse plain delimited text files, and the parser automatically infers the schema. Below, we load the USPS zip code handwriting dataset from a whitespace-delimited text file.
import org.apache.commons.csv.CSVFormat;
var format = CSVFormat.DEFAULT.withDelimiter(' ');
var zipTrain = Read.csv("data/usps/zip.train", format);
var zipTest = Read.csv("data/usps/zip.test", format);
val zipTrain = read.csv("data/usps/zip.train", delimiter = " ", header = false)
val zipTest = read.csv("data/usps/zip.test", delimiter = " ", header = false)
Because this data doesn't have a header line, the parser will assign V1, V2, ... as the column names. In particular, the first column (V1) is the class label.
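To double-check what the parser inferred, you can print the schema; in Java, for instance:
System.out.println(zipTrain.schema());    // column names and inferred data types
System.out.println(zipTrain);             // prints a preview of the data frame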
Smile implements a variety of classification and regression algorithms. In what follows, we train a random forest model on the USPS data. Random forest is an ensemble classifier that consists of many decision trees and outputs the majority vote of the individual trees. The method combines the bagging idea with random selection of features.
import smile.classification.*;
import smile.data.formula.Formula;
var formula = Formula.lhs("V1");
var prop = new java.util.Properties();
prop.setProperty("smile.random.forest.trees", "200");
var forest = RandomForest.fit(formula, zipTrain, prop);
System.out.println(forest.metrics());
val formula: Formula = "V1" ~ "."
val forest = randomForest(formula, zipTrain, ntrees = 200)
println(forest.metrics())
In the example, we first define a Formula object, which specifies the model in a symbolic way. The left-hand side (LHS) of the formula is the response variable, and the right-hand side (RHS) is a list of terms as independent variables. When the RHS is not specified, the rest of the columns in the data frame are used by default. In the simplest case, the terms (both on the LHS and on the RHS) are column names. But they can also be functions (e.g. log) and transformations (e.g. interaction and factor crossing). The functions/transformations are symbolic and thus lazy.
With random forest, we may estimate the model accuracy with out-of-bag (OOB) samples. This is especially useful when we don't have a separate test dataset.
Now let's train a support vector machine (SVM) on the USPS data. As SVM is a kernel learning machine, it can be applied to any type of data as long as we can define a Mercer kernel on the data. Therefore, the SVM class doesn't take a DataFrame as input but generic arrays. We can leverage the formula object to extract the training samples and labels.
var x = formula.x(zipTrain).toArray();
var y = formula.y(zipTrain).toIntArray();
var testx = formula.x(zipTest).toArray();
var testy = formula.y(zipTest).toIntArray();
val x = formula.x(zipTrain).toArray()
val y = formula.y(zipTrain).toIntArray()
val testx = formula.x(zipTest).toArray()
val testy = formula.y(zipTest).toIntArray()
The SVM employs a Gaussian kernel and the one-vs-one strategy as this is a multi-class problem. We also evaluate the model on the test data with the Validation class, which provides a variety of model validation methods such as cross validation, bootstrap, etc.
import smile.math.kernel.GaussianKernel;
import smile.validation.*;
var kernel = new GaussianKernel(8.0);
var svm = OneVersusOne.fit(x, y, (x, y) -> SVM.fit(x, y, kernel, 5, 1E-3));
var pred = svm.predict(testx);
System.out.format("Accuracy = %.2f%%%n", (100.0 * Accuracy.of(testy, pred)));
System.out.format("Confusion Matrix: %s%n", ConfusionMatrix.of(testy, pred));
val kernel = new GaussianKernel(8.0)
// one-vs-one multi-class SVM with a Gaussian kernel
val svm = ovo(x, y) { (x, y) =>
  SVM.fit(x, y, kernel, 5, 1E-3)
}
val pred = svm.predict(testx)
println("Accuracy = %.2f%%" format (100.0 * Accuracy.of(testy, pred)))
Lastly, we will train a 5-layer deep learning model. Deep learning requires the features to be properly scaled/standardized. The class Standardizer transforms features to zero mean and unit variance. An alternative is to subtract the median and divide by the IQR, which is implemented in RobustStandardizer.
import smile.base.mlp.Layer;
import smile.base.mlp.OutputFunction;
import smile.classification.MLP;
import smile.math.MathEx;
import smile.math.TimeFunction;
var net = new MLP(Layer.input(256),
Layer.sigmoid(768),
Layer.sigmoid(192),
Layer.sigmoid(30),
Layer.mle(10, OutputFunction.SIGMOID)
);
net.setLearningRate(TimeFunction.linear(0.01, 20000, 0.001));
for (int epoch = 0; epoch < 10; epoch++) {
System.out.format("----- epoch %d -----%n", epoch);
for (int i : MathEx.permutate(x.length)) {
net.update(x[i], y[i]);
}
var prediction = net.predict(testx);
System.out.format("Accuracy = %.2f%%%n", (100.0 * Accuracy.of(testy, prediction)));
}
val net = new MLP(Layer.input(256),
Layer.sigmoid(768),
Layer.sigmoid(192),
Layer.sigmoid(30),
Layer.mle(10, OutputFunction.SIGMOID)
)
net.setLearningRate(TimeFunction.linear(0.01, 20000, 0.001));
(0 until 10).foreach(epoch => {
println("----- epoch %d -----" format epoch)
MathEx.permutate(x.length).foreach(i =>
net.update(x(i), y(i))
)
val prediction = net.predict(testx)
println("Accuracy = %.2f%%" format (100.0 * Accuracy.of(testy, prediction)))
})
To use the trained model, we can apply the method predict on a new sample. Besides returning the class label, many methods (e.g. neural networks) can also output the posterior probabilities of each class.
var posteriori = new double[10];
forest.predict(zipTest.get(0), posteriori);
svm.predict(testx[0]);
net.predict(testx[0], posteriori);
val posteriori = new Array[Double](10)
forest.predict(zipTest.get(0), posteriori)
svm.predict(testx(0))
net.predict(testx(0), posteriori)