Quick Start

Smile is a fast and comprehensive machine learning system. With advanced data structures and algorithms, Smile delivers state-of-the-art performance. Smile is self-contained and requires only the Java standard library. Since v1.4, Smile can optionally leverage a native BLAS/LAPACK library as well. It also provides high-level operators in Scala and an interactive shell. In practice, data scientists usually build models with high-level tools such as R, Matlab, SAS, etc. However, developers then have to spend a lot of time and energy incorporating these models into production systems, which are often implemented in general-purpose programming languages such as Java and Scala. With Smile, data scientists and developers can work in the same environment to build machine learning applications quickly!

Download

Get Smile from the releases page of the project website. The universal tarball is also available and can be used on Mac, Linux and Windows.

If you would like to build Smile from source, please first install Java 17, Scala 2.13 and SBT 1.0+. Then clone the repo and build the package:


    $ git clone https://github.com/haifengl/smile.git
    $ cd smile
    $ sbt package
    

To build with Scala 2.12, run


    $ sbt ++2.12.10 scala/package
    

To test the latest code, run the following


    $ git pull
    $ ./smile.sh
    

which will build the system and enter the shell.

Smile runs on both Windows and UNIX-like systems (e.g. Linux, macOS). Although we require Java 9+ to build Smile in order to enable automatic modules, the built library works with Java 8+. All you need is Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
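
For example, on a UNIX-like system you might point JAVA_HOME at a JDK for the current session (the path below is illustrative):


    $ export JAVA_HOME=/usr/lib/jvm/java-17-openjdk
    $ echo $JAVA_HOME
    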

Shell

Smile comes with an interactive shell for Scala. In the home directory of Smile, type


    $ bin/smile
    

to enter the shell, which is based on the Scala REPL. If you prefer the Ammonite REPL, copy its jar into Smile's lib directory; the Smile Shell will switch to Ammonite once restarted. In the shell, you can run any valid Scala expression. In the simplest case, you can use it as a calculator. In addition, all of Smile's high-level operators are predefined in the shell. By default, the shell uses up to 75% of available memory. If you need more memory to handle large data, use the option -J-Xmx or -XX:MaxRAMPercentage. For example,


    $ ./bin/smile -J-Xmx30G
    

You can also modify the configuration file ./conf/smile.ini to adjust memory and other JVM settings.
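
As an illustrative sketch only, assuming the file lists one JVM option per line (check the comments in conf/smile.ini of your distribution for the exact syntax), the memory settings above could be made permanent like this:


    # hypothetical conf/smile.ini entries; verify against your version
    -Xmx30G
    -XX:MaxRAMPercentage=75
    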

Basics

When the shell starts, we should see something like the following:


                                                       ..::''''::..
                                                     .;''        ``;.
     ....                                           ::    ::  ::    ::
   ,;' .;:                ()  ..:                  ::     ::  ::     ::
   ::.      ..:,:;.,:;.    .   ::   .::::.         :: .:' ::  :: `:. ::
    '''::,   ::  ::  ::  `::   ::  ;:   .::        ::  :          :  ::
  ,:';  ::;  ::  ::  ::   ::   ::  ::,::''.         :: `:.      .:' ::
  `:,,,,;;' ,;; ,;;, ;;, ,;;, ,;;, `:,,,,:'          `;..``::::''..;'
                                                       ``::,,,,::''

  Welcome to Smile Shell; enter 'help<RETURN>' for list of supported commands.
  Type "exit<RETURN>" to leave the Smile Shell
  Version 2.1.0, Scala 2.13.1, SBT 1.2.8, Built at 2019-11-20 20:04:41.868
===============================================================================
smile>
    

The smile> prompt indicates that the shell is waiting for you to enter expressions. To get help on Smile's high-level operators, type help. You can also get detailed information on each operator by typing help("command"), e.g. help("svm"). To exit the shell, type exit.
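
For example:


    smile> help
    smile> help("svm")
    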

In the shell, type demo to bring up the demo window, which showcases many of Smile's machine learning capabilities (see the example below).

You can also type benchmark() to see Smile's performance on a couple of test datasets. You can run a particular benchmark with benchmark("test name"), where the test name could be "airline", "usps", etc.
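
For example, to launch the demo window and run the full benchmark suite or a single benchmark:


    smile> demo
    smile> benchmark()
    smile> benchmark("airline")
    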

On startup, the shell analyzes the classpath and creates a database of every visible package and path. This is available via tab-completion analogous to the path-completion available in most shells. If you type a partial path, tab will complete as far as it can and show you your options if there is more than one.


    smile> smile.classification.r
    randomForest   rbfnet   rda
    

Third Party Libraries

It is also possible to use third party libraries from Maven Central. For example,


    smile> import $ivy.`com.google.guava:guava:18.0`, com.google.common.collect._
    

If the library is not available in the local ivy cache, the Smile Shell will download it automatically. Note that the format org:library:version is similar to library dependencies in SBT. For Scala libraries, it is recommended to use the format org::library:version, which will choose the library built for the same Scala major version (e.g. 2.13 vs 2.12).


    smile> import $ivy.`org.scalaz::scalaz-core:7.2.7`, scalaz._, Scalaz._
    

Beyond the default resolvers, we can add third-party or our own repositories:


    smile> interp.repositories() ++= Seq(coursier.ivy.IvyRepository.fromPattern(
      "https://ambiata-oss.s3-ap-southeast-2.amazonaws.com/" +:
      coursier.ivy.Pattern.default
    ))
    

Calculator

We can run any valid Scala expression in the shell. In the simplest case, you can use it as a calculator.


    smile> "Hello, World"
    res0: String = Hello, World

    smile> 2
    res1: Int = 2

    smile> 2+3
    res2: Int = 5
    

We can also define variables and reuse them.


    smile> val x = 2 + 3
    x: Int = 5

    smile> print(x)
    5

    smile> val y = 2 * (x + 1)
    y: Int = 12
    

Functions can be defined too. As Scala is a functional language, functions are first-class citizens, just like other values.


    smile> def sq(x: Double) = x * x
    sq: (x: Double)Double

    smile> sq(y)
    res4: Double = 144.0
    

Scala is a powerful language that fuses object-oriented and functional programming. Although you don't need to know all the bells and whistles of Scala to use Smile, we strongly recommend learning some basics.

Script

We may also run Smile code in a script. The script examples/iris.sc contains the following Smile code:


    val data = read.arff(Paths.getTestData("weka/iris.arff"))
    println(data)

    val formula = "class" ~
    val rf = smile.classification.randomForest(formula, data)
    println(s"OOB error = %.2f%%" format 100 * rf.error)
    

It can be run directly from the shell:


    $ bin/smile examples/iris.sc
    

In this example, we use Fisher's Iris data from the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function read.arff returns a DataFrame object. The formula "class" ~ specifies that the column "class" will be used as the class label while the remaining columns are predictors. Finally, we train a random forest with the default parameters and print its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.
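
For instance, a minimal sketch of scoring the first row of the data with the fitted model (predict returns the predicted class label as an integer):


    val label = rf.predict(data.get(0))
    println(label)
    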

Smile provides an integration with JShell, which is available from Java 9+. In the home directory of Smile, type


    $ bin/jshell.sh
    

to enter JShell with the Smile libraries on the class path. In the shell, you can run any valid Java expression. In the simplest case, you can use it as a calculator. If you need more memory to handle large data, use the option -R-Xmx. For example,


    $ ./bin/jshell.sh -R-Xmx30G
    

Basics

When the shell starts, we should see something like the following:


    |  Welcome to JShell -- Version 11.0.6
    |  For an introduction type: /help intro

    jshell>
    

To exit the shell, type /exit.

In the shell, run the code below to bring up the demo window, which showcases many of Smile's machine learning capabilities.


    javax.swing.SwingUtilities.invokeLater(() -> smile.demo.SmileDemo.createAndShowGUI(false))
    

Similar to the Smile Shell, we pre-import Smile's definitions in JShell.

Calculator

With local variable type inference, it is easy to use JShell as a calculator.


    jshell> "Hello, World"
    $2 ==> "Hello, World"

    jshell> 2
    $3 ==> 2

    jshell> 2+3
    $4 ==> 5
    

We can also define variables and reuse them.


    jshell> var x = 2 + 3
    x ==> 5

    jshell> var y = 2 * (x + 1)
    y ==> 12
    

Script

We may also run Smile code in a script. The script examples/iris.jsh contains the following Smile code:


    import smile.classification.RandomForest;
    import smile.data.formula.Formula;
    import smile.io.Read;
    import smile.util.Paths;

    var data = Read.arff(Paths.getTestData("weka/iris.arff"));
    System.out.println(data);

    var formula = Formula.lhs("class");
    var rf = RandomForest.fit(formula, data);
    System.out.format("OOB error = %.2f%%%n", 100 * rf.error());
    

It can be run directly from the shell:


    $ bin/jshell.sh examples/iris.jsh
    

In this example, we use Fisher's Iris data from the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function Read.arff returns a DataFrame object. The formula Formula.lhs("class") specifies that the column "class" will be used as the class label while the remaining columns are predictors. Finally, we train a random forest with the default parameters and print its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.

Smile also provides an integration with the Kotlin REPL. In the home directory of Smile, type


    $ bin/kotlin.sh
    

to enter the Kotlin REPL with the Smile libraries on the class path. In the shell, you can run any valid Kotlin expression. In the simplest case, you can use it as a calculator. If you need more memory to handle large data, use the option -J-Xmx. For example,


    $ ./bin/kotlin.sh -J-Xmx30G
    

Basics

When the shell starts, we should see something like the following:


    Welcome to Kotlin version 1.3.70 (JRE 1.8.0_241-b07)
    Type :help for help, :quit for quit
    >>>
    

To exit the REPL, type :quit. Unlike the Smile Shell, we don't pre-import any of Smile's definitions in the Kotlin REPL.

Calculator

With local variable type inference, it is easy to use the Kotlin REPL as a calculator.


    >>> "Hello, World"
    res0: kotlin.String = Hello, World
    >>> 2
    res1: kotlin.Int = 2
    >>> 2+3
    res2: kotlin.Int = 5
    

We can also define variables and reuse them.


    >>> var x = 2 + 3
    >>> var y = 2 * (x + 1)
    >>> y
    res13: kotlin.Int = 12
    

Script

We may also run Smile code in a script. The script examples/iris.kts contains the following Smile code:


    import smile.*
    import smile.classification.*
    import smile.data.formula.Formula
    import smile.util.Paths

    val data = read.arff(Paths.getTestData("weka/iris.arff"))
    println(data)

    val formula = Formula.lhs("class")
    val rf = randomForest(formula, data)
    println("OOB error = ${rf.error()}")
    

It can be run directly from the shell:


    $ bin/kotlin.sh examples/iris.kts
    

In this example, we use Fisher's Iris data from the data directory (which includes many open datasets for research purposes). The data is in Weka's ARFF format. The function read.arff returns a DataFrame object. The formula Formula.lhs("class") specifies that the column "class" will be used as the class label while the remaining columns are predictors. Finally, we train a random forest with the default parameters and print its OOB (out-of-bag) error. We can apply the model to new data samples with the method predict.

Notebooks

You can also use Smile in your favorite notebook. We recommend JupyterLab and provide jupyterlab.sh to set up a conda environment of JupyterLab for Smile with kernels for Scala, Kotlin and Clojure. When you run jupyterlab.sh for the first time, it will set up the environment automatically. You can update the environment later with the option --update when needed. You may also install BeakerX with --install-beakerx for other JVM kernels; however, it may break the default Scala kernel (Almond).
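
For example (assuming the script lives in bin/ like the other launchers):


    $ ./bin/jupyterlab.sh                    # first run sets up the conda environment
    $ ./bin/jupyterlab.sh --update           # refresh the environment when needed
    $ ./bin/jupyterlab.sh --install-beakerx  # optionally add the BeakerX kernels
    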

In Scala notebooks, it is helpful to add the following code to the notebook. We provide many notebook examples in the notebooks directory.


    import $ivy.`com.github.haifengl::smile-scala:3.0.3`

    import scala.language.postfixOps
    import org.apache.commons.csv.CSVFormat
    import smile._
    import smile.util._
    import smile.math._
    import smile.math.MathEx.{log2, logistic, factorial, lfactorial, choose, lchoose, random, randomInt, permutate, c, cbind, rbind, sum, mean, median, q1, q3, `var` => variance, sd, mad, min, max, whichMin, whichMax, unique, dot, distance, pdist, KullbackLeiblerDivergence => kld, JensenShannonDivergence => jsd, cov, cor, spearman, kendall, norm, norm1, norm2, normInf, standardize, normalize, scale, unitize, unitize1, unitize2, root}
    import smile.math.distance._
    import smile.math.kernel._
    import smile.math.matrix._
    import smile.math.matrix.Matrix._
    import smile.math.rbf._
    import smile.stat.distribution._
    import smile.data._
    import smile.data.formula._
    import smile.data.measure._
    import smile.data.`type`._
    import smile.json._
    import smile.interpolation._
    import smile.validation._
    import smile.association._
    import smile.base.cart.SplitRule
    import smile.base.mlp._
    import smile.base.rbf.RBF
    import smile.classification._
    import smile.regression.{ols, ridge, lasso, svr, gpr}
    import smile.feature._
    import smile.clustering._
    import smile.vq._
    import smile.manifold._
    import smile.mds._
    import smile.sequence._
    import smile.projection._
    import smile.nlp._
    import smile.wavelet._
    

To plot data with the Swing-based plot functions in a notebook, run the code below first.


    import smile.plot.swing._
    import smile.plot.show
    import smile.plot.Render._
    

To use the Vega-based plot functions in a notebook, run the code below instead.


    import smile.plot.vega._
    import smile.plot.show
    import smile.plot.Render._
    

A Gentle Example

This example shows how to use Smile for predictive modeling from Java and Scala code. First, let's load the data. Smile provides parsers for many popular data formats, such as Parquet, Avro, Arrow, SAS7BDAT, Weka's ARFF files, LibSVM's file format, delimited text files, JSON, and binary sparse data. These classes are in the package smile.io. In the following example, we use the ARFF parser to load the weather dataset:


    import smile.io.*;
    var weather = Read.arff("data/weka/weather.nominal.arff");
          

    import smile.io._
    val weather = read.arff("data/weka/weather.nominal.arff")
    

Most Smile data parsers return a DataFrame object, which is immutable and contains a fixed number of named columns. We can also parse plain delimited text files, and the parser automatically infers the schema. Below, we load the USPS zip code handwriting dataset from a whitespace-delimited text file.


    import org.apache.commons.csv.CSVFormat;

    var format = CSVFormat.DEFAULT.withDelimiter(' ');
    var zipTrain = Read.csv("data/usps/zip.train", format);
    var zipTest = Read.csv("data/usps/zip.test", format);
          

    val zipTrain = read.csv("data/usps/zip.train", delimiter = ' ', header = false)
    val zipTest = read.csv("data/usps/zip.test", delimiter = ' ', header = false)
    

Because this data doesn't have a header line, the parser will assign V1, V2, ... as the column names. In particular, the first column (V1) is the class label.
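
To double-check the inferred column names and types, we can print the schema (a small sketch in Scala):


    println(zipTrain.schema)
    // prints the fields V1, V2, ... with their inferred data types
    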

Smile implements a variety of classification and regression algorithms. In what follows, we train a random forest model on the USPS data. Random forest is an ensemble classifier that consists of many decision trees and outputs the majority vote of the individual trees. The method combines the bagging idea with random selection of features.


    import smile.classification.*;
    import smile.data.formula.Formula;

    var formula = Formula.lhs("V1");
    
    var prop = new java.util.Properties();
    prop.setProperty("smile.random.forest.trees", "200");
    var forest = RandomForest.fit(formula, zipTrain, prop);
    System.out.format("OOB error rate = %.2f%%%n", (100.0 * forest.error()));
          

    val formula: Formula = "V1" ~

    val forest = randomForest(formula, zipTrain, ntrees = 200)
    println("OOB error rate = %.2f%%" format (100.0 * forest.error()))
    

In the example, we first define a Formula object, which specifies the model in a symbolic way. The left-hand side (LHS) of the formula is the response variable, and the right-hand side (RHS) is a list of terms as independent variables. When the RHS is not specified, the rest of the columns in the data frame are used by default. In the simplest case, the terms (both of the LHS and of the RHS) are column names. But they can be functions (e.g. log) and transformations (e.g. interaction and factor crossing) too. The functions/transformations are symbolic and thus lazy. A hedged sketch of a formula with an explicit RHS follows.
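
The sketch below assumes the Terms helpers in smile.data.formula ($ for a plain column, log for a symbolic transform); these helper names are assumptions here, so consult the javadoc of your Smile version before relying on them:


    // hedged sketch: response V1 with two explicit predictor terms, one of
    // them log-transformed; the Terms helper names are assumptions
    import smile.data.formula.{Formula, Terms}

    val custom = Formula.of(Terms.$("V1"), Terms.log("V2"), Terms.$("V3"))
    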

With random forest, we may estimate the model accuracy with the out-of-bag (OOB) samples. This is especially useful when we don't have a separate test dataset.

Now let's train a support vector machine (SVM) on the USPS data. As SVM is a kernel learning machine, it can be applied to any type of data as long as we can define a Mercer kernel on it. Therefore, the SVM class doesn't take a DataFrame as input but a generic array. We can leverage the formula object to extract the training samples and labels.


    var x = formula.x(zipTrain).toArray();
    var y = formula.y(zipTrain).toIntArray();
    var testx = formula.x(zipTest).toArray();
    var testy = formula.y(zipTest).toIntArray();
          

    val x = formula.x(zipTrain).toArray
    val y = formula.y(zipTrain).toIntArray
    val testx = formula.x(zipTest).toArray
    val testy = formula.y(zipTest).toIntArray
    

The SVM employs a Gaussian kernel and the one-vs-one strategy as this is a multi-class problem. We also evaluate the model on the test data with the Validation class, which provides a variety of model validation methods such as cross validation, bootstrap, etc.


    import smile.math.kernel.GaussianKernel;
    import smile.validation.*;

    var kernel = new GaussianKernel(8.0);
    var svm = OneVersusOne.fit(x, y, (x, y) -> SVM.fit(x, y, kernel, 5, 1E-3));
    var pred = Validation.test(svm, testx);
    System.out.format("Accuracy = %.2f%%%n", (100.0 * Accuracy.of(testy, pred)));
    System.out.format("Confusion Matrix: %s%n", ConfusionMatrix.of(testy, pred));
          

    val kernel = new GaussianKernel(8.0)
    val svm = test(x, y, testx, testy) { (x, y) =>
      ovo(x, y) { (x, y) =>
        SVM.fit(x, y, kernel, 5, 1E-3)
      }
    }
    

Lastly, we will train a 5-layer deep learning model. Deep learning requires the features to be properly scaled/standardized. In this example, we employ the class Standardizer to transform features to zero mean and unit variance. An alternative is to subtract the median and divide by the IQR, which is implemented by RobustStandardizer.


    import smile.base.mlp.Layer;
    import smile.base.mlp.OutputFunction;
    import smile.feature.Standardizer;
    import smile.math.MathEx;

    var scaler = Standardizer.fit(x);
    var scaledX = scaler.transform(x);
    var scaledTestX = scaler.transform(testx);
    
    var net = new MLP(256,
      Layer.sigmoid(768),
      Layer.sigmoid(192),
      Layer.sigmoid(30),
      Layer.mle(10, OutputFunction.SIGMOID)
    );
    net.setLearningRate(0.1);
    net.setMomentum(0.0);
    
    for (int epoch = 0; epoch < 10; epoch++) {
      System.out.format("----- epoch %d -----%n", epoch);
      for (int i : MathEx.permutate(x.length)) {
        net.update(scaledX[i], y[i]);
      }
      var prediction = Validation.test(net, scaledTestX);
      System.out.format("Accuracy = %.2f%%%n", (100.0 * Accuracy.of(testy, prediction)));
    }
          

    val scaler = Standardizer.fit(x)
    val scaledX = scaler.transform(x)
    val scaledTestX = scaler.transform(testx)
    
    val net = new MLP(256,
      Layer.sigmoid(768),
      Layer.sigmoid(192),
      Layer.sigmoid(30),
      Layer.mle(10, OutputFunction.SIGMOID)
    )
    net.setLearningRate(0.1)
    net.setMomentum(0.0)
    
    (0 until 10).foreach(epoch => {
      println("----- epoch %d -----" format epoch)
      MathEx.permutate(x.length).foreach(i =>
        net.update(scaledX(i), y(i))
      )
      val prediction = Validation.test(net, scaledTestX)
      println("Accuracy = %.2f%%" format (100.0 * Accuracy.of(testy, prediction)))
    })
    

To use the trained model, we can apply the method predict to a new sample. Besides returning the class label, many models (e.g. neural networks) can also output the posterior probabilities of each class.


    var posteriori = new double[10];
    forest.predict(zipTest.get(0), posteriori);
    svm.predict(testx[0]);
    net.predict(scaledTestX[0], posteriori);
          

    val posteriori = new Array[Double](10)
    forest.predict(zipTest.get(0), posteriori)
    svm.predict(testx(0))
    net.predict(scaledTestX(0), posteriori)
    