Class DecisionTree
- All Implemented Interfaces:
Serializable, ToDoubleFunction&lt;Tuple&gt;, ToIntFunction&lt;Tuple&gt;, Classifier&lt;Tuple&gt;, DataFrameClassifier, SHAP&lt;Tuple&gt;
The algorithms used for constructing decision trees usually work top-down by choosing, at each step, the variable that best splits the set of items. "Best" is defined by how well the variable splits the set into homogeneous subsets that have the same value of the target variable. Different algorithms use different formulae for measuring "best". Used by the CART algorithm, Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the subset. Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.
Information gain is another popular measure, used by the ID3, C4.5 and C5.0 algorithms. Information gain is based on the concept of entropy from information theory. For categorical variables with different numbers of levels, however, information gain is biased in favor of attributes with more levels. Instead, one may employ the information gain ratio, which corrects this bias.
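To make the two criteria concrete, the sketch below computes Gini impurity and entropy from a vector of per-class sample counts. It is plain Java, independent of the Smile API; the class and method names are illustrative only.

    // Illustrative impurity measures for a tree node, computed from per-class sample counts.
    public class ImpurityExample {
        // Gini impurity: 1 - sum_k p_k^2, zero when all samples share one class.
        static double gini(int[] counts) {
            double n = 0;
            for (int c : counts) n += c;
            if (n == 0) return 0.0;
            double sumSquares = 0.0;
            for (int c : counts) {
                double p = c / n;
                sumSquares += p * p;
            }
            return 1.0 - sumSquares;
        }

        // Entropy in bits: -sum_k p_k * log2(p_k), the basis of information gain.
        static double entropy(int[] counts) {
            double n = 0;
            for (int c : counts) n += c;
            double h = 0.0;
            for (int c : counts) {
                if (c == 0) continue;
                double p = c / n;
                h -= p * Math.log(p) / Math.log(2);
            }
            return h;
        }

        public static void main(String[] args) {
            System.out.println(gini(new int[] {10, 0}));    // 0.0: pure node
            System.out.println(gini(new int[] {5, 5}));     // 0.5: maximally mixed (two classes)
            System.out.println(entropy(new int[] {5, 5}));  // 1.0 bit
        }
    }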
Classification and Regression Tree techniques have a number of advantages over many alternative techniques.
- Simple to understand and interpret.
- In most cases, the interpretation of results summarized in a tree is very simple. This simplicity is useful not only for purposes of rapid classification of new observations, but can also often yield a much simpler "model" for explaining why observations are classified or predicted in a particular manner.
- Able to handle both numerical and categorical data.
- Other techniques are usually specialized in analyzing datasets that have only one type of variable.
- Tree methods are nonparametric and nonlinear.
The final results of using tree methods for classification or regression can be summarized in a series of (usually few) logical if-then conditions (tree nodes). Therefore, there is no implicit assumption that the underlying relationships between the predictor variables and the dependent variable are linear, follow some specific non-linear link function, or that they are even monotonic in nature. Thus, tree methods are particularly well suited for data mining tasks, where there is often little a priori knowledge and no coherent set of theories or predictions regarding which variables are related and how. In those types of data analytics, tree methods can often reveal simple relationships between just a few variables that could have easily gone unnoticed using other analytic techniques.
Some techniques such as bagging, boosting, and random forest use more than one decision tree for their analysis.
Nested Class Summary
Nested classes/interfaces inherited from interface smile.classification.Classifier
Classifier.Trainer&lt;T, M extends Classifier&lt;T&gt;&gt;
Nested classes/interfaces inherited from interface smile.classification.DataFrameClassifier
DataFrameClassifier.Trainer&lt;M extends DataFrameClassifier&gt;
-
Field Summary
-
Constructor Summary
Constructor and Description
DecisionTree(DataFrame x, int[] y, StructField response, int k, SplitRule rule, int maxDepth, int maxNodes, int nodeSize, int mtry, int[] samples, int[][] order)
Constructor.
-
Method Summary
int[] classes()
Returns the class labels.
findBestSplit(LeafNode leaf, int j, double impurity, int lo, int hi)
Finds the best split for the given column.
static DecisionTree fit(Formula formula, DataFrame data)
Fits a classification tree.
static DecisionTree fit(Formula formula, DataFrame data, Properties params)
Fits a classification tree.
static DecisionTree fit(Formula formula, DataFrame data, SplitRule rule, int maxDepth, int maxNodes, int nodeSize)
Fits a classification tree.
formula()
Returns null if the tree is part of an ensemble algorithm.
protected double impurity(LeafNode node)
Returns the impurity of a node.
protected LeafNode newNode(int[] nodeSamples)
Creates a new leaf node.
int numClasses()
Returns the number of classes.
int predict(Tuple x)
Predicts the class label of an instance.
int predict(Tuple x, double[] posteriori)
Predicts the class label of an instance and also estimates the posteriori probabilities.
prune(DataFrame test)
Returns a new decision tree by reduced error pruning.
schema()
Returns the predictor schema.
boolean soft()
Returns true if this is a soft classifier that can estimate the posteriori probabilities of classification.
Methods inherited from class smile.base.cart.CART
clear, dot, findBestSplit, importance, order, predictors, root, shap, shap, size, split, toString
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface smile.classification.Classifier
applyAsDouble, applyAsInt, online, predict, predict, predict, predict, predict, predict, score, update, update, update
Methods inherited from interface smile.classification.DataFrameClassifier
predict, predict
-
Constructor Details
-
DecisionTree
public DecisionTree(DataFrame x, int[] y, StructField response, int k, SplitRule rule, int maxDepth, int maxNodes, int nodeSize, int mtry, int[] samples, int[][] order)
Constructor. Fits a classification tree for AdaBoost and Random Forest.
- Parameters:
x - the data frame of the explanatory variables.
y - the response variables.
response - the metadata of the response variable.
k - the number of classes.
rule - the splitting rule.
maxDepth - the maximum depth of the tree.
maxNodes - the maximum number of leaf nodes in the tree.
nodeSize - the minimum size of leaf nodes.
mtry - the number of input variables to pick to split on at each node. sqrt(p) generally gives good performance, where p is the number of variables.
samples - the sample set of instances for stochastic learning. samples[i] is the number of samplings of instance i.
order - the index of training values in ascending order. Note that only numeric attributes need to be sorted.
-
-
Method Details
-
impurity
Description copied from class: CART
Returns the impurity of a node.
-
newNode
Description copied from class: CART
Creates a new leaf node.
-
findBestSplit
Description copied from class: CART
Finds the best split for the given column.
- Specified by:
findBestSplit in class CART
- Parameters:
leaf - the node to split.
j - the column to split on.
impurity - the impurity of the node.
lo - the lower bound of sample index in the node.
hi - the upper bound of sample index in the node.
- Returns:
- the best split.
-
fit
public static DecisionTree fit(Formula formula, DataFrame data)
Fits a classification tree.
- Parameters:
formula - a symbolic description of the model to be fitted.
data - the data frame of the explanatory and response variables.
- Returns:
- the model.
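A minimal usage sketch for this overload. It assumes a dataset in ARFF format with a nominal label column named "class"; the file path and column name are placeholders, not part of the API.

    import smile.classification.DecisionTree;
    import smile.data.DataFrame;
    import smile.data.formula.Formula;
    import smile.io.Read;

    public class FitExample {
        public static void main(String[] args) throws Exception {
            // Placeholder path; any DataFrame with a nominal "class" column would do.
            DataFrame data = Read.arff("data/iris.arff");
            DecisionTree tree = DecisionTree.fit(Formula.lhs("class"), data);
            // Predict the label of the first training instance.
            System.out.println(tree.predict(data.get(0)));
        }
    }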
-
fit
public static DecisionTree fit(Formula formula, DataFrame data, Properties params)
Fits a classification tree. The hyper-parameters in params include
smile.cart.split.rule
smile.cart.node.size
smile.cart.max.nodes
- Parameters:
formula - a symbolic description of the model to be fitted.
data - the data frame of the explanatory and response variables.
params - the hyper-parameters.
- Returns:
- the model.
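A sketch of passing the hyper-parameters listed above through a Properties object, continuing the previous sketch (data is the same DataFrame; the property values are arbitrary examples, not recommended settings).

    import java.util.Properties;

    // Keys as documented above; values are illustrative only.
    Properties params = new Properties();
    params.setProperty("smile.cart.split.rule", "GINI");
    params.setProperty("smile.cart.node.size", "5");
    params.setProperty("smile.cart.max.nodes", "100");
    DecisionTree tree = DecisionTree.fit(Formula.lhs("class"), data, params);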
-
fit
public static DecisionTree fit(Formula formula, DataFrame data, SplitRule rule, int maxDepth, int maxNodes, int nodeSize)
Fits a classification tree.
- Parameters:
formula - a symbolic description of the model to be fitted.
data - the data frame of the explanatory and response variables.
rule - the splitting rule.
maxDepth - the maximum depth of the tree.
maxNodes - the maximum number of leaf nodes in the tree.
nodeSize - the minimum size of leaf nodes.
- Returns:
- the model.
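The same fit expressed with the explicit overload, again continuing the earlier sketches; maxDepth = 20, maxNodes = 100 and nodeSize = 5 are arbitrary example values.

    import smile.base.cart.SplitRule;

    // rule, maxDepth, maxNodes and nodeSize passed explicitly instead of via Properties.
    DecisionTree tree = DecisionTree.fit(Formula.lhs("class"), data,
            SplitRule.GINI, 20, 100, 5);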
-
numClasses
public int numClasses()
Description copied from interface: Classifier
Returns the number of classes.
- Specified by:
numClasses in interface Classifier&lt;Tuple&gt;
- Returns:
- the number of classes.
-
classes
public int[] classes()
Description copied from interface: Classifier
Returns the class labels.
- Specified by:
classes in interface Classifier&lt;Tuple&gt;
- Returns:
- the class labels.
-
predict
Description copied from interface: Classifier
Predicts the class label of an instance.
- Specified by:
predict in interface Classifier&lt;Tuple&gt;
- Parameters:
x - the instance to be classified.
- Returns:
- the predicted class label.
-
soft
public boolean soft()
Description copied from interface: Classifier
Returns true if this is a soft classifier that can estimate the posteriori probabilities of classification.
- Specified by:
soft in interface Classifier&lt;Tuple&gt;
- Returns:
- true if soft classifier.
-
predict
Predicts the class label of an instance and also calculates the a posteriori probabilities. The posteriori estimation is based on the sample distribution in the leaf node. It is not accurate when used in a single tree; it is mainly used by RandomForest in an ensemble way.
- Specified by:
predict in interface Classifier&lt;Tuple&gt;
- Parameters:
x - an instance to be classified.
posteriori - the a posteriori probabilities on output.
- Returns:
- the predicted class label.
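A short sketch of this overload, reusing tree and data from the earlier fit sketch. The posteriori array must be sized to numClasses(); after the call it holds the class distribution of the leaf the instance fell into.

    double[] posteriori = new double[tree.numClasses()];
    int label = tree.predict(data.get(0), posteriori);
    // posteriori[label] is the estimated probability of the predicted class (assuming 0-based class labels).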
-
formula
Returns null if the tree is part of an ensemble algorithm.
- Specified by:
formula in interface DataFrameClassifier
- Returns:
- the formula associated with the model.
-
schema
Description copied from interface: DataFrameClassifier
Returns the predictor schema.
- Specified by:
schema in interface DataFrameClassifier
- Returns:
- the predictor schema.
-
prune
Returns a new decision tree by reduced error pruning.
- Parameters:
test - the test data set to evaluate the errors of nodes.
- Returns:
- a new pruned tree.
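A sketch of reduced error pruning, assuming train and test are two disjoint DataFrame splits with the same schema and a nominal "class" label column.

    DecisionTree tree = DecisionTree.fit(Formula.lhs("class"), train);
    DecisionTree pruned = tree.prune(test);
    // size() is inherited from CART and counts the nodes in each tree.
    System.out.println("nodes before: " + tree.size() + ", after: " + pruned.size());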
-