public class DecisionTree extends java.lang.Object implements scala.Serializable, Logging
| Constructor and Description |
|---|
| DecisionTree(Strategy strategy) |
| Modifier and Type | Method and Description |
|---|---|
| protected static scala.Tuple2<Split[][],org.apache.spark.mllib.tree.model.Bin[][]> | findSplitsBins(RDD<LabeledPoint> input, org.apache.spark.mllib.tree.impl.DecisionTreeMetadata metadata) - Returns splits and bins for decision tree calculation. |
| DecisionTreeModel | run(RDD<LabeledPoint> input) - Method to train a decision tree model over an RDD. |
| static DecisionTreeModel | train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth) - Method to train a decision tree model. |
| static DecisionTreeModel | train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth, int numClasses) - Method to train a decision tree model. |
| static DecisionTreeModel | train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth, int numClasses, int maxBins, scala.Enumeration.Value quantileCalculationStrategy, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo) - Method to train a decision tree model. |
| static DecisionTreeModel | train(RDD<LabeledPoint> input, Strategy strategy) - Method to train a decision tree model. |
| static DecisionTreeModel | trainClassifier(JavaRDD<LabeledPoint> input, int numClasses, java.util.Map<java.lang.Integer,java.lang.Integer> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins) - Java-friendly API for DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int). |
| static DecisionTreeModel | trainClassifier(RDD<LabeledPoint> input, int numClasses, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins) - Method to train a decision tree model for binary or multiclass classification. |
| static DecisionTreeModel | trainRegressor(JavaRDD<LabeledPoint> input, java.util.Map<java.lang.Integer,java.lang.Integer> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins) - Java-friendly API for DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int). |
| static DecisionTreeModel | trainRegressor(RDD<LabeledPoint> input, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins) - Method to train a decision tree model for regression. |
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public DecisionTree(Strategy strategy)
public static DecisionTreeModel train(RDD<LabeledPoint> input, Strategy strategy)
Note: Using DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
and DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
is recommended to clearly separate classification and regression.
Parameters:
input - Training dataset: RDD of LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
strategy - The configuration parameters for the tree algorithm which specify the type of algorithm (classification, regression, etc.), feature type (continuous, categorical), depth of the tree, quantile calculation strategy, etc.
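Example (a minimal Scala sketch, not part of the API reference; it assumes an existing SparkContext named sc and uses a made-up two-class dataset, with maxDepth = 3 chosen only for illustration):

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.{Algo, Strategy}
import org.apache.spark.mllib.tree.impurity.Gini

// Toy binary-classification dataset; labels are in {0.0, 1.0}.
val input = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(1.0, 0.0))
))

// Strategy bundles the algorithm type, impurity measure, and maximum depth;
// the remaining settings (numClasses, maxBins, ...) keep their defaults.
val strategy = new Strategy(Algo.Classification, Gini, 3)

val model = DecisionTree.train(input, strategy)
```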
public static DecisionTreeModel train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth)
Note: Using DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
and DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
is recommended to clearly separate classification and regression.
Parameters:
input - Training dataset: RDD of LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
algo - algorithm, classification or regression
impurity - impurity criterion used for information gain calculation
maxDepth - Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
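Example (a minimal sketch; input is assumed to be an existing RDD[LabeledPoint] with labels in {0.0, 1.0}, and maxDepth = 4 is only illustrative):

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Gini

// Binary classification with Gini impurity and a maximum depth of 4.
val model = DecisionTree.train(input, Algo.Classification, Gini, 4)
```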
public static DecisionTreeModel train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth, int numClasses)
Note: Using DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
and DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
is recommended to clearly separate classification and regression.
Parameters:
input - Training dataset: RDD of LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
algo - algorithm, classification or regression
impurity - impurity criterion used for information gain calculation
maxDepth - Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
numClasses - number of classes for classification. Default value of 2.
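Example (a minimal sketch for the multiclass case; input is assumed to be an RDD[LabeledPoint] with labels in {0.0, 1.0, 2.0}):

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.impurity.Entropy

// Three-class classification with entropy impurity, maxDepth = 4, numClasses = 3.
val model = DecisionTree.train(input, Algo.Classification, Entropy, 4, 3)
```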
public static DecisionTreeModel train(RDD<LabeledPoint> input, scala.Enumeration.Value algo, Impurity impurity, int maxDepth, int numClasses, int maxBins, scala.Enumeration.Value quantileCalculationStrategy, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo)
Note: Using DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
and DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int)
is recommended to clearly separate classification and regression.
Parameters:
input - Training dataset: RDD of LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
algo - classification or regression
impurity - criterion used for information gain calculation
maxDepth - Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
numClasses - number of classes for classification. Default value of 2.
maxBins - maximum number of bins used for splitting features
quantileCalculationStrategy - algorithm for calculating quantiles
categoricalFeaturesInfo - Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
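Example (a minimal sketch; input is assumed to be an RDD[LabeledPoint] with labels in {0.0, 1.0}, and the categorical-feature map is made up for illustration):

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.{Algo, QuantileStrategy}
import org.apache.spark.mllib.tree.impurity.Gini

// Feature 0 is categorical with 4 categories {0, 1, 2, 3}; other features are continuous.
val categoricalFeaturesInfo = Map(0 -> 4)

val model = DecisionTree.train(
  input,
  Algo.Classification,
  Gini,
  5,                      // maxDepth
  2,                      // numClasses
  32,                     // maxBins
  QuantileStrategy.Sort,  // quantileCalculationStrategy
  categoricalFeaturesInfo)
```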
public static DecisionTreeModel trainClassifier(RDD<LabeledPoint> input, int numClasses, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins)
Method to train a decision tree model for binary or multiclass classification.
Parameters:
input - Training dataset: RDD of LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}.
numClasses - number of classes for classification.
categoricalFeaturesInfo - Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
impurity - Criterion used for information gain calculation. Supported values: "gini" (recommended) or "entropy".
maxDepth - Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 5)
maxBins - maximum number of bins used for splitting features (suggested value: 32)
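Example (a minimal sketch; input is assumed to be an RDD[LabeledPoint] with labels in {0.0, 1.0}, and the feature vector passed to predict is made up):

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.tree.DecisionTree

val model = DecisionTree.trainClassifier(
  input,
  2,                // numClasses
  Map[Int, Int](),  // categoricalFeaturesInfo: empty map => all features continuous
  "gini",           // impurity
  5,                // maxDepth
  32)               // maxBins

// The returned DecisionTreeModel can then score individual feature vectors.
val prediction = model.predict(Vectors.dense(0.5, 1.2))
```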
public static DecisionTreeModel trainClassifier(JavaRDD<LabeledPoint> input, int numClasses, java.util.Map<java.lang.Integer,java.lang.Integer> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins)
Java-friendly API for DecisionTree$.trainClassifier(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, int, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int).
Parameters:
input - (undocumented)
numClasses - (undocumented)
categoricalFeaturesInfo - (undocumented)
impurity - (undocumented)
maxDepth - (undocumented)
maxBins - (undocumented)
public static DecisionTreeModel trainRegressor(RDD<LabeledPoint> input, scala.collection.immutable.Map<java.lang.Object,java.lang.Object> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins)
Method to train a decision tree model for regression.
Parameters:
input - Training dataset: RDD of LabeledPoint. Labels are real numbers.
categoricalFeaturesInfo - Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
impurity - Criterion used for information gain calculation. The only supported value is "variance".
maxDepth - Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 5)
maxBins - maximum number of bins used for splitting features (suggested value: 32)
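Example (a minimal sketch; input is assumed to be an RDD[LabeledPoint] with real-valued labels, and the categorical-feature map is illustrative):

```scala
import org.apache.spark.mllib.tree.DecisionTree

// Feature 2 is categorical with 3 categories {0, 1, 2}; other features are continuous.
val model = DecisionTree.trainRegressor(
  input,
  Map(2 -> 3),  // categoricalFeaturesInfo
  "variance",   // impurity
  5,            // maxDepth
  32)           // maxBins
```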
public static DecisionTreeModel trainRegressor(JavaRDD<LabeledPoint> input, java.util.Map<java.lang.Integer,java.lang.Integer> categoricalFeaturesInfo, java.lang.String impurity, int maxDepth, int maxBins)
Java-friendly API for DecisionTree$.trainRegressor(org.apache.spark.rdd.RDD<org.apache.spark.mllib.regression.LabeledPoint>, scala.collection.immutable.Map<java.lang.Object, java.lang.Object>, java.lang.String, int, int).
Parameters:
input - (undocumented)
categoricalFeaturesInfo - (undocumented)
impurity - (undocumented)
maxDepth - (undocumented)
maxBins - (undocumented)
protected static scala.Tuple2<Split[][],org.apache.spark.mllib.tree.model.Bin[][]> findSplitsBins(RDD<LabeledPoint> input, org.apache.spark.mllib.tree.impl.DecisionTreeMetadata metadata)
Returns splits and bins for decision tree calculation.
Continuous features: For each feature, there are numBins - 1 possible splits representing the possible binary decisions at each node in the tree. This finds locations (feature values) for splits using a subsample of the data.
Categorical features: For each feature, there is 1 bin per split. Splits and bins are handled in 2 ways: (a) "unordered features": for multiclass classification with a low-arity feature (i.e., if isMulticlass && isSpaceSufficientForAllCategoricalSplits), the feature is split based on subsets of categories. (b) "ordered features": for regression, for binary classification, and for multiclass classification with a high-arity feature, there is one bin per category.
Parameters:
input - Training data: RDD of LabeledPoint
metadata - Learning and dataset metadata
Returns:
Splits is an Array of Split of size (numFeatures, numSplits). Bins is an Array of Bin of size (numFeatures, numBins).
public DecisionTreeModel run(RDD<LabeledPoint> input)
Method to train a decision tree model over an RDD.
Parameters:
input - Training data: RDD of LabeledPoint
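Example (a minimal sketch of the constructor-plus-run pattern; input is assumed to be an RDD[LabeledPoint] with real-valued labels, and maxDepth = 4 is illustrative):

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.{Algo, Strategy}
import org.apache.spark.mllib.tree.impurity.Variance

// Regression tree with variance impurity and a maximum depth of 4.
val strategy = new Strategy(Algo.Regression, Variance, 4)
val model = new DecisionTree(strategy).run(input)
```

Constructing the estimator and calling run directly mirrors the static train(input, strategy) overload, which delegates to this method.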