Returns splits and bins for decision tree calculation. Continuous and categorical features are handled differently.
Continuous features: For each feature, there are numBins - 1 possible splits representing the possible binary decisions at each node in the tree. This finds locations (feature values) for splits using a subsample of the data.
Categorical features: For each feature, there is 1 bin per split. Splits and bins are handled in 2 ways:
(a) "Unordered features": for multiclass classification with a low-arity feature (i.e., if isMulticlass && isSpaceSufficientForAllCategoricalSplits), the feature is split based on subsets of categories.
(b) "Ordered features": for regression, for binary classification, and for multiclass classification with a high-arity feature, there is one bin per category.
Training data: RDD of org.apache.spark.mllib.regression.LabeledPoint
Learning and dataset metadata
A tuple of (splits, bins). Splits is an Array of org.apache.spark.mllib.tree.model.Split of size (numFeatures, numSplits). Bins is an Array of org.apache.spark.mllib.tree.model.Bin of size (numFeatures, numBins).
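The sizing rules above can be sketched with plain arithmetic (a sketch of the counting logic only, not the actual Spark internals; the object and function names are illustrative):

```scala
object SplitCounts {
  // Continuous feature: numBins - 1 candidate thresholds.
  def continuousSplits(numBins: Int): Int = numBins - 1

  // Unordered categorical feature with k categories: each split is a subset
  // of categories; complementary subsets yield the same binary decision and
  // the empty/full subsets are useless, giving 2^(k-1) - 1 distinct splits.
  def unorderedCategoricalSplits(k: Int): Int = (1 << (k - 1)) - 1

  // Ordered categorical feature: one bin per category.
  def orderedCategoricalBins(k: Int): Int = k
}
```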
Method to train a decision tree model. The method supports binary and multiclass classification and regression.
Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
classification or regression
criterion used for information gain calculation
Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
number of classes for classification. Default value of 2.
maximum number of bins used for splitting features
algorithm for calculating quantiles
Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
DecisionTreeModel that can be used for prediction
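A sketch of calling this fully parameterized overload (assumes a live SparkContext `sc`; the toy data and parameter values are illustrative):

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo
import org.apache.spark.mllib.tree.configuration.QuantileStrategy
import org.apache.spark.mllib.tree.impurity.Gini

// Toy binary-classification data; labels take values in {0, 1}.
val data = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(1.0, 0.0))))

// Feature 1 is categorical with 2 categories {0, 1}; feature 0 is continuous.
val model = DecisionTree.train(
  data,
  Algo.Classification,   // algo
  Gini,                  // impurity
  5,                     // maxDepth
  2,                     // numClasses
  32,                    // maxBins
  QuantileStrategy.Sort, // quantileCalculationStrategy
  Map(1 -> 2))           // categoricalFeaturesInfo
```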
Method to train a decision tree model. The method supports binary and multiclass classification and regression.
Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
algorithm, classification or regression
impurity criterion used for information gain calculation
Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
number of classes for classification. Default value of 2.
DecisionTreeModel that can be used for prediction
Method to train a decision tree model. The method supports binary and multiclass classification and regression.
Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
algorithm, classification or regression
impurity criterion used for information gain calculation
Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes.
DecisionTreeModel that can be used for prediction
Method to train a decision tree model. The method supports binary and multiclass classification and regression.
Note: Using org.apache.spark.mllib.tree.DecisionTree$#trainClassifier and org.apache.spark.mllib.tree.DecisionTree$#trainRegressor is recommended to clearly separate classification and regression.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. For classification, labels should take values {0, 1, ..., numClasses-1}. For regression, labels are real numbers.
The configuration parameters for the tree algorithm, specifying the type of algorithm (classification, regression, etc.), feature types (continuous, categorical), maximum depth of the tree, quantile calculation strategy, etc.
DecisionTreeModel that can be used for prediction
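The Strategy-based overload can be sketched as follows (assumes `trainingData: RDD[LabeledPoint]` is already defined; parameter values are illustrative):

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.{Algo, Strategy}
import org.apache.spark.mllib.tree.impurity.Gini

// Bundle all tuning parameters into a single Strategy object.
val strategy = new Strategy(
  algo = Algo.Classification,
  impurity = Gini,
  maxDepth = 5,
  numClasses = 2,
  maxBins = 32,
  categoricalFeaturesInfo = Map(0 -> 3)) // feature 0: 3 categories

val model = DecisionTree.train(trainingData, strategy)
```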
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree$#trainClassifier
Method to train a decision tree model for binary or multiclass classification.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}.
number of classes for classification.
Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
Criterion used for information gain calculation. Supported values: "gini" (recommended) or "entropy".
Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 5)
maximum number of bins used for splitting features (suggested value: 32)
DecisionTreeModel that can be used for prediction
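A sketch of a typical trainClassifier call (assumes `trainingData: RDD[LabeledPoint]` and a test point `testPoint: LabeledPoint` are already defined; parameter values follow the suggestions above):

```scala
import org.apache.spark.mllib.tree.DecisionTree

// trainingData labels take values in {0, ..., numClasses - 1}.
val model = DecisionTree.trainClassifier(
  trainingData,
  numClasses = 2,
  categoricalFeaturesInfo = Map.empty[Int, Int], // all features continuous
  impurity = "gini",
  maxDepth = 5,
  maxBins = 32)

// Predict on a single feature vector.
val prediction = model.predict(testPoint.features)
```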
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree$#trainRegressor
Method to train a decision tree model for regression.
Training dataset: RDD of org.apache.spark.mllib.regression.LabeledPoint. Labels are real numbers.
Map storing arity of categorical features. E.g., an entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, ..., k-1}.
Criterion used for information gain calculation. The only supported value is "variance".
Maximum depth of the tree. E.g., depth 0 means 1 leaf node; depth 1 means 1 internal node + 2 leaf nodes. (suggested value: 5)
maximum number of bins used for splitting features (suggested value: 32)
DecisionTreeModel that can be used for prediction
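A sketch of a typical trainRegressor call (assumes `trainingData: RDD[LabeledPoint]` is already defined; parameter values follow the suggestions above):

```scala
import org.apache.spark.mllib.tree.DecisionTree

// trainingData labels are real numbers.
val model = DecisionTree.trainRegressor(
  trainingData,
  categoricalFeaturesInfo = Map.empty[Int, Int], // all features continuous
  impurity = "variance", // the only supported impurity for regression
  maxDepth = 5,
  maxBins = 32)
```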