public class DecisionTreeClassificationModel extends ProbabilisticClassificationModel<Vector,DecisionTreeClassificationModel> implements DecisionTreeModel, DecisionTreeClassifierParams, MLWritable, scala.Serializable
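A model of this class is not constructed directly; it is produced by fitting a DecisionTreeClassifier. Below is a minimal, spark-shell style sketch of how such a model is typically obtained (the input path and column names are placeholders, not part of this API):

```scala
import org.apache.spark.ml.classification.{DecisionTreeClassificationModel, DecisionTreeClassifier}

// Placeholder training data with "label" and "features" (Vector) columns.
val training = spark.read.format("libsvm").load("data/sample_libsvm_data.txt")

val dt = new DecisionTreeClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setMaxDepth(5)
  .setImpurity("gini")

// Fitting the estimator yields a DecisionTreeClassificationModel.
val model: DecisionTreeClassificationModel = dt.fit(training)
```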
| Modifier and Type | Method and Description |
|---|---|
| `BooleanParam` | `cacheNodeIds()`: If false, the algorithm will pass trees to executors to match instances with nodes. |
| `IntParam` | `checkpointInterval()`: Param for the checkpoint interval (>= 1), or -1 to disable checkpointing. |
| `DecisionTreeClassificationModel` | `copy(ParamMap extra)`: Creates a copy of this instance with the same UID and some extra params. |
| `int` | `depth()`: Depth of the tree. |
| `Vector` | `featureImportances()` |
| `Param<String>` | `impurity()`: Criterion used for information gain calculation (case-insensitive). |
| `Param<String>` | `leafCol()`: Leaf indices column name. |
| `static DecisionTreeClassificationModel` | `load(String path)` |
| `IntParam` | `maxBins()`: Maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node. |
| `IntParam` | `maxDepth()`: Maximum depth of the tree (nonnegative). |
| `IntParam` | `maxMemoryInMB()`: Maximum memory in MB allocated to histogram aggregation. |
| `DoubleParam` | `minInfoGain()`: Minimum information gain for a split to be considered at a tree node. |
| `IntParam` | `minInstancesPerNode()`: Minimum number of instances each child must have after a split. |
| `DoubleParam` | `minWeightFractionPerNode()`: Minimum fraction of the weighted sample count that each child must have after a split. |
| `int` | `numClasses()`: Number of classes (values the label can take). |
| `int` | `numFeatures()`: Returns the number of features the model was trained on. |
| `double` | `predict(Vector features)`: Predict the label for the given features. |
| `Vector` | `predictRaw(Vector features)`: Raw prediction for each possible label. |
| `static MLReader<DecisionTreeClassificationModel>` | `read()` |
| `Node` | `rootNode()`: Root of the decision tree. |
| `LongParam` | `seed()`: Param for the random seed. |
| `String` | `toString()`: Summary of the model. |
| `Dataset<Row>` | `transform(Dataset<?> dataset)`: Transforms the dataset by reading from `featuresCol` and appending new columns as specified by parameters: predicted labels as `predictionCol` of type Double, raw predictions (confidences) as `rawPredictionCol` of type Vector, and the probability of each class as `probabilityCol` of type Vector. |
| `StructType` | `transformSchema(StructType schema)`: Check transform validity and derive the output schema from the input schema. |
| `String` | `uid()`: An immutable unique ID for the object and its derivatives. |
| `Param<String>` | `weightCol()`: Param for the weight column name. |
| `MLWriter` | `write()`: Returns an `MLWriter` instance for this ML instance. |
Methods inherited from class ProbabilisticClassificationModel: normalizeToProbabilitiesInPlace, predictProbability, probabilityCol, setProbabilityCol, setThresholds, thresholds
Methods inherited from class ClassificationModel: rawPredictionCol, setRawPredictionCol, transformImpl
Methods inherited from class PredictionModel: featuresCol, labelCol, predictionCol, setFeaturesCol, setPredictionCol
Methods inherited from class Transformer: transform, transform, transform
Methods inherited from class PipelineStage: params
Methods inherited from interface DecisionTreeModel: getLeafField, leafIterator, maxSplitFeatureIndex, numNodes, predictLeaf, toDebugString
Methods inherited from interface DecisionTreeClassifierParams: validateAndTransformSchema
Methods inherited from interface DecisionTreeParams: getCacheNodeIds, getLeafCol, getMaxBins, getMaxDepth, getMaxMemoryInMB, getMinInfoGain, getMinInstancesPerNode, getMinWeightFractionPerNode, getOldStrategy, setLeafCol
Methods inherited from interface HasCheckpointInterval: getCheckpointInterval
Methods inherited from interface HasWeightCol: getWeightCol
Methods inherited from interface TreeClassifierParams: getImpurity, getOldImpurity
Methods inherited from interface PredictorParams: extractInstances
Methods inherited from interface ClassifierParams: extractInstances, extractInstances
Methods inherited from interface HasLabelCol: getLabelCol, labelCol
Methods inherited from interface HasFeaturesCol: featuresCol, getFeaturesCol
Methods inherited from interface HasPredictionCol: getPredictionCol, predictionCol
Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface HasRawPredictionCol: getRawPredictionCol, rawPredictionCol
Methods inherited from interface HasProbabilityCol: getProbabilityCol, probabilityCol
Methods inherited from interface HasThresholds: getThresholds, thresholds
Methods inherited from interface MLWritable: save
Methods inherited from interface org.apache.spark.internal.Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public static MLReader<DecisionTreeClassificationModel> read()
public static DecisionTreeClassificationModel load(String path)
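For example, a previously saved model can be reloaded from a path; read() exposes the MLReader that load(String) delegates to. The path below is a placeholder:

```scala
import org.apache.spark.ml.classification.DecisionTreeClassificationModel

// Two equivalent ways to reload a saved model; "/tmp/dtc-model" is a placeholder path.
val model = DecisionTreeClassificationModel.load("/tmp/dtc-model")
val sameModel = DecisionTreeClassificationModel.read.load("/tmp/dtc-model")
```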
public final Param<String> impurity()
Criterion used for information gain calculation (case-insensitive).
Specified by: impurity in interface TreeClassifierParams

public final Param<String> leafCol()
Leaf indices column name.
Specified by: leafCol in interface DecisionTreeParams

public final IntParam maxDepth()
Maximum depth of the tree (nonnegative).
Specified by: maxDepth in interface DecisionTreeParams

public final IntParam maxBins()
Maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node.
Specified by: maxBins in interface DecisionTreeParams

public final IntParam minInstancesPerNode()
Minimum number of instances each child must have after a split.
Specified by: minInstancesPerNode in interface DecisionTreeParams

public final DoubleParam minWeightFractionPerNode()
Minimum fraction of the weighted sample count that each child must have after a split.
Specified by: minWeightFractionPerNode in interface DecisionTreeParams

public final DoubleParam minInfoGain()
Minimum information gain for a split to be considered at a tree node.
Specified by: minInfoGain in interface DecisionTreeParams

public final IntParam maxMemoryInMB()
Maximum memory in MB allocated to histogram aggregation.
Specified by: maxMemoryInMB in interface DecisionTreeParams

public final BooleanParam cacheNodeIds()
If false, the algorithm will pass trees to executors to match instances with nodes.
Specified by: cacheNodeIds in interface DecisionTreeParams

public final Param<String> weightCol()
Param for the weight column name.
Specified by: weightCol in interface HasWeightCol

public final LongParam seed()
Param for the random seed.
Specified by: seed in interface HasSeed

public final IntParam checkpointInterval()
Param for the checkpoint interval (>= 1), or -1 to disable checkpointing.
Specified by: checkpointInterval in interface HasCheckpointInterval
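Assuming `model` is a fitted DecisionTreeClassificationModel (as in the earlier sketch), the params above are read back through the getters inherited from DecisionTreeParams and the Has* param interfaces; a small sketch:

```scala
// Inspect the parameter values carried by the model (defaults unless set on the estimator).
println(model.getMaxDepth)            // e.g. 5
println(model.getMaxBins)             // e.g. 32
println(model.getImpurity)            // e.g. "gini"
println(model.getMinInstancesPerNode) // e.g. 1
println(model.getCheckpointInterval)  // e.g. 10

// Or dump every param with its documentation and current value:
println(model.explainParams())
```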
public int depth()
Depth of the tree.
Specified by: depth in interface DecisionTreeModel

public String uid()
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable

public Node rootNode()
Root of the decision tree.
Specified by: rootNode in interface DecisionTreeModel

public int numFeatures()
Returns the number of features the model was trained on.
Overrides: numFeatures in class PredictionModel<Vector,DecisionTreeClassificationModel>

public int numClasses()
Number of classes (values the label can take).
Specified by: numClasses in class ClassificationModel<Vector,DecisionTreeClassificationModel>
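Assuming `model` is a fitted instance, these accessors can be combined to inspect the learned tree structure; a sketch (InternalNode and LeafNode are the concrete Node subclasses in org.apache.spark.ml.tree):

```scala
import org.apache.spark.ml.tree.{InternalNode, LeafNode}

println(s"depth=${model.depth}, numNodes=${model.numNodes}")
println(s"numFeatures=${model.numFeatures}, numClasses=${model.numClasses}")

// Walk the tree starting at rootNode.
model.rootNode match {
  case n: InternalNode => println(s"root splits on feature ${n.split.featureIndex}")
  case l: LeafNode     => println(s"single-leaf tree predicting class ${l.prediction}")
}

// For a full textual description of the tree: println(model.toDebugString)
```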
public double predict(Vector features)
Description copied from class: ClassificationModel
Predict the label for the given features. This method is used to implement transform() and output predictionCol. This default implementation for classification predicts the index of the maximum value from predictRaw().
Overrides: predict in class ClassificationModel<Vector,DecisionTreeClassificationModel>
Parameters: features - (undocumented)

public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.
Overrides: transformSchema in class ProbabilisticClassificationModel<Vector,DecisionTreeClassificationModel>
Parameters: schema - (undocumented)

public Dataset<Row> transform(Dataset<?> dataset)
Description copied from class: ProbabilisticClassificationModel
Transforms the dataset by reading from featuresCol and appending new columns as specified by parameters:
- predicted labels as predictionCol of type Double
- raw predictions (confidences) as rawPredictionCol of type Vector
- probability of each class as probabilityCol of type Vector
Overrides: transform in class ProbabilisticClassificationModel<Vector,DecisionTreeClassificationModel>
Parameters: dataset - input dataset

public Vector predictRaw(Vector features)
Description copied from class: ClassificationModel
Raw prediction for each possible label. This method is used to implement transform() and output rawPredictionCol.
Specified by: predictRaw in class ClassificationModel<Vector,DecisionTreeClassificationModel>
Parameters: features - (undocumented)

public DecisionTreeClassificationModel copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Model<DecisionTreeClassificationModel>
Parameters: extra - (undocumented)

public String toString()
Description copied from interface: DecisionTreeModel
Summary of the model.
Specified by: toString in interface DecisionTreeModel
Specified by: toString in interface Identifiable
Overrides: toString in class Object
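A sketch of batch scoring with transform(...) and single-vector scoring with predict(...) / predictRaw(...). It assumes `model` is a fitted instance, `test` is a placeholder DataFrame with a Vector-typed features column, and the dense vector below is a placeholder whose length must match numFeatures():

```scala
import org.apache.spark.ml.linalg.Vectors

// Batch scoring: appends prediction, rawPrediction and probability columns.
val predictions = model.transform(test)
predictions.select("prediction", "rawPrediction", "probability").show(5)

// Single-vector scoring (placeholder feature values).
val features = Vectors.dense(0.0, 1.0, 2.0)
val label  = model.predict(features)     // index of the most confident class
val counts = model.predictRaw(features)  // raw confidence for each possible label
```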
public Vector featureImportances()
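featureImportances returns a Vector with one entry per feature; a sketch of ranking features by importance, assuming `model` is a fitted instance:

```scala
// Pair each importance with its feature index and print the five largest.
model.featureImportances.toArray
  .zipWithIndex
  .sortBy { case (importance, _) => -importance }
  .take(5)
  .foreach { case (importance, idx) => println(f"feature $idx%d: $importance%.4f") }
```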
public MLWriter write()
Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
Specified by: write in interface MLWritable
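A sketch of persisting a fitted model with write(); the target path is a placeholder. The save(path) shorthand inherited from MLWritable does the same but fails if the path already exists:

```scala
// Persist the fitted model; overwrite() replaces any existing data at the path.
model.write.overwrite().save("/tmp/dtc-model")
```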