:: Experimental :: Configuration options for org.apache.spark.mllib.tree.GradientBoostedTrees.
Parameters for the tree algorithm. Regression and binary classification are supported for boosting; the impurity setting is ignored.
Loss function used for minimization during gradient boosting.
Number of iterations of boosting. In other words, the number of weak hypotheses used in the final model.
Learning rate for shrinking the contribution of each estimator. The learning rate should be in the interval (0, 1].
Useful when runWithValidation is used: if the change in error rate on the validation input between two iterations is less than validationTol, training stops early. Ignored when run is used.
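The options above can be set through a `BoostingStrategy` instance. A minimal sketch, assuming the standard Spark MLlib API (`BoostingStrategy.defaultParams` and `GradientBoostedTrees.train`) and a pre-existing `trainingData` RDD of `LabeledPoint`s:

```scala
import org.apache.spark.mllib.tree.GradientBoostedTrees
import org.apache.spark.mllib.tree.configuration.BoostingStrategy

// Start from the defaults for a regression task ("Classification" is the
// other supported option) and override the parameters described above.
val boostingStrategy = BoostingStrategy.defaultParams("Regression")
boostingStrategy.numIterations = 10    // number of weak hypotheses
boostingStrategy.learningRate = 0.1    // must lie in (0, 1]
boostingStrategy.treeStrategy.maxDepth = 3  // tree parameters; impurity is ignored

// trainingData: RDD[LabeledPoint] (assumed to exist in scope)
val model = GradientBoostedTrees.train(trainingData, boostingStrategy)
```

To make use of `validationTol`, call `runWithValidation` with a separate validation RDD instead of `train`/`run`; with `run`, the tolerance is ignored and all `numIterations` rounds execute.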