---
layout: global
title: Ensembles - RDD-based API
displayTitle: Ensembles - RDD-based API
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

* Table of contents
{:toc}

An [ensemble method](http://en.wikipedia.org/wiki/Ensemble_learning)
is a learning algorithm which creates a model composed of a set of other base models.
`spark.mllib` supports two major ensemble algorithms: [`GradientBoostedTrees`](api/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.html) and [`RandomForest`](api/scala/org/apache/spark/mllib/tree/RandomForest$.html).
Both use [decision trees](mllib-decision-tree.html) as their base models.

## Gradient-Boosted Trees vs. Random Forests

Both [Gradient-Boosted Trees (GBTs)](mllib-ensembles.html#gradient-boosted-trees-gbts) and [Random Forests](mllib-ensembles.html#random-forests) are algorithms for learning ensembles of trees, but the training processes are different. There are several practical trade-offs:

* GBTs train one tree at a time, so they can take longer to train than random forests, which can train multiple trees in parallel.
* On the other hand, it is often reasonable to use smaller (shallower) trees with GBTs than with Random Forests, and training smaller trees takes less time.
* Random Forests can be less prone to overfitting. Training more trees in a Random Forest reduces the likelihood of overfitting, but training more trees with GBTs increases the likelihood of overfitting. (In statistical language, Random Forests reduce variance by using more trees, whereas GBTs reduce bias by using more trees.)
* Random Forests can be easier to tune since performance improves monotonically with the number of trees (whereas performance can start to decrease for GBTs if the number of trees grows too large).

In short, both algorithms can be effective, and the choice should be based on the particular dataset.

## Random Forests

[Random forests](http://en.wikipedia.org/wiki/Random_forest)
are ensembles of [decision trees](mllib-decision-tree.html).
Random forests are one of the most successful machine learning models for classification and
regression. They combine many decision trees in order to reduce the risk of overfitting.
Like decision trees, random forests handle categorical features,
extend to the multiclass classification setting, do not require
feature scaling, and are able to capture non-linearities and feature interactions.

`spark.mllib` supports random forests for binary and multiclass classification and for regression,
using both continuous and categorical features.
`spark.mllib` implements random forests using the existing [decision tree](mllib-decision-tree.html)
implementation. Please see the decision tree guide for more information on trees.

### Basic algorithm

Random forests train a set of decision trees separately, so the training can be done in parallel.
The algorithm injects randomness into the training process so that each decision tree is a bit
different. Combining the predictions from each tree reduces the variance of the predictions,
improving the performance on test data.

#### Training

The randomness injected into the training process includes:

* Subsampling the original dataset on each iteration to get a different training set (a.k.a. bootstrapping).
* Considering different random subsets of features to split on at each tree node.

Apart from these randomizations, decision tree training is done in the same way as for individual decision trees.

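The two sources of randomness above can be sketched in plain Python. This is an illustration only; the function names `bootstrap_sample` and `feature_subset` are ours, not part of the `spark.mllib` API:

```python
import random

def bootstrap_sample(data, subsampling_rate=1.0, seed=None):
    """Draw len(data) * subsampling_rate points with replacement
    (the bootstrap used to build each tree's training set)."""
    rng = random.Random(seed)
    n = int(len(data) * subsampling_rate)
    return [rng.choice(data) for _ in range(n)]

def feature_subset(num_features, num_candidates, seed=None):
    """Pick a random subset of feature indices to consider as split
    candidates at a single tree node."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_features), num_candidates))
```

Each tree sees a different bootstrap sample, and each node within a tree considers a different random feature subset, which is what decorrelates the trees in the forest.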
#### Prediction

To make a prediction on a new instance, a random forest must aggregate the predictions from its set of decision trees. This aggregation is done differently for classification and regression.

*Classification*: Majority vote. Each tree's prediction is counted as a vote for one class. The label is predicted to be the class which receives the most votes.

*Regression*: Averaging. Each tree predicts a real value. The label is predicted to be the average of the tree predictions.

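The two aggregation rules can be written down directly (a plain-Python sketch; the function names are ours, not `spark.mllib` API):

```python
from collections import Counter

def predict_classification(tree_predictions):
    """Majority vote: the class receiving the most votes wins."""
    votes = Counter(tree_predictions)
    return votes.most_common(1)[0][0]

def predict_regression(tree_predictions):
    """Averaging: the mean of the individual tree predictions."""
    return sum(tree_predictions) / len(tree_predictions)
```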
### Usage tips

We include a few guidelines for using random forests by discussing the various parameters.
We omit some decision tree parameters since those are covered in the [decision tree guide](mllib-decision-tree.html).

The first two parameters we mention are the most important, and tuning them can often improve performance:

* **`numTrees`**: Number of trees in the forest.
  * Increasing the number of trees will decrease the variance in predictions, improving the model's test-time accuracy.
  * Training time increases roughly linearly in the number of trees.

* **`maxDepth`**: Maximum depth of each tree in the forest.
  * Increasing the depth makes the model more expressive and powerful. However, deep trees take longer to train and are also more prone to overfitting.
  * In general, it is acceptable to train deeper trees when using random forests than when using a single decision tree. One tree is more likely to overfit than a random forest (because of the variance reduction from averaging multiple trees in the forest).

The next two parameters generally do not require tuning. However, they can be tuned to speed up training.

* **`subsamplingRate`**: This parameter specifies the size of the dataset used for training each tree in the forest, as a fraction of the size of the original dataset. The default (1.0) is recommended, but decreasing this fraction can speed up training.

* **`featureSubsetStrategy`**: Number of features to use as candidates for splitting at each tree node. The number is specified as a fraction or function of the total number of features. Decreasing this number will speed up training, but can hurt predictive performance if set too low.

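To make the `featureSubsetStrategy` options concrete, the sketch below approximates how a strategy string maps to a per-node candidate count. This is a simplified re-implementation of the common conventions ("auto", "all", "sqrt", "log2", "onethird"), not the `spark.mllib` code itself, and the actual defaults may differ:

```python
import math

def num_candidate_features(strategy, num_features, task):
    """Approximate number of features considered per node for a given
    featureSubsetStrategy (illustrative, not the spark.mllib source)."""
    if strategy == "auto":
        # "auto" picks a task-dependent default.
        strategy = "sqrt" if task == "classification" else "onethird"
    if strategy == "all":
        return num_features
    if strategy == "sqrt":
        return max(1, int(math.sqrt(num_features)))
    if strategy == "log2":
        return max(1, int(math.log2(num_features)))
    if strategy == "onethird":
        return max(1, int(num_features / 3.0))
    raise ValueError("unknown strategy: %s" % strategy)
```

For example, with 100 features, "sqrt" considers about 10 candidates per node, which is why smaller strategies train faster than "all".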
### Examples

#### Classification

The example below demonstrates how to load a
[LIBSVM data file](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/),
parse it as an RDD of `LabeledPoint` and then
perform classification using a Random Forest.
The test error is calculated to measure the algorithm accuracy.

<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [`RandomForest` Scala docs](api/scala/org/apache/spark/mllib/tree/RandomForest$.html) and [`RandomForestModel` Scala docs](api/scala/org/apache/spark/mllib/tree/model/RandomForestModel.html) for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/RandomForestClassificationExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`RandomForest` Java docs](api/java/org/apache/spark/mllib/tree/RandomForest.html) and [`RandomForestModel` Java docs](api/java/org/apache/spark/mllib/tree/model/RandomForestModel.html) for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaRandomForestClassificationExample.java %}
</div>

<div data-lang="python" markdown="1">
Refer to the [`RandomForest` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.RandomForest) and [`RandomForestModel` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.RandomForestModel) for more details on the API.

{% include_example python/mllib/random_forest_classification_example.py %}
</div>

</div>

#### Regression

The example below demonstrates how to load a
[LIBSVM data file](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/),
parse it as an RDD of `LabeledPoint` and then
perform regression using a Random Forest.
The Mean Squared Error (MSE) is computed at the end to evaluate
[goodness of fit](http://en.wikipedia.org/wiki/Goodness_of_fit).

<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [`RandomForest` Scala docs](api/scala/org/apache/spark/mllib/tree/RandomForest$.html) and [`RandomForestModel` Scala docs](api/scala/org/apache/spark/mllib/tree/model/RandomForestModel.html) for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/RandomForestRegressionExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`RandomForest` Java docs](api/java/org/apache/spark/mllib/tree/RandomForest.html) and [`RandomForestModel` Java docs](api/java/org/apache/spark/mllib/tree/model/RandomForestModel.html) for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaRandomForestRegressionExample.java %}
</div>

<div data-lang="python" markdown="1">
Refer to the [`RandomForest` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.RandomForest) and [`RandomForestModel` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.RandomForestModel) for more details on the API.

{% include_example python/mllib/random_forest_regression_example.py %}
</div>

</div>

## Gradient-Boosted Trees (GBTs)

[Gradient-Boosted Trees (GBTs)](http://en.wikipedia.org/wiki/Gradient_boosting)
are ensembles of [decision trees](mllib-decision-tree.html).
GBTs iteratively train decision trees in order to minimize a loss function.
Like decision trees, GBTs handle categorical features,
extend to the multiclass classification setting, do not require
feature scaling, and are able to capture non-linearities and feature interactions.

`spark.mllib` supports GBTs for binary classification and for regression,
using both continuous and categorical features.
`spark.mllib` implements GBTs using the existing [decision tree](mllib-decision-tree.html) implementation. Please see the decision tree guide for more information on trees.

*Note*: GBTs do not yet support multiclass classification. For multiclass problems, please use
[decision trees](mllib-decision-tree.html) or [Random Forests](mllib-ensembles.html#random-forests).

### Basic algorithm

Gradient boosting iteratively trains a sequence of decision trees.
On each iteration, the algorithm uses the current ensemble to predict the label of each training instance and then compares the prediction with the true label. The dataset is re-labeled to put more emphasis on training instances with poor predictions. Thus, in the next iteration, the decision tree will help correct for previous mistakes.

The specific mechanism for re-labeling instances is defined by a loss function (discussed below). With each iteration, GBTs further reduce this loss function on the training data.

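The iteration above can be sketched for the squared-error case. This is a toy illustration in which constant base learners stand in for decision trees; `fit_constant` and `gradient_boost` are our names, not `spark.mllib` API. Each round fits the residuals of the current ensemble, which for squared error are proportional to the negative gradient of the loss:

```python
def fit_constant(residuals):
    """The best constant predictor under squared error is the mean."""
    return sum(residuals) / len(residuals)

def gradient_boost(y, num_iterations, learning_rate=1.0):
    """Toy gradient boosting for regression with squared-error loss.
    Each iteration fits a base learner to the current residuals and
    adds a shrunken copy of it to the ensemble."""
    predictions = [0.0] * len(y)
    for _ in range(num_iterations):
        residuals = [yi - pi for yi, pi in zip(y, predictions)]
        step = fit_constant(residuals)  # base learner fit to residuals
        predictions = [pi + learning_rate * step for pi in predictions]
    return predictions
```

A smaller `learning_rate` shrinks each tree's contribution, so the ensemble approaches the target more gradually and is less prone to overfitting the training set.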
#### Losses

The table below lists the losses currently supported by GBTs in `spark.mllib`.
Note that each loss is applicable to one of classification or regression, not both.

Notation: $N$ = number of instances. $y_i$ = label of instance $i$. $x_i$ = features of instance $i$. $F(x_i)$ = model's predicted label for instance $i$.

<table class="table">
  <thead>
    <tr><th>Loss</th><th>Task</th><th>Formula</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr>
      <td>Log Loss</td>
      <td>Classification</td>
      <td>$2 \sum_{i=1}^{N} \log(1+\exp(-2 y_i F(x_i)))$</td><td>Twice binomial negative log likelihood.</td>
    </tr>
    <tr>
      <td>Squared Error</td>
      <td>Regression</td>
      <td>$\sum_{i=1}^{N} (y_i - F(x_i))^2$</td><td>Also called L2 loss. Default loss for regression tasks.</td>
    </tr>
    <tr>
      <td>Absolute Error</td>
      <td>Regression</td>
      <td>$\sum_{i=1}^{N} |y_i - F(x_i)|$</td><td>Also called L1 loss. Can be more robust to outliers than Squared Error.</td>
    </tr>
  </tbody>
</table>

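The formulas in the table translate directly into code. The sketch below is plain Python for illustration (labels for log loss are assumed to be in {-1, +1}, matching the formula's $y_i F(x_i)$ margin):

```python
import math

def log_loss(labels, predictions):
    """Classification loss: 2 * sum log(1 + exp(-2 y F(x))),
    with labels y in {-1, +1}."""
    return 2.0 * sum(math.log1p(math.exp(-2.0 * y * f))
                     for y, f in zip(labels, predictions))

def squared_error(labels, predictions):
    """L2 loss: sum (y - F(x))^2."""
    return sum((y - f) ** 2 for y, f in zip(labels, predictions))

def absolute_error(labels, predictions):
    """L1 loss: sum |y - F(x)|."""
    return sum(abs(y - f) for y, f in zip(labels, predictions))
```

Note how log loss approaches zero as the margin $y_i F(x_i)$ grows large and positive, i.e. as the model becomes confidently correct.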
### Usage tips

We include a few guidelines for using GBTs by discussing the various parameters.
We omit some decision tree parameters since those are covered in the [decision tree guide](mllib-decision-tree.html).

* **`loss`**: See the section above for information on losses and their applicability to tasks (classification vs. regression). Different losses can give significantly different results, depending on the dataset.

* **`numIterations`**: This sets the number of trees in the ensemble. Each iteration produces one tree. Increasing this number makes the model more expressive, improving training data accuracy. However, test-time accuracy may suffer if this is too large.

* **`learningRate`**: This parameter should not need to be tuned. If the algorithm behavior seems unstable, decreasing this value may improve stability.

* **`algo`**: The algorithm or task (classification vs. regression) is set using the tree `Strategy` parameter.

#### Validation while training

Gradient boosting can overfit when trained with more trees. In order to prevent overfitting, it is useful to validate while
training. The method `runWithValidation` has been provided to make use of this option. It takes a pair of RDDs as arguments, the
first one being the training dataset and the second being the validation dataset.

The training is stopped when the improvement in the validation error is not more than a certain tolerance
(supplied by the `validationTol` argument in `BoostingStrategy`). In practice, the validation error
decreases initially and later increases. There might be cases in which the validation error does not change monotonically,
and the user is advised to set a large enough negative tolerance and examine the validation curve using `evaluateEachIteration`
(which gives the error or loss per iteration) to tune the number of iterations.

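The stopping rule can be sketched as follows. This is an illustrative simplification of `validationTol`-style stopping; `early_stopping_index` is our name, and the actual `runWithValidation` logic may differ in details:

```python
def early_stopping_index(validation_errors, tol):
    """Return the number of iterations to keep: stop at the first
    iteration whose improvement over the previous one is <= tol."""
    for i in range(1, len(validation_errors)):
        improvement = validation_errors[i - 1] - validation_errors[i]
        if improvement <= tol:
            return i
    return len(validation_errors)
```

With `tol = 0.0`, training stops as soon as the validation error stops improving; a negative tolerance lets training continue through small, non-monotonic fluctuations so the full validation curve can be examined afterwards.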
### Examples

#### Classification

The example below demonstrates how to load a
[LIBSVM data file](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/),
parse it as an RDD of `LabeledPoint` and then
perform classification using Gradient-Boosted Trees with log loss.
The test error is calculated to measure the algorithm accuracy.

<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [`GradientBoostedTrees` Scala docs](api/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.html) and [`GradientBoostedTreesModel` Scala docs](api/scala/org/apache/spark/mllib/tree/model/GradientBoostedTreesModel.html) for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/GradientBoostingClassificationExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`GradientBoostedTrees` Java docs](api/java/org/apache/spark/mllib/tree/GradientBoostedTrees.html) and [`GradientBoostedTreesModel` Java docs](api/java/org/apache/spark/mllib/tree/model/GradientBoostedTreesModel.html) for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaGradientBoostingClassificationExample.java %}
</div>

<div data-lang="python" markdown="1">
Refer to the [`GradientBoostedTrees` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.GradientBoostedTrees) and [`GradientBoostedTreesModel` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.GradientBoostedTreesModel) for more details on the API.

{% include_example python/mllib/gradient_boosting_classification_example.py %}
</div>

</div>

#### Regression

The example below demonstrates how to load a
[LIBSVM data file](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/),
parse it as an RDD of `LabeledPoint` and then
perform regression using Gradient-Boosted Trees with Squared Error as the loss.
The Mean Squared Error (MSE) is computed at the end to evaluate
[goodness of fit](http://en.wikipedia.org/wiki/Goodness_of_fit).

<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [`GradientBoostedTrees` Scala docs](api/scala/org/apache/spark/mllib/tree/GradientBoostedTrees.html) and [`GradientBoostedTreesModel` Scala docs](api/scala/org/apache/spark/mllib/tree/model/GradientBoostedTreesModel.html) for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/GradientBoostingRegressionExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`GradientBoostedTrees` Java docs](api/java/org/apache/spark/mllib/tree/GradientBoostedTrees.html) and [`GradientBoostedTreesModel` Java docs](api/java/org/apache/spark/mllib/tree/model/GradientBoostedTreesModel.html) for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaGradientBoostingRegressionExample.java %}
</div>

<div data-lang="python" markdown="1">
Refer to the [`GradientBoostedTrees` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.GradientBoostedTrees) and [`GradientBoostedTreesModel` Python docs](api/python/pyspark.mllib.html#pyspark.mllib.tree.GradientBoostedTreesModel) for more details on the API.

{% include_example python/mllib/gradient_boosting_regression_example.py %}
</div>

</div>