0001 ---
0002 layout: global
0003 title: Optimization - RDD-based API
0004 displayTitle: Optimization - RDD-based API
0005 license: |
0006 Licensed to the Apache Software Foundation (ASF) under one or more
0007 contributor license agreements. See the NOTICE file distributed with
0008 this work for additional information regarding copyright ownership.
0009 The ASF licenses this file to You under the Apache License, Version 2.0
0010 (the "License"); you may not use this file except in compliance with
0011 the License. You may obtain a copy of the License at
0012
0013 http://www.apache.org/licenses/LICENSE-2.0
0014
0015 Unless required by applicable law or agreed to in writing, software
0016 distributed under the License is distributed on an "AS IS" BASIS,
0017 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
0018 See the License for the specific language governing permissions and
0019 limitations under the License.
0020 ---
0021
0022 * Table of contents
0023 {:toc}
0024
0025 `\[
0026 \newcommand{\R}{\mathbb{R}}
0027 \newcommand{\E}{\mathbb{E}}
0028 \newcommand{\x}{\mathbf{x}}
0029 \newcommand{\y}{\mathbf{y}}
0030 \newcommand{\wv}{\mathbf{w}}
0031 \newcommand{\av}{\mathbf{\alpha}}
0032 \newcommand{\bv}{\mathbf{b}}
0033 \newcommand{\N}{\mathbb{N}}
0034 \newcommand{\id}{\mathbf{I}}
0035 \newcommand{\ind}{\mathbf{1}}
0036 \newcommand{\0}{\mathbf{0}}
0037 \newcommand{\unit}{\mathbf{e}}
0038 \newcommand{\one}{\mathbf{1}}
0039 \newcommand{\zero}{\mathbf{0}}
0040 \]`
0041
0042
0043
0044 ## Mathematical description
0045
0046 ### Gradient descent
0047 The simplest method to solve optimization problems of the form `$\min_{\wv \in\R^d} \; f(\wv)$`
0048 is [gradient descent](http://en.wikipedia.org/wiki/Gradient_descent).
0049 Such first-order optimization methods (including gradient descent and stochastic variants
0050 thereof) are well-suited for large-scale and distributed computation.
0051
0052 Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in
0053 the direction of steepest descent, which is the negative of the derivative (called the
0054 [gradient](http://en.wikipedia.org/wiki/Gradient)) of the function at the current point, i.e., at
0055 the current parameter value.
0056 If the objective function `$f$` is not differentiable at all arguments, but still convex, then a
0057 *sub-gradient*
0058 is the natural generalization of the gradient, and assumes the role of the step direction.
0059 In any case, computing a gradient or sub-gradient of `$f$` is expensive --- it requires a full
0060 pass through the complete dataset, in order to compute the contributions from all loss terms.
0061
0062 ### Stochastic gradient descent (SGD)
0063 Optimization problems whose objective function `$f$` is written as a sum are particularly
0064 suitable to be solved using *stochastic gradient descent (SGD)*.
0065 In our case, for the optimization formulations commonly used in <a
0066 href="mllib-classification-regression.html">supervised machine learning</a>,
0067 `\begin{equation}
0068 f(\wv) :=
0069 \lambda\, R(\wv) +
0070 \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)
0071 \label{eq:regPrimal}
0072 \ .
0073 \end{equation}`
0074 this is especially natural, because the loss is written as an average of the individual losses
0075 coming from each datapoint.
0076
0077 A stochastic subgradient is a randomized choice of a vector, such that in expectation, we obtain
0078 a true subgradient of the original objective function.
0079 Picking one datapoint `$i\in[1..n]$` uniformly at random, we obtain a stochastic subgradient of
0080 `$\eqref{eq:regPrimal}$`, with respect to `$\wv$` as follows:
0081 `\[
0082 f'_{\wv,i} := L'_{\wv,i} + \lambda\, R'_\wv \ ,
0083 \]`
0084 where `$L'_{\wv,i} \in \R^d$` is a subgradient of the part of the loss function determined by the
0085 `$i$`-th datapoint, that is `$L'_{\wv,i} \in \frac{\partial}{\partial \wv} L(\wv;\x_i,y_i)$`.
0086 Furthermore, `$R'_\wv$` is a subgradient of the regularizer `$R(\wv)$`, i.e. `$R'_\wv \in
0087 \frac{\partial}{\partial \wv} R(\wv)$`. The term `$R'_\wv$` does not depend on which random
0088 datapoint is picked.
0089 Clearly, in expectation over the random choice of `$i\in[1..n]$`, we have that `$f'_{\wv,i}$` is
0090 a subgradient of the original objective `$f$`, meaning that `$\E\left[f'_{\wv,i}\right] \in
0091 \frac{\partial}{\partial \wv} f(\wv)$`.
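Taking the expectation over the uniform choice of `$i$` makes this explicit (a one-line check using the definitions above):
`\[
\E\left[f'_{\wv,i}\right]
= \frac1n \sum_{i=1}^n L'_{\wv,i} + \lambda\, R'_\wv
\ \in\ \frac{\partial}{\partial \wv} f(\wv) \ .
\]`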
0092
0093 Running SGD now simply becomes walking in the direction of the negative stochastic subgradient
0094 `$f'_{\wv,i}$`, that is
0095 `\begin{equation}\label{eq:SGDupdate}
0096 \wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv,i} \ .
0097 \end{equation}`
0098 **Step-size.**
0099 The parameter `$\gamma$` is the step-size, which in the default implementation is chosen
0100 decreasing with the square root of the iteration counter, i.e. `$\gamma := \frac{s}{\sqrt{t}}$`
0101 in the `$t$`-th iteration, with the input parameter `$s=$ stepSize`. Note that selecting the best
0102 step-size for SGD methods can often be delicate in practice and is a topic of active research.
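As a minimal illustration of the update `$\eqref{eq:SGDupdate}$` with this decaying step size, here is a plain-Scala sketch on a one-dimensional least-squares problem (not the MLlib implementation; all names here are hypothetical):

```scala
object SgdSketch {
  // Toy data generated from y = 2x, so the least-squares minimizer is w = 2.
  val data: Array[(Double, Double)] =
    Array.tabulate(100)(i => (i / 100.0, 2.0 * i / 100.0))

  // Run SGD with step size gamma = stepSize / sqrt(t) at iteration t.
  def run(stepSize: Double, numIterations: Int, seed: Long = 42L): Double = {
    val rnd = new scala.util.Random(seed)
    var w = 0.0
    for (t <- 1 to numIterations) {
      val (x, y) = data(rnd.nextInt(data.length))          // pick one point uniformly
      val stochasticSubgradient = 2.0 * (w * x - y) * x    // d/dw (w x - y)^2
      w -= stepSize / math.sqrt(t) * stochasticSubgradient // gamma = s / sqrt(t)
    }
    w
  }
}
```

With, e.g., `stepSize = 1.0` and a few thousand iterations, `w` approaches the minimizer `2.0`.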
0103
0104 **Gradients.**
A table of (sub)gradients of the machine learning methods implemented in `spark.mllib` is available in
the <a href="mllib-classification-regression.html">classification and regression</a> section.
0107
0108
0109 **Proximal Updates.**
As an alternative to simply using the subgradient `$R'(\wv)$` of the regularizer in the step
direction, an improved update can be obtained in some cases by using the proximal operator
instead.
0113 For the L1-regularizer, the proximal operator is given by soft thresholding, as implemented in
0114 [L1Updater](api/scala/org/apache/spark/mllib/optimization/L1Updater.html).
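For intuition, the soft-thresholding operator applied coordinate-wise can be sketched as follows (plain Scala, not the L1Updater code itself; `shrinkage` plays the role of the regularization parameter times the current step size):

```scala
// Proximal operator of the L1 norm: shrink each coordinate towards zero by
// `shrinkage`, and clip it to exactly zero once its magnitude falls below
// the threshold -- this is what produces sparse solutions.
def softThreshold(w: Array[Double], shrinkage: Double): Array[Double] =
  w.map(wi => math.signum(wi) * math.max(0.0, math.abs(wi) - shrinkage))
```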
0115
0116
0117 ### Update schemes for distributed SGD
0118 The SGD implementation in
0119 [GradientDescent](api/scala/org/apache/spark/mllib/optimization/GradientDescent.html) uses
0120 a simple (distributed) sampling of the data examples.
0121 We recall that the loss part of the optimization problem `$\eqref{eq:regPrimal}$` is
0122 `$\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)$`, and therefore `$\frac1n \sum_{i=1}^n L'_{\wv,i}$` would
0123 be the true (sub)gradient.
0124 Since this would require access to the full data set, the parameter `miniBatchFraction` specifies
0125 which fraction of the full data to use instead.
0126 The average of the gradients over this subset, i.e.
0127 `\[
0128 \frac1{|S|} \sum_{i\in S} L'_{\wv,i} \ ,
0129 \]`
0130 is a stochastic gradient. Here `$S$` is the sampled subset of size `$|S|=$ miniBatchFraction
0131 $\cdot n$`.
0132
In each iteration, the sampling over the distributed dataset
([RDD](rdd-programming-guide.html#resilient-distributed-datasets-rdds)), as well as the
computation of the sum of the partial results from each worker machine, is performed by
standard Spark routines.
0137
If the fraction of points `miniBatchFraction` is set to 1 (the default), then the resulting step in
each iteration is exact (sub)gradient descent. In this case, there is no randomness and no
variance in the step directions used.
At the other extreme, if `miniBatchFraction` is chosen so small that only a single point
is sampled, i.e. `$|S|=$ miniBatchFraction $\cdot n = 1$`, then the algorithm is equivalent to
standard SGD. In that case, the step direction depends on the uniformly random choice of the
sampled point.
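A plain-Scala sketch of one such mini-batch (sub)gradient evaluation (hypothetical names, least-squares loss on scalar data; the real implementation samples an RDD rather than a local array):

```scala
// Average the per-point gradients over a sampled fraction of the data.
def miniBatchGradient(
    data: Array[(Double, Double)],   // (x_i, y_i) pairs
    w: Double,
    miniBatchFraction: Double,
    rnd: scala.util.Random): Double = {
  val sampled = data.filter(_ => rnd.nextDouble() < miniBatchFraction)
  val batch = if (sampled.isEmpty) data else sampled  // guard against an empty sample
  // least-squares gradient for one point: d/dw (w x - y)^2 = 2 (w x - y) x
  batch.map { case (x, y) => 2.0 * (w * x - y) * x }.sum / batch.length
}
```

With `miniBatchFraction = 1.0` this reduces to the exact (sub)gradient; with a tiny fraction it degenerates towards single-point SGD.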
0145
0146 ### Limited-memory BFGS (L-BFGS)
[L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS) is an optimization
algorithm in the family of quasi-Newton methods that solves optimization problems of the form
`$\min_{\wv \in\R^d} \; f(\wv)$`. The L-BFGS method approximates the objective function locally as a
quadratic without evaluating the second partial derivatives of the objective function to construct the
Hessian matrix. Instead, the Hessian matrix is approximated from previous gradient evaluations, which
avoids the vertical scalability issue (in the number of training features) that arises when computing
the Hessian matrix explicitly, as in Newton's method. As a result, L-BFGS often achieves more rapid
convergence compared with other first-order optimization methods.
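The core of this approximation is the textbook L-BFGS two-loop recursion, sketched here in plain Scala (not Spark's implementation, which delegates to Breeze): it computes the quasi-Newton direction from the last few curvature pairs `$(s_k, y_k)$`, where `$s_k$` is the change in `$\wv$` and `$y_k$` the change in the gradient, without ever forming the Hessian.

```scala
// Two-loop recursion: given the current gradient and the stored curvature
// pairs (newest first), return the search direction -H * grad, where H is
// the implicit inverse-Hessian approximation.
def twoLoopDirection(
    grad: Array[Double],
    pairs: List[(Array[Double], Array[Double])]): Array[Double] = {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (u, v) => u * v }.sum

  val q = grad.clone()
  val alphas = pairs.map { case (s, y) =>        // first loop: newest to oldest
    val alpha = dot(s, q) / dot(y, s)
    for (i <- q.indices) q(i) -= alpha * y(i)
    alpha
  }
  // scale by an initial inverse-Hessian estimate gamma = (s . y) / (y . y)
  val gamma = pairs.headOption.map { case (s, y) => dot(s, y) / dot(y, y) }.getOrElse(1.0)
  val r = q.map(_ * gamma)
  pairs.zip(alphas).reverse.foreach { case ((s, y), alpha) => // second loop: oldest to newest
    val beta = dot(y, r) / dot(y, s)
    for (i <- r.indices) r(i) += (alpha - beta) * s(i)
  }
  r.map(-_)                                      // negate to get a descent direction
}
```

With no stored pairs the direction is plain steepest descent, `$-\nabla f$`; each stored pair refines the curvature estimate.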
0155
0156 ### Choosing an Optimization Method
0157
0158 [Linear methods](mllib-linear-methods.html) use optimization internally, and some linear methods in `spark.mllib` support both SGD and L-BFGS.
0159 Different optimization methods can have different convergence guarantees depending on the properties of the objective function, and we cannot cover the literature here.
0160 In general, when L-BFGS is available, we recommend using it instead of SGD since L-BFGS tends to converge faster (in fewer iterations).
0161
0162 ## Implementation in MLlib
0163
0164 ### Gradient descent and stochastic gradient descent
Gradient descent methods, including stochastic subgradient descent (SGD), are
included as low-level primitives in `MLlib`, upon which various ML algorithms
are developed; see the
<a href="mllib-linear-methods.html">linear methods</a>
section for example.
0170
0171 The SGD class
0172 [GradientDescent](api/scala/org/apache/spark/mllib/optimization/GradientDescent.html)
0173 sets the following parameters:
0174
0175 * `Gradient` is a class that computes the stochastic gradient of the function
0176 being optimized, i.e., with respect to a single training example, at the
0177 current parameter value. MLlib includes gradient classes for common loss
0178 functions, e.g., hinge, logistic, least-squares. The gradient class takes as
0179 input a training example, its label, and the current parameter value.
* `Updater` is a class that performs the actual gradient descent step, i.e.
updating the weights in each iteration, for a given gradient of the loss part.
The updater is also responsible for performing the update from the regularization
part. MLlib includes updaters for cases without regularization, as well as
L1 and L2 regularizers.
0185 * `stepSize` is a scalar value denoting the initial step size for gradient
0186 descent. All updaters in MLlib use a step size at the t-th step equal to
0187 `stepSize $/ \sqrt{t}$`.
0188 * `numIterations` is the number of iterations to run.
0189 * `regParam` is the regularization parameter when using L1 or L2 regularization.
0190 * `miniBatchFraction` is the fraction of the total data that is sampled in
0191 each iteration, to compute the gradient direction.
* Sampling still requires a pass over the entire RDD, so decreasing `miniBatchFraction` may not speed up optimization much. Users will see the greatest speedup when the gradient is expensive to compute, since only the sampled examples are used for computing the gradient.
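To make the division of labor between `Gradient` and `Updater` concrete, here is a simplified plain-Scala sketch (hypothetical names, scalar weights, least-squares loss, and one common L2 formulation; the actual MLlib interfaces operate on vectors):

```scala
// `Gradient`-like role: the loss-part gradient for a single training example.
def lossGradient(w: Double, x: Double, y: Double): Double =
  2.0 * (w * x - y) * x  // d/dw (w x - y)^2

// `Updater`-like role: apply the decaying step together with the gradient of
// the regularization term (here the penalty regParam * w^2).
def l2Update(w: Double, grad: Double, stepSize: Double, t: Int, regParam: Double): Double = {
  val gamma = stepSize / math.sqrt(t)   // stepSize / sqrt(t), as described above
  w - gamma * (grad + 2.0 * regParam * w)
}
```

An SGD iteration then composes the two: sample an example, call `lossGradient`, and feed the result to `l2Update`.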
0193
0194 ### L-BFGS
L-BFGS is currently only a low-level optimization primitive in `MLlib`. If you want to use L-BFGS in various
ML algorithms such as Linear Regression and Logistic Regression, you have to pass the gradient of the objective
function and the updater into the optimizer yourself, instead of using training APIs like
[LogisticRegressionWithSGD](api/scala/org/apache/spark/mllib/classification/LogisticRegressionWithSGD.html).
See the example below. This will be addressed in the next release.
0200
L1 regularization using
[L1Updater](api/scala/org/apache/spark/mllib/optimization/L1Updater.html) will not work with L-BFGS, since the
soft-thresholding logic in L1Updater is designed for gradient descent. See the developer's notes.
0204
0205 The L-BFGS method
0206 [LBFGS.runLBFGS](api/scala/org/apache/spark/mllib/optimization/LBFGS.html)
0207 has the following parameters:
0208
0209 * `Gradient` is a class that computes the gradient of the objective function
0210 being optimized, i.e., with respect to a single training example, at the
0211 current parameter value. MLlib includes gradient classes for common loss
0212 functions, e.g., hinge, logistic, least-squares. The gradient class takes as
0213 input a training example, its label, and the current parameter value.
* `Updater` is a class that computes the gradient and loss of the regularization
part of the objective function for L-BFGS. MLlib includes updaters for the case
without regularization, as well as an L2 regularizer.
0217 * `numCorrections` is the number of corrections used in the L-BFGS update. 10 is
0218 recommended.
* `maxNumIterations` is the maximum number of iterations that L-BFGS can run.
0220 * `regParam` is the regularization parameter when using regularization.
* `convergenceTol` controls how much relative change is still allowed when L-BFGS
is considered to have converged. This must be nonnegative. Lower values are less tolerant and
therefore generally cause more iterations to be run. This value considers both the average
improvement and the norm of the gradient, as implemented in [Breeze LBFGS](https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala).
0225
The return value is a tuple containing two elements. The first element is a column matrix
containing the weights for every feature, and the second element is an array containing
the loss computed in every iteration.
0229
Here is an example of training binary logistic regression with L2 regularization using
the L-BFGS optimizer.
0232
0233 <div class="codetabs">
0234
0235 <div data-lang="scala" markdown="1">
0236 Refer to the [`LBFGS` Scala docs](api/scala/org/apache/spark/mllib/optimization/LBFGS.html) and [`SquaredL2Updater` Scala docs](api/scala/org/apache/spark/mllib/optimization/SquaredL2Updater.html) for details on the API.
0237
0238 {% include_example scala/org/apache/spark/examples/mllib/LBFGSExample.scala %}
0239 </div>
0240
0241 <div data-lang="java" markdown="1">
0242 Refer to the [`LBFGS` Java docs](api/java/org/apache/spark/mllib/optimization/LBFGS.html) and [`SquaredL2Updater` Java docs](api/java/org/apache/spark/mllib/optimization/SquaredL2Updater.html) for details on the API.
0243
0244 {% include_example java/org/apache/spark/examples/mllib/JavaLBFGSExample.java %}
0245 </div>
0246 </div>
0247
0248 ## Developer's notes
0249
Since the Hessian is constructed approximately from previous gradient evaluations,
the objective function cannot be changed during the optimization process.
As a result, stochastic L-BFGS will not work naively by simply using mini-batches;
therefore, we do not provide this until it is better understood.
0254
`Updater` is a class originally designed for gradient descent, which computes
the actual gradient descent step. However, we are able to obtain the gradient and
loss of the regularization part of the objective function for L-BFGS by ignoring the
logic that applies only to gradient descent, such as the adaptive step size. We will
later refactor this into a regularizer that replaces the updater, in order to separate
the regularization logic from the step-update logic.