0001 ---
0002 layout: global
0003 title: Advanced topics
0004 displayTitle: Advanced topics
0005 license: |
0006 Licensed to the Apache Software Foundation (ASF) under one or more
0007 contributor license agreements. See the NOTICE file distributed with
0008 this work for additional information regarding copyright ownership.
0009 The ASF licenses this file to You under the Apache License, Version 2.0
0010 (the "License"); you may not use this file except in compliance with
0011 the License. You may obtain a copy of the License at
0012
0013 http://www.apache.org/licenses/LICENSE-2.0
0014
0015 Unless required by applicable law or agreed to in writing, software
0016 distributed under the License is distributed on an "AS IS" BASIS,
0017 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
0018 See the License for the specific language governing permissions and
0019 limitations under the License.
0020 ---
0021
0022 * Table of contents
0023 {:toc}
0024
0025 `\[
0026 \newcommand{\R}{\mathbb{R}}
0027 \newcommand{\E}{\mathbb{E}}
0028 \newcommand{\x}{\mathbf{x}}
0029 \newcommand{\y}{\mathbf{y}}
0030 \newcommand{\wv}{\mathbf{w}}
0031 \newcommand{\av}{\mathbf{\alpha}}
0032 \newcommand{\bv}{\mathbf{b}}
0033 \newcommand{\N}{\mathbb{N}}
0034 \newcommand{\id}{\mathbf{I}}
0035 \newcommand{\ind}{\mathbf{1}}
0036 \newcommand{\0}{\mathbf{0}}
0037 \newcommand{\unit}{\mathbf{e}}
0038 \newcommand{\one}{\mathbf{1}}
0039 \newcommand{\zero}{\mathbf{0}}
0040 \]`
0041
0042 # Optimization of linear methods (developer)
0043
0044 ## Limited-memory BFGS (L-BFGS)
[L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS) is an optimization
algorithm in the family of quasi-Newton methods for solving optimization problems of the form
`$\min_{\wv \in\R^d} \; f(\wv)$`. The L-BFGS method approximates the objective function locally as a
quadratic without evaluating the second partial derivatives needed to construct the Hessian matrix.
Instead, the Hessian matrix is approximated from previous gradient evaluations, so, unlike Newton's
method, which computes the Hessian explicitly, L-BFGS has no vertical scalability issue in the number
of training features. As a result, L-BFGS often achieves faster convergence than other first-order
optimization methods.
0053
0054 [Orthant-Wise Limited-memory
0055 Quasi-Newton](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/01/andrew07scalable.pdf)
0056 (OWL-QN) is an extension of L-BFGS that can effectively handle L1 and elastic net regularization.
0057
0058 L-BFGS is used as a solver for [LinearRegression](api/scala/org/apache/spark/ml/regression/LinearRegression.html),
0059 [LogisticRegression](api/scala/org/apache/spark/ml/classification/LogisticRegression.html),
0060 [AFTSurvivalRegression](api/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.html)
0061 and [MultilayerPerceptronClassifier](api/scala/org/apache/spark/ml/classification/MultilayerPerceptronClassifier.html).
0062
The MLlib L-BFGS solver calls the corresponding implementation in [breeze](https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala).
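
As a usage illustration, the following is a minimal sketch of fitting a `LogisticRegression` model; the data path is the sample file shipped with Spark, and any DataFrame with `label` and `features` columns would work. Setting `elasticNetParam` above zero introduces an L1 term, in which case the optimizer switches from L-BFGS to OWL-QN internally.

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("LBFGSExample").getOrCreate()

// Sample dataset in LIBSVM format with "label" and "features" columns.
val training = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val lr = new LogisticRegression()
  .setMaxIter(100)         // cap on L-BFGS/OWL-QN iterations
  .setRegParam(0.01)       // regularization parameter lambda
  .setElasticNetParam(0.5) // alpha > 0 adds an L1 term, so OWL-QN is used

val model = lr.fit(training)
println(s"Coefficients: ${model.coefficients} Intercept: ${model.intercept}")
```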
0064
0065 ## Normal equation solver for weighted least squares
0066
MLlib implements a normal equation solver for [weighted least squares](https://en.wikipedia.org/wiki/Least_squares#Weighted_least_squares) in [WeightedLeastSquares]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala).
0068
0069 Given $n$ weighted observations $(w_i, a_i, b_i)$:
0070
* $w_i$ the weight of the i-th observation
* $a_i$ the feature vector of the i-th observation
* $b_i$ the label of the i-th observation
0074
0075 The number of features for each observation is $m$. We use the following weighted least squares formulation:
0076 `\[
0077 \min_{\mathbf{x}}\frac{1}{2} \sum_{i=1}^n \frac{w_i(\mathbf{a}_i^T \mathbf{x} -b_i)^2}{\sum_{k=1}^n w_k} + \frac{\lambda}{\delta}\left[\frac{1}{2}(1 - \alpha)\sum_{j=1}^m(\sigma_j x_j)^2 + \alpha\sum_{j=1}^m |\sigma_j x_j|\right]
0078 \]`
0079 where $\lambda$ is the regularization parameter, $\alpha$ is the elastic-net mixing parameter, $\delta$ is the population standard deviation of the label
0080 and $\sigma_j$ is the population standard deviation of the j-th feature column.
0081
0082 This objective function requires only one pass over the data to collect the statistics necessary to solve it. For an
0083 $n \times m$ data matrix, these statistics require only $O(m^2)$ storage and so can be stored on a single machine when $m$ (the number of features) is
relatively small. We can then solve the normal equations on a single machine using local methods such as direct Cholesky factorization or iterative optimization methods.
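
To see why a single pass suffices, consider the unregularized case ($\lambda = 0$): setting the gradient of the objective to zero yields the normal equations
`\[
\left(\sum_{i=1}^n w_i \mathbf{a}_i \mathbf{a}_i^T\right) \mathbf{x} = \sum_{i=1}^n w_i b_i \mathbf{a}_i,
\]`
so only the $m \times m$ weighted Gram matrix on the left and the $m$-vector on the right need to be accumulated, and both can be computed in one pass over the data. The regularized problem reuses the same statistics together with the standard deviations $\sigma_j$ and $\delta$.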
0085
0086 Spark MLlib currently supports two types of solvers for the normal equations: Cholesky factorization and Quasi-Newton methods (L-BFGS/OWL-QN). Cholesky factorization
0087 depends on a positive definite covariance matrix (i.e. columns of the data matrix must be linearly independent) and will fail if this condition is violated. Quasi-Newton methods
0088 are still capable of providing a reasonable solution even when the covariance matrix is not positive definite, so the normal equation solver can also fall back to
0089 Quasi-Newton methods in this case. This fallback is currently always enabled for the `LinearRegression` and `GeneralizedLinearRegression` estimators.
0090
`WeightedLeastSquares` supports L1, L2, and elastic-net regularization and provides options to enable or disable regularization and standardization. In the case where no
L1 regularization is applied (i.e. $\alpha = 0$), there exists an analytical solution and either the Cholesky or the Quasi-Newton solver may be used. When $\alpha > 0$, no analytical
solution exists, and we instead use the Quasi-Newton solver to find the coefficients iteratively.
0094
0095 In order to make the normal equation approach efficient, `WeightedLeastSquares` requires that the number of features is no more than 4096. For larger problems, use L-BFGS instead.
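
As an illustration, the sketch below (reusing the `spark` session from the earlier example; the data path is again a sample file shipped with Spark) requests the normal equation solver explicitly via `setSolver("normal")`; with `elasticNetParam = 0` the analytical Cholesky path applies.

```scala
import org.apache.spark.ml.regression.LinearRegression

// Any DataFrame with "label" and "features" columns works here.
val training = spark.read.format("libsvm")
  .load("data/mllib/sample_linear_regression_data.txt")

val lr = new LinearRegression()
  .setSolver("normal")     // use WeightedLeastSquares rather than L-BFGS
  .setRegParam(0.1)        // regularization parameter lambda
  .setElasticNetParam(0.0) // pure L2: the analytical solution applies

val model = lr.fit(training)
println(s"Coefficients: ${model.coefficients} Intercept: ${model.intercept}")
```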
0096
0097 ## Iteratively reweighted least squares (IRLS)
0098
0099 MLlib implements [iteratively reweighted least squares (IRLS)](https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) by [IterativelyReweightedLeastSquares]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/mllib/src/main/scala/org/apache/spark/ml/optim/IterativelyReweightedLeastSquares.scala).
It can be used to find the maximum likelihood estimates of a generalized linear model (GLM), to find M-estimators in robust regression, and to solve other optimization problems.
0101 Refer to [Iteratively Reweighted Least Squares for Maximum Likelihood Estimation, and some Robust and Resistant Alternatives](http://www.jstor.org/stable/2345503) for more information.
0102
It solves certain optimization problems iteratively through the following procedure (a standard instantiation of the update for GLMs is sketched after the list):

* linearize the objective at the current solution and update the corresponding weights.
* solve the resulting weighted least squares (WLS) problem with `WeightedLeastSquares`.
* repeat the above steps until convergence.
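
For a GLM with link function $g$, current linear predictor $\hat{\eta}_i = \mathbf{a}_i^T \hat{\mathbf{x}}$, fitted mean $\hat{\mu}_i = g^{-1}(\hat{\eta}_i)$ and variance function $V(\mu)$, the standard update forms a working label and working weight for each observation,
`\[
z_i = \hat{\eta}_i + (b_i - \hat{\mu}_i)\, g'(\hat{\mu}_i), \qquad
w_i^* = \frac{w_i}{V(\hat{\mu}_i)\, g'(\hat{\mu}_i)^2},
\]`
and then solves the WLS problem with labels $z_i$ and weights $w_i^*$ to obtain the next iterate. (This is a sketch for intuition; the exact scaling used by the implementation may differ.)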
0108
0109 Since it involves solving a weighted least squares (WLS) problem by `WeightedLeastSquares` in each iteration,
0110 it also requires the number of features to be no more than 4096.
Currently, IRLS is the default solver of [GeneralizedLinearRegression](api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html).
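
For example, the following minimal sketch fits a Gaussian GLM with the IRLS solver (again reusing the `spark` session and a sample data file from the earlier examples):

```scala
import org.apache.spark.ml.regression.GeneralizedLinearRegression

val dataset = spark.read.format("libsvm")
  .load("data/mllib/sample_linear_regression_data.txt")

val glr = new GeneralizedLinearRegression()
  .setFamily("gaussian")   // distribution of the label
  .setLink("identity")     // link function g
  .setMaxIter(25)          // cap on IRLS iterations
  .setRegParam(0.3)

val model = glr.fit(dataset)
println(s"Coefficients: ${model.coefficients} Intercept: ${model.intercept}")
```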