0001 ---
0002 layout: global
0003 displayTitle: Spark Configuration
0004 title: Configuration
0005 license: |
0006 Licensed to the Apache Software Foundation (ASF) under one or more
0007 contributor license agreements. See the NOTICE file distributed with
0008 this work for additional information regarding copyright ownership.
0009 The ASF licenses this file to You under the Apache License, Version 2.0
0010 (the "License"); you may not use this file except in compliance with
0011 the License. You may obtain a copy of the License at
0012
0013 http://www.apache.org/licenses/LICENSE-2.0
0014
0015 Unless required by applicable law or agreed to in writing, software
0016 distributed under the License is distributed on an "AS IS" BASIS,
0017 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
0018 See the License for the specific language governing permissions and
0019 limitations under the License.
0020 ---
0021 * This will become a table of contents (this text will be scraped).
0022 {:toc}
0023
0024 Spark provides three locations to configure the system:
0025
0026 * [Spark properties](#spark-properties) control most application parameters and can be set by using
0027 a [SparkConf](api/scala/org/apache/spark/SparkConf.html) object, or through Java
0028 system properties.
0029 * [Environment variables](#environment-variables) can be used to set per-machine settings, such as
0030 the IP address, through the `conf/spark-env.sh` script on each node.
0031 * [Logging](#configuring-logging) can be configured through `log4j.properties`.
0032
0033 # Spark Properties
0034
0035 Spark properties control most application settings and are configured separately for each
0036 application. These properties can be set directly on a
0037 [SparkConf](api/scala/org/apache/spark/SparkConf.html) passed to your
0038 `SparkContext`. `SparkConf` allows you to configure some of the common properties
0039 (e.g. master URL and application name), as well as arbitrary key-value pairs through the
0040 `set()` method. For example, we could initialize an application with two threads as follows:
0041
0042 Note that we run with local[2], meaning two threads - which represents "minimal" parallelism,
0043 which can help detect bugs that only exist when we run in a distributed context.
0044
0045 {% highlight scala %}
0046 val conf = new SparkConf()
0047 .setMaster("local[2]")
0048 .setAppName("CountingSheep")
0049 val sc = new SparkContext(conf)
0050 {% endhighlight %}
0051
0052 Note that we can have more than 1 thread in local mode, and in cases like Spark Streaming, we may
0053 actually require more than 1 thread to prevent any sort of starvation issues.
0054
0055 Properties that specify some time duration should be configured with a unit of time.
0056 The following format is accepted:
0057
0058 25ms (milliseconds)
0059 5s (seconds)
0060 10m or 10min (minutes)
0061 3h (hours)
0062 5d (days)
0063 1y (years)
0064
0065
0066 Properties that specify a byte size should be configured with a unit of size.
0067 The following format is accepted:
0068
0069 1b (bytes)
0070 1k or 1kb (kibibytes = 1024 bytes)
0071 1m or 1mb (mebibytes = 1024 kibibytes)
0072 1g or 1gb (gibibytes = 1024 mebibytes)
0073 1t or 1tb (tebibytes = 1024 gibibytes)
0074 1p or 1pb (pebibytes = 1024 tebibytes)
0075
0076 While numbers without units are generally interpreted as bytes, a few are interpreted as KiB or MiB.
0077 See documentation of individual configuration properties. Specifying units is desirable where
0078 possible.
0079
0080 ## Dynamically Loading Spark Properties
0081
0082 In some cases, you may want to avoid hard-coding certain configurations in a `SparkConf`. For
0083 instance, if you'd like to run the same application with different masters or different
0084 amounts of memory. Spark allows you to simply create an empty conf:
0085
0086 {% highlight scala %}
0087 val sc = new SparkContext(new SparkConf())
0088 {% endhighlight %}
0089
0090 Then, you can supply configuration values at runtime:
0091 {% highlight bash %}
0092 ./bin/spark-submit --name "My app" --master local[4] --conf spark.eventLog.enabled=false
0093 --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
0094 {% endhighlight %}
0095
0096 The Spark shell and [`spark-submit`](submitting-applications.html)
0097 tool support two ways to load configurations dynamically. The first is command line options,
0098 such as `--master`, as shown above. `spark-submit` can accept any Spark property using the `--conf/-c`
0099 flag, but uses special flags for properties that play a part in launching the Spark application.
0100 Running `./bin/spark-submit --help` will show the entire list of these options.
0101
0102 `bin/spark-submit` will also read configuration options from `conf/spark-defaults.conf`, in which
0103 each line consists of a key and a value separated by whitespace. For example:
0104
0105 spark.master spark://5.6.7.8:7077
0106 spark.executor.memory 4g
0107 spark.eventLog.enabled true
0108 spark.serializer org.apache.spark.serializer.KryoSerializer
0109
0110 Any values specified as flags or in the properties file will be passed on to the application
0111 and merged with those specified through SparkConf. Properties set directly on the SparkConf
0112 take highest precedence, then flags passed to `spark-submit` or `spark-shell`, then options
0113 in the `spark-defaults.conf` file. A few configuration keys have been renamed since earlier
0114 versions of Spark; in such cases, the older key names are still accepted, but take lower
0115 precedence than any instance of the newer key.
0116
0117 Spark properties mainly can be divided into two kinds: one is related to deploy, like
0118 "spark.driver.memory", "spark.executor.instances", this kind of properties may not be affected when
0119 setting programmatically through `SparkConf` in runtime, or the behavior is depending on which
0120 cluster manager and deploy mode you choose, so it would be suggested to set through configuration
0121 file or `spark-submit` command line options; another is mainly related to Spark runtime control,
0122 like "spark.task.maxFailures", this kind of properties can be set in either way.
0123
0124 ## Viewing Spark Properties
0125
0126 The application web UI at `http://<driver>:4040` lists Spark properties in the "Environment" tab.
0127 This is a useful place to check to make sure that your properties have been set correctly. Note
0128 that only values explicitly specified through `spark-defaults.conf`, `SparkConf`, or the command
0129 line will appear. For all other configuration properties, you can assume the default value is used.
0130
0131 ## Available Properties
0132
0133 Most of the properties that control internal settings have reasonable default values. Some
0134 of the most common options to set are:
0135
0136 ### Application Properties
0137
0138 <table class="table">
0139 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
0140 <tr>
0141 <td><code>spark.app.name</code></td>
0142 <td>(none)</td>
0143 <td>
0144 The name of your application. This will appear in the UI and in log data.
0145 </td>
0146 <td>0.9.0</td>
0147 </tr>
0148 <tr>
0149 <td><code>spark.driver.cores</code></td>
0150 <td>1</td>
0151 <td>
0152 Number of cores to use for the driver process, only in cluster mode.
0153 </td>
0154 <td>1.3.0</td>
0155 </tr>
0156 <tr>
0157 <td><code>spark.driver.maxResultSize</code></td>
0158 <td>1g</td>
0159 <td>
0160 Limit of total size of serialized results of all partitions for each Spark action (e.g.
0161 collect) in bytes. Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total
0162 size is above this limit.
0163 Having a high limit may cause out-of-memory errors in driver (depends on spark.driver.memory
0164 and memory overhead of objects in JVM). Setting a proper limit can protect the driver from
0165 out-of-memory errors.
0166 </td>
0167 <td>1.2.0</td>
0168 </tr>
0169 <tr>
0170 <td><code>spark.driver.memory</code></td>
0171 <td>1g</td>
0172 <td>
0173 Amount of memory to use for the driver process, i.e. where SparkContext is initialized, in the
0174 same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t")
0175 (e.g. <code>512m</code>, <code>2g</code>).
0176 <br />
0177 <em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
0178 directly in your application, because the driver JVM has already started at that point.
0179 Instead, please set this through the <code>--driver-memory</code> command line option
0180 or in your default properties file.
0181 </td>
0182 <td>1.1.1</td>
0183 </tr>
0184 <tr>
0185 <td><code>spark.driver.memoryOverhead</code></td>
0186 <td>driverMemory * 0.10, with minimum of 384 </td>
0187 <td>
0188 Amount of non-heap memory to be allocated per driver process in cluster mode, in MiB unless
0189 otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
0190 other native overheads, etc. This tends to grow with the container size (typically 6-10%).
0191 This option is currently supported on YARN, Mesos and Kubernetes.
0192 <em>Note:</em> Non-heap memory includes off-heap memory
0193 (when <code>spark.memory.offHeap.enabled=true</code>) and memory used by other driver processes
0194 (e.g. python process that goes with a PySpark driver) and memory used by other non-driver
0195 processes running in the same container. The maximum memory size of container to running
0196 driver is determined by the sum of <code>spark.driver.memoryOverhead</code>
0197 and <code>spark.driver.memory</code>.
0198 </td>
0199 <td>2.3.0</td>
0200 </tr>
0201 <tr>
0202 <td><code>spark.driver.resource.{resourceName}.amount</code></td>
0203 <td>0</td>
0204 <td>
0205 Amount of a particular resource type to use on the driver.
0206 If this is used, you must also specify the
0207 <code>spark.driver.resource.{resourceName}.discoveryScript</code>
0208 for the driver to find the resource on startup.
0209 </td>
0210 <td>3.0.0</td>
0211 </tr>
0212 <tr>
0213 <td><code>spark.driver.resource.{resourceName}.discoveryScript</code></td>
0214 <td>None</td>
0215 <td>
0216 A script for the driver to run to discover a particular resource type. This should
0217 write to STDOUT a JSON string in the format of the ResourceInformation class. This has a
0218 name and an array of addresses. For a client-submitted driver, discovery script must assign
0219 different resource addresses to this driver comparing to other drivers on the same host.
0220 </td>
0221 <td>3.0.0</td>
0222 </tr>
0223 <tr>
0224 <td><code>spark.driver.resource.{resourceName}.vendor</code></td>
0225 <td>None</td>
0226 <td>
0227 Vendor of the resources to use for the driver. This option is currently
0228 only supported on Kubernetes and is actually both the vendor and domain following
0229 the Kubernetes device plugin naming convention. (e.g. For GPUs on Kubernetes
0230 this config would be set to nvidia.com or amd.com)
0231 </td>
0232 <td>3.0.0</td>
0233 </tr>
0234 <tr>
0235 <td><code>spark.resources.discoveryPlugin</code></td>
0236 <td>org.apache.spark.resource.ResourceDiscoveryScriptPlugin</td>
0237 <td>
0238 Comma-separated list of class names implementing
0239 org.apache.spark.api.resource.ResourceDiscoveryPlugin to load into the application.
0240 This is for advanced users to replace the resource discovery class with a
0241 custom implementation. Spark will try each class specified until one of them
0242 returns the resource information for that resource. It tries the discovery
0243 script last if none of the plugins return information for that resource.
0244 </td>
0245 <td>3.0.0</td>
0246 </tr>
0247 <tr>
0248 <td><code>spark.executor.memory</code></td>
0249 <td>1g</td>
0250 <td>
0251 Amount of memory to use per executor process, in the same format as JVM memory strings with
0252 a size unit suffix ("k", "m", "g" or "t") (e.g. <code>512m</code>, <code>2g</code>).
0253 </td>
0254 <td>0.7.0</td>
0255 </tr>
0256 <tr>
0257 <td><code>spark.executor.pyspark.memory</code></td>
0258 <td>Not set</td>
0259 <td>
0260 The amount of memory to be allocated to PySpark in each executor, in MiB
0261 unless otherwise specified. If set, PySpark memory for an executor will be
0262 limited to this amount. If not set, Spark will not limit Python's memory use
0263 and it is up to the application to avoid exceeding the overhead memory space
0264 shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
0265 is added to executor resource requests.
0266 <br/>
0267 <em>Note:</em> This feature is dependent on Python's `resource` module; therefore, the behaviors and
0268 limitations are inherited. For instance, Windows does not support resource limiting and actual
0269 resource is not limited on MacOS.
0270 </td>
0271 <td>2.4.0</td>
0272 </tr>
0273 <tr>
0274 <td><code>spark.executor.memoryOverhead</code></td>
0275 <td>executorMemory * 0.10, with minimum of 384 </td>
0276 <td>
0277 Amount of additional memory to be allocated per executor process in cluster mode, in MiB unless
0278 otherwise specified. This is memory that accounts for things like VM overheads, interned strings,
0279 other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
0280 This option is currently supported on YARN and Kubernetes.
0281 <br/>
0282 <em>Note:</em> Additional memory includes PySpark executor memory
0283 (when <code>spark.executor.pyspark.memory</code> is not configured) and memory used by other
0284 non-executor processes running in the same container. The maximum memory size of container to
0285 running executor is determined by the sum of <code>spark.executor.memoryOverhead</code>,
0286 <code>spark.executor.memory</code>, <code>spark.memory.offHeap.size</code> and
0287 <code>spark.executor.pyspark.memory</code>.
0288 </td>
0289 <td>2.3.0</td>
0290 </tr>
0291 <tr>
0292 <td><code>spark.executor.resource.{resourceName}.amount</code></td>
0293 <td>0</td>
0294 <td>
0295 Amount of a particular resource type to use per executor process.
0296 If this is used, you must also specify the
0297 <code>spark.executor.resource.{resourceName}.discoveryScript</code>
0298 for the executor to find the resource on startup.
0299 </td>
0300 <td>3.0.0</td>
0301 </tr>
0302 <tr>
0303 <td><code>spark.executor.resource.{resourceName}.discoveryScript</code></td>
0304 <td>None</td>
0305 <td>
0306 A script for the executor to run to discover a particular resource type. This should
0307 write to STDOUT a JSON string in the format of the ResourceInformation class. This has a
0308 name and an array of addresses.
0309 </td>
0310 <td>3.0.0</td>
0311 </tr>
0312 <tr>
0313 <td><code>spark.executor.resource.{resourceName}.vendor</code></td>
0314 <td>None</td>
0315 <td>
0316 Vendor of the resources to use for the executors. This option is currently
0317 only supported on Kubernetes and is actually both the vendor and domain following
0318 the Kubernetes device plugin naming convention. (e.g. For GPUs on Kubernetes
0319 this config would be set to nvidia.com or amd.com)
0320 </td>
0321 <td>3.0.0</td>
0322 </tr>
0323 <tr>
0324 <td><code>spark.extraListeners</code></td>
0325 <td>(none)</td>
0326 <td>
0327 A comma-separated list of classes that implement <code>SparkListener</code>; when initializing
0328 SparkContext, instances of these classes will be created and registered with Spark's listener
0329 bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor
0330 will be called; otherwise, a zero-argument constructor will be called. If no valid constructor
0331 can be found, the SparkContext creation will fail with an exception.
0332 </td>
0333 <td>1.3.0</td>
0334 </tr>
0335 <tr>
0336 <td><code>spark.local.dir</code></td>
0337 <td>/tmp</td>
0338 <td>
0339 Directory to use for "scratch" space in Spark, including map output files and RDDs that get
0340 stored on disk. This should be on a fast, local disk in your system. It can also be a
0341 comma-separated list of multiple directories on different disks.
0342
0343 <br/>
0344 <em>Note:</em> This will be overridden by SPARK_LOCAL_DIRS (Standalone), MESOS_SANDBOX (Mesos) or
0345 LOCAL_DIRS (YARN) environment variables set by the cluster manager.
0346 </td>
0347 <td>0.5.0</td>
0348 </tr>
0349 <tr>
0350 <td><code>spark.logConf</code></td>
0351 <td>false</td>
0352 <td>
0353 Logs the effective SparkConf as INFO when a SparkContext is started.
0354 </td>
0355 <td>0.9.0</td>
0356 </tr>
0357 <tr>
0358 <td><code>spark.master</code></td>
0359 <td>(none)</td>
0360 <td>
0361 The cluster manager to connect to. See the list of
0362 <a href="submitting-applications.html#master-urls"> allowed master URL's</a>.
0363 </td>
0364 <td>0.9.0</td>
0365 </tr>
0366 <tr>
0367 <td><code>spark.submit.deployMode</code></td>
0368 <td>(none)</td>
0369 <td>
0370 The deploy mode of Spark driver program, either "client" or "cluster",
0371 Which means to launch driver program locally ("client")
0372 or remotely ("cluster") on one of the nodes inside the cluster.
0373 </td>
0374 <td>1.5.0</td>
0375 </tr>
0376 <tr>
0377 <td><code>spark.log.callerContext</code></td>
0378 <td>(none)</td>
0379 <td>
0380 Application information that will be written into Yarn RM log/HDFS audit log when running on Yarn/HDFS.
0381 Its length depends on the Hadoop configuration <code>hadoop.caller.context.max.size</code>. It should be concise,
0382 and typically can have up to 50 characters.
0383 </td>
0384 <td>2.2.0</td>
0385 </tr>
0386 <tr>
0387 <td><code>spark.driver.supervise</code></td>
0388 <td>false</td>
0389 <td>
0390 If true, restarts the driver automatically if it fails with a non-zero exit status.
0391 Only has effect in Spark standalone mode or Mesos cluster deploy mode.
0392 </td>
0393 <td>1.3.0</td>
0394 </tr>
0395 <tr>
0396 <td><code>spark.driver.log.dfsDir</code></td>
0397 <td>(none)</td>
0398 <td>
0399 Base directory in which Spark driver logs are synced, if <code>spark.driver.log.persistToDfs.enabled</code>
0400 is true. Within this base directory, each application logs the driver logs to an application specific file.
0401 Users may want to set this to a unified location like an HDFS directory so driver log files can be persisted
0402 for later usage. This directory should allow any Spark user to read/write files and the Spark History Server
0403 user to delete files. Additionally, older logs from this directory are cleaned by the
0404 <a href="monitoring.html#spark-history-server-configuration-options">Spark History Server</a> if
0405 <code>spark.history.fs.driverlog.cleaner.enabled</code> is true and, if they are older than max age configured
0406 by setting <code>spark.history.fs.driverlog.cleaner.maxAge</code>.
0407 </td>
0408 <td>3.0.0</td>
0409 </tr>
0410 <tr>
0411 <td><code>spark.driver.log.persistToDfs.enabled</code></td>
0412 <td>false</td>
0413 <td>
0414 If true, spark application running in client mode will write driver logs to a persistent storage, configured
0415 in <code>spark.driver.log.dfsDir</code>. If <code>spark.driver.log.dfsDir</code> is not configured, driver logs
0416 will not be persisted. Additionally, enable the cleaner by setting <code>spark.history.fs.driverlog.cleaner.enabled</code>
0417 to true in <a href="monitoring.html#spark-history-server-configuration-options">Spark History Server</a>.
0418 </td>
0419 <td>3.0.0</td>
0420 </tr>
0421 <tr>
0422 <td><code>spark.driver.log.layout</code></td>
0423 <td>%d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n</td>
0424 <td>
0425 The layout for the driver logs that are synced to <code>spark.driver.log.dfsDir</code>. If this is not configured,
0426 it uses the layout for the first appender defined in log4j.properties. If that is also not configured, driver logs
0427 use the default layout.
0428 </td>
0429 <td>3.0.0</td>
0430 </tr>
0431 <tr>
0432 <td><code>spark.driver.log.allowErasureCoding</code></td>
0433 <td>false</td>
0434 <td>
0435 Whether to allow driver logs to use erasure coding. On HDFS, erasure coded files will not
0436 update as quickly as regular replicated files, so they make take longer to reflect changes
0437 written by the application. Note that even if this is true, Spark will still not force the
0438 file to use erasure coding, it will simply use file system defaults.
0439 </td>
0440 <td>3.0.0</td>
0441 </tr>
0442 </table>
0443
0444 Apart from these, the following properties are also available, and may be useful in some situations:
0445
0446 ### Runtime Environment
0447
0448 <table class="table">
0449 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
0450 <tr>
0451 <td><code>spark.driver.extraClassPath</code></td>
0452 <td>(none)</td>
0453 <td>
0454 Extra classpath entries to prepend to the classpath of the driver.
0455
0456 <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
0457 directly in your application, because the driver JVM has already started at that point.
0458 Instead, please set this through the <code>--driver-class-path</code> command line option or in
0459 your default properties file.
0460 </td>
0461 <td>1.0.0</td>
0462 </tr>
0463 <tr>
0464 <td><code>spark.driver.defaultJavaOptions</code></td>
0465 <td>(none)</td>
0466 <td>
0467 A string of default JVM options to prepend to <code>spark.driver.extraJavaOptions</code>.
0468 This is intended to be set by administrators.
0469
0470 For instance, GC settings or other logging.
0471 Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap
0472 size settings can be set with <code>spark.driver.memory</code> in the cluster mode and through
0473 the <code>--driver-memory</code> command line option in the client mode.
0474
0475 <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
0476 directly in your application, because the driver JVM has already started at that point.
0477 Instead, please set this through the <code>--driver-java-options</code> command line option or in
0478 your default properties file.
0479 </td>
0480 <td>3.0.0</td>
0481 </tr>
0482 <tr>
0483 <td><code>spark.driver.extraJavaOptions</code></td>
0484 <td>(none)</td>
0485 <td>
0486 A string of extra JVM options to pass to the driver. This is intended to be set by users.
0487
0488 For instance, GC settings or other logging.
0489 Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap
0490 size settings can be set with <code>spark.driver.memory</code> in the cluster mode and through
0491 the <code>--driver-memory</code> command line option in the client mode.
0492
0493 <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
0494 directly in your application, because the driver JVM has already started at that point.
0495 Instead, please set this through the <code>--driver-java-options</code> command line option or in
0496 your default properties file.
0497
0498 <code>spark.driver.defaultJavaOptions</code> will be prepended to this configuration.
0499 </td>
0500 <td>1.0.0</td>
0501 </tr>
0502 <tr>
0503 <td><code>spark.driver.extraLibraryPath</code></td>
0504 <td>(none)</td>
0505 <td>
0506 Set a special library path to use when launching the driver JVM.
0507
0508 <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
0509 directly in your application, because the driver JVM has already started at that point.
0510 Instead, please set this through the <code>--driver-library-path</code> command line option or in
0511 your default properties file.
0512 </td>
0513 <td>1.0.0</td>
0514 </tr>
0515 <tr>
0516 <td><code>spark.driver.userClassPathFirst</code></td>
0517 <td>false</td>
0518 <td>
0519 (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading
0520 classes in the driver. This feature can be used to mitigate conflicts between Spark's
0521 dependencies and user dependencies. It is currently an experimental feature.
0522
0523 This is used in cluster mode only.
0524 </td>
0525 <td>1.3.0</td>
0526 </tr>
0527 <tr>
0528 <td><code>spark.executor.extraClassPath</code></td>
0529 <td>(none)</td>
0530 <td>
0531 Extra classpath entries to prepend to the classpath of executors. This exists primarily for
0532 backwards-compatibility with older versions of Spark. Users typically should not need to set
0533 this option.
0534 </td>
0535 <td>1.0.0</td>
0536 </tr>
0537 <tr>
0538 <td><code>spark.executor.defaultJavaOptions</code></td>
0539 <td>(none)</td>
0540 <td>
0541 A string of default JVM options to prepend to <code>spark.executor.extraJavaOptions</code>.
0542 This is intended to be set by administrators.
0543
0544 For instance, GC settings or other logging.
0545 Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this
0546 option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file
0547 used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
0548
0549 The following symbols, if present will be interpolated: {{APP_ID}} will be replaced by
0550 application ID and {{EXECUTOR_ID}} will be replaced by executor ID. For example, to enable
0551 verbose gc logging to a file named for the executor ID of the app in /tmp, pass a 'value' of:
0552 <code>-verbose:gc -Xloggc:/tmp/{{APP_ID}}-{{EXECUTOR_ID}}.gc</code>
0553 </td>
0554 <td>3.0.0</td>
0555 </tr>
0556 <tr>
0557 <td><code>spark.executor.extraJavaOptions</code></td>
0558 <td>(none)</td>
0559 <td>
0560 A string of extra JVM options to pass to executors. This is intended to be set by users.
0561
0562 For instance, GC settings or other logging.
0563 Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this
0564 option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file
0565 used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
0566
0567 The following symbols, if present will be interpolated: {{APP_ID}} will be replaced by
0568 application ID and {{EXECUTOR_ID}} will be replaced by executor ID. For example, to enable
0569 verbose gc logging to a file named for the executor ID of the app in /tmp, pass a 'value' of:
0570 <code>-verbose:gc -Xloggc:/tmp/{{APP_ID}}-{{EXECUTOR_ID}}.gc</code>
0571
0572 <code>spark.executor.defaultJavaOptions</code> will be prepended to this configuration.
0573 </td>
0574 <td>1.0.0</td>
0575 </tr>
0576 <tr>
0577 <td><code>spark.executor.extraLibraryPath</code></td>
0578 <td>(none)</td>
0579 <td>
0580 Set a special library path to use when launching executor JVM's.
0581 </td>
0582 <td>1.0.0</td>
0583 </tr>
0584 <tr>
0585 <td><code>spark.executor.logs.rolling.maxRetainedFiles</code></td>
0586 <td>(none)</td>
0587 <td>
0588 Sets the number of latest rolling log files that are going to be retained by the system.
0589 Older log files will be deleted. Disabled by default.
0590 </td>
0591 <td>1.1.0</td>
0592 </tr>
0593 <tr>
0594 <td><code>spark.executor.logs.rolling.enableCompression</code></td>
0595 <td>false</td>
0596 <td>
0597 Enable executor log compression. If it is enabled, the rolled executor logs will be compressed.
0598 Disabled by default.
0599 </td>
0600 <td>2.0.2</td>
0601 </tr>
0602 <tr>
0603 <td><code>spark.executor.logs.rolling.maxSize</code></td>
0604 <td>(none)</td>
0605 <td>
0606 Set the max size of the file in bytes by which the executor logs will be rolled over.
0607 Rolling is disabled by default. See <code>spark.executor.logs.rolling.maxRetainedFiles</code>
0608 for automatic cleaning of old logs.
0609 </td>
0610 <td>1.4.0</td>
0611 </tr>
0612 <tr>
0613 <td><code>spark.executor.logs.rolling.strategy</code></td>
0614 <td>(none)</td>
0615 <td>
0616 Set the strategy of rolling of executor logs. By default it is disabled. It can
0617 be set to "time" (time-based rolling) or "size" (size-based rolling). For "time",
0618 use <code>spark.executor.logs.rolling.time.interval</code> to set the rolling interval.
0619 For "size", use <code>spark.executor.logs.rolling.maxSize</code> to set
0620 the maximum file size for rolling.
0621 </td>
0622 <td>1.1.0</td>
0623 </tr>
0624 <tr>
0625 <td><code>spark.executor.logs.rolling.time.interval</code></td>
0626 <td>daily</td>
0627 <td>
0628 Set the time interval by which the executor logs will be rolled over.
0629 Rolling is disabled by default. Valid values are <code>daily</code>, <code>hourly</code>, <code>minutely</code> or
0630 any interval in seconds. See <code>spark.executor.logs.rolling.maxRetainedFiles</code>
0631 for automatic cleaning of old logs.
0632 </td>
0633 <td>1.1.0</td>
0634 </tr>
0635 <tr>
0636 <td><code>spark.executor.userClassPathFirst</code></td>
0637 <td>false</td>
0638 <td>
0639 (Experimental) Same functionality as <code>spark.driver.userClassPathFirst</code>, but
0640 applied to executor instances.
0641 </td>
0642 <td>1.3.0</td>
0643 </tr>
0644 <tr>
0645 <td><code>spark.executorEnv.[EnvironmentVariableName]</code></td>
0646 <td>(none)</td>
0647 <td>
0648 Add the environment variable specified by <code>EnvironmentVariableName</code> to the Executor
0649 process. The user can specify multiple of these to set multiple environment variables.
0650 </td>
0651 <td>0.9.0</td>
0652 </tr>
0653 <tr>
0654 <td><code>spark.redaction.regex</code></td>
0655 <td>(?i)secret|password|token</td>
0656 <td>
0657 Regex to decide which Spark configuration properties and environment variables in driver and
0658 executor environments contain sensitive information. When this regex matches a property key or
0659 value, the value is redacted from the environment UI and various logs like YARN and event logs.
0660 </td>
0661 <td>2.1.2</td>
0662 </tr>
0663 <tr>
0664 <td><code>spark.python.profile</code></td>
0665 <td>false</td>
0666 <td>
0667 Enable profiling in Python worker, the profile result will show up by <code>sc.show_profiles()</code>,
0668 or it will be displayed before the driver exits. It also can be dumped into disk by
0669 <code>sc.dump_profiles(path)</code>. If some of the profile results had been displayed manually,
0670 they will not be displayed automatically before driver exiting.
0671
0672 By default the <code>pyspark.profiler.BasicProfiler</code> will be used, but this can be overridden by
0673 passing a profiler class in as a parameter to the <code>SparkContext</code> constructor.
0674 </td>
0675 <td>1.2.0</td>
0676 </tr>
0677 <tr>
0678 <td><code>spark.python.profile.dump</code></td>
0679 <td>(none)</td>
0680 <td>
0681 The directory which is used to dump the profile result before driver exiting.
0682 The results will be dumped as separated file for each RDD. They can be loaded
0683 by <code>pstats.Stats()</code>. If this is specified, the profile result will not be displayed
0684 automatically.
0685 </td>
0686 <td>1.2.0</td>
0687 </tr>
0688 <tr>
0689 <td><code>spark.python.worker.memory</code></td>
0690 <td>512m</td>
0691 <td>
0692 Amount of memory to use per python worker process during aggregation, in the same
0693 format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t")
0694 (e.g. <code>512m</code>, <code>2g</code>).
0695 If the memory used during aggregation goes above this amount, it will spill the data into disks.
0696 </td>
0697 <td>1.1.0</td>
0698 </tr>
0699 <tr>
0700 <td><code>spark.python.worker.reuse</code></td>
0701 <td>true</td>
0702 <td>
0703 Reuse Python worker or not. If yes, it will use a fixed number of Python workers,
0704 does not need to fork() a Python process for every task. It will be very useful
0705 if there is a large broadcast, then the broadcast will not need to be transferred
0706 from JVM to Python worker for every task.
0707 </td>
0708 <td>1.2.0</td>
0709 </tr>
0710 <tr>
0711 <td><code>spark.files</code></td>
0712 <td></td>
0713 <td>
0714 Comma-separated list of files to be placed in the working directory of each executor. Globs are allowed.
0715 </td>
0716 <td>1.0.0</td>
0717 </tr>
0718 <tr>
0719 <td><code>spark.submit.pyFiles</code></td>
0720 <td></td>
0721 <td>
0722 Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps. Globs are allowed.
0723 </td>
0724 <td>1.0.1</td>
0725 </tr>
0726 <tr>
0727 <td><code>spark.jars</code></td>
0728 <td></td>
0729 <td>
0730 Comma-separated list of jars to include on the driver and executor classpaths. Globs are allowed.
0731 </td>
0732 <td>0.9.0</td>
0733 </tr>
0734 <tr>
0735 <td><code>spark.jars.packages</code></td>
0736 <td></td>
0737 <td>
0738 Comma-separated list of Maven coordinates of jars to include on the driver and executor
0739 classpaths. The coordinates should be groupId:artifactId:version. If <code>spark.jars.ivySettings</code>
0740 is given artifacts will be resolved according to the configuration in the file, otherwise artifacts
0741 will be searched for in the local maven repo, then maven central and finally any additional remote
0742 repositories given by the command-line option <code>--repositories</code>. For more details, see
0743 <a href="submitting-applications.html#advanced-dependency-management">Advanced Dependency Management</a>.
0744 </td>
0745 <td>1.5.0</td>
0746 </tr>
0747 <tr>
0748 <td><code>spark.jars.excludes</code></td>
0749 <td></td>
0750 <td>
0751 Comma-separated list of groupId:artifactId, to exclude while resolving the dependencies
0752 provided in <code>spark.jars.packages</code> to avoid dependency conflicts.
0753 </td>
0754 <td>1.5.0</td>
0755 </tr>
0756 <tr>
0757 <td><code>spark.jars.ivy</code></td>
0758 <td></td>
0759 <td>
0760 Path to specify the Ivy user directory, used for the local Ivy cache and package files from
0761 <code>spark.jars.packages</code>. This will override the Ivy property <code>ivy.default.ivy.user.dir</code>
0762 which defaults to ~/.ivy2.
0763 </td>
0764 <td>1.3.0</td>
0765 </tr>
0766 <tr>
0767 <td><code>spark.jars.ivySettings</code></td>
0768 <td></td>
0769 <td>
0770 Path to an Ivy settings file to customize resolution of jars specified using <code>spark.jars.packages</code>
0771 instead of the built-in defaults, such as maven central. Additional repositories given by the command-line
0772 option <code>--repositories</code> or <code>spark.jars.repositories</code> will also be included.
0773 Useful for allowing Spark to resolve artifacts from behind a firewall e.g. via an in-house
0774 artifact server like Artifactory. Details on the settings file format can be
0775 found at <a href="http://ant.apache.org/ivy/history/latest-milestone/settings.html">Settings Files</a>
0776 </td>
0777 <td>2.2.0</td>
0778 </tr>
0779 <tr>
0780 <td><code>spark.jars.repositories</code></td>
0781 <td></td>
0782 <td>
0783 Comma-separated list of additional remote repositories to search for the maven coordinates
0784 given with <code>--packages</code> or <code>spark.jars.packages</code>.
0785 </td>
0786 <td>2.3.0</td>
0787 </tr>
0788 <tr>
0789 <td><code>spark.pyspark.driver.python</code></td>
0790 <td></td>
0791 <td>
0792 Python binary executable to use for PySpark in driver.
0793 (default is <code>spark.pyspark.python</code>)
0794 </td>
0795 <td>2.1.0</td>
0796 </tr>
0797 <tr>
0798 <td><code>spark.pyspark.python</code></td>
0799 <td></td>
0800 <td>
0801 Python binary executable to use for PySpark in both driver and executors.
0802 </td>
0803 <td>2.1.0</td>
0804 </tr>
0805 </table>
0806
0807 ### Shuffle Behavior
0808
0809 <table class="table">
0810 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
0811 <tr>
0812 <td><code>spark.reducer.maxSizeInFlight</code></td>
0813 <td>48m</td>
0814 <td>
0815 Maximum size of map outputs to fetch simultaneously from each reduce task, in MiB unless
0816 otherwise specified. Since each output requires us to create a buffer to receive it, this
0817 represents a fixed memory overhead per reduce task, so keep it small unless you have a
0818 large amount of memory.
0819 </td>
0820 <td>1.4.0</td>
0821 </tr>
0822 <tr>
0823 <td><code>spark.reducer.maxReqsInFlight</code></td>
0824 <td>Int.MaxValue</td>
0825 <td>
0826 This configuration limits the number of remote requests to fetch blocks at any given point.
0827 When the number of hosts in the cluster increase, it might lead to very large number
0828 of inbound connections to one or more nodes, causing the workers to fail under load.
0829 By allowing it to limit the number of fetch requests, this scenario can be mitigated.
0830 </td>
0831 <td>2.0.0</td>
0832 </tr>
0833 <tr>
0834 <td><code>spark.reducer.maxBlocksInFlightPerAddress</code></td>
0835 <td>Int.MaxValue</td>
0836 <td>
0837 This configuration limits the number of remote blocks being fetched per reduce task from a
0838 given host port. When a large number of blocks are being requested from a given address in a
0839 single fetch or simultaneously, this could crash the serving executor or Node Manager. This
0840 is especially useful to reduce the load on the Node Manager when external shuffle is enabled.
0841 You can mitigate this issue by setting it to a lower value.
0842 </td>
0843 <td>2.2.1</td>
0844 </tr>
0845 <tr>
0846 <td><code>spark.shuffle.compress</code></td>
0847 <td>true</td>
0848 <td>
0849 Whether to compress map output files. Generally a good idea. Compression will use
0850 <code>spark.io.compression.codec</code>.
0851 </td>
0852 <td>0.6.0</td>
0853 </tr>
0854 <tr>
0855 <td><code>spark.shuffle.file.buffer</code></td>
0856 <td>32k</td>
0857 <td>
0858 Size of the in-memory buffer for each shuffle file output stream, in KiB unless otherwise
0859 specified. These buffers reduce the number of disk seeks and system calls made in creating
0860 intermediate shuffle files.
0861 </td>
0862 <td>1.4.0</td>
0863 </tr>
0864 <tr>
0865 <td><code>spark.shuffle.io.maxRetries</code></td>
0866 <td>3</td>
0867 <td>
0868 (Netty only) Fetches that fail due to IO-related exceptions are automatically retried if this is
0869 set to a non-zero value. This retry logic helps stabilize large shuffles in the face of long GC
0870 pauses or transient network connectivity issues.
0871 </td>
0872 <td>1.2.0</td>
0873 </tr>
0874 <tr>
0875 <td><code>spark.shuffle.io.numConnectionsPerPeer</code></td>
0876 <td>1</td>
0877 <td>
0878 (Netty only) Connections between hosts are reused in order to reduce connection buildup for
0879 large clusters. For clusters with many hard disks and few hosts, this may result in insufficient
0880 concurrency to saturate all disks, and so users may consider increasing this value.
0881 </td>
0882 <td>1.2.1</td>
0883 </tr>
0884 <tr>
0885 <td><code>spark.shuffle.io.preferDirectBufs</code></td>
0886 <td>true</td>
0887 <td>
0888 (Netty only) Off-heap buffers are used to reduce garbage collection during shuffle and cache
0889 block transfer. For environments where off-heap memory is tightly limited, users may wish to
0890 turn this off to force all allocations from Netty to be on-heap.
0891 </td>
0892 <td>1.2.0</td>
0893 </tr>
0894 <tr>
0895 <td><code>spark.shuffle.io.retryWait</code></td>
0896 <td>5s</td>
0897 <td>
0898 (Netty only) How long to wait between retries of fetches. The maximum delay caused by retrying
0899 is 15 seconds by default, calculated as <code>maxRetries * retryWait</code>.
0900 </td>
0901 <td>1.2.1</td>
0902 </tr>
0903 <tr>
0904 <td><code>spark.shuffle.io.backLog</code></td>
0905 <td>-1</td>
0906 <td>
0907 Length of the accept queue for the shuffle service. For large applications, this value may
0908 need to be increased, so that incoming connections are not dropped if the service cannot keep
0909 up with a large number of connections arriving in a short period of time. This needs to
0910 be configured wherever the shuffle service itself is running, which may be outside of the
0911 application (see <code>spark.shuffle.service.enabled</code> option below). If set below 1,
0912 will fallback to OS default defined by Netty's <code>io.netty.util.NetUtil#SOMAXCONN</code>.
0913 </td>
0914 <td>1.1.1</td>
0915 </tr>
0916 <tr>
0917 <td><code>spark.shuffle.service.enabled</code></td>
0918 <td>false</td>
0919 <td>
0920 Enables the external shuffle service. This service preserves the shuffle files written by
0921 executors so the executors can be safely removed. This must be enabled if
0922 <code>spark.dynamicAllocation.enabled</code> is "true". The external shuffle service
0923 must be set up in order to enable it. See
0924 <a href="job-scheduling.html#configuration-and-setup">dynamic allocation
0925 configuration and setup documentation</a> for more information.
0926 </td>
0927 <td>1.2.0</td>
0928 </tr>
0929 <tr>
0930 <td><code>spark.shuffle.service.port</code></td>
0931 <td>7337</td>
0932 <td>
0933 Port on which the external shuffle service will run.
0934 </td>
0935 <td>1.2.0</td>
0936 </tr>
0937 <tr>
0938 <td><code>spark.shuffle.service.index.cache.size</code></td>
0939 <td>100m</td>
0940 <td>
0941 Cache entries limited to the specified memory footprint, in bytes unless otherwise specified.
0942 </td>
0943 <td>2.3.0</td>
0944 </tr>
0945 <tr>
0946 <td><code>spark.shuffle.maxChunksBeingTransferred</code></td>
0947 <td>Long.MAX_VALUE</td>
0948 <td>
0949 The max number of chunks allowed to be transferred at the same time on shuffle service.
0950 Note that new incoming connections will be closed when the max number is hit. The client will
0951 retry according to the shuffle retry configs (see <code>spark.shuffle.io.maxRetries</code> and
0952 <code>spark.shuffle.io.retryWait</code>), if those limits are reached the task will fail with
0953 fetch failure.
0954 </td>
0955 <td>2.3.0</td>
0956 </tr>
0957 <tr>
0958 <td><code>spark.shuffle.sort.bypassMergeThreshold</code></td>
0959 <td>200</td>
0960 <td>
0961 (Advanced) In the sort-based shuffle manager, avoid merge-sorting data if there is no
0962 map-side aggregation and there are at most this many reduce partitions.
0963 </td>
0964 <td>1.1.1</td>
0965 </tr>
0966 <tr>
0967 <td><code>spark.shuffle.spill.compress</code></td>
0968 <td>true</td>
0969 <td>
0970 Whether to compress data spilled during shuffles. Compression will use
0971 <code>spark.io.compression.codec</code>.
0972 </td>
0973 <td>0.9.0</td>
0974 </tr>
0975 <tr>
0976 <td><code>spark.shuffle.accurateBlockThreshold</code></td>
0977 <td>100 * 1024 * 1024</td>
0978 <td>
0979 Threshold in bytes above which the size of shuffle blocks in HighlyCompressedMapStatus is
0980 accurately recorded. This helps to prevent OOM by avoiding underestimating shuffle
0981 block size when fetch shuffle blocks.
0982 </td>
0983 <td>2.2.1</td>
0984 </tr>
0985 <tr>
0986 <td><code>spark.shuffle.registration.timeout</code></td>
0987 <td>5000</td>
0988 <td>
0989 Timeout in milliseconds for registration to the external shuffle service.
0990 </td>
0991 <td>2.3.0</td>
0992 </tr>
0993 <tr>
0994 <td><code>spark.shuffle.registration.maxAttempts</code></td>
0995 <td>3</td>
0996 <td>
0997 When we fail to register to the external shuffle service, we will retry for maxAttempts times.
0998 </td>
0999 <td>2.3.0</td>
1000 </tr>
1001 </table>
1002
1003 ### Spark UI
1004
1005 <table class="table">
1006 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1007 <tr>
1008 <td><code>spark.eventLog.logBlockUpdates.enabled</code></td>
1009 <td>false</td>
1010 <td>
1011 Whether to log events for every block update, if <code>spark.eventLog.enabled</code> is true.
1012 *Warning*: This will increase the size of the event log considerably.
1013 </td>
1014 <td>2.3.0</td>
1015 </tr>
1016 <tr>
1017 <td><code>spark.eventLog.longForm.enabled</code></td>
1018 <td>false</td>
1019 <td>
1020 If true, use the long form of call sites in the event log. Otherwise use the short form.
1021 </td>
1022 <td>2.4.0</td>
1023 </tr>
1024 <tr>
1025 <td><code>spark.eventLog.compress</code></td>
1026 <td>false</td>
1027 <td>
1028 Whether to compress logged events, if <code>spark.eventLog.enabled</code> is true.
1029 </td>
1030 <td>1.0.0</td>
1031 </tr>
1032 <tr>
1033 <td><code>spark.eventLog.compression.codec</code></td>
1034 <td></td>
1035 <td>
1036 The codec to compress logged events. If this is not given,
1037 <code>spark.io.compression.codec</code> will be used.
1038 </td>
1039 <td>3.0.0</td>
1040 </tr>
1041 <tr>
1042 <td><code>spark.eventLog.erasureCoding.enabled</code></td>
1043 <td>false</td>
1044 <td>
1045 Whether to allow event logs to use erasure coding, or turn erasure coding off, regardless of
1046 filesystem defaults. On HDFS, erasure coded files will not update as quickly as regular
1047 replicated files, so the application updates will take longer to appear in the History Server.
1048 Note that even if this is true, Spark will still not force the file to use erasure coding, it
1049 will simply use filesystem defaults.
1050 </td>
1051 <td>3.0.0</td>
1052 </tr>
1053 <tr>
1054 <td><code>spark.eventLog.dir</code></td>
1055 <td>file:///tmp/spark-events</td>
1056 <td>
1057 Base directory in which Spark events are logged, if <code>spark.eventLog.enabled</code> is true.
1058 Within this base directory, Spark creates a sub-directory for each application, and logs the
1059 events specific to the application in this directory. Users may want to set this to
1060 a unified location like an HDFS directory so history files can be read by the history server.
1061 </td>
1062 <td>1.0.0</td>
1063 </tr>
1064 <tr>
1065 <td><code>spark.eventLog.enabled</code></td>
1066 <td>false</td>
1067 <td>
1068 Whether to log Spark events, useful for reconstructing the Web UI after the application has
1069 finished.
1070 </td>
1071 <td>1.0.0</td>
1072 </tr>
1073 <tr>
1074 <td><code>spark.eventLog.overwrite</code></td>
1075 <td>false</td>
1076 <td>
1077 Whether to overwrite any existing files.
1078 </td>
1079 <td>1.0.0</td>
1080 </tr>
1081 <tr>
1082 <td><code>spark.eventLog.buffer.kb</code></td>
1083 <td>100k</td>
1084 <td>
1085 Buffer size to use when writing to output streams, in KiB unless otherwise specified.
1086 </td>
1087 <td>1.0.0</td>
1088 </tr>
1089 <tr>
1090 <td><code>spark.eventLog.rolling.enabled</code></td>
1091 <td>false</td>
1092 <td>
1093 Whether rolling over event log files is enabled. If set to true, it cuts down each event
1094 log file to the configured size.
1095 </td>
1096 <td>3.0.0</td>
1097 </tr>
1098 <tr>
1099 <td><code>spark.eventLog.rolling.maxFileSize</code></td>
1100 <td>128m</td>
1101 <td>
1102 When <code>spark.eventLog.rolling.enabled=true</code>, specifies the max size of event log file before it's rolled over.
1103 </td>
1104 <td>3.0.0</td>
1105 </tr>
1106 <tr>
1107 <td><code>spark.ui.dagGraph.retainedRootRDDs</code></td>
1108 <td>Int.MaxValue</td>
1109 <td>
1110 How many DAG graph nodes the Spark UI and status APIs remember before garbage collecting.
1111 </td>
1112 <td>2.1.0</td>
1113 </tr>
1114 <tr>
1115 <td><code>spark.ui.enabled</code></td>
1116 <td>true</td>
1117 <td>
1118 Whether to run the web UI for the Spark application.
1119 </td>
1120 <td>1.1.1</td>
1121 </tr>
1122 <tr>
1123 <td><code>spark.ui.killEnabled</code></td>
1124 <td>true</td>
1125 <td>
1126 Allows jobs and stages to be killed from the web UI.
1127 </td>
1128 <td>1.0.0</td>
1129 </tr>
1130 <tr>
1131 <td><code>spark.ui.liveUpdate.period</code></td>
1132 <td>100ms</td>
1133 <td>
1134 How often to update live entities. -1 means "never update" when replaying applications,
1135 meaning only the last write will happen. For live applications, this avoids a few
1136 operations that we can live without when rapidly processing incoming task events.
1137 </td>
1138 <td>2.3.0</td>
1139 </tr>
1140 <tr>
1141 <td><code>spark.ui.liveUpdate.minFlushPeriod</code></td>
1142 <td>1s</td>
1143 <td>
1144 Minimum time elapsed before stale UI data is flushed. This avoids UI staleness when incoming
1145 task events are not fired frequently.
1146 </td>
1147 <td>2.4.2</td>
1148 </tr>
1149 <tr>
1150 <td><code>spark.ui.port</code></td>
1151 <td>4040</td>
1152 <td>
1153 Port for your application's dashboard, which shows memory and workload data.
1154 </td>
1155 <td>0.7.0</td>
1156 </tr>
1157 <tr>
1158 <td><code>spark.ui.retainedJobs</code></td>
1159 <td>1000</td>
1160 <td>
1161 How many jobs the Spark UI and status APIs remember before garbage collecting.
1162 This is a target maximum, and fewer elements may be retained in some circumstances.
1163 </td>
1164 <td>1.2.0</td>
1165 </tr>
1166 <tr>
1167 <td><code>spark.ui.retainedStages</code></td>
1168 <td>1000</td>
1169 <td>
1170 How many stages the Spark UI and status APIs remember before garbage collecting.
1171 This is a target maximum, and fewer elements may be retained in some circumstances.
1172 </td>
1173 <td>0.9.0</td>
1174 </tr>
1175 <tr>
1176 <td><code>spark.ui.retainedTasks</code></td>
1177 <td>100000</td>
1178 <td>
1179 How many tasks in one stage the Spark UI and status APIs remember before garbage collecting.
1180 This is a target maximum, and fewer elements may be retained in some circumstances.
1181 </td>
1182 <td>2.0.1</td>
1183 </tr>
1184 <tr>
1185 <td><code>spark.ui.reverseProxy</code></td>
1186 <td>false</td>
1187 <td>
1188 Enable running Spark Master as reverse proxy for worker and application UIs. In this mode, Spark master will reverse proxy the worker and application UIs to enable access without requiring direct access to their hosts. Use it with caution, as worker and application UI will not be accessible directly, you will only be able to access them through spark master/proxy public URL. This setting affects all the workers and application UIs running in the cluster and must be set on all the workers, drivers and masters.
1189 </td>
1190 <td>2.1.0</td>
1191 </tr>
1192 <tr>
1193 <td><code>spark.ui.reverseProxyUrl</code></td>
1194 <td></td>
1195 <td>
1196 This is the URL where your proxy is running. This URL is for proxy which is running in front of Spark Master. This is useful when running proxy for authentication e.g. OAuth proxy. Make sure this is a complete URL including scheme (http/https) and port to reach your proxy.
1197 </td>
1198 <td>2.1.0</td>
1199 </tr>
1200 <tr>
1201 <td><code>spark.ui.proxyRedirectUri</code></td>
1202 <td></td>
1203 <td>
1204 Where to address redirects when Spark is running behind a proxy. This will make Spark
1205 modify redirect responses so they point to the proxy server, instead of the Spark UI's own
1206 address. This should be only the address of the server, without any prefix paths for the
1207 application; the prefix should be set either by the proxy server itself (by adding the
1208 <code>X-Forwarded-Context</code> request header), or by setting the proxy base in the Spark
1209 app's configuration.
1210 </td>
1211 <td>3.0.0</td>
1212 </tr>
1213 <tr>
1214 <td><code>spark.ui.showConsoleProgress</code></td>
1215 <td>false</td>
1216 <td>
1217 Show the progress bar in the console. The progress bar shows the progress of stages
1218 that run for longer than 500ms. If multiple stages run at the same time, multiple
1219 progress bars will be displayed on the same line.
1220 <br/>
1221 <em>Note:</em> In shell environment, the default value of spark.ui.showConsoleProgress is true.
1222 </td>
1223 <td>1.2.1</td>
1224 </tr>
1225 <tr>
1226 <td><code>spark.ui.custom.executor.log.url</code></td>
1227 <td>(none)</td>
1228 <td>
1229 Specifies custom spark executor log URL for supporting external log service instead of using cluster
1230 managers' application log URLs in Spark UI. Spark will support some path variables via patterns
1231 which can vary on cluster manager. Please check the documentation for your cluster manager to
1232 see which patterns are supported, if any. <p/>
1233 Please note that this configuration also replaces original log urls in event log,
1234 which will be also effective when accessing the application on history server. The new log urls must be
1235 permanent, otherwise you might have dead link for executor log urls.
1236 <p/>
1237 For now, only YARN mode supports this configuration
1238 </td>
1239 <td>3.0.0</td>
1240 </tr>
1241 <tr>
1242 <td><code>spark.worker.ui.retainedExecutors</code></td>
1243 <td>1000</td>
1244 <td>
1245 How many finished executors the Spark UI and status APIs remember before garbage collecting.
1246 </td>
1247 <td>1.5.0</td>
1248 </tr>
1249 <tr>
1250 <td><code>spark.worker.ui.retainedDrivers</code></td>
1251 <td>1000</td>
1252 <td>
1253 How many finished drivers the Spark UI and status APIs remember before garbage collecting.
1254 </td>
1255 <td>1.5.0</td>
1256 </tr>
1257 <tr>
1258 <td><code>spark.sql.ui.retainedExecutions</code></td>
1259 <td>1000</td>
1260 <td>
1261 How many finished executions the Spark UI and status APIs remember before garbage collecting.
1262 </td>
1263 <td>1.5.0</td>
1264 </tr>
1265 <tr>
1266 <td><code>spark.streaming.ui.retainedBatches</code></td>
1267 <td>1000</td>
1268 <td>
1269 How many finished batches the Spark UI and status APIs remember before garbage collecting.
1270 </td>
1271 <td>1.0.0</td>
1272 </tr>
1273 <tr>
1274 <td><code>spark.ui.retainedDeadExecutors</code></td>
1275 <td>100</td>
1276 <td>
1277 How many dead executors the Spark UI and status APIs remember before garbage collecting.
1278 </td>
1279 <td>2.0.0</td>
1280 </tr>
1281 <tr>
1282 <td><code>spark.ui.filters</code></td>
1283 <td>None</td>
1284 <td>
1285 Comma separated list of filter class names to apply to the Spark Web UI. The filter should be a
1286 standard <a href="http://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html">
1287 javax servlet Filter</a>.
1288
1289 <br />Filter parameters can also be specified in the configuration, by setting config entries
1290 of the form <code>spark.<class name of filter>.param.<param name>=<value></code>
1291
1292 <br />For example:
1293 <br /><code>spark.ui.filters=com.test.filter1</code>
1294 <br /><code>spark.com.test.filter1.param.name1=foo</code>
1295 <br /><code>spark.com.test.filter1.param.name2=bar</code>
1296 </td>
1297 <td>1.0.0</td>
1298 </tr>
1299 <tr>
1300 <td><code>spark.ui.requestHeaderSize</code></td>
1301 <td>8k</td>
1302 <td>
1303 The maximum allowed size for a HTTP request header, in bytes unless otherwise specified.
1304 This setting applies for the Spark History Server too.
1305 </td>
1306 <td>2.2.3</td>
1307 </tr>
1308 </table>
1309
1310 ### Compression and Serialization
1311
1312 <table class="table">
1313 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1314 <tr>
1315 <td><code>spark.broadcast.compress</code></td>
1316 <td>true</td>
1317 <td>
1318 Whether to compress broadcast variables before sending them. Generally a good idea.
1319 Compression will use <code>spark.io.compression.codec</code>.
1320 </td>
1321 <td>0.6.0</td>
1322 </tr>
1323 <tr>
1324 <td><code>spark.checkpoint.compress</code></td>
1325 <td>false</td>
1326 <td>
1327 Whether to compress RDD checkpoints. Generally a good idea.
1328 Compression will use <code>spark.io.compression.codec</code>.
1329 </td>
1330 <td>2.2.0</td>
1331 </tr>
1332 <tr>
1333 <td><code>spark.io.compression.codec</code></td>
1334 <td>lz4</td>
1335 <td>
1336 The codec used to compress internal data such as RDD partitions, event log, broadcast variables
1337 and shuffle outputs. By default, Spark provides four codecs: <code>lz4</code>, <code>lzf</code>,
1338 <code>snappy</code>, and <code>zstd</code>. You can also use fully qualified class names to specify the codec,
1339 e.g.
1340 <code>org.apache.spark.io.LZ4CompressionCodec</code>,
1341 <code>org.apache.spark.io.LZFCompressionCodec</code>,
1342 <code>org.apache.spark.io.SnappyCompressionCodec</code>,
1343 and <code>org.apache.spark.io.ZStdCompressionCodec</code>.
1344 </td>
1345 <td>0.8.0</td>
1346 </tr>
1347 <tr>
1348 <td><code>spark.io.compression.lz4.blockSize</code></td>
1349 <td>32k</td>
1350 <td>
1351 Block size used in LZ4 compression, in the case when LZ4 compression codec
1352 is used. Lowering this block size will also lower shuffle memory usage when LZ4 is used.
1353 Default unit is bytes, unless otherwise specified.
1354 </td>
1355 <td>1.4.0</td>
1356 </tr>
1357 <tr>
1358 <td><code>spark.io.compression.snappy.blockSize</code></td>
1359 <td>32k</td>
1360 <td>
1361 Block size in Snappy compression, in the case when Snappy compression codec is used.
1362 Lowering this block size will also lower shuffle memory usage when Snappy is used.
1363 Default unit is bytes, unless otherwise specified.
1364 </td>
1365 <td>1.4.0</td>
1366 </tr>
1367 <tr>
1368 <td><code>spark.io.compression.zstd.level</code></td>
1369 <td>1</td>
1370 <td>
1371 Compression level for Zstd compression codec. Increasing the compression level will result in better
1372 compression at the expense of more CPU and memory.
1373 </td>
1374 <td>2.3.0</td>
1375 </tr>
1376 <tr>
1377 <td><code>spark.io.compression.zstd.bufferSize</code></td>
1378 <td>32k</td>
1379 <td>
1380 Buffer size in bytes used in Zstd compression, in the case when Zstd compression codec
1381 is used. Lowering this size will lower the shuffle memory usage when Zstd is used, but it
1382 might increase the compression cost because of excessive JNI call overhead.
1383 </td>
1384 <td>2.3.0</td>
1385 </tr>
1386 <tr>
1387 <td><code>spark.kryo.classesToRegister</code></td>
1388 <td>(none)</td>
1389 <td>
1390 If you use Kryo serialization, give a comma-separated list of custom class names to register
1391 with Kryo.
1392 See the <a href="tuning.html#data-serialization">tuning guide</a> for more details.
1393 </td>
1394 <td>1.2.0</td>
1395 </tr>
1396 <tr>
1397 <td><code>spark.kryo.referenceTracking</code></td>
1398 <td>true</td>
1399 <td>
1400 Whether to track references to the same object when serializing data with Kryo, which is
1401 necessary if your object graphs have loops and useful for efficiency if they contain multiple
1402 copies of the same object. Can be disabled to improve performance if you know this is not the
1403 case.
1404 </td>
1405 <td>0.8.0</td>
1406 </tr>
1407 <tr>
1408 <td><code>spark.kryo.registrationRequired</code></td>
1409 <td>false</td>
1410 <td>
1411 Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception
1412 if an unregistered class is serialized. If set to false (the default), Kryo will write
1413 unregistered class names along with each object. Writing class names can cause
1414 significant performance overhead, so enabling this option can enforce strictly that a
1415 user has not omitted classes from registration.
1416 </td>
1417 <td>1.1.0</td>
1418 </tr>
1419 <tr>
1420 <td><code>spark.kryo.registrator</code></td>
1421 <td>(none)</td>
1422 <td>
1423 If you use Kryo serialization, give a comma-separated list of classes that register your custom classes with Kryo. This
1424 property is useful if you need to register your classes in a custom way, e.g. to specify a custom
1425 field serializer. Otherwise <code>spark.kryo.classesToRegister</code> is simpler. It should be
1426 set to classes that extend
1427 <a href="api/scala/org/apache/spark/serializer/KryoRegistrator.html">
1428 <code>KryoRegistrator</code></a>.
1429 See the <a href="tuning.html#data-serialization">tuning guide</a> for more details.
1430 </td>
1431 <td>0.5.0</td>
1432 </tr>
1433 <tr>
1434 <td><code>spark.kryo.unsafe</code></td>
1435 <td>false</td>
1436 <td>
1437 Whether to use unsafe based Kryo serializer. Can be
1438 substantially faster by using Unsafe Based IO.
1439 </td>
1440 <td>2.1.0</td>
1441 </tr>
1442 <tr>
1443 <td><code>spark.kryoserializer.buffer.max</code></td>
1444 <td>64m</td>
1445 <td>
1446 Maximum allowable size of Kryo serialization buffer, in MiB unless otherwise specified.
1447 This must be larger than any object you attempt to serialize and must be less than 2048m.
1448 Increase this if you get a "buffer limit exceeded" exception inside Kryo.
1449 </td>
1450 <td>1.4.0</td>
1451 </tr>
1452 <tr>
1453 <td><code>spark.kryoserializer.buffer</code></td>
1454 <td>64k</td>
1455 <td>
1456 Initial size of Kryo's serialization buffer, in KiB unless otherwise specified.
1457 Note that there will be one buffer <i>per core</i> on each worker. This buffer will grow up to
1458 <code>spark.kryoserializer.buffer.max</code> if needed.
1459 </td>
1460 <td>1.4.0</td>
1461 </tr>
1462 <tr>
1463 <td><code>spark.rdd.compress</code></td>
1464 <td>false</td>
1465 <td>
1466 Whether to compress serialized RDD partitions (e.g. for
1467 <code>StorageLevel.MEMORY_ONLY_SER</code> in Java
1468 and Scala or <code>StorageLevel.MEMORY_ONLY</code> in Python).
1469 Can save substantial space at the cost of some extra CPU time.
1470 Compression will use <code>spark.io.compression.codec</code>.
1471 </td>
1472 <td>0.6.0</td>
1473 </tr>
1474 <tr>
1475 <td><code>spark.serializer</code></td>
1476 <td>
1477 org.apache.spark.serializer.<br />JavaSerializer
1478 </td>
1479 <td>
1480 Class to use for serializing objects that will be sent over the network or need to be cached
1481 in serialized form. The default of Java serialization works with any Serializable Java object
1482 but is quite slow, so we recommend <a href="tuning.html">using
1483 <code>org.apache.spark.serializer.KryoSerializer</code> and configuring Kryo serialization</a>
1484 when speed is necessary. Can be any subclass of
1485 <a href="api/scala/org/apache/spark/serializer/Serializer.html">
1486 <code>org.apache.spark.Serializer</code></a>.
1487 </td>
1488 <td>0.5.0</td>
1489 </tr>
1490 <tr>
1491 <td><code>spark.serializer.objectStreamReset</code></td>
1492 <td>100</td>
1493 <td>
1494 When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches
1495 objects to prevent writing redundant data, however that stops garbage collection of those
1496 objects. By calling 'reset' you flush that info from the serializer, and allow old
1497 objects to be collected. To turn off this periodic reset set it to -1.
1498 By default it will reset the serializer every 100 objects.
1499 </td>
1500 <td>1.0.0</td>
1501 </tr>
1502 </table>
1503
1504 ### Memory Management
1505
1506 <table class="table">
1507 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1508 <tr>
1509 <td><code>spark.memory.fraction</code></td>
1510 <td>0.6</td>
1511 <td>
1512 Fraction of (heap space - 300MB) used for execution and storage. The lower this is, the
1513 more frequently spills and cached data eviction occur. The purpose of this config is to set
1514 aside memory for internal metadata, user data structures, and imprecise size estimation
1515 in the case of sparse, unusually large records. Leaving this at the default value is
1516 recommended. For more detail, including important information about correctly tuning JVM
1517 garbage collection when increasing this value, see
1518 <a href="tuning.html#memory-management-overview">this description</a>.
1519 </td>
1520 <td>1.6.0</td>
1521 </tr>
1522 <tr>
1523 <td><code>spark.memory.storageFraction</code></td>
1524 <td>0.5</td>
1525 <td>
1526 Amount of storage memory immune to eviction, expressed as a fraction of the size of the
1527 region set aside by <code>spark.memory.fraction</code>. The higher this is, the less
1528 working memory may be available to execution and tasks may spill to disk more often.
1529 Leaving this at the default value is recommended. For more detail, see
1530 <a href="tuning.html#memory-management-overview">this description</a>.
1531 </td>
1532 <td>1.6.0</td>
1533 </tr>
1534 <tr>
1535 <td><code>spark.memory.offHeap.enabled</code></td>
1536 <td>false</td>
1537 <td>
1538 If true, Spark will attempt to use off-heap memory for certain operations. If off-heap memory
1539 use is enabled, then <code>spark.memory.offHeap.size</code> must be positive.
1540 </td>
1541 <td>1.6.0</td>
1542 </tr>
1543 <tr>
1544 <td><code>spark.memory.offHeap.size</code></td>
1545 <td>0</td>
1546 <td>
1547 The absolute amount of memory which can be used for off-heap allocation, in bytes unless otherwise specified.
1548 This setting has no impact on heap memory usage, so if your executors' total memory consumption
1549 must fit within some hard limit then be sure to shrink your JVM heap size accordingly.
1550 This must be set to a positive value when <code>spark.memory.offHeap.enabled=true</code>.
1551 </td>
1552 <td>1.6.0</td>
1553 </tr>
1554 <tr>
1555 <td><code>spark.storage.replication.proactive</code></td>
1556 <td>false</td>
1557 <td>
1558 Enables proactive block replication for RDD blocks. Cached RDD block replicas lost due to
1559 executor failures are replenished if there are any existing available replicas. This tries
1560 to get the replication level of the block to the initial number.
1561 </td>
1562 <td>2.2.0</td>
1563 </tr>
1564 <tr>
1565 <td><code>spark.cleaner.periodicGC.interval</code></td>
1566 <td>30min</td>
1567 <td>
1568 Controls how often to trigger a garbage collection.<br><br>
1569 This context cleaner triggers cleanups only when weak references are garbage collected.
1570 In long-running applications with large driver JVMs, where there is little memory pressure
1571 on the driver, this may happen very occasionally or not at all. Not cleaning at all may
1572 lead to executors running out of disk space after a while.
1573 </td>
1574 <td>1.6.0</td>
1575 </tr>
1576 <tr>
1577 <td><code>spark.cleaner.referenceTracking</code></td>
1578 <td>true</td>
1579 <td>
1580 Enables or disables context cleaning.
1581 </td>
1582 <td>1.0.0</td>
1583 </tr>
1584 <tr>
1585 <td><code>spark.cleaner.referenceTracking.blocking</code></td>
1586 <td>true</td>
1587 <td>
1588 Controls whether the cleaning thread should block on cleanup tasks (other than shuffle, which is controlled by
1589 <code>spark.cleaner.referenceTracking.blocking.shuffle</code> Spark property).
1590 </td>
1591 <td>1.0.0</td>
1592 </tr>
1593 <tr>
1594 <td><code>spark.cleaner.referenceTracking.blocking.shuffle</code></td>
1595 <td>false</td>
1596 <td>
1597 Controls whether the cleaning thread should block on shuffle cleanup tasks.
1598 </td>
1599 <td>1.1.1</td>
1600 </tr>
1601 <tr>
1602 <td><code>spark.cleaner.referenceTracking.cleanCheckpoints</code></td>
1603 <td>false</td>
1604 <td>
1605 Controls whether to clean checkpoint files if the reference is out of scope.
1606 </td>
1607 <td>1.4.0</td>
1608 </tr>
1609 </table>
1610
1611 ### Execution Behavior
1612
1613 <table class="table">
1614 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1615 <tr>
1616 <td><code>spark.broadcast.blockSize</code></td>
1617 <td>4m</td>
1618 <td>
1619 Size of each piece of a block for <code>TorrentBroadcastFactory</code>, in KiB unless otherwise
1620 specified. Too large a value decreases parallelism during broadcast (makes it slower); however,
1621 if it is too small, <code>BlockManager</code> might take a performance hit.
1622 </td>
1623 <td>0.5.0</td>
1624 </tr>
1625 <tr>
1626 <td><code>spark.broadcast.checksum</code></td>
1627 <td>true</td>
1628 <td>
1629 Whether to enable checksum for broadcast. If enabled, broadcasts will include a checksum, which can
1630 help detect corrupted blocks, at the cost of computing and sending a little more data. It's possible
1631 to disable it if the network has other mechanisms to guarantee data won't be corrupted during broadcast.
1632 </td>
1633 <td>2.1.1</td>
1634 </tr>
1635 <tr>
1636 <td><code>spark.executor.cores</code></td>
1637 <td>
1638 1 in YARN mode, all the available cores on the worker in
1639 standalone and Mesos coarse-grained modes.
1640 </td>
1641 <td>
1642 The number of cores to use on each executor.
1643
1644 In standalone and Mesos coarse-grained modes, for more detail, see
1645 <a href="spark-standalone.html#Executors Scheduling">this description</a>.
1646 </td>
1647 <td>1.0.0</td>
1648 </tr>
1649 <tr>
1650 <td><code>spark.default.parallelism</code></td>
1651 <td>
1652 For distributed shuffle operations like <code>reduceByKey</code> and <code>join</code>, the
1653 largest number of partitions in a parent RDD. For operations like <code>parallelize</code>
1654 with no parent RDDs, it depends on the cluster manager:
1655 <ul>
1656 <li>Local mode: number of cores on the local machine</li>
1657 <li>Mesos fine grained mode: 8</li>
1658 <li>Others: total number of cores on all executor nodes or 2, whichever is larger</li>
1659 </ul>
1660 </td>
1661 <td>
1662 Default number of partitions in RDDs returned by transformations like <code>join</code>,
1663 <code>reduceByKey</code>, and <code>parallelize</code> when not set by user.
1664 </td>
1665 <td>0.5.0</td>
1666 </tr>
1667 <tr>
1668 <td><code>spark.executor.heartbeatInterval</code></td>
1669 <td>10s</td>
1670 <td>
1671 Interval between each executor's heartbeats to the driver. Heartbeats let
1672 the driver know that the executor is still alive and update it with metrics for in-progress
1673 tasks. spark.executor.heartbeatInterval should be significantly less than
1674 spark.network.timeout
1675 </td>
1676 <td>1.1.0</td>
1677 </tr>
1678 <tr>
1679 <td><code>spark.files.fetchTimeout</code></td>
1680 <td>60s</td>
1681 <td>
1682 Communication timeout to use when fetching files added through SparkContext.addFile() from
1683 the driver.
1684 </td>
1685 <td>1.0.0</td>
1686 </tr>
1687 <tr>
1688 <td><code>spark.files.useFetchCache</code></td>
1689 <td>true</td>
1690 <td>
1691 If set to true (default), file fetching will use a local cache that is shared by executors
1692 that belong to the same application, which can improve task launching performance when
1693 running many executors on the same host. If set to false, these caching optimizations will
1694 be disabled and all executors will fetch their own copies of files. This optimization may be
1695 disabled in order to use Spark local directories that reside on NFS filesystems (see
1696 <a href="https://issues.apache.org/jira/browse/SPARK-6313">SPARK-6313</a> for more details).
1697 </td>
1698 <td>1.2.2</td>
1699 </tr>
1700 <tr>
1701 <td><code>spark.files.overwrite</code></td>
1702 <td>false</td>
1703 <td>
1704 Whether to overwrite files added through SparkContext.addFile() when the target file exists and
1705 its contents do not match those of the source.
1706 </td>
1707 <td>1.0.0</td>
1708 </tr>
1709 <tr>
1710 <td><code>spark.files.maxPartitionBytes</code></td>
1711 <td>134217728 (128 MiB)</td>
1712 <td>
1713 The maximum number of bytes to pack into a single partition when reading files.
1714 </td>
1715 <td>2.1.0</td>
1716 </tr>
1717 <tr>
1718 <td><code>spark.files.openCostInBytes</code></td>
1719 <td>4194304 (4 MiB)</td>
1720 <td>
1721 The estimated cost to open a file, measured by the number of bytes could be scanned at the same
1722 time. This is used when putting multiple files into a partition. It is better to overestimate,
1723 then the partitions with small files will be faster than partitions with bigger files.
1724 </td>
1725 <td>2.1.0</td>
1726 </tr>
1727 <tr>
1728 <td><code>spark.hadoop.cloneConf</code></td>
1729 <td>false</td>
1730 <td>
1731 If set to true, clones a new Hadoop <code>Configuration</code> object for each task. This
1732 option should be enabled to work around <code>Configuration</code> thread-safety issues (see
1733 <a href="https://issues.apache.org/jira/browse/SPARK-2546">SPARK-2546</a> for more details).
1734 This is disabled by default in order to avoid unexpected performance regressions for jobs that
1735 are not affected by these issues.
1736 </td>
1737 <td>1.0.3</td>
1738 </tr>
1739 <tr>
1740 <td><code>spark.hadoop.validateOutputSpecs</code></td>
1741 <td>true</td>
1742 <td>
1743 If set to true, validates the output specification (e.g. checking if the output directory already exists)
1744 used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing
1745 output directories. We recommend that users do not disable this except if trying to achieve compatibility
1746 with previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand.
1747 This setting is ignored for jobs generated through Spark Streaming's StreamingContext, since data may
1748 need to be rewritten to pre-existing output directories during checkpoint recovery.
1749 </td>
1750 <td>1.0.1</td>
1751 </tr>
1752 <tr>
1753 <td><code>spark.storage.memoryMapThreshold</code></td>
1754 <td>2m</td>
1755 <td>
1756 Size of a block above which Spark memory maps when reading a block from disk. Default unit is bytes,
1757 unless specified otherwise. This prevents Spark from memory mapping very small blocks. In general,
1758 memory mapping has high overhead for blocks close to or below the page size of the operating system.
1759 </td>
1760 <td>0.9.2</td>
1761 </tr>
1762 <tr>
1763 <td><code>spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version</code></td>
1764 <td>1</td>
1765 <td>
1766 The file output committer algorithm version, valid algorithm version number: 1 or 2.
1767 Version 2 may have better performance, but version 1 may handle failures better in certain situations,
1768 as per <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4815">MAPREDUCE-4815</a>.
1769 </td>
1770 <td>2.2.0</td>
1771 </tr>
1772 </table>
1773
1774 ### Executor Metrics
1775
1776 <table class="table">
1777 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1778 <tr>
1779 <td><code>spark.eventLog.logStageExecutorMetrics</code></td>
1780 <td>false</td>
1781 <td>
1782 Whether to write per-stage peaks of executor metrics (for each executor) to the event log.
1783 <br />
1784 <em>Note:</em> The metrics are polled (collected) and sent in the executor heartbeat,
1785 and this is always done; this configuration is only to determine if aggregated metric peaks
1786 are written to the event log.
1787 </td>
1788 <td>3.0.0</td>
1789 </tr>
1790 <td><code>spark.executor.processTreeMetrics.enabled</code></td>
1791 <td>false</td>
1792 <td>
1793 Whether to collect process tree metrics (from the /proc filesystem) when collecting
1794 executor metrics.
1795 <br />
1796 <em>Note:</em> The process tree metrics are collected only if the /proc filesystem
1797 exists.
1798 </td>
1799 <td>3.0.0</td>
1800 <tr>
1801 <td><code>spark.executor.metrics.pollingInterval</code></td>
1802 <td>0</td>
1803 <td>
1804 How often to collect executor metrics (in milliseconds).
1805 <br />
1806 If 0, the polling is done on executor heartbeats (thus at the heartbeat interval,
1807 specified by <code>spark.executor.heartbeatInterval</code>).
1808 If positive, the polling is done at this interval.
1809 </td>
1810 <td>3.0.0</td>
1811 </tr>
1812 </table>
1813
1814 ### Networking
1815
1816 <table class="table">
1817 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1818 <tr>
1819 <td><code>spark.rpc.message.maxSize</code></td>
1820 <td>128</td>
1821 <td>
1822 Maximum message size (in MiB) to allow in "control plane" communication; generally only applies to map
1823 output size information sent between executors and the driver. Increase this if you are running
1824 jobs with many thousands of map and reduce tasks and see messages about the RPC message size.
1825 </td>
1826 <td>2.0.0</td>
1827 </tr>
1828 <tr>
1829 <td><code>spark.blockManager.port</code></td>
1830 <td>(random)</td>
1831 <td>
1832 Port for all block managers to listen on. These exist on both the driver and the executors.
1833 </td>
1834 <td>1.1.0</td>
1835 </tr>
1836 <tr>
1837 <td><code>spark.driver.blockManager.port</code></td>
1838 <td>(value of spark.blockManager.port)</td>
1839 <td>
1840 Driver-specific port for the block manager to listen on, for cases where it cannot use the same
1841 configuration as executors.
1842 </td>
1843 <td>2.1.0</td>
1844 </tr>
1845 <tr>
1846 <td><code>spark.driver.bindAddress</code></td>
1847 <td>(value of spark.driver.host)</td>
1848 <td>
1849 Hostname or IP address where to bind listening sockets. This config overrides the SPARK_LOCAL_IP
1850 environment variable (see below).
1851
1852 <br />It also allows a different address from the local one to be advertised to executors or external systems.
1853 This is useful, for example, when running containers with bridged networking. For this to properly work,
1854 the different ports used by the driver (RPC, block manager and UI) need to be forwarded from the
1855 container's host.
1856 </td>
1857 <td>2.1.0</td>
1858 </tr>
1859 <tr>
1860 <td><code>spark.driver.host</code></td>
1861 <td>(local hostname)</td>
1862 <td>
1863 Hostname or IP address for the driver.
1864 This is used for communicating with the executors and the standalone Master.
1865 </td>
1866 <td>0.7.0</td>
1867 </tr>
1868 <tr>
1869 <td><code>spark.driver.port</code></td>
1870 <td>(random)</td>
1871 <td>
1872 Port for the driver to listen on.
1873 This is used for communicating with the executors and the standalone Master.
1874 </td>
1875 <td>0.7.0</td>
1876 </tr>
1877 <tr>
1878 <td><code>spark.rpc.io.backLog</code></td>
1879 <td>64</td>
1880 <td>
1881 Length of the accept queue for the RPC server. For large applications, this value may
1882 need to be increased, so that incoming connections are not dropped when a large number of
1883 connections arrives in a short period of time.
1884 </td>
1885 <td>3.0.0</td>
1886 </tr>
1887 <tr>
1888 <td><code>spark.network.timeout</code></td>
1889 <td>120s</td>
1890 <td>
1891 Default timeout for all network interactions. This config will be used in place of
1892 <code>spark.core.connection.ack.wait.timeout</code>,
1893 <code>spark.storage.blockManagerSlaveTimeoutMs</code>,
1894 <code>spark.shuffle.io.connectionTimeout</code>, <code>spark.rpc.askTimeout</code> or
1895 <code>spark.rpc.lookupTimeout</code> if they are not configured.
1896 </td>
1897 <td>1.3.0</td>
1898 </tr>
1899 <tr>
1900 <td><code>spark.network.io.preferDirectBufs</code></td>
1901 <td>true</td>
1902 <td>
1903 If enabled then off-heap buffer allocations are preferred by the shared allocators.
1904 Off-heap buffers are used to reduce garbage collection during shuffle and cache
1905 block transfer. For environments where off-heap memory is tightly limited, users may wish to
1906 turn this off to force all allocations to be on-heap.
1907 </td>
1908 <td>3.0.0</td>
1909 </tr>
1910 <tr>
1911 <td><code>spark.port.maxRetries</code></td>
1912 <td>16</td>
1913 <td>
1914 Maximum number of retries when binding to a port before giving up.
1915 When a port is given a specific value (non 0), each subsequent retry will
1916 increment the port used in the previous attempt by 1 before retrying. This
1917 essentially allows it to try a range of ports from the start port specified
1918 to port + maxRetries.
1919 </td>
1920 <td>1.1.1</td>
1921 </tr>
1922 <tr>
1923 <td><code>spark.rpc.numRetries</code></td>
1924 <td>3</td>
1925 <td>
1926 Number of times to retry before an RPC task gives up.
1927 An RPC task will run at most times of this number.
1928 </td>
1929 <td>1.4.0</td>
1930 </tr>
1931 <tr>
1932 <td><code>spark.rpc.retry.wait</code></td>
1933 <td>3s</td>
1934 <td>
1935 Duration for an RPC ask operation to wait before retrying.
1936 </td>
1937 <td>1.4.0</td>
1938 </tr>
1939 <tr>
1940 <td><code>spark.rpc.askTimeout</code></td>
1941 <td><code>spark.network.timeout</code></td>
1942 <td>
1943 Duration for an RPC ask operation to wait before timing out.
1944 </td>
1945 <td>1.4.0</td>
1946 </tr>
1947 <tr>
1948 <td><code>spark.rpc.lookupTimeout</code></td>
1949 <td>120s</td>
1950 <td>
1951 Duration for an RPC remote endpoint lookup operation to wait before timing out.
1952 </td>
1953 <td>1.4.0</td>
1954 </tr>
1955 <tr>
1956 <td><code>spark.core.connection.ack.wait.timeout</code></td>
1957 <td><code>spark.network.timeout</code></td>
1958 <td>
1959 How long for the connection to wait for ack to occur before timing
1960 out and giving up. To avoid unwilling timeout caused by long pause like GC,
1961 you can set larger value.
1962 </td>
1963 <td>1.1.1</td>
1964 </tr>
1965 <tr>
1966 <td><code>spark.network.maxRemoteBlockSizeFetchToMem</code></td>
1967 <td>200m</td>
1968 <td>
1969 Remote block will be fetched to disk when size of the block is above this threshold
1970 in bytes. This is to avoid a giant request takes too much memory. Note this
1971 configuration will affect both shuffle fetch and block manager remote block fetch.
1972 For users who enabled external shuffle service, this feature can only work when
1973 external shuffle service is at least 2.3.0.
1974 </td>
1975 <td>3.0.0</td>
1976 </tr>
1977 </table>
1978
1979 ### Scheduling
1980
1981 <table class="table">
1982 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
1983 <tr>
1984 <td><code>spark.cores.max</code></td>
1985 <td>(not set)</td>
1986 <td>
1987 When running on a <a href="spark-standalone.html">standalone deploy cluster</a> or a
1988 <a href="running-on-mesos.html#mesos-run-modes">Mesos cluster in "coarse-grained"
1989 sharing mode</a>, the maximum amount of CPU cores to request for the application from
1990 across the cluster (not from each machine). If not set, the default will be
1991 <code>spark.deploy.defaultCores</code> on Spark's standalone cluster manager, or
1992 infinite (all available cores) on Mesos.
1993 </td>
1994 <td>0.6.0</td>
1995 </tr>
1996 <tr>
1997 <td><code>spark.locality.wait</code></td>
1998 <td>3s</td>
1999 <td>
2000 How long to wait to launch a data-local task before giving up and launching it
2001 on a less-local node. The same wait will be used to step through multiple locality levels
2002 (process-local, node-local, rack-local and then any). It is also possible to customize the
2003 waiting time for each level by setting <code>spark.locality.wait.node</code>, etc.
2004 You should increase this setting if your tasks are long and see poor locality, but the
2005 default usually works well.
2006 </td>
2007 <td>0.5.0</td>
2008 </tr>
2009 <tr>
2010 <td><code>spark.locality.wait.node</code></td>
2011 <td>spark.locality.wait</td>
2012 <td>
2013 Customize the locality wait for node locality. For example, you can set this to 0 to skip
2014 node locality and search immediately for rack locality (if your cluster has rack information).
2015 </td>
2016 <td>0.8.0</td>
2017 </tr>
2018 <tr>
2019 <td><code>spark.locality.wait.process</code></td>
2020 <td>spark.locality.wait</td>
2021 <td>
2022 Customize the locality wait for process locality. This affects tasks that attempt to access
2023 cached data in a particular executor process.
2024 </td>
2025 <td>0.8.0</td>
2026 </tr>
2027 <tr>
2028 <td><code>spark.locality.wait.rack</code></td>
2029 <td>spark.locality.wait</td>
2030 <td>
2031 Customize the locality wait for rack locality.
2032 </td>
2033 <td>0.8.0</td>
2034 </tr>
2035 <tr>
2036 <td><code>spark.scheduler.maxRegisteredResourcesWaitingTime</code></td>
2037 <td>30s</td>
2038 <td>
2039 Maximum amount of time to wait for resources to register before scheduling begins.
2040 </td>
2041 <td>1.1.1</td>
2042 </tr>
2043 <tr>
2044 <td><code>spark.scheduler.minRegisteredResourcesRatio</code></td>
2045 <td>0.8 for KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode</td>
2046 <td>
2047 The minimum ratio of registered resources (registered resources / total expected resources)
2048 (resources are executors in yarn mode and Kubernetes mode, CPU cores in standalone mode and Mesos coarse-grained
2049 mode ['spark.cores.max' value is total expected resources for Mesos coarse-grained mode] )
2050 to wait for before scheduling begins. Specified as a double between 0.0 and 1.0.
2051 Regardless of whether the minimum ratio of resources has been reached,
2052 the maximum amount of time it will wait before scheduling begins is controlled by config
2053 <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>.
2054 </td>
2055 <td>1.1.1</td>
2056 </tr>
2057 <tr>
2058 <td><code>spark.scheduler.mode</code></td>
2059 <td>FIFO</td>
2060 <td>
2061 The <a href="job-scheduling.html#scheduling-within-an-application">scheduling mode</a> between
2062 jobs submitted to the same SparkContext. Can be set to <code>FAIR</code>
2063 to use fair sharing instead of queueing jobs one after another. Useful for
2064 multi-user services.
2065 </td>
2066 <td>0.8.0</td>
2067 </tr>
2068 <tr>
2069 <td><code>spark.scheduler.revive.interval</code></td>
2070 <td>1s</td>
2071 <td>
2072 The interval length for the scheduler to revive the worker resource offers to run tasks.
2073 </td>
2074 <td>0.8.1</td>
2075 </tr>
2076 <tr>
2077 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2078 <td>10000</td>
2079 <td>
2080 The default capacity for event queues. Spark will try to initialize an event queue
2081 using capacity specified by `spark.scheduler.listenerbus.eventqueue.queueName.capacity`
2082 first. If it's not configured, Spark will use the default capacity specified by this
2083 config. Note that capacity must be greater than 0. Consider increasing value (e.g. 20000)
2084 if listener events are dropped. Increasing this value may result in the driver using more memory.
2085 </td>
2086 <td>2.3.0</td>
2087 </tr>
2088 <tr>
2089 <td><code>spark.scheduler.listenerbus.eventqueue.shared.capacity</code></td>
2090 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2091 <td>
2092 Capacity for shared event queue in Spark listener bus, which hold events for external listener(s)
2093 that register to the listener bus. Consider increasing value, if the listener events corresponding
2094 to shared queue are dropped. Increasing this value may result in the driver using more memory.
2095 </td>
2096 <td>3.0.0</td>
2097 </tr>
2098 <tr>
2099 <td><code>spark.scheduler.listenerbus.eventqueue.appStatus.capacity</code></td>
2100 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2101 <td>
2102 Capacity for appStatus event queue, which hold events for internal application status listeners.
2103 Consider increasing value, if the listener events corresponding to appStatus queue are dropped.
2104 Increasing this value may result in the driver using more memory.
2105 </td>
2106 <td>3.0.0</td>
2107 </tr>
2108 <tr>
2109 <td><code>spark.scheduler.listenerbus.eventqueue.executorManagement.capacity</code></td>
2110 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2111 <td>
2112 Capacity for executorManagement event queue in Spark listener bus, which hold events for internal
2113 executor management listeners. Consider increasing value if the listener events corresponding to
2114 executorManagement queue are dropped. Increasing this value may result in the driver using more memory.
2115 </td>
2116 <td>3.0.0</td>
2117 </tr>
2118 <tr>
2119 <td><code>spark.scheduler.listenerbus.eventqueue.eventLog.capacity</code></td>
2120 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2121 <td>
2122 Capacity for eventLog queue in Spark listener bus, which hold events for Event logging listeners
2123 that write events to eventLogs. Consider increasing value if the listener events corresponding to eventLog queue
2124 are dropped. Increasing this value may result in the driver using more memory.
2125 </td>
2126 <td>3.0.0</td>
2127 </tr>
2128 <tr>
2129 <td><code>spark.scheduler.listenerbus.eventqueue.streams.capacity</code></td>
2130 <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
2131 <td>
2132 Capacity for streams queue in Spark listener bus, which hold events for internal streaming listener.
2133 Consider increasing value if the listener events corresponding to streams queue are dropped. Increasing
2134 this value may result in the driver using more memory.
2135 </td>
2136 <td>3.0.0</td>
2137 </tr>
2138 <tr>
2139 <td><code>spark.scheduler.blacklist.unschedulableTaskSetTimeout</code></td>
2140 <td>120s</td>
2141 <td>
2142 The timeout in seconds to wait to acquire a new executor and schedule a task before aborting a
2143 TaskSet which is unschedulable because of being completely blacklisted.
2144 </td>
2145 <td>2.4.1</td>
2146 </tr>
2147 <tr>
2148 <td><code>spark.blacklist.enabled</code></td>
2149 <td>
2150 false
2151 </td>
2152 <td>
2153 If set to "true", prevent Spark from scheduling tasks on executors that have been blacklisted
2154 due to too many task failures. The blacklisting algorithm can be further controlled by the
2155 other "spark.blacklist" configuration options.
2156 </td>
2157 <td>2.1.0</td>
2158 </tr>
2159 <tr>
2160 <td><code>spark.blacklist.timeout</code></td>
2161 <td>1h</td>
2162 <td>
2163 (Experimental) How long a node or executor is blacklisted for the entire application, before it
2164 is unconditionally removed from the blacklist to attempt running new tasks.
2165 </td>
2166 <td>2.1.0</td>
2167 </tr>
2168 <tr>
2169 <td><code>spark.blacklist.task.maxTaskAttemptsPerExecutor</code></td>
2170 <td>1</td>
2171 <td>
2172 (Experimental) For a given task, how many times it can be retried on one executor before the
2173 executor is blacklisted for that task.
2174 </td>
2175 <td>2.1.0</td>
2176 </tr>
2177 <tr>
2178 <td><code>spark.blacklist.task.maxTaskAttemptsPerNode</code></td>
2179 <td>2</td>
2180 <td>
2181 (Experimental) For a given task, how many times it can be retried on one node, before the entire
2182 node is blacklisted for that task.
2183 </td>
2184 <td>2.1.0</td>
2185 </tr>
2186 <tr>
2187 <td><code>spark.blacklist.stage.maxFailedTasksPerExecutor</code></td>
2188 <td>2</td>
2189 <td>
2190 (Experimental) How many different tasks must fail on one executor, within one stage, before the
2191 executor is blacklisted for that stage.
2192 </td>
2193 <td>2.1.0</td>
2194 </tr>
2195 <tr>
2196 <td><code>spark.blacklist.stage.maxFailedExecutorsPerNode</code></td>
2197 <td>2</td>
2198 <td>
2199 (Experimental) How many different executors are marked as blacklisted for a given stage, before
2200 the entire node is marked as failed for the stage.
2201 </td>
2202 <td>2.1.0</td>
2203 </tr>
2204 <tr>
2205 <td><code>spark.blacklist.application.maxFailedTasksPerExecutor</code></td>
2206 <td>2</td>
2207 <td>
2208 (Experimental) How many different tasks must fail on one executor, in successful task sets,
2209 before the executor is blacklisted for the entire application. Blacklisted executors will
2210 be automatically added back to the pool of available resources after the timeout specified by
2211 <code>spark.blacklist.timeout</code>. Note that with dynamic allocation, though, the executors
2212 may get marked as idle and be reclaimed by the cluster manager.
2213 </td>
2214 <td>2.2.0</td>
2215 </tr>
2216 <tr>
2217 <td><code>spark.blacklist.application.maxFailedExecutorsPerNode</code></td>
2218 <td>2</td>
2219 <td>
2220 (Experimental) How many different executors must be blacklisted for the entire application,
2221 before the node is blacklisted for the entire application. Blacklisted nodes will
2222 be automatically added back to the pool of available resources after the timeout specified by
2223 <code>spark.blacklist.timeout</code>. Note that with dynamic allocation, though, the executors
2224 on the node may get marked as idle and be reclaimed by the cluster manager.
2225 </td>
2226 <td>2.2.0</td>
2227 </tr>
2228 <tr>
2229 <td><code>spark.blacklist.killBlacklistedExecutors</code></td>
2230 <td>false</td>
2231 <td>
2232 (Experimental) If set to "true", allow Spark to automatically kill the executors
2233 when they are blacklisted on fetch failure or blacklisted for the entire application,
2234 as controlled by spark.blacklist.application.*. Note that, when an entire node is added
2235 to the blacklist, all of the executors on that node will be killed.
2236 </td>
2237 <td>2.2.0</td>
2238 </tr>
2239 <tr>
2240 <td><code>spark.blacklist.application.fetchFailure.enabled</code></td>
2241 <td>false</td>
2242 <td>
2243 (Experimental) If set to "true", Spark will blacklist the executor immediately when a fetch
2244 failure happens. If external shuffle service is enabled, then the whole node will be
2245 blacklisted.
2246 </td>
2247 <td>2.3.0</td>
2248 </tr>
2249 <tr>
2250 <td><code>spark.speculation</code></td>
2251 <td>false</td>
2252 <td>
2253 If set to "true", performs speculative execution of tasks. This means if one or more tasks are
2254 running slowly in a stage, they will be re-launched.
2255 </td>
2256 <td>0.6.0</td>
2257 </tr>
2258 <tr>
2259 <td><code>spark.speculation.interval</code></td>
2260 <td>100ms</td>
2261 <td>
2262 How often Spark will check for tasks to speculate.
2263 </td>
2264 <td>0.6.0</td>
2265 </tr>
2266 <tr>
2267 <td><code>spark.speculation.multiplier</code></td>
2268 <td>1.5</td>
2269 <td>
2270 How many times slower a task is than the median to be considered for speculation.
2271 </td>
2272 <td>0.6.0</td>
2273 </tr>
2274 <tr>
2275 <td><code>spark.speculation.quantile</code></td>
2276 <td>0.75</td>
2277 <td>
2278 Fraction of tasks which must be complete before speculation is enabled for a particular stage.
2279 </td>
2280 <td>0.6.0</td>
2281 </tr>
2282 <tr>
2283 <td><code>spark.speculation.task.duration.threshold</code></td>
2284 <td>None</td>
2285 <td>
2286 Task duration after which scheduler would try to speculative run the task. If provided, tasks
2287 would be speculatively run if current stage contains less tasks than or equal to the number of
2288 slots on a single executor and the task is taking longer time than the threshold. This config
2289 helps speculate stage with very few tasks. Regular speculation configs may also apply if the
2290 executor slots are large enough. E.g. tasks might be re-launched if there are enough successful
2291 runs even though the threshold hasn't been reached. The number of slots is computed based on
2292 the conf values of spark.executor.cores and spark.task.cpus minimum 1.
2293 Default unit is bytes, unless otherwise specified.
2294 </td>
2295 <td>3.0.0</td>
2296 </tr>
2297 <tr>
2298 <td><code>spark.task.cpus</code></td>
2299 <td>1</td>
2300 <td>
2301 Number of cores to allocate for each task.
2302 </td>
2303 <td>0.5.0</td>
2304 </tr>
2305 <tr>
2306 <td><code>spark.task.resource.{resourceName}.amount</code></td>
2307 <td>1</td>
2308 <td>
2309 Amount of a particular resource type to allocate for each task, note that this can be a double.
2310 If this is specified you must also provide the executor config
2311 <code>spark.executor.resource.{resourceName}.amount</code> and any corresponding discovery configs
2312 so that your executors are created with that resource type. In addition to whole amounts,
2313 a fractional amount (for example, 0.25, which means 1/4th of a resource) may be specified.
2314 Fractional amounts must be less than or equal to 0.5, or in other words, the minimum amount of
2315 resource sharing is 2 tasks per resource. Additionally, fractional amounts are floored
2316 in order to assign resource slots (e.g. a 0.2222 configuration, or 1/0.2222 slots will become
2317 4 tasks/resource, not 5).
2318 </td>
2319 <td>3.0.0</td>
2320 </tr>
2321 <tr>
2322 <td><code>spark.task.maxFailures</code></td>
2323 <td>4</td>
2324 <td>
2325 Number of failures of any particular task before giving up on the job.
2326 The total number of failures spread across different tasks will not cause the job
2327 to fail; a particular task has to fail this number of attempts.
2328 Should be greater than or equal to 1. Number of allowed retries = this value - 1.
2329 </td>
2330 <td>0.8.0</td>
2331 </tr>
2332 <tr>
2333 <td><code>spark.task.reaper.enabled</code></td>
2334 <td>false</td>
2335 <td>
2336 Enables monitoring of killed / interrupted tasks. When set to true, any task which is killed
2337 will be monitored by the executor until that task actually finishes executing. See the other
2338 <code>spark.task.reaper.*</code> configurations for details on how to control the exact behavior
2339 of this monitoring. When set to false (the default), task killing will use an older code
2340 path which lacks such monitoring.
2341 </td>
2342 <td>2.0.3</td>
2343 </tr>
2344 <tr>
2345 <td><code>spark.task.reaper.pollingInterval</code></td>
2346 <td>10s</td>
2347 <td>
2348 When <code>spark.task.reaper.enabled = true</code>, this setting controls the frequency at which
2349 executors will poll the status of killed tasks. If a killed task is still running when polled
2350 then a warning will be logged and, by default, a thread-dump of the task will be logged
2351 (this thread dump can be disabled via the <code>spark.task.reaper.threadDump</code> setting,
2352 which is documented below).
2353 </td>
2354 <td>2.0.3</td>
2355 </tr>
2356 <tr>
2357 <td><code>spark.task.reaper.threadDump</code></td>
2358 <td>true</td>
2359 <td>
2360 When <code>spark.task.reaper.enabled = true</code>, this setting controls whether task thread
2361 dumps are logged during periodic polling of killed tasks. Set this to false to disable
2362 collection of thread dumps.
2363 </td>
2364 <td>2.0.3</td>
2365 </tr>
2366 <tr>
2367 <td><code>spark.task.reaper.killTimeout</code></td>
2368 <td>-1</td>
2369 <td>
2370 When <code>spark.task.reaper.enabled = true</code>, this setting specifies a timeout after
2371 which the executor JVM will kill itself if a killed task has not stopped running. The default
2372 value, -1, disables this mechanism and prevents the executor from self-destructing. The purpose
2373 of this setting is to act as a safety-net to prevent runaway noncancellable tasks from rendering
2374 an executor unusable.
2375 </td>
2376 <td>2.0.3</td>
2377 </tr>
2378 <tr>
2379 <td><code>spark.stage.maxConsecutiveAttempts</code></td>
2380 <td>4</td>
2381 <td>
2382 Number of consecutive stage attempts allowed before a stage is aborted.
2383 </td>
2384 <td>2.2.0</td>
2385 </tr>
2386 </table>
2387
2388 ### Barrier Execution Mode
2389
2390 <table class="table">
2391 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2392 <tr>
2393 <td><code>spark.barrier.sync.timeout</code></td>
2394 <td>365d</td>
2395 <td>
2396 The timeout in seconds for each <code>barrier()</code> call from a barrier task. If the
2397 coordinator didn't receive all the sync messages from barrier tasks within the
2398 configured time, throw a SparkException to fail all the tasks. The default value is set
2399 to 31536000(3600 * 24 * 365) so the <code>barrier()</code> call shall wait for one year.
2400 </td>
2401 <td>2.4.0</td>
2402 </tr>
2403 <tr>
2404 <td><code>spark.scheduler.barrier.maxConcurrentTasksCheck.interval</code></td>
2405 <td>15s</td>
2406 <td>
2407 Time in seconds to wait between a max concurrent tasks check failure and the next
2408 check. A max concurrent tasks check ensures the cluster can launch more concurrent
2409 tasks than required by a barrier stage on job submitted. The check can fail in case
2410 a cluster has just started and not enough executors have registered, so we wait for a
2411 little while and try to perform the check again. If the check fails more than a
2412 configured max failure times for a job then fail current job submission. Note this
2413 config only applies to jobs that contain one or more barrier stages, we won't perform
2414 the check on non-barrier jobs.
2415 </td>
2416 <td>2.4.0</td>
2417 </tr>
2418 <tr>
2419 <td><code>spark.scheduler.barrier.maxConcurrentTasksCheck.maxFailures</code></td>
2420 <td>40</td>
2421 <td>
2422 Number of max concurrent tasks check failures allowed before fail a job submission.
2423 A max concurrent tasks check ensures the cluster can launch more concurrent tasks than
2424 required by a barrier stage on job submitted. The check can fail in case a cluster
2425 has just started and not enough executors have registered, so we wait for a little
2426 while and try to perform the check again. If the check fails more than a configured
2427 max failure times for a job then fail current job submission. Note this config only
2428 applies to jobs that contain one or more barrier stages, we won't perform the check on
2429 non-barrier jobs.
2430 </td>
2431 <td>2.4.0</td>
2432 </tr>
2433 </table>
2434
2435 ### Dynamic Allocation
2436
2437 <table class="table">
2438 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2439 <tr>
2440 <td><code>spark.dynamicAllocation.enabled</code></td>
2441 <td>false</td>
2442 <td>
2443 Whether to use dynamic resource allocation, which scales the number of executors registered
2444 with this application up and down based on the workload.
2445 For more detail, see the description
2446 <a href="job-scheduling.html#dynamic-resource-allocation">here</a>.
2447 <br><br>
2448 This requires <code>spark.shuffle.service.enabled</code> or
2449 <code>spark.dynamicAllocation.shuffleTracking.enabled</code> to be set.
2450 The following configurations are also relevant:
2451 <code>spark.dynamicAllocation.minExecutors</code>,
2452 <code>spark.dynamicAllocation.maxExecutors</code>, and
2453 <code>spark.dynamicAllocation.initialExecutors</code>
2454 <code>spark.dynamicAllocation.executorAllocationRatio</code>
2455 </td>
2456 <td>1.2.0</td>
2457 </tr>
2458 <tr>
2459 <td><code>spark.dynamicAllocation.executorIdleTimeout</code></td>
2460 <td>60s</td>
2461 <td>
2462 If dynamic allocation is enabled and an executor has been idle for more than this duration,
2463 the executor will be removed. For more detail, see this
2464 <a href="job-scheduling.html#resource-allocation-policy">description</a>.
2465 </td>
2466 <td>1.2.0</td>
2467 </tr>
2468 <tr>
2469 <td><code>spark.dynamicAllocation.cachedExecutorIdleTimeout</code></td>
2470 <td>infinity</td>
2471 <td>
2472 If dynamic allocation is enabled and an executor which has cached data blocks has been idle for more than this duration,
2473 the executor will be removed. For more details, see this
2474 <a href="job-scheduling.html#resource-allocation-policy">description</a>.
2475 </td>
2476 <td>1.4.0</td>
2477 </tr>
2478 <tr>
2479 <td><code>spark.dynamicAllocation.initialExecutors</code></td>
2480 <td><code>spark.dynamicAllocation.minExecutors</code></td>
2481 <td>
2482 Initial number of executors to run if dynamic allocation is enabled.
2483 <br /><br />
2484 If `--num-executors` (or `spark.executor.instances`) is set and larger than this value, it will
2485 be used as the initial number of executors.
2486 </td>
2487 <td>1.3.0</td>
2488 </tr>
2489 <tr>
2490 <td><code>spark.dynamicAllocation.maxExecutors</code></td>
2491 <td>infinity</td>
2492 <td>
2493 Upper bound for the number of executors if dynamic allocation is enabled.
2494 </td>
2495 <td>1.2.0</td>
2496 </tr>
2497 <tr>
2498 <td><code>spark.dynamicAllocation.minExecutors</code></td>
2499 <td>0</td>
2500 <td>
2501 Lower bound for the number of executors if dynamic allocation is enabled.
2502 </td>
2503 <td>1.2.0</td>
2504 </tr>
2505 <tr>
2506 <td><code>spark.dynamicAllocation.executorAllocationRatio</code></td>
2507 <td>1</td>
2508 <td>
2509 By default, the dynamic allocation will request enough executors to maximize the
2510 parallelism according to the number of tasks to process. While this minimizes the
2511 latency of the job, with small tasks this setting can waste a lot of resources due to
2512 executor allocation overhead, as some executor might not even do any work.
2513 This setting allows to set a ratio that will be used to reduce the number of
2514 executors w.r.t. full parallelism.
2515 Defaults to 1.0 to give maximum parallelism.
2516 0.5 will divide the target number of executors by 2
2517 The target number of executors computed by the dynamicAllocation can still be overridden
2518 by the <code>spark.dynamicAllocation.minExecutors</code> and
2519 <code>spark.dynamicAllocation.maxExecutors</code> settings
2520 </td>
2521 <td>2.4.0</td>
2522 </tr>
2523 <tr>
2524 <td><code>spark.dynamicAllocation.schedulerBacklogTimeout</code></td>
2525 <td>1s</td>
2526 <td>
2527 If dynamic allocation is enabled and there have been pending tasks backlogged for more than
2528 this duration, new executors will be requested. For more detail, see this
2529 <a href="job-scheduling.html#resource-allocation-policy">description</a>.
2530 </td>
2531 <td>1.2.0</td>
2532 </tr>
2533 <tr>
2534 <td><code>spark.dynamicAllocation.sustainedSchedulerBacklogTimeout</code></td>
2535 <td><code>schedulerBacklogTimeout</code></td>
2536 <td>
2537 Same as <code>spark.dynamicAllocation.schedulerBacklogTimeout</code>, but used only for
2538 subsequent executor requests. For more detail, see this
2539 <a href="job-scheduling.html#resource-allocation-policy">description</a>.
2540 </td>
2541 <td>1.2.0</td>
2542 </tr>
2543 <tr>
2544 <td><code>spark.dynamicAllocation.shuffleTracking.enabled</code></td>
2545 <td><code>false</code></td>
2546 <td>
2547 Experimental. Enables shuffle file tracking for executors, which allows dynamic allocation
2548 without the need for an external shuffle service. This option will try to keep alive executors
2549 that are storing shuffle data for active jobs.
2550 </td>
2551 <td>3.0.0</td>
2552 </tr>
2553 <tr>
2554 <td><code>spark.dynamicAllocation.shuffleTracking.timeout</code></td>
2555 <td><code>infinity</code></td>
2556 <td>
2557 When shuffle tracking is enabled, controls the timeout for executors that are holding shuffle
2558 data. The default value means that Spark will rely on the shuffles being garbage collected to be
2559 able to release executors. If for some reason garbage collection is not cleaning up shuffles
2560 quickly enough, this option can be used to control when to time out executors even when they are
2561 storing shuffle data.
2562 </td>
2563 <td>3.0.0</td>
2564 </tr>
2565 </table>
2566
2567 ### Thread Configurations
2568
2569 Depending on jobs and cluster configurations, we can set number of threads in several places in Spark to utilize
2570 available resources efficiently to get better performance. Prior to Spark 3.0, these thread configurations apply
2571 to all roles of Spark, such as driver, executor, worker and master. From Spark 3.0, we can configure threads in
2572 finer granularity starting from driver and executor. Take RPC module as example in below table. For other modules,
2573 like shuffle, just replace "rpc" with "shuffle" in the property names except
2574 <code>spark.{driver|executor}.rpc.netty.dispatcher.numThreads</code>, which is only for RPC module.
2575
2576 <table class="table">
2577 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2578 <tr>
2579 <td><code>spark.{driver|executor}.rpc.io.serverThreads</code></td>
2580 <td>
2581 Fall back on <code>spark.rpc.io.serverThreads</code>
2582 </td>
2583 <td>Number of threads used in the server thread pool</td>
2584 <td>1.6.0</td>
2585 </tr>
2586 <tr>
2587 <td><code>spark.{driver|executor}.rpc.io.clientThreads</code></td>
2588 <td>
2589 Fall back on <code>spark.rpc.io.clientThreads</code>
2590 </td>
2591 <td>Number of threads used in the client thread pool</td>
2592 <td>1.6.0</td>
2593 </tr>
2594 <tr>
2595 <td><code>spark.{driver|executor}.rpc.netty.dispatcher.numThreads</code></td>
2596 <td>
2597 Fall back on <code>spark.rpc.netty.dispatcher.numThreads</code>
2598 </td>
2599 <td>Number of threads used in RPC message dispatcher thread pool</td>
2600 <td>3.0.0</td>
2601 </tr>
2602 </table>
2603
2604 The default value for number of thread-related config keys is the minimum of the number of cores requested for
2605 the driver or executor, or, in the absence of that value, the number of cores available for the JVM (with a hardcoded upper limit of 8).
2606
2607
2608 ### Security
2609
2610 Please refer to the [Security](security.html) page for available options on how to secure different
2611 Spark subsystems.
2612
2613
2614 ### Spark SQL
2615
2616 {% for static_file in site.static_files %}
2617 {% if static_file.name == 'generated-runtime-sql-config-table.html' %}
2618
2619 #### Runtime SQL Configuration
2620
2621 Runtime SQL configurations are per-session, mutable Spark SQL configurations. They can be set with initial values by the config file
2622 and command-line options with `--conf/-c` prefixed, or by setting `SparkConf` that are used to create `SparkSession`.
2623 Also, they can be set and queried by SET commands and rest to their initial values by RESET command,
2624 or by `SparkSession.conf`'s setter and getter methods in runtime.
2625
2626 {% include_relative generated-runtime-sql-config-table.html %}
2627 {% break %}
2628 {% endif %}
2629 {% endfor %}
2630
2631 {% for static_file in site.static_files %}
2632 {% if static_file.name == 'generated-static-sql-config-table.html' %}
2633
2634 #### Static SQL Configuration
2635
2636 Static SQL configurations are cross-session, immutable Spark SQL configurations. They can be set with final values by the config file
2637 and command-line options with `--conf/-c` prefixed, or by setting `SparkConf` that are used to create `SparkSession`.
2638 External users can query the static sql config values via `SparkSession.conf` or via set command, e.g. `SET spark.sql.extensions;`, but cannot set/unset them.
2639
2640 {% include_relative generated-static-sql-config-table.html %}
2641 {% break %}
2642 {% endif %}
2643 {% endfor %}
2644
2645
2646 ### Spark Streaming
2647
2648 <table class="table">
2649 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2650 <tr>
2651 <td><code>spark.streaming.backpressure.enabled</code></td>
2652 <td>false</td>
2653 <td>
2654 Enables or disables Spark Streaming's internal backpressure mechanism (since 1.5).
2655 This enables the Spark Streaming to control the receiving rate based on the
2656 current batch scheduling delays and processing times so that the system receives
2657 only as fast as the system can process. Internally, this dynamically sets the
2658 maximum receiving rate of receivers. This rate is upper bounded by the values
2659 <code>spark.streaming.receiver.maxRate</code> and <code>spark.streaming.kafka.maxRatePerPartition</code>
2660 if they are set (see below).
2661 </td>
2662 <td>1.5.0</td>
2663 </tr>
2664 <tr>
2665 <td><code>spark.streaming.backpressure.initialRate</code></td>
2666 <td>not set</td>
2667 <td>
2668 This is the initial maximum receiving rate at which each receiver will receive data for the
2669 first batch when the backpressure mechanism is enabled.
2670 </td>
2671 <td>2.0.0</td>
2672 </tr>
2673 <tr>
2674 <td><code>spark.streaming.blockInterval</code></td>
2675 <td>200ms</td>
2676 <td>
2677 Interval at which data received by Spark Streaming receivers is chunked
2678 into blocks of data before storing them in Spark. Minimum recommended - 50 ms. See the
2679 <a href="streaming-programming-guide.html#level-of-parallelism-in-data-receiving">performance
2680 tuning</a> section in the Spark Streaming programming guide for more details.
2681 </td>
2682 <td>0.8.0</td>
2683 </tr>
2684 <tr>
2685 <td><code>spark.streaming.receiver.maxRate</code></td>
2686 <td>not set</td>
2687 <td>
2688 Maximum rate (number of records per second) at which each receiver will receive data.
2689 Effectively, each stream will consume at most this number of records per second.
2690 Setting this configuration to 0 or a negative number will put no limit on the rate.
2691 See the <a href="streaming-programming-guide.html#deploying-applications">deployment guide</a>
2692 in the Spark Streaming programming guide for mode details.
2693 </td>
2694 <td>1.0.2</td>
2695 </tr>
2696 <tr>
2697 <td><code>spark.streaming.receiver.writeAheadLog.enable</code></td>
2698 <td>false</td>
2699 <td>
2700 Enable write-ahead logs for receivers. All the input data received through receivers
2701 will be saved to write-ahead logs that will allow it to be recovered after driver failures.
2702 See the <a href="streaming-programming-guide.html#deploying-applications">deployment guide</a>
2703 in the Spark Streaming programming guide for more details.
2704 </td>
2705 <td>1.2.1</td>
2706 </tr>
2707 <tr>
2708 <td><code>spark.streaming.unpersist</code></td>
2709 <td>true</td>
2710 <td>
2711 Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from
2712 Spark's memory. The raw input data received by Spark Streaming is also automatically cleared.
2713 Setting this to false will allow the raw data and persisted RDDs to be accessible outside the
2714 streaming application as they will not be cleared automatically. But it comes at the cost of
2715 higher memory usage in Spark.
2716 </td>
2717 <td>0.9.0</td>
2718 </tr>
2719 <tr>
2720 <td><code>spark.streaming.stopGracefullyOnShutdown</code></td>
2721 <td>false</td>
2722 <td>
2723 If <code>true</code>, Spark shuts down the <code>StreamingContext</code> gracefully on JVM
2724 shutdown rather than immediately.
2725 </td>
2726 <td>1.4.0</td>
2727 </tr>
2728 <tr>
2729 <td><code>spark.streaming.kafka.maxRatePerPartition</code></td>
2730 <td>not set</td>
2731 <td>
2732 Maximum rate (number of records per second) at which data will be read from each Kafka
2733 partition when using the new Kafka direct stream API. See the
2734 <a href="streaming-kafka-0-10-integration.html">Kafka Integration guide</a>
2735 for more details.
2736 </td>
2737 <td>1.3.0</td>
2738 </tr>
2739 <tr>
2740 <td><code>spark.streaming.kafka.minRatePerPartition</code></td>
2741 <td>1</td>
2742 <td>
2743 Minimum rate (number of records per second) at which data will be read from each Kafka
2744 partition when using the new Kafka direct stream API.
2745 </td>
2746 <td>2.4.0</td>
2747 </tr>
2748 <tr>
2749 <td><code>spark.streaming.ui.retainedBatches</code></td>
2750 <td>1000</td>
2751 <td>
2752 How many batches the Spark Streaming UI and status APIs remember before garbage collecting.
2753 </td>
2754 <td>1.0.0</td>
2755 </tr>
2756 <tr>
2757 <td><code>spark.streaming.driver.writeAheadLog.closeFileAfterWrite</code></td>
2758 <td>false</td>
2759 <td>
2760 Whether to close the file after writing a write-ahead log record on the driver. Set this to 'true'
2761 when you want to use S3 (or any file system that does not support flushing) for the metadata WAL
2762 on the driver.
2763 </td>
2764 <td>1.6.0</td>
2765 </tr>
2766 <tr>
2767 <td><code>spark.streaming.receiver.writeAheadLog.closeFileAfterWrite</code></td>
2768 <td>false</td>
2769 <td>
2770 Whether to close the file after writing a write-ahead log record on the receivers. Set this to 'true'
2771 when you want to use S3 (or any file system that does not support flushing) for the data WAL
2772 on the receivers.
2773 </td>
2774 <td>1.6.0</td>
2775 </tr>
2776 </table>
2777
2778 ### SparkR
2779
2780 <table class="table">
2781 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2782 <tr>
2783 <td><code>spark.r.numRBackendThreads</code></td>
2784 <td>2</td>
2785 <td>
2786 Number of threads used by RBackend to handle RPC calls from SparkR package.
2787 </td>
2788 <td>1.4.0</td>
2789 </tr>
2790 <tr>
2791 <td><code>spark.r.command</code></td>
2792 <td>Rscript</td>
2793 <td>
2794 Executable for executing R scripts in cluster modes for both driver and workers.
2795 </td>
2796 <td>1.5.3</td>
2797 </tr>
2798 <tr>
2799 <td><code>spark.r.driver.command</code></td>
2800 <td>spark.r.command</td>
2801 <td>
2802 Executable for executing R scripts in client modes for driver. Ignored in cluster modes.
2803 </td>
2804 <td>1.5.3</td>
2805 </tr>
2806 <tr>
2807 <td><code>spark.r.shell.command</code></td>
2808 <td>R</td>
2809 <td>
2810 Executable for executing sparkR shell in client modes for driver. Ignored in cluster modes. It is the same as environment variable <code>SPARKR_DRIVER_R</code>, but take precedence over it.
2811 <code>spark.r.shell.command</code> is used for sparkR shell while <code>spark.r.driver.command</code> is used for running R script.
2812 </td>
2813 <td>2.1.0</td>
2814 </tr>
2815 <tr>
2816 <td><code>spark.r.backendConnectionTimeout</code></td>
2817 <td>6000</td>
2818 <td>
2819 Connection timeout set by R process on its connection to RBackend in seconds.
2820 </td>
2821 <td>2.1.0</td>
2822 </tr>
2823 <tr>
2824 <td><code>spark.r.heartBeatInterval</code></td>
2825 <td>100</td>
2826 <td>
2827 Interval for heartbeats sent from SparkR backend to R process to prevent connection timeout.
2828 </td>
2829 <td>2.1.0</td>
2830 </tr>
2831
2832 </table>
2833
2834 ### GraphX
2835
2836 <table class="table">
2837 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2838 <tr>
2839 <td><code>spark.graphx.pregel.checkpointInterval</code></td>
2840 <td>-1</td>
2841 <td>
2842 Checkpoint interval for graph and message in Pregel. It used to avoid stackOverflowError due to long lineage chains
2843 after lots of iterations. The checkpoint is disabled by default.
2844 </td>
2845 <td>2.2.0</td>
2846 </tr>
2847 </table>
2848
2849 ### Deploy
2850
2851 <table class="table">
2852 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
2853 <tr>
2854 <td><code>spark.deploy.recoveryMode</code></td>
2855 <td>NONE</td>
2856 <td>The recovery mode setting to recover submitted Spark jobs with cluster mode when it failed and relaunches.
2857 This is only applicable for cluster mode when running with Standalone or Mesos.</td>
2858 <td>0.8.1</td>
2859 </tr>
2860 <tr>
2861 <td><code>spark.deploy.zookeeper.url</code></td>
2862 <td>None</td>
2863 <td>When `spark.deploy.recoveryMode` is set to ZOOKEEPER, this configuration is used to set the zookeeper URL to connect to.</td>
2864 <td>0.8.1</td>
2865 </tr>
2866 <tr>
2867 <td><code>spark.deploy.zookeeper.dir</code></td>
2868 <td>None</td>
2869 <td>When `spark.deploy.recoveryMode` is set to ZOOKEEPER, this configuration is used to set the zookeeper directory to store recovery state.</td>
2870 <td>0.8.1</td>
2871 </tr>
2872 </table>
2873
2874
2875 ### Cluster Managers
2876
2877 Each cluster manager in Spark has additional configuration options. Configurations
2878 can be found on the pages for each mode:
2879
2880 #### [YARN](running-on-yarn.html#configuration)
2881
2882 #### [Mesos](running-on-mesos.html#configuration)
2883
2884 #### [Kubernetes](running-on-kubernetes.html#configuration)
2885
2886 #### [Standalone Mode](spark-standalone.html#cluster-launch-scripts)
2887
2888 # Environment Variables
2889
2890 Certain Spark settings can be configured through environment variables, which are read from the
2891 `conf/spark-env.sh` script in the directory where Spark is installed (or `conf/spark-env.cmd` on
2892 Windows). In Standalone and Mesos modes, this file can give machine specific information such as
2893 hostnames. It is also sourced when running local Spark applications or submission scripts.
2894
2895 Note that `conf/spark-env.sh` does not exist by default when Spark is installed. However, you can
2896 copy `conf/spark-env.sh.template` to create it. Make sure you make the copy executable.
2897
2898 The following variables can be set in `spark-env.sh`:
2899
2900
2901 <table class="table">
2902 <tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr>
2903 <tr>
2904 <td><code>JAVA_HOME</code></td>
2905 <td>Location where Java is installed (if it's not on your default <code>PATH</code>).</td>
2906 </tr>
2907 <tr>
2908 <td><code>PYSPARK_PYTHON</code></td>
2909 <td>Python binary executable to use for PySpark in both driver and workers (default is <code>python2.7</code> if available, otherwise <code>python</code>).
2910 Property <code>spark.pyspark.python</code> take precedence if it is set</td>
2911 </tr>
2912 <tr>
2913 <td><code>PYSPARK_DRIVER_PYTHON</code></td>
2914 <td>Python binary executable to use for PySpark in driver only (default is <code>PYSPARK_PYTHON</code>).
2915 Property <code>spark.pyspark.driver.python</code> take precedence if it is set</td>
2916 </tr>
2917 <tr>
2918 <td><code>SPARKR_DRIVER_R</code></td>
2919 <td>R binary executable to use for SparkR shell (default is <code>R</code>).
2920 Property <code>spark.r.shell.command</code> take precedence if it is set</td>
2921 </tr>
2922 <tr>
2923 <td><code>SPARK_LOCAL_IP</code></td>
2924 <td>IP address of the machine to bind to.</td>
2925 </tr>
2926 <tr>
2927 <td><code>SPARK_PUBLIC_DNS</code></td>
2928 <td>Hostname your Spark program will advertise to other machines.</td>
2929 </tr>
2930 </table>
2931
2932 In addition to the above, there are also options for setting up the Spark
2933 [standalone cluster scripts](spark-standalone.html#cluster-launch-scripts), such as number of cores
2934 to use on each machine and maximum memory.
2935
2936 Since `spark-env.sh` is a shell script, some of these can be set programmatically -- for example, you might
2937 compute `SPARK_LOCAL_IP` by looking up the IP of a specific network interface.
2938
2939 Note: When running Spark on YARN in `cluster` mode, environment variables need to be set using the `spark.yarn.appMasterEnv.[EnvironmentVariableName]` property in your `conf/spark-defaults.conf` file. Environment variables that are set in `spark-env.sh` will not be reflected in the YARN Application Master process in `cluster` mode. See the [YARN-related Spark Properties](running-on-yarn.html#spark-properties) for more information.
2940
2941 # Configuring Logging
2942
2943 Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can configure it by adding a
2944 `log4j.properties` file in the `conf` directory. One way to start is to copy the existing
2945 `log4j.properties.template` located there.
2946
2947 # Overriding configuration directory
2948
2949 To specify a different configuration directory other than the default "SPARK_HOME/conf",
2950 you can set SPARK_CONF_DIR. Spark will use the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc)
2951 from this directory.
2952
2953 # Inheriting Hadoop Cluster Configuration
2954
2955 If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
2956 should be included on Spark's classpath:
2957
2958 * `hdfs-site.xml`, which provides default behaviors for the HDFS client.
2959 * `core-site.xml`, which sets the default filesystem name.
2960
2961 The location of these configuration files varies across Hadoop versions, but
2962 a common location is inside of `/etc/hadoop/conf`. Some tools create
2963 configurations on-the-fly, but offer a mechanism to download copies of them.
2964
2965 To make these files visible to Spark, set `HADOOP_CONF_DIR` in `$SPARK_HOME/conf/spark-env.sh`
2966 to a location containing the configuration files.
2967
2968 # Custom Hadoop/Hive Configuration
2969
2970 If your Spark application is interacting with Hadoop, Hive, or both, there are probably Hadoop/Hive
2971 configuration files in Spark's classpath.
2972
2973 Multiple running applications might require different Hadoop/Hive client side configurations.
2974 You can copy and modify `hdfs-site.xml`, `core-site.xml`, `yarn-site.xml`, `hive-site.xml` in
2975 Spark's classpath for each application. In a Spark cluster running on YARN, these configuration
2976 files are set cluster-wide, and cannot safely be changed by the application.
2977
2978 The better choice is to use spark hadoop properties in the form of `spark.hadoop.*`, and use
2979 spark hive properties in the form of `spark.hive.*`.
2980 For example, adding configuration "spark.hadoop.abc.def=xyz" represents adding hadoop property "abc.def=xyz",
2981 and adding configuration "spark.hive.abc=xyz" represents adding hive property "hive.abc=xyz".
2982 They can be considered as same as normal spark properties which can be set in `$SPARK_HOME/conf/spark-defaults.conf`
2983
2984 In some cases, you may want to avoid hard-coding certain configurations in a `SparkConf`. For
2985 instance, Spark allows you to simply create an empty conf and set spark/spark hadoop/spark hive properties.
2986
2987 {% highlight scala %}
2988 val conf = new SparkConf().set("spark.hadoop.abc.def", "xyz")
2989 val sc = new SparkContext(conf)
2990 {% endhighlight %}
2991
2992 Also, you can modify or add configurations at runtime:
2993 {% highlight bash %}
2994 ./bin/spark-submit \
2995 --name "My app" \
2996 --master local[4] \
2997 --conf spark.eventLog.enabled=false \
2998 --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
2999 --conf spark.hadoop.abc.def=xyz \
3000 --conf spark.hive.abc=xyz
3001 myApp.jar
3002 {% endhighlight %}
3003
3004 # Custom Resource Scheduling and Configuration Overview
3005
3006 GPUs and other accelerators have been widely used for accelerating special workloads, e.g.,
3007 deep learning and signal processing. Spark now supports requesting and scheduling generic resources, such as GPUs, with a few caveats. The current implementation requires that the resource have addresses that can be allocated by the scheduler. It requires your cluster manager to support and be properly configured with the resources.
3008
3009 There are configurations available to request resources for the driver: <code>spark.driver.resource.{resourceName}.amount</code>, request resources for the executor(s): <code>spark.executor.resource.{resourceName}.amount</code> and specify the requirements for each task: <code>spark.task.resource.{resourceName}.amount</code>. The <code>spark.driver.resource.{resourceName}.discoveryScript</code> config is required on YARN, Kubernetes and a client side Driver on Spark Standalone. <code>spark.executor.resource.{resourceName}.discoveryScript</code> config is required for YARN and Kubernetes. Kubernetes also requires <code>spark.driver.resource.{resourceName}.vendor</code> and/or <code>spark.executor.resource.{resourceName}.vendor</code>. See the config descriptions above for more information on each.
3010
3011 Spark will use the configurations specified to first request containers with the corresponding resources from the cluster manager. Once it gets the container, Spark launches an Executor in that container which will discover what resources the container has and the addresses associated with each resource. The Executor will register with the Driver and report back the resources available to that Executor. The Spark scheduler can then schedule tasks to each Executor and assign specific resource addresses based on the resource requirements the user specified. The user can see the resources assigned to a task using the <code>TaskContext.get().resources</code> api. On the driver, the user can see the resources assigned with the SparkContext <code>resources</code> call. It's then up to the user to use the assignedaddresses to do the processing they want or pass those into the ML/AI framework they are using.
3012
3013 See your cluster manager specific page for requirements and details on each of - [YARN](running-on-yarn.html#resource-allocation-and-configuration-overview), [Kubernetes](running-on-kubernetes.html#resource-allocation-and-configuration-overview) and [Standalone Mode](spark-standalone.html#resource-allocation-and-configuration-overview). It is currently not available with Mesos or local mode. And please also note that local-cluster mode with multiple workers is not supported(see Standalone documentation).