---
layout: global
title: Cluster Mode Overview
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

This document gives a short overview of how Spark runs on clusters, to make it easier to understand
the components involved. Read through the [application submission guide](submitting-applications.html)
to learn about launching applications on a cluster.

# Components

Spark applications run as independent sets of processes on a cluster, coordinated by the `SparkContext`
object in your main program (called the _driver program_).
Specifically, to run on a cluster, the SparkContext can connect to several types of _cluster managers_
(either Spark's own standalone cluster manager, Mesos, YARN or Kubernetes), which allocate resources
across applications. Once connected, Spark acquires *executors* on nodes in the cluster, which are
processes that run computations and store data for your application.
Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to
the executors. Finally, SparkContext sends *tasks* to the executors to run.

<p style="text-align: center;">
  <img src="img/cluster-overview.png" title="Spark cluster components" alt="Spark cluster components" />
</p>
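
As a concrete illustration, here is a minimal Scala sketch of a driver program; the master URL, application name, and computation are placeholders rather than anything this page prescribes:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DriverApp {
  def main(args: Array[String]): Unit = {
    // Creating the SparkContext in the driver connects to the cluster
    // manager named by the master URL (a placeholder standalone URL here).
    val conf = new SparkConf()
      .setAppName("driver-example")      // hypothetical application name
      .setMaster("spark://host:7077")    // could also be "yarn", "local[*]", etc.
    val sc = new SparkContext(conf)

    // This action is split into tasks that the executors run.
    val total = sc.parallelize(1 to 1000).map(_ * 2).sum()
    println(s"sum = $total")

    sc.stop()
  }
}
```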

There are several useful things to note about this architecture:

1. Each application gets its own executor processes, which stay up for the duration of the whole
   application and run tasks in multiple threads. This has the benefit of isolating applications
   from each other, on both the scheduling side (each driver schedules its own tasks) and executor
   side (tasks from different applications run in different JVMs). However, it also means that
   data cannot be shared across different Spark applications (instances of SparkContext) without
   writing it to an external storage system.
2. Spark is agnostic to the underlying cluster manager. As long as it can acquire executor
   processes, and these communicate with each other, it is relatively easy to run it even on a
   cluster manager that also supports other applications (e.g. Mesos/YARN).
3. The driver program must listen for and accept incoming connections from its executors throughout
   its lifetime (e.g., see [spark.driver.port in the network config
   section](configuration.html#networking)). As such, the driver program must be network
   addressable from the worker nodes; a minimal configuration sketch follows this list.
4. Because the driver schedules tasks on the cluster, it should be run close to the worker
   nodes, preferably on the same local area network. If you'd like to send requests to the
   cluster remotely, it's better to open an RPC to the driver and have it submit operations
   from nearby than to run a driver far away from the worker nodes.
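
For point 3, here is a sketch of making the driver reachable from the workers by pinning its host and port; `spark.driver.port` is the setting linked above, while the address and port values are assumptions for illustration:

```scala
import org.apache.spark.SparkConf

// Fix the port the driver listens on (instead of a random ephemeral one)
// so executors, and any firewall rules, can reach it. Values are placeholders.
val conf = new SparkConf()
  .set("spark.driver.port", "51000")     // port executors connect back to
  .set("spark.driver.host", "10.0.0.5")  // address reachable from worker nodes
```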

# Cluster Manager Types

The system currently supports several cluster managers:

* [Standalone](spark-standalone.html) -- a simple cluster manager included with Spark that makes it
  easy to set up a cluster.
* [Apache Mesos](running-on-mesos.html) -- a general cluster manager that can also run Hadoop MapReduce
  and service applications.
* [Hadoop YARN](running-on-yarn.html) -- the resource manager in Hadoop 2.
* [Kubernetes](running-on-kubernetes.html) -- an open-source system for automating deployment, scaling,
  and management of containerized applications.

A third-party project (not supported by the Spark project) exists to add support for
[Nomad](https://github.com/hashicorp/nomad-spark) as a cluster manager.
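
Which manager is used is determined by the master URL given to Spark. As a sketch, the URL forms for the managers above (hosts and ports are placeholders):

```scala
import org.apache.spark.SparkConf

// Master URL forms for each cluster manager (hosts/ports are placeholders).
val standalone = new SparkConf().setMaster("spark://master-host:7077")
val mesos      = new SparkConf().setMaster("mesos://mesos-host:5050")
val yarn       = new SparkConf().setMaster("yarn")  // resolved from the Hadoop configuration
val k8s        = new SparkConf().setMaster("k8s://https://api-server:6443")
```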

# Submitting Applications

Applications can be submitted to a cluster of any type using the `spark-submit` script.
The [application submission guide](submitting-applications.html) describes how to do this.
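
Submission can also be driven from code through Spark's `SparkLauncher` API, which wraps `spark-submit`; in this sketch the jar path, class name, and master URL are placeholders:

```scala
import org.apache.spark.launcher.SparkLauncher

// Launch an application as a child process, much like running
// spark-submit from a shell (paths and names below are placeholders).
val process = new SparkLauncher()
  .setAppResource("/path/to/app.jar")      // hypothetical application jar
  .setMainClass("com.example.Main")        // hypothetical entry point
  .setMaster("spark://master-host:7077")
  .setDeployMode("cluster")                // run the driver inside the cluster
  .launch()

process.waitFor()  // block until the submission process exits
```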

# Monitoring

Each driver program has a web UI, typically on port 4040, that displays information about running
tasks, executors, and storage usage. Simply go to `http://<driver-node>:4040` in a web browser to
access this UI. The [monitoring guide](monitoring.html) also describes other monitoring options.
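
The UI's address is also exposed on the context itself; a small sketch, assuming an existing `SparkContext` named `sc`:

```scala
// uiWebUrl is an Option because the web UI can be disabled
// (e.g. via spark.ui.enabled=false).
sc.uiWebUrl.foreach(url => println(s"Web UI available at $url"))
```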

# Job Scheduling

Spark gives control over resource allocation both _across_ applications (at the level of the cluster
manager) and _within_ applications (if multiple computations are happening on the same SparkContext).
The [job scheduling overview](job-scheduling.html) describes this in more detail.
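
For instance, within one application, jobs can be assigned to fair-scheduler pools; a minimal sketch, where the pool name is a placeholder and fair scheduling is switched on via `spark.scheduler.mode`:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable fair scheduling of jobs within a single application.
val conf = new SparkConf()
  .setAppName("scheduling-example")  // hypothetical name
  .setMaster("local[*]")
  .set("spark.scheduler.mode", "FAIR")
val sc = new SparkContext(conf)

// Jobs submitted from this thread run in the named pool (a placeholder).
sc.setLocalProperty("spark.scheduler.pool", "reporting-pool")
sc.parallelize(1 to 100).count()
```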

# Glossary

The following table summarizes terms you'll see used to refer to cluster concepts:

<table class="table">
  <thead>
    <tr><th style="width: 130px;">Term</th><th>Meaning</th></tr>
  </thead>
  <tbody>
    <tr>
      <td>Application</td>
      <td>User program built on Spark. Consists of a <em>driver program</em> and <em>executors</em> on the cluster.</td>
    </tr>
    <tr>
      <td>Application jar</td>
      <td>
        A jar containing the user's Spark application. In some cases users will want to create
        an "uber jar" containing their application along with its dependencies. The user's jar
        should never include Hadoop or Spark libraries; however, these will be added at runtime.
      </td>
    </tr>
    <tr>
      <td>Driver program</td>
      <td>The process running the main() function of the application and creating the SparkContext</td>
    </tr>
    <tr>
      <td>Cluster manager</td>
      <td>An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN)</td>
    </tr>
    <tr>
      <td>Deploy mode</td>
      <td>Distinguishes where the driver process runs. In "cluster" mode, the framework launches
        the driver inside of the cluster. In "client" mode, the submitter launches the driver
        outside of the cluster.</td>
    </tr>
    <tr>
      <td>Worker node</td>
      <td>Any node that can run application code in the cluster</td>
    </tr>
    <tr>
      <td>Executor</td>
      <td>A process launched for an application on a worker node that runs tasks and keeps data in memory
        or on disk across them. Each application has its own executors.</td>
    </tr>
    <tr>
      <td>Task</td>
      <td>A unit of work that will be sent to one executor</td>
    </tr>
    <tr>
      <td>Job</td>
      <td>A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action
        (e.g. <code>save</code>, <code>collect</code>); you'll see this term used in the driver's logs.</td>
    </tr>
    <tr>
      <td>Stage</td>
      <td>Each job gets divided into smaller sets of tasks called <em>stages</em> that depend on each other
        (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.</td>
    </tr>
  </tbody>
</table>