---
layout: global
title: Web UI
description: Web UI guide for Spark SPARK_VERSION_SHORT
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

Apache Spark provides a suite of web user interfaces (UIs) that you can use
to monitor the status and resource consumption of your Spark cluster.


**Table of Contents**

* This will become a table of contents (this text will be scraped).
{:toc}

## Jobs Tab
The Jobs tab displays a summary page of all jobs in the Spark application and a details page
for each job. The summary page shows high-level information, such as the status, duration, and
progress of all jobs and the overall event timeline. When you click on a job on the summary
page, you see the details page for that job. The details page further shows the event timeline,
DAG visualization, and all stages of the job.
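
For example, running a simple action in `spark-shell` (a minimal sketch) submits a job that shows up on this summary page:

{% highlight scala %}
// Each action submits a job; it appears on the Jobs tab summary page
// together with its status, duration and stage progress.
scala> sc.parallelize(1 to 100, 4).count()
res0: Long = 100
{% endhighlight %}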

The information that is displayed in this section is:
* User: Current Spark user
* Total uptime: Time since Spark application started
* Scheduling mode: See [job scheduling](job-scheduling.html#configuring-pool-properties)
* Number of jobs per status: Active, Completed, Failed

<p style="text-align: center;">
  <img src="img/AllJobsPageDetail1.png" title="Basic info" alt="Basic info" width="20%"/>
</p>

* Event timeline: Displays in chronological order the events related to the executors (added, removed) and the jobs

<p style="text-align: center;">
  <img src="img/AllJobsPageDetail2.png" title="Event timeline" alt="Event timeline"/>
</p>

* Details of jobs grouped by status: Displays detailed information of the jobs including Job ID, description (with a link to the detailed job page), submitted time, duration, stages summary and tasks progress bar

<p style="text-align: center;">
  <img src="img/AllJobsPageDetail3.png" title="Details of jobs grouped by status" alt="Details of jobs grouped by status"/>
</p>


When you click on a specific job, you can see its detailed information.

### Jobs detail

This page displays the details of a specific job identified by its job ID.
* Job Status: (running, succeeded, failed)
* Number of stages per status (active, pending, completed, skipped, failed)
* Associated SQL Query: Link to the SQL tab for this job
* Event timeline: Displays in chronological order the events related to the executors (added, removed) and the stages of the job

<p style="text-align: center;">
  <img src="img/JobPageDetail1.png" title="Event timeline" alt="Event timeline"/>
</p>

* DAG visualization: Visual representation of the directed acyclic graph of this job where vertices represent the RDDs or DataFrames and the edges represent an operation to be applied on an RDD.
* An example of DAG visualization for `sc.parallelize(1 to 100).toDF.count()`

<p style="text-align: center;">
  <img src="img/JobPageDetail2.png" title="DAG" alt="DAG" width="40%">
</p>

* List of stages (grouped by state active, pending, completed, skipped, and failed)
    * Stage ID
    * Description of the stage
    * Submitted timestamp
    * Duration of the stage
    * Tasks progress bar
    * Input: Bytes read from storage in this stage
    * Output: Bytes written to storage in this stage
    * Shuffle read: Total shuffle bytes and records read, includes both data read locally and data read from remote executors
    * Shuffle write: Bytes and records written to disk in order to be read by a shuffle in a future stage

<p style="text-align: center;">
  <img src="img/JobPageDetail3.png" title="DAG" alt="DAG">
</p>

## Stages Tab

The Stages tab displays a summary page that shows the current state of all stages of all jobs in
the Spark application.

At the beginning of the page is the summary with the count of all stages by status (active, pending, completed, skipped, and failed).

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail1.png" title="Stages header" alt="Stages header" width="30%">
</p>

In [Fair scheduling mode](job-scheduling.html#scheduling-within-an-application) there is a table that displays [pool properties](job-scheduling.html#configuring-pool-properties).
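
As a rough sketch (the property names are the standard scheduler settings, the pool name is arbitrary), a pool can be selected per thread once the application runs with `spark.scheduler.mode=FAIR`:

{% highlight scala %}
// Start the application with --conf spark.scheduler.mode=FAIR, then assign
// jobs submitted from this thread to a pool; the pool and its properties
// appear in the table on the Stages tab.
sc.setLocalProperty("spark.scheduler.pool", "pool1")
sc.parallelize(1 to 100, 4).count()
{% endhighlight %}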

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail2.png" title="Pool properties" alt="Pool properties">
</p>

After that are the details of stages per status (active, pending, completed, skipped, failed). In active stages, it's possible to kill the stage with the kill link. Only in failed stages is the failure reason shown. Task details can be accessed by clicking on the description.

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail3.png" title="Stages detail" alt="Stages detail">
</p>

### Stage detail
The stage detail page begins with information like total time across all tasks, [Locality level summary](tuning.html#data-locality), [Shuffle Read Size / Records](rdd-programming-guide.html#shuffle-operations) and Associated Job IDs.
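
For instance, a minimal word count sketch like the following runs as two stages, since `reduceByKey` introduces a shuffle; each stage then gets its own detail page:

{% highlight scala %}
// Stage 0: parallelize + map (shuffle write); Stage 1: reduceByKey + collect (shuffle read).
val counts = sc.parallelize(Seq("a", "b", "a", "c"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.collect()
{% endhighlight %}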

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail4.png" title="Stage header" alt="Stage header" width="30%">
</p>

There is also a visual representation of the directed acyclic graph (DAG) of this stage, where vertices represent the RDDs or DataFrames and the edges represent an operation to be applied.
Nodes are grouped by operation scope in the DAG visualization and labelled with the operation scope name (BatchScan, WholeStageCodegen, Exchange, etc).
Notably, Whole Stage Code Generation operations are also annotated with the code generation id. For stages belonging to Spark DataFrame or SQL execution, this allows you to cross-reference stage execution details to the relevant details in the Web UI SQL tab page, where SQL plan graphs and execution plans are reported.

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail5.png" title="Stage DAG" alt="Stage DAG" width="50%">
</p>

Summary metrics for all tasks are represented in a table and in a timeline.
* **[Tasks deserialization time](configuration.html#compression-and-serialization)**
* **Duration of tasks**.
* **GC time** is the total JVM garbage collection time.
* **Result serialization time** is the time spent serializing the task result on an executor before sending it back to the driver.
* **Getting result time** is the time that the driver spends fetching task results from workers.
* **Scheduler delay** is the time the task waits to be scheduled for execution.
* **Peak execution memory** is the maximum memory used by the internal data structures created during shuffles, aggregations and joins.
* **Shuffle Read Size / Records**. Total shuffle bytes read, includes both data read locally and data read from remote executors.
* **Shuffle Read Blocked Time** is the time that tasks spent blocked waiting for shuffle data to be read from remote machines.
* **Shuffle Remote Reads** is the total shuffle bytes read from remote executors.
* **Shuffle spill (memory)** is the size of the deserialized form of the shuffled data in memory.
* **Shuffle spill (disk)** is the size of the serialized form of the data on disk.

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail6.png" title="Stages metrics" alt="Stages metrics">
</p>

Aggregated metrics by executor show the same information aggregated by executor.

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail7.png" title="Stages metrics per executor" alt="Stages metrics per executors">
</p>

**[Accumulators](rdd-programming-guide.html#accumulators)** are a type of shared variable. They provide a mutable variable that can be updated inside a variety of transformations. It is possible to create accumulators with or without a name, but only named accumulators are displayed.
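
A minimal sketch of a named accumulator (the name "My Counter" is arbitrary) that would appear on this page:

{% highlight scala %}
// Only named accumulators are displayed in the stage and task tables.
val acc = sc.longAccumulator("My Counter")
sc.parallelize(1 to 100).foreach(x => acc.add(x))
acc.value  // 5050 once the job has finished
{% endhighlight %}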

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail8.png" title="Stage accumulator" alt="Stage accumulator">
</p>

The task details section basically includes the same information as the summary section but detailed by task. It also includes links to review the logs and the task attempt number if the task failed for any reason. If there are named accumulators, here it is possible to see the accumulator value at the end of each task.

<p style="text-align: center;">
  <img src="img/AllStagesPageDetail9.png" title="Tasks" alt="Tasks">
</p>

## Storage Tab
The Storage tab displays the persisted RDDs and DataFrames, if any, in the application. The summary
page shows the storage levels, sizes and partitions of all RDDs, and the details page shows the
sizes and the executors used for all partitions in an RDD or DataFrame.

{% highlight scala %}
scala> import org.apache.spark.storage.StorageLevel._
import org.apache.spark.storage.StorageLevel._

scala> val rdd = sc.range(0, 100, 1, 5).setName("rdd")
rdd: org.apache.spark.rdd.RDD[Long] = rdd MapPartitionsRDD[1] at range at <console>:27

scala> rdd.persist(MEMORY_ONLY_SER)
res0: rdd.type = rdd MapPartitionsRDD[1] at range at <console>:27

scala> rdd.count
res1: Long = 100

scala> val df = Seq((1, "andy"), (2, "bob"), (2, "andy")).toDF("count", "name")
df: org.apache.spark.sql.DataFrame = [count: int, name: string]

scala> df.persist(DISK_ONLY)
res2: df.type = [count: int, name: string]

scala> df.count
res3: Long = 3
{% endhighlight %}

<p style="text-align: center;">
  <img src="img/webui-storage-tab.png"
       title="Storage tab"
       alt="Storage tab"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

After running the above example, we can find two RDDs listed in the Storage tab. Basic information like
storage level, number of partitions and memory overhead is provided. Note that newly persisted RDDs
or DataFrames are not shown in the tab before they are materialized. To monitor a specific RDD or DataFrame,
make sure an action operation has been triggered.

<p style="text-align: center;">
  <img src="img/webui-storage-detail.png"
       title="Storage detail"
       alt="Storage detail"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

You can click the RDD name 'rdd' to obtain the details of data persistence, such as the data
distribution on the cluster.


## Environment Tab
The Environment tab displays the values for the different environment and configuration variables,
including JVM, Spark, and system properties.

<p style="text-align: center;">
  <img src="img/webui-env-tab.png"
       title="Env tab"
       alt="Env tab"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

This environment page has five parts. It is a useful place to check whether your properties have
been set correctly.
The first part 'Runtime Information' simply contains the [runtime properties](configuration.html#runtime-environment)
like versions of Java and Scala.
The second part 'Spark Properties' lists the [application properties](configuration.html#application-properties) like
['spark.app.name'](configuration.html#application-properties) and 'spark.driver.memory'.
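
As a sketch, such properties show up here whether they are set programmatically or via `spark-submit` (the values below are arbitrary examples):

{% highlight scala %}
import org.apache.spark.sql.SparkSession

// Equivalent to spark-submit --conf spark.app.name=MyApp --conf spark.driver.memory=2g;
// both keys are then listed under 'Spark Properties'.
val spark = SparkSession.builder()
  .appName("MyApp")
  .config("spark.driver.memory", "2g") // driver memory must be set before the driver JVM starts
  .getOrCreate()
{% endhighlight %}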

<p style="text-align: center;">
  <img src="img/webui-env-hadoop.png"
       title="Hadoop Properties"
       alt="Hadoop Properties"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>
Clicking the 'Hadoop Properties' link displays properties relative to Hadoop and YARN. Note that properties like
['spark.hadoop.*'](configuration.html#execution-behavior) are not shown in this part but in 'Spark Properties'.
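
For example (a sketch; the Hadoop key is chosen only for illustration), a Hadoop setting passed through the `spark.hadoop.` prefix is listed under 'Spark Properties':

{% highlight scala %}
import org.apache.spark.sql.SparkSession

// The spark.hadoop. prefix copies the key into the Hadoop Configuration,
// but the property itself appears under 'Spark Properties'.
val spark = SparkSession.builder()
  .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
  .getOrCreate()
{% endhighlight %}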

<p style="text-align: center;">
  <img src="img/webui-env-sys.png"
       title="System Properties"
       alt="System Properties"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>
'System Properties' shows more details about the JVM.

<p style="text-align: center;">
  <img src="img/webui-env-class.png"
       title="Classpath Entries"
       alt="Classpath Entries"
       width="100%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

The last part 'Classpath Entries' lists the classes loaded from different sources, which is very useful
to resolve class conflicts.

## Executors Tab
The Executors tab displays summary information about the executors that were created for the
application, including memory and disk usage and task and shuffle information. The Storage Memory
column shows the amount of memory used and reserved for caching data.

<p style="text-align: center;">
  <img src="img/webui-exe-tab.png"
       title="Executors Tab"
       alt="Executors Tab"
       width="80%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

The Executors tab provides not only resource information (amount of memory, disk, and cores used by each executor)
but also performance information ([GC time](tuning.html#garbage-collection-tuning) and shuffle information).
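
As a rough sketch of where these numbers come from, executor resources are usually fixed by configuration at submission time (the values below are arbitrary):

{% highlight scala %}
import org.apache.spark.sql.SparkSession

// Equivalent to spark-submit --executor-memory 4g --executor-cores 2;
// these settings drive the memory, core and storage memory figures
// shown for each executor in the Executors tab.
val spark = SparkSession.builder()
  .config("spark.executor.memory", "4g")
  .config("spark.executor.cores", "2")
  .getOrCreate()
{% endhighlight %}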

<p style="text-align: center;">
  <img src="img/webui-exe-err.png"
       title="Stderr Log"
       alt="Stderr Log"
       width="80%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

Clicking the 'stderr' link of executor 0 displays its detailed [standard error log](spark-standalone.html#monitoring-and-logging)
in the console.

<p style="text-align: center;">
  <img src="img/webui-exe-thread.png"
       title="Thread Dump"
       alt="Thread Dump"
       width="80%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

Clicking the 'Thread Dump' link of executor 0 displays the thread dump of the JVM on executor 0, which is pretty useful
for performance analysis.

## SQL Tab
If the application executes Spark SQL queries, the SQL tab displays information, such as the duration,
jobs, and physical and logical plans for the queries. Here we include a basic example to illustrate
this tab:
{% highlight scala %}
scala> val df = Seq((1, "andy"), (2, "bob"), (2, "andy")).toDF("count", "name")
df: org.apache.spark.sql.DataFrame = [count: int, name: string]

scala> df.count
res0: Long = 3

scala> df.createGlobalTempView("df")

scala> spark.sql("select name,sum(count) from global_temp.df group by name").show
+----+----------+
|name|sum(count)|
+----+----------+
|andy|         3|
| bob|         2|
+----+----------+
{% endhighlight %}

<p style="text-align: center;">
  <img src="img/webui-sql-tab.png"
       title="SQL tab"
       alt="SQL tab"
       width="80%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

Now the above three dataframe/SQL operators are shown in the list. If we click the
'show at \<console\>: 24' link of the last query, we will see the DAG and details of the query execution.

<p style="text-align: center;">
  <img src="img/webui-sql-dag.png"
       title="SQL DAG"
       alt="SQL DAG"
       width="50%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>

The query details page displays information about the query execution time, its duration,
the list of associated jobs, and the query execution DAG.
The first block 'WholeStageCodegen (1)' compiles multiple operators ('LocalTableScan' and 'HashAggregate') together into a single Java
function to improve performance, and metrics like number of rows and spill size are listed in the block.
The annotation '(1)' in the block name is the code generation id.
The second block 'Exchange' shows the metrics on the shuffle exchange, including
number of written shuffle records, total data size, etc.


<p style="text-align: center;">
  <img src="img/webui-sql-plan.png"
       title="logical plans and the physical plan"
       alt="logical plans and the physical plan"
       width="80%" />
  <!-- Images are downsized intentionally to improve quality on retina displays -->
</p>
Clicking the 'Details' link at the bottom displays the logical plans and the physical plan, which
illustrate how Spark parses, analyzes, optimizes and performs the query.
Steps in the physical plan that are subject to whole stage code generation optimization are prefixed by a star followed by
the code generation id, for example: '*(1) LocalTableScan'.

### SQL metrics

The metrics of SQL operators are shown in the block of physical operators. The SQL metrics can be useful
when we want to dive into the execution details of each operator. For example, "number of output rows"
can answer how many rows are output after a Filter operator, and "shuffle bytes written total" in an Exchange
operator shows the number of bytes written by a shuffle.
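
As a rough illustration (the exact operator names depend on the plan), the following query produces a Filter and an Exchange whose metrics can be inspected in the SQL tab:

{% highlight scala %}
import org.apache.spark.sql.functions.col

// The Filter node reports "number of output rows"; the Exchange introduced
// by the groupBy reports shuffle metrics such as "shuffle bytes written".
spark.range(0, 1000)
  .filter(col("id") % 2 === 0)
  .groupBy(col("id") % 10)
  .count()
  .collect()
{% endhighlight %}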

Here is the list of SQL metrics:

<table class="table">
<tr><th>SQL metrics</th><th>Meaning</th><th>Operators</th></tr>
<tr><td> <code>number of output rows</code> </td><td> the number of output rows of the operator </td><td> Aggregate operators, Join operators, Sample, Range, Scan operators, Filter, etc.</td></tr>
<tr><td> <code>data size</code> </td><td> the size of broadcast/shuffled/collected data of the operator </td><td> BroadcastExchange, ShuffleExchange, Subquery </td></tr>
<tr><td> <code>time to collect</code> </td><td> the time spent on collecting data </td><td> BroadcastExchange, Subquery </td></tr>
<tr><td> <code>scan time</code> </td><td> the time spent on scanning data </td><td> ColumnarBatchScan, FileSourceScan </td></tr>
<tr><td> <code>metadata time</code> </td><td> the time spent on getting metadata like number of partitions, number of files </td><td> FileSourceScan </td></tr>
<tr><td> <code>shuffle bytes written</code> </td><td> the number of bytes written </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>shuffle records written</code> </td><td> the number of records written </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>shuffle write time</code> </td><td> the time spent on shuffle writing </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>remote blocks read</code> </td><td> the number of blocks read remotely </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange</td></tr>
<tr><td> <code>remote bytes read</code> </td><td> the number of bytes read remotely </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>remote bytes read to disk</code> </td><td> the number of bytes read from remote to local disk </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>local blocks read</code> </td><td> the number of blocks read locally </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>local bytes read</code> </td><td> the number of bytes read locally </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>fetch wait time</code> </td><td> the time spent on fetching data (local and remote)</td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>records read</code> </td><td> the number of read records </td><td> CollectLimit, TakeOrderedAndProject, ShuffleExchange </td></tr>
<tr><td> <code>sort time</code> </td><td> the time spent on sorting </td><td> Sort </td></tr>
<tr><td> <code>peak memory</code> </td><td> the peak memory usage in the operator </td><td> Sort, HashAggregate </td></tr>
<tr><td> <code>spill size</code> </td><td> number of bytes spilled to disk from memory in the operator </td><td> Sort, HashAggregate </td></tr>
<tr><td> <code>time in aggregation build</code> </td><td> the time spent on aggregation </td><td> HashAggregate, ObjectHashAggregate </td></tr>
<tr><td> <code>avg hash probe bucket list iters</code> </td><td> the average bucket list iterations per lookup during aggregation </td><td> HashAggregate </td></tr>
<tr><td> <code>data size of build side</code> </td><td> the size of built hash map </td><td> ShuffledHashJoin </td></tr>
<tr><td> <code>time to build hash map</code> </td><td> the time spent on building hash map </td><td> ShuffledHashJoin </td></tr>

</table>

## Structured Streaming Tab
When running Structured Streaming jobs in micro-batch mode, a Structured Streaming tab will be
available on the Web UI. The overview page displays some brief statistics for running and completed
queries. Also, you can check the latest exception of a failed query. For detailed statistics, please
click a "run id" in the tables.
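
A minimal sketch of a query that would show up in this tab, using the built-in `rate` source and `console` sink:

{% highlight scala %}
// A trivial micro-batch query; once started it is listed in the
// Structured Streaming tab with its run id and statistics.
val query = spark.readStream
  .format("rate")      // generates rows with a timestamp and a value column
  .load()
  .writeStream
  .format("console")
  .start()
{% endhighlight %}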

<p style="text-align: center;">
  <img src="img/webui-structured-streaming-detail.png" title="Structured Streaming Query Statistics" alt="Structured Streaming Query Statistics">
</p>

The statistics page displays some useful metrics for insight into the status of your streaming
queries. Currently, it contains the following metrics.

* **Input Rate.** The aggregate (across all sources) rate of data arriving.
* **Process Rate.** The aggregate (across all sources) rate at which Spark is processing data.
* **Input Rows.** The aggregate (across all sources) number of records processed in a trigger.
* **Batch Duration.** The process duration of each batch.
* **Operation Duration.** The amount of time taken to perform various operations in milliseconds.
The tracked operations are listed as follows.
    * addBatch: Adds result data of the current batch to the sink.
    * getBatch: Gets a new batch of data to process.
    * latestOffset: Gets the latest offsets for sources.
    * queryPlanning: Generates the execution plan.
    * walCommit: Writes the offsets to the metadata log.

As an early-release version, the statistics page is still under development and will be improved in
future releases.

## Streaming Tab
The web UI includes a Streaming tab if the application uses Spark Streaming. This tab displays
scheduling delay and processing time for each micro-batch in the data stream, which can be useful
for troubleshooting the streaming application.
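
For reference, a minimal DStream sketch that would populate this tab (the socket host and port are arbitrary):

{% highlight scala %}
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Each 1-second micro-batch of this word count appears in the Streaming tab
// with its scheduling delay and processing time.
val ssc = new StreamingContext(sc, Seconds(1))
ssc.socketTextStream("localhost", 9999)
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .print()
ssc.start()
{% endhighlight %}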

## JDBC/ODBC Server Tab
We can see this tab when Spark is running as a [distributed SQL engine](sql-distributed-sql-engine.html). It shows information about sessions and submitted SQL operations.

The first section of the page displays general information about the JDBC/ODBC server: start time and uptime.

<p style="text-align: center;">
  <img src="img/JDBCServer1.png" width="40%" title="JDBC/ODBC Header" alt="JDBC/ODBC Header">
</p>

The second section contains information about active and finished sessions.
* **User** and **IP** of the connection.
* **Session id** link to access session info.
* **Start time**, **finish time** and **duration** of the session.
* **Total execute** is the number of operations submitted in this session.

<p style="text-align: center;">
  <img src="img/JDBCServer2.png" title="JDBC/ODBC sessions" alt="JDBC/ODBC sessions">
</p>

The third section has the SQL statistics of the submitted operations.
* **User** that submitted the operation.
* **Job id** link to [jobs tab](web-ui.html#jobs-tab).
* **Group id** of the query that groups all jobs together. An application can cancel all running jobs using this group id.
* **Start time** of the operation.
* **Finish time** of the execution, before fetching the results.
* **Close time** of the operation after fetching the results.
* **Execution time** is the difference between finish time and start time.
* **Duration time** is the difference between close time and start time.
* **Statement** is the operation being executed.
* **State** of the process.
    * _Started_, first state, when the process begins.
    * _Compiled_, execution plan generated.
    * _Failed_, final state when the execution failed or finished with error.
    * _Canceled_, final state when the execution is canceled.
    * _Finished_, processing completed and waiting to fetch results.
    * _Closed_, final state when the client closed the statement.
* **Detail** of the execution plan with parsed logical plan, analyzed logical plan, optimized logical plan and physical plan, or errors in the SQL statement.

<p style="text-align: center;">
  <img src="img/JDBCServer3.png" title="JDBC/ODBC SQL Statistics" alt="JDBC/ODBC SQL Statistics">
</p>