---
layout: global
title: JDBC To Other Databases
displayTitle: JDBC To Other Databases
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

* Table of contents
{:toc}

Spark SQL also includes a data source that can read data from other databases using JDBC. This
functionality should be preferred over using [JdbcRDD](api/scala/org/apache/spark/rdd/JdbcRDD.html),
because the results are returned as a DataFrame and can easily be processed in Spark SQL or joined
with other data sources. The JDBC data source is also easier to use from Java or Python as it does
not require the user to provide a ClassTag.
(Note that this is different from the Spark SQL JDBC server, which allows other applications to
run queries using Spark SQL.)

To get started you will need to include the JDBC driver for your particular database on the
Spark classpath. For example, to connect to PostgreSQL from the Spark shell you would run the
following command:

{% highlight bash %}
./bin/spark-shell --driver-class-path postgresql-9.4.1207.jar --jars postgresql-9.4.1207.jar
{% endhighlight %}
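
With the driver on the classpath, connecting from the shell session can look like the following
minimal sketch. The URL, table name, and credentials here are placeholders rather than values
taken from this guide; the full set of options is described in the table below.

{% highlight scala %}
// Minimal sketch: load a remote table into a DataFrame.
// Replace the URL, table, and credentials with values for your own database.
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql:dbserver")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .load()
{% endhighlight %}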

Tables from the remote database can be loaded as a DataFrame or Spark SQL temporary view using
the Data Sources API. Users can specify the JDBC connection properties in the data source options.
<code>user</code> and <code>password</code> are normally provided as connection properties for
logging in to the data source. In addition to the connection properties, Spark also supports
the following case-insensitive options:

<table class="table">
  <tr><th>Property Name</th><th>Meaning</th></tr>
  <tr>
    <td><code>url</code></td>
    <td>
      The JDBC URL to connect to. The source-specific connection properties may be specified in the URL, e.g., <code>jdbc:postgresql://localhost/test?user=fred&amp;password=secret</code>.
    </td>
  </tr>

  <tr>
    <td><code>dbtable</code></td>
    <td>
      The JDBC table that should be read from or written into. Note that when using it in the read
      path anything that is valid in a <code>FROM</code> clause of a SQL query can be used.
      For example, instead of a full table you could also use a subquery in parentheses. It is not
      allowed to specify <code>dbtable</code> and <code>query</code> options at the same time.
    </td>
  </tr>
  <tr>
    <td><code>query</code></td>
    <td>
      A query that will be used to read data into Spark. The specified query will be parenthesized and used
      as a subquery in the <code>FROM</code> clause. Spark will also assign an alias to the subquery clause.
      As an example, Spark will issue a query of the following form to the JDBC source.<br><br>
      <code> SELECT &lt;columns&gt; FROM (&lt;user_specified_query&gt;) spark_gen_alias</code><br><br>
      Below are a couple of restrictions while using this option.<br>
      <ol>
         <li> It is not allowed to specify <code>dbtable</code> and <code>query</code> options at the same time. </li>
         <li> It is not allowed to specify <code>query</code> and <code>partitionColumn</code> options at the same time. When the
            <code>partitionColumn</code> option is required, the subquery can be specified using the <code>dbtable</code> option instead and
            partition columns can be qualified using the subquery alias provided as part of <code>dbtable</code>. <br>
            Example:<br>
            <code>
               spark.read.format("jdbc")<br>
                 .option("url", jdbcUrl)<br>
                 .option("dbtable", "(select c1, c2 from t1) as subq")<br>
                 .option("partitionColumn", "c1")<br>
                 .option("lowerBound", "1")<br>
                 .option("upperBound", "100")<br>
                 .option("numPartitions", "3")<br>
                 .load()
            </code></li>
      </ol>
    </td>
  </tr>

  <tr>
    <td><code>driver</code></td>
    <td>
      The class name of the JDBC driver to use to connect to this URL.
    </td>
  </tr>

  <tr>
    <td><code>partitionColumn, lowerBound, upperBound</code></td>
    <td>
      These options must all be specified if any of them is specified. In addition,
      <code>numPartitions</code> must be specified. They describe how to partition the table when
      reading in parallel from multiple workers.
      <code>partitionColumn</code> must be a numeric, date, or timestamp column from the table in question.
      Notice that <code>lowerBound</code> and <code>upperBound</code> are just used to decide the
      partition stride, not for filtering the rows in the table. So all rows in the table will be
      partitioned and returned. This option applies only to reading. See the sketch after this table
      for an example of a partitioned read.
    </td>
  </tr>

  <tr>
     <td><code>numPartitions</code></td>
     <td>
       The maximum number of partitions that can be used for parallelism in table reading and
       writing. This also determines the maximum number of concurrent JDBC connections.
       If the number of partitions to write exceeds this limit, we decrease it to this limit by
       calling <code>coalesce(numPartitions)</code> before writing.
     </td>
  </tr>

  <tr>
    <td><code>queryTimeout</code></td>
    <td>
      The number of seconds the driver will wait for a Statement object to execute. Zero means there
      is no limit. In the write path, this option depends on how JDBC drivers implement the API
      <code>setQueryTimeout</code>, e.g., the h2 JDBC driver checks the timeout of each query instead
      of an entire JDBC batch. It defaults to <code>0</code>.
    </td>
  </tr>

  <tr>
    <td><code>fetchsize</code></td>
    <td>
      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to a low fetch size (e.g., Oracle with 10 rows). This option applies only to reading.
    </td>
  </tr>

  <tr>
     <td><code>batchsize</code></td>
     <td>
       The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option applies only to writing. It defaults to <code>1000</code>.
     </td>
  </tr>

  <tr>
     <td><code>isolationLevel</code></td>
     <td>
       The transaction isolation level, which applies to the current connection. It can be one of <code>NONE</code>, <code>READ_COMMITTED</code>, <code>READ_UNCOMMITTED</code>, <code>REPEATABLE_READ</code>, or <code>SERIALIZABLE</code>, corresponding to the standard transaction isolation levels defined by JDBC's Connection object, with a default of <code>READ_UNCOMMITTED</code>. This option applies only to writing. Please refer to the documentation in <code>java.sql.Connection</code>.
     </td>
   </tr>

  <tr>
     <td><code>sessionInitStatement</code></td>
     <td>
       After each database session is opened to the remote DB and before starting to read data, this option executes a custom SQL statement (or a PL/SQL block). Use this to implement session initialization code. Example: <code>option("sessionInitStatement", """BEGIN execute immediate 'alter session set "_serial_direct_read"=true'; END;""")</code>
     </td>
  </tr>

  <tr>
    <td><code>truncate</code></td>
    <td>
     This is a JDBC writer related option. When <code>SaveMode.Overwrite</code> is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g., indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to <code>false</code>. This option applies only to writing.
   </td>
  </tr>

  <tr>
    <td><code>cascadeTruncate</code></td>
    <td>
        This is a JDBC writer related option. If enabled and supported by the JDBC database (PostgreSQL and Oracle at the moment), this option allows execution of a <code>TRUNCATE TABLE t CASCADE</code> (in the case of PostgreSQL a <code>TRUNCATE TABLE ONLY t CASCADE</code> is executed to prevent inadvertently truncating descendant tables). This will affect other tables, and thus should be used with care. This option applies only to writing. It defaults to the default cascading truncate behaviour of the JDBC database in question, specified in the <code>isCascadeTruncate</code> in each JDBCDialect.
    </td>
  </tr>

  <tr>
    <td><code>createTableOptions</code></td>
    <td>
     This is a JDBC writer related option. If specified, this option allows setting of database-specific table and partition options when creating a table (e.g., <code>CREATE TABLE t (name string) ENGINE=InnoDB</code>). This option applies only to writing.
   </td>
  </tr>

  <tr>
    <td><code>createTableColumnTypes</code></td>
    <td>
     The database column data types to use instead of the defaults, when creating the table. Data type information should be specified in the same format as CREATE TABLE columns syntax (e.g., <code>"name CHAR(64), comments VARCHAR(1024)"</code>). The specified types should be valid Spark SQL data types. This option applies only to writing (see the write sketch after this table).
    </td>
  </tr>

  <tr>
    <td><code>customSchema</code></td>
    <td>
     The custom schema to use for reading data from JDBC connectors. For example, <code>"id DECIMAL(38, 0), name STRING"</code>. You can also specify partial fields, and the others use the default type mapping. For example, <code>"id DECIMAL(38, 0)"</code>. The column names should be identical to the corresponding column names of the JDBC table. Users can specify the corresponding data types of Spark SQL instead of using the defaults. This option applies only to reading.
    </td>
  </tr>

  <tr>
    <td><code>pushDownPredicate</code></td>
    <td>
     The option to enable or disable predicate push-down into the JDBC data source. The default value is true, in which case Spark will push down filters to the JDBC data source as much as possible. Otherwise, if set to false, no filter will be pushed down to the JDBC data source and thus all filters will be handled by Spark. Predicate push-down is usually turned off when the predicate filtering is performed faster by Spark than by the JDBC data source.
    </td>
  </tr>
</table>
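
To illustrate how the partitioning options interact, here is a minimal sketch of a partitioned
read. The URL, credentials, column name, and bounds are placeholders rather than values taken
from this guide:

{% highlight scala %}
// Hypothetical partitioned read: Spark opens up to numPartitions concurrent
// JDBC connections, using lowerBound/upperBound only to compute the stride of
// each partition over partitionColumn; rows outside the bounds are still read.
val partitionedDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql:dbserver")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .option("partitionColumn", "id")   // must be a numeric, date, or timestamp column
  .option("lowerBound", "1")
  .option("upperBound", "100000")
  .option("numPartitions", "10")
  .load()
{% endhighlight %}

Writing works analogously through the DataFrame writer. The sketch below assumes an arbitrary
DataFrame <code>df</code> and the same placeholder connection details, and shows a couple of the
write-side options described above:

{% highlight scala %}
// Hypothetical write of an existing DataFrame `df` to a remote table,
// overriding the created column types and the JDBC insert batch size.
df.write
  .format("jdbc")
  .option("url", "jdbc:postgresql:dbserver")
  .option("dbtable", "schema.newtable")
  .option("user", "username")
  .option("password", "password")
  .option("createTableColumnTypes", "name CHAR(64), comments VARCHAR(1024)")
  .option("batchsize", "10000")
  .save()
{% endhighlight %}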

<div class="codetabs">

<div data-lang="scala"  markdown="1">
{% include_example jdbc_dataset scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
</div>

<div data-lang="java"  markdown="1">
{% include_example jdbc_dataset java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python"  markdown="1">
{% include_example jdbc_dataset python/sql/datasource.py %}
</div>

<div data-lang="r"  markdown="1">
{% include_example jdbc_dataset r/RSparkSQLExample.R %}
</div>

<div data-lang="SQL"  markdown="1">

{% highlight sql %}

CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:postgresql:dbserver",
  dbtable "schema.tablename",
  user 'username',
  password 'password'
)

INSERT INTO TABLE jdbcTable
SELECT * FROM resultTable
{% endhighlight %}

</div>
</div>