---
layout: global
title: Generic File Source Options
displayTitle: Generic File Source Options
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

* Table of contents
{:toc}

These generic options/configurations are effective only when using file-based sources: parquet, orc, avro, json, csv, text.

Please note that the directory hierarchy used in the examples below is:

{% highlight text %}

dir1/
 ├── dir2/
 │    └── file2.parquet (schema: <file: string>, content: "file2.parquet")
 ├── file1.parquet (schema: <file: string>, content: "file1.parquet")
 └── file3.json (schema: <file: string>, content: "{'file':'corrupt.json'}")

{% endhighlight %}

### Ignore Corrupt Files

Spark allows you to use the configuration `spark.sql.files.ignoreCorruptFiles` to ignore corrupt files while reading data
from files. When set to `true`, Spark jobs will continue to run when encountering corrupted files, and
the contents that have been read will still be returned.

To ignore corrupt files while reading data files, you can use:

<div class="codetabs">
<div data-lang="scala"  markdown="1">
{% include_example ignore_corrupt_files scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
</div>

<div data-lang="java"  markdown="1">
{% include_example ignore_corrupt_files java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python"  markdown="1">
{% include_example ignore_corrupt_files python/sql/datasource.py %}
</div>

<div data-lang="r"  markdown="1">
{% include_example ignore_corrupt_files r/RSparkSQLExample.R %}
</div>
</div>
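
For instance, a minimal Scala sketch of the same idea (assuming an existing `SparkSession` named `spark` and the `dir1/` layout shown above; the paths and the printed result are illustrative):

{% highlight scala %}
// Skip files that cannot be read by the chosen format instead of failing the job.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

// dir1/file3.json is not a valid Parquet file, so it is treated as corrupt and skipped.
val df = spark.read.parquet("dir1/", "dir1/dir2/")
df.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// |file2.parquet|
// +-------------+
{% endhighlight %}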

### Ignore Missing Files

Spark allows you to use `spark.sql.files.ignoreMissingFiles` to ignore missing files while reading data
from files. Here, a missing file is one that has been deleted from the directory after you constructed the
`DataFrame`. When set to `true`, Spark jobs will continue to run when encountering missing files, and
the contents that have been read will still be returned.

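As an illustration, a minimal Scala sketch (assuming an existing `SparkSession` named `spark`; the path and the external deletion step are hypothetical):

{% highlight scala %}
// Keep running even if some of the files listed when the DataFrame was created
// have since been deleted from the directory.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")

// dir1/ is listed at this point, including dir1/file1.parquet.
val df = spark.read.parquet("dir1/")

// ... dir1/file1.parquet is deleted by some external process here ...

// The job still completes; rows from the deleted file are simply not returned.
df.show()
{% endhighlight %}
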
### Path Glob Filter

`pathGlobFilter` is used to include only files whose file names match the given pattern.
The syntax follows `org.apache.hadoop.fs.GlobFilter`.
It does not change the behavior of partition discovery.

To load files with paths matching a given glob pattern while keeping the behavior of partition discovery,
you can use:

<div class="codetabs">
<div data-lang="scala"  markdown="1">
{% include_example load_with_path_glob_filter scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
</div>

<div data-lang="java"  markdown="1">
{% include_example load_with_path_glob_filter java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python"  markdown="1">
{% include_example load_with_path_glob_filter python/sql/datasource.py %}
</div>

<div data-lang="r"  markdown="1">
{% include_example load_with_path_glob_filter r/RSparkSQLExample.R %}
</div>
</div>
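
For instance, a minimal Scala sketch (assuming an existing `SparkSession` named `spark` and the `dir1/` layout shown above; the path and the printed result are illustrative):

{% highlight scala %}
// Only file names matching *.parquet are read; dir1/file3.json is filtered out.
// Files in nested, non-partition directories (dir1/dir2/) are not picked up here.
val df = spark.read.format("parquet")
  .option("pathGlobFilter", "*.parquet")
  .load("dir1")
df.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// +-------------+
{% endhighlight %}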

### Recursive File Lookup

`recursiveFileLookup` is used to recursively load files, and it disables partition inferring. Its default value is `false`.
If the data source explicitly specifies the `partitionSpec` while `recursiveFileLookup` is `true`, an exception will be thrown.

To load all files recursively, you can use:

<div class="codetabs">
<div data-lang="scala"  markdown="1">
{% include_example recursive_file_lookup scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
</div>

<div data-lang="java"  markdown="1">
{% include_example recursive_file_lookup java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
</div>

<div data-lang="python"  markdown="1">
{% include_example recursive_file_lookup python/sql/datasource.py %}
</div>

<div data-lang="r"  markdown="1">
{% include_example recursive_file_lookup r/RSparkSQLExample.R %}
</div>
</div>
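
For instance, a minimal Scala sketch (assuming an existing `SparkSession` named `spark` and the `dir1/` layout shown above; combining with `pathGlobFilter` here is a choice made for this illustration, so that the non-Parquet `file3.json` is skipped):

{% highlight scala %}
// Recursively pick up every matching file under dir1/, including dir1/dir2/file2.parquet,
// while disabling partition discovery entirely.
val df = spark.read.format("parquet")
  .option("recursiveFileLookup", "true")
  .option("pathGlobFilter", "*.parquet") // skip dir1/file3.json, which is not Parquet
  .load("dir1")
df.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// |file2.parquet|
// +-------------+
{% endhighlight %}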