# R on Spark

SparkR is an R package that provides a lightweight frontend for using Spark from R.

### Installing SparkR

The SparkR libraries need to be built into `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`.
By default, the above script uses the system-wide installation of R. To use a user-installed R instead, set the environment variable `R_HOME` to the full path of the base directory where R is installed before running the `install-dev.sh` script.
Example:
```bash
# /home/username/R is where R is installed and /home/username/R/bin contains the R and Rscript binaries
export R_HOME=/home/username/R
./install-dev.sh
```

### SparkR development

#### Build Spark

Build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run

```bash
./build/mvn -DskipTests -Psparkr package
```

#### Running SparkR

You can start using SparkR by launching the SparkR shell with

    ./bin/sparkR

The `sparkR` script automatically creates a SparkContext, running in local mode by default. To specify the Spark master of a cluster for the automatically created SparkContext, you can run

    ./bin/sparkR --master "local[2]"

To set other options, such as driver memory or executor memory, you can pass the [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html) arguments to `./bin/sparkR`.

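For instance, several spark-submit options can be combined in one invocation. A sketch (the memory value and the config key chosen here are illustrative, not recommendations):

```shell
# Pass standard spark-submit options straight through to the SparkR shell.
# --master, --driver-memory, and --conf are regular spark-submit flags;
# the values below are placeholders to tune for your workload.
./bin/sparkR --master "local[4]" \
  --driver-memory 2g \
  --conf spark.ui.showConsoleProgress=false
```
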
#### Using SparkR from RStudio

If you wish to use SparkR from RStudio, please refer to the [SparkR documentation](https://spark.apache.org/docs/latest/sparkr.html#starting-up-from-rstudio).

#### Making changes to SparkR

The [instructions](https://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
If you only make R file changes (i.e., no Scala changes), you can just re-install the R package using `R/install-dev.sh` and test your changes.
Once you have made your changes, please include unit tests for them and run the existing unit tests using the `R/run-tests.sh` script as described below.

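Assuming you are in `$SPARK_HOME`, a typical R-only development loop might look like:

```shell
# Rebuild only the R package (no Scala/Maven rebuild needed for R-only changes) ...
./R/install-dev.sh
# ... then run the SparkR unit tests against the rebuilt package.
./R/run-tests.sh
```
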
#### Generating documentation

The SparkR documentation (Rd files and HTML files) is not part of the source repository. To generate it, run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs, so these packages need to be installed on the machine before using the script. You may also need to install these [prerequisites](https://github.com/apache/spark/tree/master/docs#prerequisites). See also `R/DOCUMENTATION.md`.

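Assuming R is on your PATH, installing the prerequisite packages and generating the docs might look roughly like this (the CRAN mirror URL is just an example):

```shell
# Install the R packages create-docs.sh depends on, then build the docs.
Rscript -e 'install.packages(c("devtools", "knitr"), repos = "https://cloud.r-project.org")'
./R/create-docs.sh
```
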
### Examples, Unit tests

SparkR comes with several sample programs in the `examples/src/main/r` directory.
To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
```bash
./bin/spark-submit examples/src/main/r/dataframe.R
```
You can run R unit tests by following the instructions under [Running R Tests](https://spark.apache.org/docs/latest/building-spark.html#running-r-tests).

### Running on YARN

The `./bin/spark-submit` script can also be used to submit jobs to YARN clusters. You will need to set the YARN configuration directory before doing so. For example, on CDH you can run
```bash
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```