Spark Submit Command Explained with Examples

The spark-submit command is a powerful utility used to run or submit a Spark or PySpark application (or job) locally or on a cluster by specifying options and configurations; the application you are submitting can be written in Scala, Java, or Python (PySpark).

In this comprehensive guide, I will explain the spark-submit syntax, the different command options and advanced configurations, how to use an uber JAR or zip file for Scala and Java applications, how to use a Python .py file, and finally how to submit the application on YARN, Mesos, Kubernetes, and Standalone cluster managers. I will cover everything you need to know to run your applications successfully using the spark-submit command.

At a high level, you need to know that the spark-submit command supports the following.

  1. Submitting Spark applications on different cluster managers like YARN, Kubernetes, Mesos, and Standalone.
  2. Submitting Spark applications in client or cluster deployment modes.


1. Spark Submit Command

The Spark binary distribution comes with the spark-submit shell script for Linux and Mac, and spark-submit.cmd for Windows; these scripts are available in the $SPARK_HOME/bin directory.

If you are using the Cloudera distribution, you may also find a spark2-submit script, which is used to run Spark 2.x applications. This allows Cloudera clusters to run Spark 1.x and Spark 2.x applications in parallel.
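
To see which submit scripts your installation ships with, you can list them directly; a quick check, assuming SPARK_HOME is set to your Spark installation directory:

# List the submit scripts bundled with the Spark installation
# (assumes SPARK_HOME points to your Spark install directory)
ls $SPARK_HOME/bin/*submit*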

Internally, the spark-submit command uses the org.apache.spark.deploy.SparkSubmit class with the options and command-line arguments you specify.

Below is a spark-submit command with the most-used command options.


./bin/spark-submit \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  --driver-memory <value>g \
  --executor-memory <value>g \
  --executor-cores <number of cores>  \
  --jars <comma-separated dependencies> \
  --class <main-class> \
  <application-jar> \
  [application-arguments]

You can also submit the application as shown below, without using the spark-submit script.


./bin/spark-class org.apache.spark.deploy.SparkSubmit <options & arguments>

2. Spark Submit Options

Below, I have explained some of the common options and configurations, as well as options specific to Scala and Python. You can also list all the available options by running the command below.


./bin/spark-submit --help

2.1 Deployment Modes (--deploy-mode)

Using --deploy-mode, you specify where to run the Spark application driver program. Spark supports cluster and client deployment modes.

| Value | Description |
| --- | --- |
| cluster | In cluster mode, the driver runs on one of the worker nodes, and this node shows as a driver on the Spark Web UI of your application. Cluster mode is used to run production jobs. |
| client | In client mode, the driver runs locally on the machine you submit your application from. Client mode is mainly used for interactive and debugging purposes. Note that in client mode only the driver runs locally; all the executors run on nodes in the cluster. |
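
For example, a client-mode submission (client is the default when --deploy-mode is not specified) could look like the following sketch, reusing the same example jar used throughout this article:

./bin/spark-submit \
  --master yarn \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  /spark-home/examples/jars/spark-examples_versionxx.jar 80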

2.2 Cluster Managers (--master)

Using the --master option, you specify which cluster manager to use to run your application. Spark currently supports YARN, Mesos, Kubernetes, Standalone, and local. The uses of these are explained below.

| Cluster Manager | Value | Description |
| --- | --- | --- |
| YARN | yarn | Use yarn if your cluster resources are managed by Hadoop YARN. |
| Mesos | mesos://HOST:PORT | Use mesos://HOST:PORT for a Mesos cluster, replacing HOST and PORT with those of the Mesos master. |
| Standalone | spark://HOST:PORT | Use spark://HOST:PORT for a Standalone cluster, replacing HOST and PORT with those of the Standalone master. |
| Kubernetes | k8s://HOST:PORT or k8s://https://HOST:PORT | Use k8s://HOST:PORT for Kubernetes, replacing HOST and PORT with those of the Kubernetes API server. By default this connects over https; to connect to an unsecured endpoint, use k8s://http://HOST:PORT instead. |
| local | local, local[K], or local[K,F] | Use local to run locally with one worker thread. Use local[K] to run with K worker threads (ideally set K to the number of cores on your machine). Use local[K,F] to run with K worker threads and F maximum failures (the number of attempts a task is allowed before the job fails). |

Example: The below command submits the application to a YARN-managed cluster.


./bin/spark-submit \
    --deploy-mode cluster \
    --master yarn \
    --class org.apache.spark.examples.SparkPi \
    /spark-home/examples/jars/spark-examples_versionxx.jar 80

The value 80 in the above example is a command-line argument for the SparkPi program; it specifies the number of slices (partitions) to use when estimating the value of Pi.
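
The same application can also be run locally using one of the local master values from the table above; a minimal sketch, assuming the same example jar, that uses 4 worker threads and allows 2 attempts for failed tasks:

./bin/spark-submit \
  --master "local[4,2]" \
  --class org.apache.spark.examples.SparkPi \
  /spark-home/examples/jars/spark-examples_versionxx.jar 80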

2.3 Driver and Executor Resources (Cores & Memory)

While submitting an application, you can also specify how much memory and how many cores you want to allocate to the driver and the executors.

| Option | Description |
| --- | --- |
| --driver-memory | Memory to allocate to the Spark driver. |
| --driver-cores | CPU cores to allocate to the Spark driver. |
| --num-executors | The total number of executors to launch (YARN and Kubernetes only). |
| --executor-memory | Amount of memory to allocate to each executor process. |
| --executor-cores | Number of CPU cores to allocate to each executor process. |
| --total-executor-cores | The total number of executor cores across all executors (Standalone and Mesos only). |

Example:


./bin/spark2-submit \
   --master yarn \
   --deploy-mode cluster \
   --driver-memory 8g \
   --executor-memory 16g \
   --executor-cores 2  \
   --class org.apache.spark.examples.SparkPi \
   /spark-home/examples/jars/spark-examples_versionxx.jar 80

2.4 Other Options

| Option | Description |
| --- | --- |
| --files | Comma-separated list of files you want to ship with your application, typically files from your resources folder. Spark uploads all these files to the cluster. |
| --verbose | Displays verbose information; for example, it writes all the configurations the Spark application uses to the log. |

Note: Files specified with --files are uploaded to the cluster.

Example: The below example submits the application to the YARN cluster manager in cluster deployment mode, with 8g of driver memory and 16g of memory and 2 cores for each executor, and ships a few files to the cluster using --files.


./bin/spark2-submit \
   --verbose \
   --master yarn \
   --deploy-mode cluster \
   --driver-memory 8g \
   --executor-memory 16g \
   --executor-cores 2  \
   --files /path/log4j.properties,/path/file2.conf,/path/file3.json \
   --class org.apache.spark.examples.SparkPi \
   /spark-home/examples/jars/spark-examples_versionxx.jar 80

3. Spark Submit Configurations

Spark submit supports setting any Spark configuration using the --conf option; these configurations are used to specify application settings, shuffle parameters, and runtime configurations.

Most of these configurations are the same for Spark applications written in Java, Scala, and Python (PySpark).

| Configuration key | Description |
| --- | --- |
| spark.sql.shuffle.partitions | Number of partitions to create for wider shuffle transformations (joins and aggregations). Default 200. |
| spark.executor.memoryOverhead | The amount of additional memory to be allocated per executor process in cluster mode; it is typically memory for JVM overheads. (Not supported for PySpark) |
| spark.serializer | Serializer class to use: org.apache.spark.serializer.JavaSerializer (default) or org.apache.spark.serializer.KryoSerializer. |
| spark.sql.files.maxPartitionBytes | The maximum number of bytes to pack into a single partition when reading files. Default 128MB. |
| spark.dynamicAllocation.enabled | Specifies whether to dynamically increase or decrease the number of executors based on the workload. Default false (some distributions enable it by default). |
| spark.dynamicAllocation.minExecutors | The minimum number of executors to use when dynamic allocation is enabled. |
| spark.dynamicAllocation.maxExecutors | The maximum number of executors to use when dynamic allocation is enabled. |
| spark.executor.extraJavaOptions | JVM options to pass to each executor (see the example below). |

Besides these, Spark also supports many more configurations.

Example:


./bin/spark2-submit \
--master yarn \
--deploy-mode cluster \
--conf "spark.sql.shuffle.partitions=20000" \
--conf "spark.executor.memoryOverhead=5244" \
--conf "spark.memory.fraction=0.8" \
--conf "spark.memory.storageFraction=0.2" \
--conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
--conf "spark.sql.files.maxPartitionBytes=168435456" \
--conf "spark.dynamicAllocation.minExecutors=1" \
--conf "spark.dynamicAllocation.maxExecutors=200" \
--conf "spark.dynamicAllocation.enabled=true" \
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \ 
--files /path/log4j.properties,/path/file2.conf,/path/file3.json \
--class org.apache.spark.examples.SparkPi \
/spark-home/examples/jars/spark-examples_versionxx.jar 80

Alternatively, you can also set these globally in $SPARK_HOME/conf/spark-defaults.conf so that they apply to every Spark application, or set them programmatically using SparkConf.


import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val config = new SparkConf()
config.set("spark.sql.shuffle.partitions", "300")
val spark = SparkSession.builder().config(config).getOrCreate()

Properties set directly on SparkConf take the highest precedence, then flags passed to spark-submit with --conf, and finally the values in spark-defaults.conf.
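
For reference, a minimal sketch of what these settings could look like in spark-defaults.conf; each line is a key and a value separated by whitespace, and the values below are only illustrative:

spark.master                          yarn
spark.sql.shuffle.partitions          300
spark.serializer                      org.apache.spark.serializer.KryoSerializer
spark.dynamicAllocation.enabled       true
spark.dynamicAllocation.maxExecutors  200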

4. Submit Scala or Java Application

Regardless of which language you use, most of the options are the same; however, a few options are language-specific. To run a Spark application written in Scala or Java, you need the following additional options.

| Option | Description |
| --- | --- |
| --jars | If you have your dependency JARs in a folder, you can pass them all using this option as a comma-separated list, for example --jars jar1.jar,jar2.jar,jar3.jar. |
| --packages | Comma-separated list of Maven coordinates of packages to include; their transitive dependencies are resolved automatically. |
| --class | The Scala or Java class you want to run, as a fully qualified name including the package, for example org.apache.spark.examples.SparkPi. |

Note: Files specified with --jars and --packages are uploaded to the cluster.

Example:


./bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--conf "spark.sql.shuffle.partitions=20000" \
--jars "dependency1.jar,dependency2.jar"
--class com.sparkbyexamples.WordCountExample \
spark-by-examples.jar 
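
If your dependencies are published to a Maven repository, you can let spark-submit resolve them with --packages instead of shipping JARs yourself; a sketch using the same hypothetical application, with the spark-avro coordinates shown purely as an example:

./bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--packages org.apache.spark:spark-avro_2.12:3.3.0 \
--class com.sparkbyexamples.WordCountExample \
spark-by-examples.jar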

5. Spark Submit PySpark (Python) Application

When you want to spark-submit a PySpark application, you need to specify the .py file you want to run and specify .egg or .zip files for the dependency libraries.

Below are some of the options and configurations specific to PySpark applications. Besides these, you can also use most of the options and configurations covered above.

| PySpark-specific option | Description |
| --- | --- |
| --py-files | Use --py-files to add .py, .zip, or .egg files to be distributed with your application. |
| --conf spark.executor.pyspark.memory | The amount of memory to be used by PySpark in each executor. |
| --conf spark.pyspark.driver.python | Python binary executable to use for PySpark in the driver. |
| --conf spark.pyspark.python | Python binary executable to use for PySpark in both the driver and the executors. |

Note: Files specified with --py-files are uploaded to the cluster before the application runs. You can also upload these files ahead of time and reference them in your PySpark application.

Example 1:


./bin/spark-submit \
   --master yarn \
   --deploy-mode cluster \
   wordByExample.py

Example 2: The below example uses other Python files as dependencies.


./bin/spark-submit \
   --master yarn \
   --deploy-mode cluster \
   --py-files file1.py,file2.py,file3.zip \
   wordByExample.py
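
The Python-related configurations above are passed with --conf like any other configuration; a minimal sketch, assuming a python3 binary is available on the driver and executor nodes:

./bin/spark-submit \
   --master yarn \
   --deploy-mode cluster \
   --conf "spark.pyspark.python=python3" \
   --conf "spark.executor.pyspark.memory=2g" \
   wordByExample.py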

6. Submitting Application to Mesos

Here, we are submitting the Spark application to a Mesos-managed cluster using cluster deployment mode, with 5G of memory and 8 cores for each executor.


# Running Spark application on Mesos cluster manager
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://192.168.231.132:7077 \
  --deploy-mode cluster \
  --executor-memory 5G \
  --executor-cores 8 \
   http://examples/jars/spark-examples_versionxx.jar 80

7. Submitting Application to Kubernetes

The below example runs the Spark application on a Kubernetes-managed cluster using cluster deployment mode, with 5G of memory and 8 cores for each executor.


# Running Spark application on Kubernetes cluster
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://192.168.231.132:443 \
  --deploy-mode cluster \
  --executor-memory 5G \
  --executor-cores 8 \
  /spark-home/examples/jars/spark-examples_versionxx.jar 80

8. Submitting Application to Standalone

The below example runs the Spark application on a Standalone cluster using cluster deployment mode, with 5G of memory and 8 cores for each executor.


# Running Spark application on standalone cluster
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://192.168.231.132:7077 \
  --deploy-mode cluster \
  --executor-memory 5G \
  --executor-cores 8 \
  /spark-home/examples/jars/spark-examples_versionxx.jar 80

Happy Learning !!

Administrative Tasks

In addition to submitting Spark applications, spark-submit provides a few other functionalities. It can be used to run Spark's built-in examples, to test your own applications, and to perform simple administrative tasks such as querying the status of, or killing, an already-submitted application with the --status and --kill options (supported on Standalone and Mesos clusters in cluster deploy mode).
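
For example, on a Standalone cluster in cluster deploy mode, the submission ID printed when you submit the application can be used to query or stop it; a sketch with a hypothetical submission ID and master URL:

# Check the status of a driver submitted in cluster mode
./bin/spark-submit --status driver-20230101000000-0001 --master spark://192.168.231.132:7077

# Kill the submitted driver (and its application)
./bin/spark-submit --kill driver-20230101000000-0001 --master spark://192.168.231.132:7077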

Conclusion

In conclusion, spark-submit is a command-line tool that is an integral part of the Spark ecosystem. It allows users to submit Spark applications to a cluster for execution and provides other functionalities such as running the built-in examples, testing applications, and performing simple administrative tasks. By letting users specify configuration parameters and by handling the complexities of the distributed environment, spark-submit makes it easier to develop, test, and deploy Spark applications.

Naveen Nelamali

Naveen Nelamali (NNK) is a Data Engineer with 20+ years of experience in transforming data into actionable insights. Over the years, he has honed his expertise in designing, implementing, and maintaining data pipelines with frameworks like Apache Spark, PySpark, Pandas, R, Hive, and Machine Learning. Naveen's journey in the field of data engineering has been one of continuous learning, innovation, and a strong commitment to data integrity. In this blog, he shares the experiences he comes across while working with data. Follow Naveen on LinkedIn and Medium.
