We often need to check which version of Apache Spark is installed in our environment. Depending on the OS (Mac, Linux, Windows, CentOS), Spark installs in different locations, which can make it challenging to find the installed version.
In this article, I will quickly cover different ways to check the installed Spark version from the command line and at runtime. You can use the options explained here to find the Spark version whether you are using Hadoop (CDH), AWS Glue, Anaconda, Jupyter notebooks, etc.
1. Spark Version Check from Command Line
Like many other tools and languages, you can use the --version option with spark-submit, spark-shell, and spark-sql to find the version.
spark-submit --version
spark-shell --version
spark-sql --version
The spark-submit, spark-shell, and spark-sql commands above all return output similar to the following, where you can find the installed Spark version.

As you can see, it displays the Spark version along with the Scala version (2.12.10) and the Java version. Since I am using OpenJDK, the Java version shows as OpenJDK 64-Bit Server VM, 11.0.13.
2. Version Check From Spark Shell
Additionally, if you are already in spark-shell and want to find out the Spark version without exiting, you can use sc.version. sc is a SparkContext variable that exists by default in spark-shell. Use the below steps to find the Spark version.
- cd to $SPARK_HOME/bin
- Launch the spark-shell command
- Enter sc.version or spark.version

sc.version returns the version as a String type. When you use spark.version from the shell, it returns the same output.
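For illustration, a spark-shell session might look like the following; the version string 3.2.1 shown here is only an example, and yours will reflect whichever Spark release is installed.
scala> sc.version
res0: String = 3.2.1

scala> spark.version
res1: String = 3.2.1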
3. Find Version from IntelliJ or any IDE
Imagine you are writing a Spark application and you want to find the Spark version at runtime; you can get it by accessing the version property on the SparkSession object, which returns a String type.
import org.apache.spark.sql.SparkSession

// Create a SparkSession (or reuse an existing one)
val spark = SparkSession.builder()
  .master("local[1]")
  .appName("SparkByExamples.com")
  .getOrCreate()

// Both return the Spark version as a String
println("Apache Spark Version: " + spark.version)
println("Apache Spark Version: " + spark.sparkContext.version)
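Because spark.version returns a String, you can also parse it if you need to branch on the runtime version. Below is a minimal sketch reusing the spark session created above; the check against major version 3 is only an illustrative assumption.
// Minimal sketch: extract the major version from a string like "3.2.1" and branch on it
val majorVersion = spark.version.split("\\.")(0).toInt
if (majorVersion >= 3) {
  println(s"Running on Spark 3.x or later (${spark.version})")
} else {
  println(s"Running on Spark 2.x or earlier (${spark.version})")
}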
In this short article, you have learned how to find the Spark version from the command line, from spark-shell, and at runtime. You can use these options with Hadoop (CDH), AWS Glue, Anaconda, Jupyter notebooks, etc.
Happy Learning !!
Related Articles
- Spark Check String Column Has Numeric Values
- Spark Check Column Data Type is Integer or String
- Spark Check Column Present in DataFrame
- Spark – Initial job has not accepted any resources; check your cluster UI
- Spark – Check if DataFrame or Dataset is empty?
- What is Apache Spark and Why It Is Ultimate for Working with Big Data