This article walks you through installing PySpark on an Ubuntu server. Since most production PySpark applications run on Linux-based operating systems, it is essential to know how to install and run PySpark on Unix-like systems such as an Ubuntu server. While this article focuses on Ubuntu, the same steps can be adapted to other Linux distributions such as CentOS and Debian. I used the following steps to set up my PySpark cluster on an Ubuntu server.
Prerequisites:
- A running Ubuntu server
- Root (or sudo) access to the Ubuntu server
- If you want to run PySpark on Hadoop with Yarn, install and set up a Hadoop cluster and Yarn before proceeding with this article.
- If you just want to install PySpark and run it in standalone mode, continue with this article.
1. Java Install On Ubuntu
Java, specifically the Java Development Kit (JDK), is required to run PySpark because PySpark is a Python interface to Apache Spark, which is implemented primarily in Scala and runs on the Java Virtual Machine (JVM).
Since Oracle Java requires a commercial license, I am using OpenJDK here; if you want to use Java from Oracle or another vendor, feel free to do so. I will be using JDK 8.
# Install JDK
sudo apt-get -y install openjdk-8-jdk-headless
Run java -version to check the installed Java version.
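Some setups also expect the JAVA_HOME environment variable to be set. A minimal sketch, assuming the default install location for OpenJDK 8 on 64-bit Ubuntu (verify the actual path on your system):
# Point JAVA_HOME at the OpenJDK 8 installation (path may differ on your system)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64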
2. Python Install On Ubuntu
Python is required to run PySpark examples (Spark with Python) on the Ubuntu server. Most Ubuntu systems come with Python pre-installed; you can check your Python version with python --version or python3 --version. If it's not installed, you can install Python 3 with the following command.
# Install Python3
sudo apt install python3
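If you plan to use the pip installation route described in step 3, you will also need pip for Python 3, which Ubuntu packages separately:
# Install pip for Python 3 (only needed for the pip install route)
sudo apt install python3-pip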
3. Install PySpark on Linux Ubuntu
PySpark relies on Apache Spark, which you can download from the official Apache Spark website or install with a package manager. I recommend downloading the Spark package from the Apache Spark website to get the latest version. To install PySpark on Ubuntu, open the Apache Spark download page, go to the Download Apache Spark section, and click the link in point 3; this takes you to a page with mirror URLs. Copy the link from one of the mirror sites and use the wget command to download Spark to your Ubuntu server (the example below uses the Apache archive, which hosts direct links to all releases).
# Download Apache Spark
wget https://archive.apache.org/dist/spark/spark-3.5.0/spark-3.5.0-bin-hadoop3.tgz
This downloads Spark as a tar archive. Extract the archive contents using the tar command (tar is a file-archiving tool), and once extraction completes, rename the extracted folder to spark.
# Untar the downloaded file
tar -xzf spark-3.5.0-bin-hadoop3.tgz
# Rename folder to spark
mv spark-3.5.0-bin-hadoop3 spark
Alternatively, you can install PySpark using pip.
# Install PySpark using pip
pip install pyspark
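If you went the pip route, a quick sanity check (assuming python3 is on your PATH) confirms that PySpark is importable and reports its version:
# Verify that PySpark can be imported and print its version
python3 -c "import pyspark; print(pyspark.__version__)"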
4. PySpark Environment Variables
To make PySpark accessible from the command line, add the following lines to your ~/.bashrc or ~/.zshrc file. I am using the .bashrc file, so I am adding the following lines.
sparkuser@sparknode:~$ vi ~/.bashrc
# Add below lines at the end of the .bashrc file.
export SPARK_HOME=/home/sparkuser/spark
export PATH=$PATH:$SPARK_HOME/bin
Now load the environment variables into the current session by running the command below.
# Source the bashrc file to reload
sparkuser@sparknode:~$ source ~/.bashrc
If you added the lines to the .profile file instead, restart your session by closing and reopening the terminal.
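To confirm the variables took effect, echo SPARK_HOME and ask spark-submit for its version:
# Confirm the environment variables are set correctly
echo $SPARK_HOME
spark-submit --version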
5. Validate PySpark Installation on Ubuntu
With this, the Apache Spark installation on Ubuntu is complete. Now let's run a sample example that ships with the Spark binary distribution.
Here I will use spark-submit to run an example Python file that estimates the value of Pi; the argument 10 is the number of partitions to use for the computation. You can find spark-submit in the $SPARK_HOME/bin directory.
# Run spark example
cd $SPARK_HOME
./bin/spark-submit examples/src/main/python/pi.py 10
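By default this runs with a local master. If you want to be explicit about where the job runs, you can pass a master URL; a minimal sketch using local mode with two cores:
# Run the same example locally using 2 cores
./bin/spark-submit --master "local[2]" examples/src/main/python/pi.py 10
If everything is set up correctly, the job output includes a line similar to "Pi is roughly 3.14".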
6. PySpark Shell
To start the PySpark shell, go to the $SPARK_HOME/bin directory and run pyspark, or invoke it directly with $SPARK_HOME/bin/pyspark. Make sure Python is installed before starting the shell.
By default, it also starts a Spark context web UI, which you can access at http://ip-address:4040.
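Once the shell is up, it exposes a SparkSession as spark and a SparkContext as sc. A minimal sanity check you can paste into the shell:
# Inside the pyspark shell: build a small DataFrame and count its rows
df = spark.range(100)
print(df.count())   # prints 100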
7. Spark Web UI
On the Spark Web UI, you can see how Spark actions and transformations are executed.
8. Spark History server
The Spark History Server is another web UI that lets you monitor all your previous jobs. To enable it, create the $SPARK_HOME/conf/spark-defaults.conf file and add the configurations below.
# Enable to store the event log
spark.eventLog.enabled true
# Location where to store event log
spark.eventLog.dir file:///tmp/spark-events
# Location from where history server to read event log
spark.history.fs.logDirectory file:///tmp/spark-events
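If you prefer to start from the template that ships with the binary distribution rather than creating the file from scratch, copy it first and then append the settings above:
# Create spark-defaults.conf from the bundled template
cp $SPARK_HOME/conf/spark-defaults.conf.template $SPARK_HOME/conf/spark-defaults.conf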
Next, create the event log directory; Spark writes an event log there for every application you submit.
sparkuser@sparknode:~$ mkdir /tmp/spark-events
Run $SPARK_HOME/sbin/start-history-server.sh to start the history server.
sparkuser@sparknode:~$ $SPARK_HOME/sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /home/sparkuser/spark/logs/spark-sparkuser-org.apache.spark.deploy.history.HistoryServer-1-sparknode.out
With this configuration, the history server runs on port 18080 by default; access it at http://ip-address:18080.
Run the Pi example again using spark-submit, then refresh the History Server UI, which should show the recent run.
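When you no longer need it, stop the history server with the companion script in the same sbin directory:
# Stop the history server
$SPARK_HOME/sbin/stop-history-server.sh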
Conclusion
You've now successfully installed PySpark on your Ubuntu system and can start using it for distributed data processing and analysis. In summary, you learned the steps to install PySpark on a Linux-based Ubuntu server, how to start the History Server, and how to access the web UIs.
Related Articles
- Apache Spark Setup with Scala and IntelliJ
- Apache Spark Installation on Windows
- Spark Installation on Mac OS
- Spark Hello World Example in IntelliJ IDEA
- Spark Setup on Hadoop Cluster with Yarn
- Spark Start History Server
- Spark Shell Command Usage with Examples
- What is SparkSession and How to create it?
- What is SparkContext and How to create it?
- How to Check Spark Version
- Install PySpark in Jupyter on Mac using Homebrew
- Install PySpark in Anaconda & Jupyter Notebook
- How to Install PySpark on Mac
- How to Install PySpark on Windows
- Install PySpark using pip or conda
- Update Jupyter Notebook or Jupyterlab
- Ways to Install Jupyter Notebook on Mac OS
- Update pip to the latest version
- PySpark RDD Actions with examples