How to Convert Pandas to PySpark DataFrame

Converting a Pandas DataFrame to a PySpark DataFrame is necessary when dealing with large datasets that cannot fit into memory on a single machine. PySpark DataFrames leverage distributed computing capabilities, enabling processing of massive datasets across clusters of machines. By utilizing PySpark DataFrames, users can take advantage of scalability, parallel processing, and fault tolerance provided by Spark, ensuring efficient handling of large-scale data processing tasks.

In this article, I will explain the steps to convert a pandas DataFrame to a PySpark DataFrame and how to optimize the conversion by enabling Apache Arrow.

Key Points –

  • Ensure PySpark is installed on your system to utilize its DataFrame functionalities.
  • Import the necessary modules, such as pandas and pyspark.sql, to facilitate the conversion.
  • Create a SparkSession object to interact with Spark and handle DataFrame operations.
  • Use the createDataFrame() method to convert the pandas DataFrame into a PySpark DataFrame.
  • Be mindful of potential differences in data handling and performance between pandas and PySpark, ensuring compatibility with your workflow and computational requirements.

Create Pandas DataFrame

To convert pandas to a PySpark DataFrame, let's first create a pandas DataFrame with some test data. To use pandas, you have to import it first using import pandas as pd.


import pandas as pd

data = [['Scott', 50], ['Jeff', 45], ['Thomas', 54], ['Ann', 34]]

# Create the pandas DataFrame
pandasDF = pd.DataFrame(data, columns=['Name', 'Age'])

# Print the DataFrame
print(pandasDF)

# Prints below Pandas DataFrame
     Name  Age
0   Scott   50
1    Jeff   45
2  Thomas   54
3     Ann   34

Convert Pandas to PySpark (Spark) DataFrame

Spark provides the createDataFrame(pandas_dataframe) method to convert a pandas DataFrame to a Spark DataFrame. By default, Spark infers the schema by mapping pandas data types to PySpark data types.


from pyspark.sql import SparkSession

# Create a SparkSession
spark = SparkSession.builder \
    .master("local[1]") \
    .appName("SparkByExamples.com") \
    .getOrCreate()

# Create a PySpark DataFrame from the pandas DataFrame
sparkDF = spark.createDataFrame(pandasDF)
sparkDF.printSchema()
sparkDF.show()

#Outputs below schema & DataFrame

root
 |-- Name: string (nullable = true)
 |-- Age: long (nullable = true)

+------+---+
|  Name|Age|
+------+---+
| Scott| 50|
|  Jeff| 45|
|Thomas| 54|
|   Ann| 34|
+------+---+

If you want all data types to be String, use spark.createDataFrame(pandasDF.astype(str)).
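For example, a minimal sketch (reusing the pandasDF created above; sparkStrDF is just an illustrative variable name) that casts every column to string before the conversion:


# Cast all pandas columns to str so every Spark column becomes StringType
sparkStrDF = spark.createDataFrame(pandasDF.astype(str))
sparkStrDF.printSchema()

# root
#  |-- Name: string (nullable = true)
#  |-- Age: string (nullable = true)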

Change Column Names & DataTypes while Converting

If you want to change the schema (column names and data types) while converting pandas to a PySpark DataFrame, define a custom schema using StructType and pass it to createDataFrame().


from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Create a user-defined custom schema using StructType
mySchema = StructType([
    StructField("First Name", StringType(), True),
    StructField("Age", IntegerType(), True)
])

# Create the DataFrame with the custom schema
sparkDF2 = spark.createDataFrame(pandasDF, schema=mySchema)
sparkDF2.printSchema()
sparkDF2.show()

#Outputs below schema & DataFrame

root
 |-- First Name: string (nullable = true)
 |-- Age: integer (nullable = true)

+----------+---+
|First Name|Age|
+----------+---+
|     Scott| 50|
|      Jeff| 45|
|    Thomas| 54|
|       Ann| 34|
+----------+---+
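Alternatively, if you only need to rename the columns and are happy with the inferred data types, the DataFrame toDF() method is a lighter-weight option. A minimal sketch (sparkDF3 is an illustrative name; note that Age keeps its inferred long type here):


# Rename columns without redefining the data types
sparkDF3 = sparkDF.toDF("First Name", "Age")
sparkDF3.printSchema()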

Use Apache Arrow to Convert pandas to Spark DataFrame

Using Apache Arrow to convert a Pandas DataFrame to a Spark DataFrame involves leveraging Arrow’s efficient in-memory columnar representation for data interchange between Pandas and Spark. This process enhances performance by minimizing data serialization and deserialization overhead.

To use this optimization, ensure PyArrow is installed and enable Arrow in the Spark configuration. Spark then uses Arrow's in-memory columnar format internally when converting the pandas DataFrame, instead of the slower row-by-row conversion.

Install PyArrow with pip install pyspark[sql] (which pulls in a compatible PyArrow), or install it directly by following the Apache Arrow for Python installation guide.
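As a quick optional check (a minimal sketch), confirm PyArrow is importable and note its version before enabling the optimization:


import pyarrow
# Spark requires a minimum PyArrow version; the warning shown later
# in this section expects PyArrow >= 0.15.1
print(pyarrow.__version__)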


# Enable Apache Arrow optimization
# (spark.sql.execution.arrow.enabled is the deprecated pre-Spark-3.0 name of this config)
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
sparkDF = spark.createDataFrame(pandasDF)
sparkDF.printSchema()
sparkDF.show()

To utilize the above approach, Apache Arrow (PyArrow) must be installed and compatible with your Spark version. If PyArrow is not installed, you will encounter the following error message.


\apps\Anaconda3\lib\site-packages\pyspark\sql\pandas\conversion.py:289: UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true; however, failed by the reason below:
  PyArrow >= 0.15.1 must be installed; however, it was not found.
Attempting non-optimization as 'spark.sql.execution.arrow.pyspark.fallback.enabled' is set to true.

In the event of an error, Spark automatically falls back to the non-Arrow implementation. This behavior is controlled by the spark.sql.execution.arrow.pyspark.fallback.enabled property.


spark.conf.set("spark.sql.execution.arrow.pyspark.fallback.enabled","true")

Note that Apache Arrow does not support complex types such as MapType, ArrayType of TimestampType, and nested StructType.
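As an illustration (a hypothetical sketch; the exact behavior depends on your Spark and PyArrow versions), converting a pandas DataFrame that holds dict values into a MapType column may trigger the non-Arrow fallback path:


import pandas as pd
from pyspark.sql.types import StructType, StructField, StringType, MapType, IntegerType

# A pandas DataFrame whose 'scores' column holds dicts (maps to MapType)
mapPdf = pd.DataFrame({"name": ["Scott"], "scores": [{"math": 90}]})

mapSchema = StructType([
    StructField("name", StringType(), True),
    StructField("scores", MapType(StringType(), IntegerType()), True)
])

# On Spark versions where Arrow lacks MapType support, this falls back
# to the slower non-Arrow conversion (with a warning) instead of failing
mapDF = spark.createDataFrame(mapPdf, schema=mapSchema)
mapDF.show(truncate=False)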

Complete Example of Convert Pandas to Spark DataFrame


import pandas as pd    
data = [['Scott', 50], ['Jeff', 45], ['Thomas', 54],['Ann',34]] 
  
# Create the pandas DataFrame 
pandasDF = pd.DataFrame(data, columns = ['Name', 'Age']) 
  
# print dataframe. 
print(pandasDF)

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[1]") \
    .appName("SparkByExamples.com") \
    .getOrCreate()

sparkDF=spark.createDataFrame(pandasDF) 
sparkDF.printSchema()
sparkDF.show()

#sparkDF=spark.createDataFrame(pandasDF.astype(str)) 
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
mySchema = StructType([ StructField("First Name", StringType(), True)\
                       ,StructField("Age", IntegerType(), True)])

sparkDF2 = spark.createDataFrame(pandasDF,schema=mySchema)
sparkDF2.printSchema()
sparkDF2.show()

# Enable Apache Arrow to convert Pandas to PySpark DataFrame
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
sparkDF2=spark.createDataFrame(pandasDF) 
sparkDF2.printSchema()
sparkDF2.show()

#Convert PySpark DataFrame to Pandas
pandasDF2 = sparkDF2.select("*").toPandas()
print(pandasDF2)

Frequently Asked Questions

Why should I convert Pandas DataFrame to PySpark DataFrame?

Converting Pandas DataFrame to PySpark DataFrame allows you to leverage the distributed computing capabilities of PySpark, enabling the processing of large datasets that may not fit into memory on a single machine.

What are the key differences between Pandas and PySpark DataFrames?

Pandas DataFrame operates on a single machine, suitable for smaller datasets, while PySpark DataFrame distributes data across a cluster, making it efficient for handling large-scale datasets. Additionally, PySpark DataFrame supports parallel processing, enabling faster computations compared to Pandas.

Are there any performance considerations when converting Pandas to PySpark DataFrame?

It’s essential to consider the performance implications, especially when dealing with large datasets. PySpark DataFrame operations are optimized for distributed computing, whereas Pandas operations are primarily designed for single-machine processing. Be mindful of the potential differences in performance and resource utilization.

Conclusion

Utilizing Apache Arrow for converting Pandas to PySpark DataFrame offers several advantages. Firstly, Apache Arrow facilitates high-performance data interchange between Pandas and Spark by leveraging a common in-memory columnar format. This minimizes serialization and deserialization overhead, resulting in faster data processing. Additionally, Arrow’s compatibility with both Pandas and Spark ensures seamless integration, enabling efficient data transfer across the two frameworks.

Happy Learning!!
