
In this Spark article, you will learn how to convert a Parquet file to CSV file format with a Scala example. To do the conversion, we first read the Parquet file into a DataFrame and then write that DataFrame out as a CSV file.

What is Apache Parquet

Apache Parquet is a columnar file format that provides optimizations to speed up queries. It is far more efficient than row-based formats such as CSV or JSON and is supported by many data processing systems.

It is compatible with most of the data processing frameworks in the Hadoop ecosystem. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.

Spark SQL provides support for both reading and writing Parquet files and automatically preserves the schema of the original data. Storing data in Parquet format also reduces data storage, by about 75% on average. Spark supports Parquet out of the box, so we don't need to add any dependency libraries.
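
For example, writing an existing DataFrame back out as Parquet is a one-liner. The sketch below assumes you already have a DataFrame df; the output path is illustrative, and the snappy codec shown is Spark's default for Parquet.

// Minimal sketch (assumes df already exists; the path is illustrative)
df.write
  .option("compression", "snappy") // snappy is the default Parquet codec
  .parquet("/tmp/parquet/zipcodes.parquet")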

Apache Parquet Advantages:

Below are some of the advantages of using Apache Parquet. Combining these benefits with Spark improves performance and makes it easier to work with structured files; a short column-pruning sketch follows the list.

  • Reduces I/O operations.
  • Fetches only the specific columns you need to access.
  • Consumes less storage space.
  • Supports type-specific encoding.
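
As a minimal sketch of column pruning, only the selected columns are read from the Parquet file; the column names below are assumed for illustration and may differ from your file's schema.

// Only the selected columns are read from the Parquet file (column pruning)
val citiesDF = spark.read.parquet("src/main/resources/zipcodes.parquet")
  .select("Zipcode", "City", "State") // hypothetical column names
citiesDF.show()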

Reading Parquet file into DataFrame

Spark DataFrameReader provides the parquet() function (spark.read.parquet) to read Parquet files and create a Spark DataFrame. In this example, we read data from an Apache Parquet file.


val df = spark.read.parquet("src/main/resources/zipcodes.parquet")

Alternatively, you can also write the above statement as


  //read parquet file
  val df = spark.read.format("parquet")
    .load("src/main/resources/zipcodes.parquet")
  df.show()
  df.printSchema()

If you want to read more on Parquet, I would recommend checking out how to read and write a Parquet file with a specific schema, along with the required dependencies and how to use partitions.
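
As a quick sketch of reading a Parquet file with an explicit schema, here is one way to do it; the field names and types below are assumptions for illustration, not the actual schema of zipcodes.parquet.

import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical schema; adjust field names and types to match your file
val zipSchema = StructType(Seq(
  StructField("Zipcode", IntegerType, nullable = true),
  StructField("City", StringType, nullable = true),
  StructField("State", StringType, nullable = true)
))

val dfWithSchema = spark.read.schema(zipSchema)
  .parquet("src/main/resources/zipcodes.parquet")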

Spark Convert Parquet to CSV file

In the previous section, we read the Parquet file into a DataFrame; now let's convert it to CSV by saving it in CSV file format using dataframe.write.csv("path").


df.write
  .option("header", "true")
  .csv("/tmp/csv/zipcodes.csv")

In this example, we used the header option to write the CSV file with a header row. Spark also supports several other options for reading and writing CSV files.
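
For instance, a few of the commonly used CSV writer options are sketched below; the values shown are illustrative, not required.

// Sketch of a few common CSV write options; values are illustrative
df.write
  .option("header", "true")       // write column names as the first line
  .option("delimiter", "|")       // use a pipe instead of a comma
  .option("nullValue", "NA")      // string written for null values
  .option("compression", "gzip")  // compress the output part files
  .csv("/tmp/csv/zipcodes_piped")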

Complete Example to convert Parquet file to CSV file format


package com.sparkbyexamples.spark.dataframe

import org.apache.spark.sql.{SaveMode, SparkSession}

object ParquetToCsv extends App {

  val spark: SparkSession = SparkSession.builder()
    .master("local[1]")
    .appName("SparkByExamples.com")
    .getOrCreate()

  spark.sparkContext.setLogLevel("ERROR")

  //read parquet file
  val df = spark.read.format("parquet")
    .load("src/main/resources/zipcodes.parquet")
  df.show()
  df.printSchema()

  //convert to csv
  df.write.mode(SaveMode.Overwrite)
    .csv("/tmp/csv/zipcodes.csv")
}

Conclusion

In this Spark article, you have learned how to convert a Parquet file to CSV file format with Scala examples. Note that we don't convert directly from Parquet to CSV; we first read the Parquet file into a DataFrame, and that DataFrame can then be saved in any format Spark supports.
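
For instance, the same DataFrame could just as well be written out in other formats; the output paths below are illustrative.

// Sketch: saving the same DataFrame as JSON or ORC
df.write.mode(SaveMode.Overwrite).json("/tmp/json/zipcodes")
df.write.mode(SaveMode.Overwrite).orc("/tmp/orc/zipcodes")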

Happy Learning !!
