
In this Spark article, you will learn how to convert a Parquet file to Avro file format with a Scala example. To do the conversion, we first read the Parquet file into a DataFrame and then write that DataFrame out as an Avro file.

1. What is Apache Parquet

Apache Parquet is a columnar file format that provides optimizations to speed up queries. It is a far more efficient file format than CSV or JSON and is supported by many data processing systems.

It is compatible with most of the data processing frameworks in the Hadoop ecosystem. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.

Spark SQL provides support for both reading and writing Parquet files and automatically captures the schema of the original data. Storing data in Parquet can also reduce storage substantially (the commonly cited figure is about 75% on average). Spark supports Parquet out of the box, so we don't need to add any dependency libraries.

2. Apache Parquet Advantages

Below are some of the advantages of using Apache Parquet. Combining these benefits with Spark improves performance and gives the ability to work with structured files; a short sketch illustrating column pruning follows the list.

  • Reduces IO operations.
  • Fetches only the specific columns that you need to access.
  • Consumes less storage space.
  • Supports type-specific encoding.
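
To make the column-pruning benefit concrete, below is a minimal sketch, assuming an active SparkSession named spark (as in the snippets that follow) and that the sample zipcodes.parquet file contains State and Zipcode columns (the complete example later partitions on them):


// Because Parquet is columnar, selecting only the columns you need
// means Spark reads just those column chunks from disk
val prunedDf = spark.read.parquet("src/main/resources/zipcodes.parquet")
  .select("State", "Zipcode")

// The physical plan's ReadSchema entry lists only the selected columns
prunedDf.explain()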

3. Reading Parquet file into DataFrame

Spark DataFrameReader provides the parquet() function (spark.read.parquet) to read Parquet files into a Spark DataFrame. In this example, we read data from an Apache Parquet file.


// Reading Parquet file into DataFrame
val df = spark.read.parquet("src/main/resources/zipcodes.parquet")

Alternatively, you can also write the above statement as follows:


// Read parquet file
val df = spark.read.format("parquet")
  .load("src/main/resources/zipcodes.parquet")
df.show()
df.printSchema()

If you want to read more on Parquet, I would recommend checking out how to read and write a Parquet file with a specific schema, along with the required dependencies and how to use partitions.
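
As a brief taste of the schema-specific read covered there, you can pass an explicit schema so Spark uses it instead of the one stored in the Parquet footer. This is a minimal sketch; the column names and types are assumptions about the sample file:


import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical schema covering a subset of the zipcodes columns
val zipSchema = StructType(Seq(
  StructField("State", StringType, nullable = true),
  StructField("Zipcode", IntegerType, nullable = true)
))

// Spark reads only the columns named in the supplied schema
val dfWithSchema = spark.read.schema(zipSchema)
  .parquet("src/main/resources/zipcodes.parquet")
dfWithSchema.printSchema()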

4. Spark Convert Parquet to Avro file

In the previous section, we read the Parquet file into a DataFrame; now let's convert it to Avro by saving it in Avro file format. Before we start, let's first learn what Avro is and what its advantages are.

4.1 What is Apache Avro

Apache Avro is an open-source, row-based data serialization and data exchange framework for Hadoop projects. The Spark Avro connector (spark-avro) was originally developed by Databricks as an open-source library that supports reading and writing data in Avro file format. Avro is widely used with Apache Spark, especially in Kafka-based data pipelines. When Avro data is stored in a file, its schema is stored with it, so that files may be processed later by any program.

It was built to serialize and exchange big data between different Hadoop-based projects. It serializes data in a compact binary format, and the schema is in JSON format, defining the field names and data types.
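
For illustration, here is what such a JSON schema might look like for zipcodes-style records, held in a Scala string; the record and field names here are hypothetical:


// A hypothetical Avro schema (JSON) for a zipcodes-like record
val avroSchemaJson: String =
  """{
    |  "type": "record",
    |  "name": "ZipCode",
    |  "fields": [
    |    {"name": "State",   "type": "string"},
    |    {"name": "Zipcode", "type": "int"},
    |    {"name": "City",    "type": ["null", "string"], "default": null}
    |  ]
    |}""".stripMargin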

4.2 Avro Advantages

  • Supports complex data structures like arrays, maps, arrays of maps, and maps of arrays.
  • A compact binary serialization format that provides fast data transfer.
  • A row-based data serialization system.
  • Supports multiple languages, meaning data written in one language can be read by a different one.
  • Code generation is not required to read or write data files.
  • Simple integration with dynamic languages.

Since the Avro library is external to Spark, it doesn't provide an avro() function on DataFrameWriter; hence we should use the DataSource format "avro" or "org.apache.spark.sql.avro" to write a Spark DataFrame to an Avro file.
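
Because spark-avro ships separately from Spark, it must be on the classpath first. Below is a minimal sketch of the sbt dependency; the version is an assumption and should match your Spark build:


// build.sbt: the spark-avro version should match your Spark version (3.3.0 is an assumed example)
libraryDependencies += "org.apache.spark" %% "spark-avro" % "3.3.0"

// or at submit time (coordinates assume Scala 2.12):
// spark-submit --packages org.apache.spark:spark-avro_2.12:3.3.0 ...

With the dependency in place, writing the DataFrame to Avro is a one-liner: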


 df.write.format("avro").save("/tmp/avro/zipcodes.avro")

Spark DataFrameWriter provides the partitionBy() function to partition the Avro output at write time. Partitioning improves read performance by reducing disk I/O.


df.write.partitionBy("State","Zipcode")
        .format("avro").save("/tmp/avro/zipcodes_partition.avro")

If you want to read more on Avro, I would recommend checking out how to read and write an Avro file with a specific schema, along with the dependencies it needs.

5. Complete Example to convert Parquet file to Avro file format


package com.sparkbyexamples.spark.dataframe

import org.apache.spark.sql.{SaveMode, SparkSession}

object ParquetToAvro extends App {

  val spark: SparkSession = SparkSession.builder()
    .master("local[1]")
    .appName("SparkByExample")
    .getOrCreate()

  spark.sparkContext.setLogLevel("ERROR")

  // Read parquet file
  val df = spark.read.format("parquet")
    .load("src/main/resources/zipcodes.parquet")
  df.show()
  df.printSchema()

  // Convert to avro
  df.write.format("avro")
    .mode(SaveMode.Overwrite)
    .save("/tmp/avro/zipcodes.avro")
}
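
As a quick sanity check, you can read the Avro output back and compare it with the source DataFrame. A minimal sketch, assuming it is placed inside the same ParquetToAvro object so spark and df are in scope:


// Read the Avro output back and verify the row count matches the source
val avroDf = spark.read.format("avro").load("/tmp/avro/zipcodes.avro")
println(s"parquet rows: ${df.count()}, avro rows: ${avroDf.count()}")
avroDf.printSchema()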

Conclusion

In this Spark article, you have learned how to convert a Parquet file to Avro file format with Scala examples. Note that we don't convert directly from Parquet to Avro: we first read the Parquet file into a DataFrame, and the DataFrame can then be saved in any format Spark supports.

Happy Learning !!
