Kafka Delete Topic and its messages
Kafka Delete Topic - Every message Apache Kafka receives is stored in a log, and by…
This article provides step-by-step instructions on how to install, set up, and run Apache…
This post explains how to set up Apache Spark and run Spark applications on the Hadoop…
Kafka allows us to create our own serializer and deserializer so that we can produce and consume different data types like JSON, POJO, etc. In this post we will see how to produce and consume a User POJO object. To stream POJO objects, you need to create a custom serializer and deserializer.
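The full post works with a Java/Scala POJO and the Kafka client's Serializer/Deserializer interfaces; the sketch below only illustrates the underlying idea in Python, assuming a JSON wire format and a hypothetical `User` type with made-up fields. With the kafka-python library, these two functions could be passed as the `value_serializer` of a `KafkaProducer` and the `value_deserializer` of a `KafkaConsumer`.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical User object; the field names here are assumptions, not from the post.
@dataclass
class User:
    name: str
    age: int

def user_serializer(user: User) -> bytes:
    """Producer side: turn a User into the bytes Kafka stores in the topic."""
    return json.dumps(asdict(user)).encode("utf-8")

def user_deserializer(data: bytes) -> User:
    """Consumer side: turn the stored bytes back into a User."""
    return User(**json.loads(data.decode("utf-8")))

# Round trip: what the producer writes, the consumer reads back unchanged.
original = User(name="alice", age=30)
restored = user_deserializer(user_serializer(original))
```

The key design point is that Kafka itself only moves bytes; the serializer/deserializer pair is the contract that lets both ends agree on what those bytes mean.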
This article explains how to write a Kafka Producer and Consumer example in Scala. The producer sends messages to Kafka topics in the form of records; a record is a key-value pair along with a topic name, and the consumer receives messages from a topic.
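The article's example uses the real Kafka client in Scala against a running broker; the toy model below is only a conceptual sketch of the record semantics described above (a record is a key-value pair plus a topic name, appended to a per-topic log that consumers read by offset). `ToyBroker` is an illustrative stand-in, not the Kafka API, though `ProducerRecord` mirrors the name of the Java client's record class.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

# Minimal stand-in for a Kafka record: a key-value pair plus the topic name.
@dataclass
class ProducerRecord:
    topic: str
    key: Optional[str]
    value: str

class ToyBroker:
    """Toy in-memory 'broker': one append-only log per topic (no partitions)."""

    def __init__(self):
        self._logs = defaultdict(list)

    def send(self, record: ProducerRecord) -> int:
        """Append the record to its topic's log and return its offset."""
        log = self._logs[record.topic]
        log.append(record)
        return len(log) - 1

    def poll(self, topic: str, offset: int = 0):
        """Return every record in the topic from the given offset onward."""
        return self._logs[topic][offset:]

broker = ToyBroker()
broker.send(ProducerRecord("text_topic", key="k1", value="hello"))
broker.send(ProducerRecord("text_topic", key="k2", value="world"))
messages = [r.value for r in broker.poll("text_topic")]
```

This captures why consumers can replay a topic: records are never overwritten, only appended, and a consumer's position is just an offset into the log.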
If you are getting the exception below while setting up a Cassandra cluster, follow these steps…
In Spark or PySpark, the SparkSession object is created programmatically using SparkSession.builder(), and if you are…
Are you getting the warning "HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX." for every command you issue on a cluster? Follow the steps below to resolve it.
When your DataNodes are not starting due to a java.io.IOException: Incompatible clusterIDs error, it means you have formatted the NameNode without deleting the files from the DataNodes.
Let's see how to create a Spark RDD using the sparkContext.parallelize() method, using the Spark shell, and…