Incompatible clusterIDs error in datanode logs


When your datanodes fail to start with an "Incompatible clusterIDs" error, it means the namenode was formatted without first deleting the data directories on the datanodes. The datanode log contains an entry similar to:

Incompatible clusterIDs in /tmp/hadoop-ubuntu/dfs/data: namenode clusterID = CID-7dc253be-a1e4-4bf6-b051-9f495185c892; datanode clusterID = CID-90f3ade0-0287-45be-a1db-e94cf5b3147d
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(

You get this error when the cluster ID of the namenode and the cluster ID of the datanode differ. The namenode's cluster ID is stored in its <>/current/VERSION file, and the datanode's cluster ID in its <>/current/VERSION file.
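A quick way to compare the two IDs is to grep the VERSION files. The sketch below builds sample VERSION files under /tmp/demo (hypothetical paths for illustration, using the clusterIDs from the log above); on a real cluster you would grep the actual <>/current/VERSION files instead:

```shell
# Demo with sample VERSION files; real ones live under
# <dfs dir>/current/VERSION on the namenode and datanode.
mkdir -p /tmp/demo/name/current /tmp/demo/data/current
printf 'namespaceID=1\nclusterID=CID-7dc253be-a1e4-4bf6-b051-9f495185c892\n' > /tmp/demo/name/current/VERSION
printf 'namespaceID=1\nclusterID=CID-90f3ade0-0287-45be-a1db-e94cf5b3147d\n' > /tmp/demo/data/current/VERSION

# Extract just the clusterID value from each file.
nn_id=$(grep '^clusterID=' /tmp/demo/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^clusterID=' /tmp/demo/data/current/VERSION | cut -d= -f2)
echo "namenode clusterID: $nn_id"
echo "datanode clusterID: $dn_id"

# If the two values differ, the datanode will refuse to start.
[ "$nn_id" = "$dn_id" ] || echo "Cluster IDs differ -> Incompatible clusterIDs error"
```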


Before formatting the namenode, delete the files under the <> directories on all datanodes.

Solutions to fix:

Below are two solutions; use the one that suits your needs.

Solution 1 => If you have valid data on the cluster and do not want to lose it, copy the clusterID value from the namenode's VERSION file and paste it into the datanode's VERSION file.
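Solution 1 can be scripted with sed. The sketch below works on a sample file under /tmp/demo2 (a hypothetical path for illustration); on a real cluster, stop the datanode first and edit its <>/current/VERSION file, using the clusterID taken from the namenode's VERSION file:

```shell
# Sample datanode VERSION file with the stale clusterID.
mkdir -p /tmp/demo2/data/current
printf 'clusterID=CID-90f3ade0-0287-45be-a1db-e94cf5b3147d\n' > /tmp/demo2/data/current/VERSION

# clusterID copied from the namenode's VERSION file (value from the log above).
NN_CID="CID-7dc253be-a1e4-4bf6-b051-9f495185c892"

# Overwrite the datanode's clusterID line in place (GNU sed).
sed -i "s/^clusterID=.*/clusterID=$NN_CID/" /tmp/demo2/data/current/VERSION
grep clusterID /tmp/demo2/data/current/VERSION
```

After the edit, restart the datanode and it should register with the namenode again.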

Solution 2 => Delete all files from the <> directory of the datanode and the <> directory of the namenode, then format the namenode using the command below:

hdfs namenode -format
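The full Solution 2 procedure might look like the following sketch. It assumes a single-node setup with the /tmp/hadoop-ubuntu layout from the log above and the standard start-dfs.sh/stop-dfs.sh scripts from $HADOOP_HOME/sbin; substitute your own configured name and data directories, and note that this deletes all HDFS data:

```shell
# Stop HDFS before touching the storage directories.
$HADOOP_HOME/sbin/stop-dfs.sh

# WARNING: this destroys all HDFS data. Paths are from the log above;
# use your own dfs.namenode.name.dir / dfs.datanode.data.dir values.
rm -rf /tmp/hadoop-ubuntu/dfs/name/* /tmp/hadoop-ubuntu/dfs/data/*

# Reformat the namenode (generates a fresh clusterID) and restart.
hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh
```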

If this resolved your issue, please leave us a comment. It would be helpful for others.


Naveen (NNK) runs a Big Data and Spark examples community page; all examples are simple, easy to understand, and well tested in our development environment.


This Post Has 6 Comments

  1. Anonymous

    Hey it worked for me!

    1. NNK

      Glad it helped you and thanks for the comment.

  2. Anonymous

    yes it helped for sure

  3. Green Vetal

    Yes it helped for sure 🙂

  4. Anonymous

    You saved me, thank you!

  5. Rakendu

    Thanks! This saved me! I went for solution 2. I just deleted the datanode and namenode folders under the nodes folder. Thank you!