After setting up a new Spark cluster running on YARN, I came across the warning “Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources” while running a Spark application via spark-submit; I also tried with PySpark and got the same warning.

As you may have noticed, when you get this warning your application keeps running and waits for resources indefinitely; you need to kill the job in order to terminate it.
The issue is mainly caused by resources not being available on the cluster to run your application. I resolved it and was able to run the Spark application on the cluster successfully, so I thought it would be helpful to share the fixes here.
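Since the application just sits and waits, you have to kill it yourself. On YARN you can do that from the command line; the application ID below is only a placeholder for the one reported by yarn application -list.
# List running YARN applications, then kill the stuck one
yarn application -list
yarn application -kill application_1234567890_0001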
1. Disable dynamic resource allocation
With dynamic allocation enabled (the default on many managed YARN distributions), Spark requests and releases executors at runtime based on the nodes, cores, and memory available to YARN.
When you have a small cluster with limited resources, you can disable this option and allocate resources explicitly as per your need. Make sure to request fewer resources than are actually available to YARN.
spark-submit --conf spark.dynamicAllocation.enabled=false
When you turn off dynamic allocation, you need to allocate the resources explicitly. Below is the spark-submit command for a Python (PySpark) application with a minimal configuration.
spark-submit --deploy-mode cluster --master yarn \
--driver-memory 3g --executor-memory 3g \
--num-executors 2 --executor-cores 2 \
--conf spark.dynamicAllocation.enabled=false \
readcsv.py
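For reference, readcsv.py in the command above could be as simple as the sketch below; the input path is a hypothetical placeholder, so point it at a CSV file that actually exists on your cluster.
# readcsv.py - minimal PySpark job submitted with the command above
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ReadCSV").getOrCreate()

# Read a CSV with a header row and let Spark infer the column types
df = spark.read.option("header", "true").option("inferSchema", "true").csv("/tmp/sample.csv")
df.show(5)

spark.stop()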
Below is the spark-submit command for a Scala application with a minimal configuration.
spark-submit --deploy-mode cluster --master yarn \
--driver-memory 3g --executor-memory 3g \
--num-executors 2 --executor-cores 2 \
--conf spark.dynamicAllocation.enabled=false \
--class com.example.ReadCSV \
application.jar
2. Disable dynamic allocation programmatically
You can also turn off dynamic resource allocation programmatically when you build the Spark configuration.
# Disabling dynamic allocation programmatically (PySpark)
from pyspark import SparkConf

conf = SparkConf().setAppName("SparkByExamples.com") \
    .set("spark.shuffle.service.enabled", "false") \
    .set("spark.dynamicAllocation.enabled", "false")
3. Starting the worker (slave) servers
If you are running a Spark standalone cluster, you can also get this warning when your worker (slave) processes are not running.
Start each worker by passing the master URL to the start script; replace <master-host> with your master's hostname.
# Starting a worker (slave) and registering it with the master
start-slave.sh spark://<master-host>:7077
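If the master itself is not running yet, start it first; the spark:// URL it listens on (port 7077 by default) appears in the master log and on its web UI at http://<master-host>:8080, where you can also confirm that the worker has registered with free cores and memory.
# Start the standalone master first (it logs the spark:// master URL)
start-master.sh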
Hope you are able to resolve this error and run your application successfully.
Happy Learning !!
Related Articles
- Spark – Different Types of Issues While Running in Cluster?
- Spark Deploy Modes – Client vs Cluster Explained
- Spark Shell Command Usage with Examples
- Spark Get Current Number of Partitions of DataFrame
- Spark – Extract DataFrame Column as List
- Spark Kill Running Application or Job?
- How to Submit a Spark Job via Rest API?