Should the value of spark.yarn.executor.memoryOverhead in a Spark job on YARN be allocated to the application, or is it just the max value?

1 Answer


Simply use spark.yarn.executor.memoryOverhead

It is just the max value. The goal is to calculate the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.

You should also look at:

--executor-memory / spark.executor.memory

It controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384 MB).
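For a quick sanity check, here is a minimal Python sketch of that arithmetic. It mirrors the documented default formula rather than Spark's actual source code, and the function name and sample values are only illustrative:

```python
# Documented default: overhead = max(executor memory * 0.10, 384 MB),
# added on top of the executor heap when YARN containers are requested.
MIN_OVERHEAD_MB = 384        # documented floor, in MB
OVERHEAD_FRACTION = 0.10     # documented default fraction

def yarn_container_request_mb(executor_memory_mb, overhead_mb=None):
    """Rough total memory YARN is asked for per executor, in MB."""
    if overhead_mb is None:
        overhead_mb = max(int(executor_memory_mb * OVERHEAD_FRACTION), MIN_OVERHEAD_MB)
    return executor_memory_mb + overhead_mb

print(yarn_container_request_mb(4096))   # 4096 + 409 = 4505 (10% rule applies)
print(yarn_container_request_mb(2048))   # 2048 + 384 = 2432 (384 MB floor applies)
```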

Each executor's memory allocation is therefore spark.executor.memory plus the overhead defined by spark.yarn.executor.memoryOverhead, as sketched below.
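As a rough sketch, here is one way those two properties might be set explicitly when building a PySpark session. The application name and memory sizes are illustrative assumptions, not recommendations, and they must fit within your cluster's YARN container limits; note also that newer Spark versions rename the property to spark.executor.memoryOverhead:

```python
from pyspark.sql import SparkSession

# Illustrative values only; the total per executor must stay within
# YARN's maximum container size on your cluster.
spark = (
    SparkSession.builder
    .appName("memory-overhead-example")
    .config("spark.executor.memory", "4g")                 # executor JVM heap
    .config("spark.yarn.executor.memoryOverhead", "512")   # off-heap overhead, in MB
    .getOrCreate()
)
# YARN is asked for roughly 4g + 512 MB per executor container.
```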

Hope this answer will help you!

