Asked in Big Data Hadoop & Spark by (19k points)
What does this attribute (spark.driver.maxResultSize) do exactly? At first (since I am not battling a job that fails due to out-of-memory errors) I thought I should increase it.

On second thought, it seems that this attribute defines the maximum size of the result a worker can send to the driver, so leaving it at the default (1G) would be the best approach to protect the driver.

But what will happen in that case? Will the worker simply have to send more messages, so the only overhead is that the job runs slower?

If I understand correctly, assuming that a worker wants to send 4G of data to the driver, then having spark.driver.maxResultSize=1G will cause the worker to send 4 messages (instead of 1 with an unlimited spark.driver.maxResultSize). If so, then increasing that attribute to protect my driver from being killed by YARN should be wrong.

But the question above still remains: what if I set it to 1M (the minimum), would that be the most protective approach?

1 Answer

by (33.1k points)
Let's say a worker wants to send 4G of data to the driver; will having spark.driver.maxResultSize=1G cause the worker to send 4 messages (instead of 1 with an unlimited spark.driver.maxResultSize)?

No. If the estimated size of the data is larger than maxResultSize, the given job will be aborted. The goal here is to protect your application from driver loss, nothing more.
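To see what this guards against, here is a minimal PySpark sketch (the session and dataset names are illustrative, not taken from the question). The limit only matters when task results are actually pulled back to the driver, for example via collect():

```python
# Minimal sketch, assuming a local SparkSession; big_df is an illustrative dataset.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("maxResultSize-demo").getOrCreate()

big_df = spark.range(0, 500_000_000)  # a deliberately large dataset

# collect() serializes every task's result and ships it to the driver.
# If the total serialized size exceeds spark.driver.maxResultSize,
# Spark aborts the job rather than splitting the result into more messages.
rows = big_df.collect()
```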

A good value should allow the application to proceed normally while still protecting it from unexpected conditions.
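If you do decide to raise the limit, a minimal sketch of setting it explicitly when building the session (the value "2g" here is only an example, not a recommendation):

```python
# Minimal sketch, assuming you control session creation; "2g" is an illustrative value.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-app")
    # Default is 1g; setting it to 0 means unlimited, which removes the protection.
    .config("spark.driver.maxResultSize", "2g")
    .getOrCreate()
)
```

The same property can also be passed on the command line, e.g. spark-submit --conf spark.driver.maxResultSize=2g, or set in spark-defaults.conf.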

Hope this answer helps you!
