Asked in Big Data Hadoop & Spark by (11.4k points)

I'm trying to build a recommender using Spark and just ran out of memory:

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space


I'd like to increase the memory available to Spark by modifying the spark.executor.memory property, in PySpark, at runtime.

Is that possible? If so, how?

1 Answer

Answered by (32.3k points)

You can do this by modifying the settings for the Spark context. To do that, you need to stop the currently running context and then create a new one.

Once a SparkConf object is passed to Spark, it is cloned and can no longer be modified by the user. Spark does not support modifying the configuration at runtime.

So, after the shell is started, you have to stop the existing context before creating a new one with the larger memory setting.

For PySpark:

from pyspark import SparkContext

# Stop the shell's existing context first (sc.stop()), then set the
# executor memory as a system property before creating the new context.
SparkContext.setSystemProperty('spark.executor.memory', '2g')

sc = SparkContext("local", "App Name")


Reference: https://spark.apache.org/docs/0.8.1/python-programming-guide.html
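
If you prefer to pass the value through a SparkConf object rather than a system property, here is a minimal sketch. It assumes you are running in the pyspark shell, where a context named sc already exists and must be stopped first; the master and app name are placeholders.

from pyspark import SparkConf, SparkContext

# Stop the context the shell created so a new one can be configured
sc.stop()

# Build a fresh configuration with the desired executor memory
conf = (SparkConf()
        .setMaster("local")
        .setAppName("App Name")
        .set("spark.executor.memory", "2g"))

# Create a new context from the configuration
sc = SparkContext(conf=conf)

Either way, the key point is that the memory setting has to be in place before the new SparkContext is constructed, since the configuration cannot be changed once the context is running.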
