0 votes
2 views
in Big Data Hadoop & Spark by (11.4k points)

I am just getting the hang of Spark, and I have a function that needs to be mapped to an RDD, but it uses a global dictionary:

from pyspark import SparkContext

sc = SparkContext('local[*]', 'pyspark')

my_dict = {"a": 1, "b": 2, "c": 3, "d": 4} # at no point will be modified
my_list = ["a", "d", "c", "b"]

def my_func(letter):
    return my_dict[letter]

my_list_rdd = sc.parallelize(my_list)

result = my_list_rdd.map(lambda x: my_func(x)).collect()

print(result)


The above gives the expected result; however, I am really not sure about my use of the global variable my_dict. It seems that a copy of the dictionary is made for every partition, and that just does not feel right.

It looks like broadcast is what I am looking for. However, when I try to use it:

my_dict_bc = sc.broadcast(my_dict)

def my_func(letter):
    return my_dict_bc[letter] 


I get the following error:

TypeError: 'Broadcast' object has no attribute '__getitem__'

1 Answer

0 votes
by (32.3k points)

Let me remind you of something very important about Broadcast objects: they have a property called value where the data is stored.

Therefore you have to modify my_func to something like this:

 

my_dict_bc = sc.broadcast(my_dict)

def my_func(letter):
    return my_dict_bc.value[letter]
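
For completeness, here is a sketch of how your original snippet looks with the broadcast variable in place (same names as in your question):

from pyspark import SparkContext

sc = SparkContext('local[*]', 'pyspark')

my_dict = {"a": 1, "b": 2, "c": 3, "d": 4}
my_list = ["a", "d", "c", "b"]

# Broadcast the read-only dictionary once so every executor reuses the same copy
my_dict_bc = sc.broadcast(my_dict)

def my_func(letter):
    # Access the broadcast data through its .value property
    return my_dict_bc.value[letter]

my_list_rdd = sc.parallelize(my_list)
result = my_list_rdd.map(my_func).collect()
print(result)  # [1, 4, 3, 2]

Broadcasting avoids shipping the dictionary inside every task's closure; each worker caches a single copy and reuses it across tasks.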
