0 votes
in Big Data Hadoop & Spark by (11.4k points)

I have a Weka model stored in S3 which is around 400MB in size. Now, I have a set of records on which I want to run the model and perform predictions.

To perform the predictions, here is what I have tried:

  1. Download and load the model on the driver as a static object, broadcast it to all executors, and perform a map operation on the prediction RDD. ----> Not working, because in Weka the model object needs to be modified while performing a prediction, whereas a broadcast variable provides only a read-only copy.
  2. Download and load the model on the driver as a static object and send it to the executors in each map operation. -----> Working, but not efficient, as each map operation passes the 400MB object.
  3. Download the model on the driver, then load it on each executor and cache it there. (Don't know how to do that)

Does someone have an idea how I can load the model on each executor once and cache it there, so that I don't load it again for other records?

1 Answer

0 votes
by (32.3k points)

To solve your problem, you have two options:


1. Try to create a singleton object with a lazy val representing the data, as given below:

    object WekaModel {
        lazy val data = {
            // initialize data here. This will only happen once per JVM process
        }
    }

Then try to use the lazy val in your map function. The lazy val makes sure that each worker JVM initializes its own instance of the data. No serialization or broadcast will be performed for data:

    elementsRDD.map { element =>
        // use WekaModel.data here
    }

This is more efficient, as it allows you to initialize your data once per JVM instance. This approach is actually a very good choice when you need to initialize a database connection pool, for example.
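To illustrate the once-per-JVM behavior of the lazy val, here is a minimal plain-Scala sketch (no Spark involved; ExpensiveResource, initCount, and the byte array are hypothetical stand-ins for loading the 400MB Weka model):

```scala
// Hypothetical sketch: a lazy val inside a singleton object is initialized
// at most once per JVM, no matter how many tasks touch it.
object ExpensiveResource {
  var initCount = 0 // counter kept only to demonstrate the behavior

  lazy val data: Array[Byte] = {
    initCount += 1        // this body runs exactly once per JVM process
    new Array[Byte](1024) // stand-in for loading the large model
  }
}

object LazyValDemo extends App {
  // Simulate many map tasks in the same JVM reading the resource.
  val sizes = (1 to 100).map(_ => ExpensiveResource.data.length)
  assert(sizes.forall(_ == 1024))
  assert(ExpensiveResource.initCount == 1) // initialized only once
}
```

Every task in the same executor JVM sees the same initialized instance; a fresh JVM (i.e., another executor) triggers its own one-time initialization.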



However, this gives you less control over initialization; e.g., it gets trickier to initialize your object if you require runtime parameters.

You also cannot free or release the object if you need to. Usually, that is acceptable, as the OS will free up the resources when the process exits.

2. Use the mapPartitions (or foreachPartition) method on the RDD instead of just map.

This allows you to initialize whatever you need for the entire partition.


    elementsRDD.mapPartitions { elements =>
        val model = new WekaModel()
        elements.map { element =>
            // use model and element. There is a single instance of model per partition.
        }
    }

This provides more flexibility in the initialization and deinitialization of objects.



However, each partition will create and initialize a new instance of your object. Depending on how many partitions you have per JVM instance, this may or may not be an issue.
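To make the per-partition instantiation concrete, here is a hedged plain-Scala sketch that simulates partitions as iterators, the same shape mapPartitions hands to your function (FakeModel and its predict method are hypothetical stand-ins; Spark itself is not involved):

```scala
// Hypothetical sketch: one model instance is created per partition,
// not per element, when the construction happens inside mapPartitions.
class FakeModel {
  FakeModel.instances += 1            // count constructions for the demo
  def predict(x: Int): Int = x * 2    // stand-in for Weka classification
}
object FakeModel { var instances = 0 }

object MapPartitionsDemo extends App {
  // Two "partitions" of three elements each, delivered as iterators,
  // mirroring what Spark passes to mapPartitions.
  val partitions = Seq(Iterator(1, 2, 3), Iterator(4, 5, 6))

  val results = partitions.flatMap { elements =>
    val model = new FakeModel()          // constructed once per partition
    elements.map(e => model.predict(e))  // reused for every element in it
  }

  assert(results == Seq(2, 4, 6, 8, 10, 12))
  assert(FakeModel.instances == 2) // one instance per partition, not six
}
```

With six elements in two partitions, only two model instances are built; the per-element map inside the partition reuses the local instance, which is exactly the caching effect asked about in the question.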
