0 votes
in Big Data Hadoop & Spark by (11.4k points)

I'm having difficulty getting these components to knit together properly. I have Spark installed and working successfully; I can run jobs locally, standalone, and also via YARN. I have followed the steps advised (to the best of my knowledge) here and here

I'm working on Ubuntu, and the various component versions I have are:

I had some difficulty following the various steps, such as which jars to add to which path, so this is what I have added:

  • in /usr/local/share/hadoop-2.6.1/share/hadoop/mapreduce I have added mongo-hadoop-core-1.5.0-SNAPSHOT.jar
  • the following environment variables:

export HADOOP_HOME="/usr/local/share/hadoop-2.6.1"
export PATH=$PATH:$HADOOP_HOME/bin
export SPARK_HOME="/usr/local/share/spark-1.5.1-bin-hadoop2.6"
export PYTHONPATH="/usr/local/share/mongo-hadoop/spark/src/main/python"
export PATH=$PATH:$SPARK_HOME/bin

My Python program is basic

from pyspark import SparkContext, SparkConf
import pymongo_spark

# Required so that SparkContext gains the mongoRDD method
pymongo_spark.activate()

def main():
    conf = SparkConf().setAppName("pyspark test")
    sc = SparkContext(conf=conf)
    rdd = sc.mongoRDD(
        'mongodb://username:password@localhost:27017/mydb.mycollection')

if __name__ == '__main__':
    main()
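As an aside, sc.mongoRDD is not a built-in SparkContext method; pymongo_spark.activate() attaches it by monkey-patching the class at runtime. A minimal sketch of that pattern, using a stand-in class (FakeSparkContext and the method body are illustrative, not the library's actual code):

```python
class FakeSparkContext:
    """Stand-in for pyspark.SparkContext; has no mongoRDD until activate() runs."""
    pass

def activate():
    # Attach a mongoRDD method to the class, the way pymongo_spark.activate()
    # patches the real SparkContext.
    def mongoRDD(self, connection_string, config=None):
        # Illustrative body only; the real method builds an RDD from MongoDB.
        return ('mongo-rdd', connection_string)
    FakeSparkContext.mongoRDD = mongoRDD

activate()
sc = FakeSparkContext()
print(sc.mongoRDD('mongodb://localhost:27017/mydb.mycollection'))
# → ('mongo-rdd', 'mongodb://localhost:27017/mydb.mycollection')
```

Until activate() runs, calling sc.mongoRDD raises AttributeError, which is a common first stumbling block with pymongo_spark.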

I am running it using the command

$SPARK_HOME/bin/spark-submit --driver-class-path /usr/local/share/mongo-hadoop/spark/build/libs/ --master local[4] ~/sparkPythonExample/

and I am getting the following output as a result

Traceback (most recent call last):
  File "/home/me/sparkPythonExample/", line 24, in <module>
  File "/home/me/sparkPythonExample/", line 17, in main
    rdd = sc.mongoRDD('mongodb://username:password@localhost:27017/mydb.mycollection')
  File "/usr/local/share/mongo-hadoop/spark/src/main/python/", line 161, in mongoRDD
    return self.mongoPairRDD(connection_string, config).values()
  File "/usr/local/share/mongo-hadoop/spark/src/main/python/", line 143, in mongoPairRDD
  File "/usr/local/share/mongo-hadoop/spark/src/main/python/", line 80, in _ensure_pickles

1 Answer

0 votes
by (32.3k points)
edited by

The jar files you are passing to spark-submit are not uber jars, so they have additional dependencies that must be downloaded before they will work.

So, in your spark-submit command, try using the --packages option instead of --jars ...:

spark-submit --packages org.mongodb.mongo-hadoop:mongo-hadoop-core:1.3.1,org.mongodb:mongo-java-driver:3.1.0 [REST OF YOUR OPTIONS]
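Combined with the original options, the full invocation might look like the following (the script name here is a placeholder, since it was truncated in the question):

```shell
$SPARK_HOME/bin/spark-submit \
  --packages org.mongodb.mongo-hadoop:mongo-hadoop-core:1.3.1,org.mongodb:mongo-java-driver:3.1.0 \
  --master local[4] \
  ~/sparkPythonExample/your_script.py
```

With --packages, Spark resolves the named artifacts and their transitive dependencies from Maven Central, so you no longer need to manage the jar paths by hand.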

