
I have the following command line to start a Spark Streaming job.

    spark-submit --class com.biz.test \
            --packages \
                org.apache.spark:spark-streaming-kafka_2.10:1.3.0 \
                org.apache.hbase:hbase-common:1.0.0 \
                org.apache.hbase:hbase-client:1.0.0 \
                org.apache.hbase:hbase-server:1.0.0 \
                org.json4s:json4s-jackson:3.2.11 \
            ./test-spark_2.10-1.0.8.jar \

            >spark_log 2>&1 &
The job fails to start with the following error:

Exception in thread "main" java.lang.IllegalArgumentException: Given path is malformed: org.apache.hbase:hbase-common:1.0.0
    at org.apache.spark.util.Utils$.resolveURI(Utils.scala:1665)
    at org.apache.spark.deploy.SparkSubmitArguments.parse$1(SparkSubmitArguments.scala:432)
    at org.apache.spark.deploy.SparkSubmitArguments.parseOpts(SparkSubmitArguments.scala:288)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:87)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:105)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

1 Answer


Always keep in mind that the list of packages must be comma-separated with no whitespace (breaking lines with a trailing backslash works just fine). For example:

--packages org.apache.spark:spark-streaming-kafka_2.10:1.3.0,\
org.apache.hbase:hbase-common:1.0.0
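
Applied to the command from the question, the whole submit call would look roughly like this (a sketch reusing the class, artifact coordinates, and jar name from the question; adjust versions and paths for your environment). The continuation lines carrying the remaining package coordinates start at column 0 on purpose: any whitespace after the escaped newline would split --packages into separate arguments, which is exactly what triggers the "Given path is malformed" error.

# Single --packages value: coordinates joined by commas, continuation lines flush left
spark-submit --class com.biz.test \
    --packages org.apache.spark:spark-streaming-kafka_2.10:1.3.0,\
org.apache.hbase:hbase-common:1.0.0,\
org.apache.hbase:hbase-client:1.0.0,\
org.apache.hbase:hbase-server:1.0.0,\
org.json4s:json4s-jackson:3.2.11 \
    ./test-spark_2.10-1.0.8.jar \
    >spark_log 2>&1 &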
