
My code is:

import org.apache.spark.SparkContext


It runs in interactive mode, but when I compile it with scalac I get the following error message:

object apache is not a member of package org

This seems to be a path problem, but I do not know exactly how to configure the path.

1 Answer


Looking at your problem, you need to specify the path of the libraries used when compiling your Scala code. This is usually not done by hand; a build tool such as Maven or sbt will do the job. You can find a minimal sbt setup at http://spark.apache.org/docs/1.2.0/quick-start.html#self-contained-applications
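
For reference, a minimal build.sbt along these lines is usually enough. The project name, Scala version and Spark version below are placeholders; match them to your own environment:

name := "simple-spark-app"

version := "1.0"

// Scala and Spark versions are examples; use the ones your cluster runs
scalaVersion := "2.11.12"

// "provided": Spark is needed at compile time but supplied by the cluster at run time
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.8" % "provided"

Running sbt package then compiles against Spark and produces a jar that does not bundle it.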

You can also get this error message if you have the wrong scope for the Spark dependency, for example:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>${spark.version}</version>
  <scope>test</scope> <!-- not available during the compile phase -->
</dependency>

This dependency won't work in your case. Instead, I would suggest the following:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>${spark.version}</version>
  <scope>provided</scope>
</dependency>

This will work and will not include Spark in your "uber jar", which is almost certainly what you want.
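
With the provided scope, Spark must be on the classpath at run time; in practice you run the packaged jar through spark-submit, which supplies the Spark libraries for you. Something along these lines, where the class name and jar path are only examples:

# class name and jar path are placeholders for your own application
spark-submit --class com.example.SimpleApp target/simple-spark-app-1.0.jar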
