
I'm using Scala to create and run a Spark application locally.

My build.sbt:

name := "SparkDemo"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" exclude("org.apache.hadoop", "hadoop-client")
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.2.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0" excludeAll ExclusionRule(organization = "org.eclipse.jetty")
libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"
mainClass in Compile := Some("demo.TruckEvents")


At runtime I get this exception:

Exception in thread "main" java.lang.ExceptionInInitializerError during calling of... Caused by: java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package

The exception is triggered here:

import org.apache.spark.SparkContext

val sc = new SparkContext("local", "HBaseTest")


I am using the IntelliJ Scala/SBT plugin.

1 Answer


Since you mentioned you are using IntelliJ IDEA, I would suggest you try the following:

  1. Right-click the project root folder and choose Open Module Settings.

  2. In the window that opens, select Modules in the left navigation column.

  3. In the rightmost column, open the Dependencies tab and find Maven: javax.servlet:servlet-api:2.5.

  4. Finally, press ALT+Down to move this item to the bottom of the list.

This should solve your problem. The SecurityException is raised because two copies of the javax.servlet classes end up on the classpath, one of them signed and one not; moving the unsigned servlet-api:2.5 entry to the bottom lets the JVM resolve those classes from the signed jar first, so the signer information stays consistent.
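If you prefer to fix this in the build rather than in the IDE, the same idea can be expressed in build.sbt: make sure only one copy of the javax.servlet classes reaches the classpath. A minimal sketch, assuming the unsigned duplicate is pulled in transitively through the Hadoop and HBase artifacts (verify against your own dependency tree first):

// Hedged sketch: exclude the unsigned javax.servlet duplicates that
// hadoop-common and hbase-server pull in transitively, leaving the
// signed servlet classes that spark-core brings in via Jetty.
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0" excludeAll(
  ExclusionRule(organization = "org.eclipse.jetty"),
  ExclusionRule(organization = "javax.servlet")
)
libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2" excludeAll(
  ExclusionRule(organization = "org.mortbay.jetty"),
  ExclusionRule(organization = "javax.servlet")
)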

For more details, check out this link: http://wpcertification.blogspot.ru/2016/01/spark-error-class-javaxservletfilterreg.html
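Independent of either fix, a quick diagnostic (my own sketch, not from the linked post) is to print which jar the offending class is actually loaded from:

import javax.servlet.FilterRegistration

// Prints the jar that javax.servlet.FilterRegistration is loaded from.
// If this still points at servlet-api-2.5 rather than Spark's Jetty
// "orbit" servlet jar, the reordering/exclusions have not taken effect.
object WhereIsFilterRegistration extends App {
  println(classOf[FilterRegistration].getProtectionDomain.getCodeSource.getLocation)
}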
