
I'm using Maven with the Scala archetype, and I'm getting this error:

“value $ is not a member of StringContext”

I already tried adding several things to pom.xml, but nothing worked.

My code:

import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
// To see fewer warnings
import org.apache.log4j._

// Start a simple Spark Session
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().getOrCreate()

// Prepare training and test data.
val data = spark.read.option("header","true").option("inferSchema","true").format("csv").load("USA_Housing.csv")

// Check out the Data

// See an example of what the data looks like
// by printing out a Row
val colnames = data.columns
val firstrow = data.head(1)(0)
println("Example Data Row")
for(ind <- Range(1, colnames.length)){
  println(colnames(ind))
  println(firstrow(ind))
  println("\n")
}

//// Setting Up DataFrame for Machine Learning ////

// A few things we need to do before Spark can accept the data!
// It needs to be in the form of two columns
// ("label","features")

// This will allow us to join multiple feature columns
// into a single column of an array of feature values

// Rename Price to label column for naming convention.
// Grab only numerical columns from the data
val df = data.select(data("Price").as("label"), $"Avg Area Income", $"Avg Area House Age", $"Avg Area Number of Rooms", $"Area Population")

// An assembler converts the input values to a vector
// A vector is what the ML algorithm reads to train a model

// Set the input columns from which we are supposed to read the values
// Set the name of the column where the vector will be stored
val assembler = new VectorAssembler().setInputCols(Array("Avg Area Income","Avg Area House Age","Avg Area Number of Rooms","Area Population")).setOutputCol("features")

// Use the assembler to transform our DataFrame to the two columns
val output = assembler.transform(df).select($"label",$"features")

// Create a Linear Regression Model object
val lr = new LinearRegression()

// Fit the model to the data

// Note: Later we will see why we should split
// the data first, but for now we will fit to all the data.
val lrModel = lr.fit(output)

// Print the coefficients and intercept for linear regression
println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}")

// Summarize the model over the training set and print out some metrics!
// Explore this in the spark-shell for more methods to call
val trainingSummary = lrModel.summary

println(s"numIterations: ${trainingSummary.totalIterations}")
println(s"objectiveHistory: ${trainingSummary.objectiveHistory.toList}")

println(s"RMSE: ${trainingSummary.rootMeanSquaredError}")
println(s"MSE: ${trainingSummary.meanSquaredError}")
println(s"r2: ${trainingSummary.r2}")

and my pom.xml is this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <description>My wonderful scala app</description>
      <name>My License</name>

    <!-- Test -->

        <!-- see -->
          <!-- If you have classpath issue like NoDefClassError,... -->
          <!-- useManifestOnlyJar>false</useManifestOnlyJar -->

I have no idea how to fix it. Does anybody have any ideas?

1 Answer


You simply need to import Spark implicits, which bring the `$"colName"` syntax into scope. Add this right after creating your session:

val spark = SparkSession.builder().getOrCreate()

import spark.implicits._ // << add this
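For context on why the error mentions `StringContext`: the `$"..."` column syntax is not part of the Scala language itself. `spark.implicits._` adds a `$` method to `StringContext` through an implicit class (`StringToColumn`), so without the import the compiler rightly complains that `$` is not a member. Here is a minimal sketch of the mechanism, using a hypothetical stand-in `Column` case class rather than Spark's own `org.apache.spark.sql.Column`:

```scala
// Stand-in for org.apache.spark.sql.Column, for illustration only.
case class Column(name: String)

object ColumnSyntax {
  // The implicit class adds a `$` method to StringContext,
  // so $"Price" compiles to StringContext("Price").$()
  implicit class StringToColumn(val sc: StringContext) extends AnyVal {
    def $(args: Any*): Column = Column(sc.s(args: _*))
  }
}

object Demo extends App {
  import ColumnSyntax._ // without this import: "value $ is not a member of StringContext"
  val c = $"Price"
  println(c.name) // prints "Price"
}
```

Spark's real `spark.implicits._` works the same way, which is why the import must come after the `SparkSession` is created: the implicits live on the session instance.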
