in Big Data Hadoop & Spark by (11.4k points)

I want to change the names of two columns using the Spark withColumnRenamed function. Of course, I can write:

data = sqlContext.createDataFrame([(1,2), (3,4)], ['x1', 'x2'])
data = (data
       .withColumnRenamed('x1','x3')
       .withColumnRenamed('x2', 'x4'))


but I want to do this in one step (given a list/tuple of new names). Unfortunately, neither this:

data = data.withColumnRenamed(['x1', 'x2'], ['x3', 'x4'])


nor this:

data = data.withColumnRenamed(('x1', 'x2'), ('x3', 'x4'))


works. Is it possible to do it that way?

1 Answer

by (32.3k points)

As far as I know, it is not possible to rename multiple columns with a single withColumnRenamed call.

I would suggest using the DataFrame.toDF method instead. Do this:

data.toDF('x3', 'x4')

or

new_names = ['x3', 'x4']
data.toDF(*new_names)
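
For instance, with the sample frame from the question, the new column names can be checked like this (a minimal sketch, reusing the sqlContext from the question):

data = sqlContext.createDataFrame([(1, 2), (3, 4)], ['x1', 'x2'])
new_names = ['x3', 'x4']
renamed = data.toDF(*new_names)
print(renamed.columns)  # ['x3', 'x4']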

You can also rename columns using a simple select:

from pyspark.sql.functions import col

mapping = dict(zip(['x1', 'x2'], ['x3', 'x4']))
data.select([col(c).alias(mapping.get(c, c)) for c in data.columns])
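
Columns that are not listed in the mapping keep their original names, since mapping.get(c, c) falls back to the existing name. For example, with a hypothetical extra column x5 (a sketch, reusing the sqlContext from the question):

data2 = sqlContext.createDataFrame([(1, 2, 5)], ['x1', 'x2', 'x5'])
renamed2 = data2.select([col(c).alias(mapping.get(c, c)) for c in data2.columns])
print(renamed2.columns)  # ['x3', 'x4', 'x5'] -- x5 is left untouched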

Similarly, in Scala:

To rename all columns, do:

val newNames = Seq("x3", "x4")
data.toDF(newNames: _*)

To rename from a mapping with select:

val mapping = Map("x1" -> "x3", "x2" -> "x4")

data.select(
  data.columns.map(c => data(c).alias(mapping.getOrElse(c, c))): _*
)

Or you can use foldLeft + withColumnRenamed:

mapping.foldLeft(data){
  case (data, (oldName, newName)) => data.withColumnRenamed(oldName, newName)
}
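
The same fold idea carries over to Python with functools.reduce, chaining one withColumnRenamed call per entry of the mapping dict defined earlier (a sketch, not the only way to write it):

from functools import reduce

data = reduce(
    lambda df, names: df.withColumnRenamed(names[0], names[1]),
    mapping.items(),
    data
)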

Note: this is not to be confused with RDD.toDF, which is not a variadic function and takes the column names as a list.
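
As a quick illustration of the difference (a sketch, assuming a SparkContext named sc is available alongside the sqlContext from the question):

rdd = sc.parallelize([(1, 2), (3, 4)])
df_from_rdd = rdd.toDF(['x3', 'x4'])  # RDD.toDF takes the column names as one list
df_renamed = data.toDF('x3', 'x4')    # DataFrame.toDF takes them as separate arguments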
