
0 votes
2 views
in Big Data Hadoop & Spark by (11.4k points)

I'm using Spark 1.3.0 and Python. I have a DataFrame and I wish to add an additional column that is derived from other columns, like this:

>>> old_df.columns
['col_1', 'col_2', ..., 'col_m']

>>> new_df.columns
['col_1', 'col_2', ..., 'col_m', 'col_n']


where

col_n = col_3 - col_4


How do I do this in PySpark?

1 Answer

0 votes
by (32.3k points)

What you are looking for can be achieved using the withColumn method:

# Build a small example DataFrame with two integer columns.
old_df = sqlContext.createDataFrame(
    sc.parallelize([(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))

# withColumn returns a new DataFrame with the derived column appended;
# DataFrames are immutable, so old_df itself is unchanged.
new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)
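If you would rather not reference the DataFrame variable inside the expression, the same derived column can be built from column expressions in pyspark.sql.functions (available since Spark 1.3). A minimal sketch, reusing old_df from above:

from pyspark.sql.functions import col

# col('...') resolves the column by name, so the expression does not need
# a reference to old_df itself; handy when expressions are built generically.
new_df = old_df.withColumn('col_n', col('col_1') - col('col_2'))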

Alternatively, you can use SQL on a registered temporary table:

# Register the DataFrame as a temporary table so it can be queried with SQL.
old_df.registerTempTable('old_df')

new_df = sqlContext.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df')
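For a one-liner without registering a table, selectExpr accepts the same SQL fragment:

new_df = old_df.selectExpr('*', 'col_1 - col_2 AS col_n')

And when the derivation needs arbitrary Python logic rather than a SQL expression, a UDF works too. A sketch under the same setup (the diff helper name is just for illustration):

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# A Python UDF is flexible, but rows are serialized through Python, so it
# is slower than the built-in column arithmetic shown above.
diff = udf(lambda a, b: a - b, IntegerType())
new_df = old_df.withColumn('col_n', diff(old_df.col_1, old_df.col_2))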
