
I'm using Spark 1.3.0 and Python. I have a DataFrame and I wish to add a column that is derived from other columns, like this:

>>> old_df.columns
[col_1, col_2, ..., col_m]

>>> new_df.columns
[col_1, col_2, ..., col_m, col_n]


where

col_n = col_3 - col_4


How do I do this in PySpark?

1 Answer


What you are looking for can be achieved with the withColumn method:

old_df = sqlContext.createDataFrame(
    sc.parallelize([(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))

new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)

Alternatively, you can also use SQL on a registered table:

old_df.registerTempTable('old_df')

new_df = sqlContext.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df')

