I'm using Spark 1.3.0 with Python. I have a DataFrame, and I want to add a new column that is derived from other columns, like this:
>>> old_df.columns
['col_1', 'col_2', ..., 'col_m']
>>> new_df.columns
['col_1', 'col_2', ..., 'col_m', 'col_n']
where
col_n = col_3 - col_4
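For concreteness, here's a minimal sketch of the kind of setup I have (the data and column names below are toy placeholders, not my real schema):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="derived-column-example")
sqlContext = SQLContext(sc)  # DataFrames hang off SQLContext in Spark 1.3

# Toy stand-in for old_df; the real one has columns col_1 ... col_m
old_df = sqlContext.createDataFrame(
    [(1, 2, 10, 4),
     (5, 6, 20, 8)],
    ['col_1', 'col_2', 'col_3', 'col_4'])

# Desired: new_df = old_df plus an extra column col_n, where
# col_n = col_3 - col_4 (i.e. 6 and 12 for the two rows above)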
How do I do this in PySpark?