0 votes
in Big Data Hadoop & Spark by (11.4k points)

As a simplified example, I have a dataframe "df" with columns "col1" and "col2", and I want to compute the row-wise maximum after applying a function to each column:

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

def f(x):
    return x + 1

max_udf = udf(lambda x, y: max(x, y), IntegerType())
f_udf = udf(f, IntegerType())

df2 = df.withColumn("result", max_udf(f_udf(df.col1), f_udf(df.col2)))

So if df is:

col1   col2
1      2
3      0

then df2 should be:

col1   col2  result
1      2     3
3      0     4

The above doesn't work and fails with "Cannot evaluate expression: PythonUDF#f...".

I'm positive "f_udf" works fine on its own; the issue is with "max_udf".

Without creating extra columns or falling back to map/reduce, is there a way to do the above entirely with DataFrames and UDFs? How should I modify "max_udf"?

I've also tried:

max_udf = udf(max, IntegerType())


which produces the same error.

1 Answer

0 votes
by (32.3k points)

To pass multiple columns or a whole row to a UDF, use a struct:

from pyspark.sql.functions import udf, struct
from pyspark.sql.types import IntegerType

df = sqlContext.createDataFrame([(None, None), (1, None), (None, 2)], ("a", "b"))

# The UDF receives the struct as a single Row argument and can iterate over its fields.
count_empty_columns = udf(lambda row: len([x for x in row if x is None]), IntegerType())

new_df = df.withColumn("null_count", count_empty_columns(struct([df[x] for x in df.columns])))

new_df.show()

returns:

+----+----+----------+
|   a|   b|null_count|
+----+----+----------+
|null|null|         2|
|   1|null|         1|
|null|   2|         1|
+----+----+----------+
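
The same pattern fits your original question. As a minimal sketch (assuming the same f(x) = x + 1 from the question; row_max_udf is a hypothetical name), apply f to every field of the struct and take the maximum inside a single UDF:

from pyspark.sql.functions import udf, struct
from pyspark.sql.types import IntegerType

# Sketch: the UDF gets one Row, applies f(x) = x + 1 to each field,
# and returns the row-wise maximum in a single Python call.
row_max_udf = udf(lambda row: max(x + 1 for x in row), IntegerType())

df2 = df.withColumn("result", row_max_udf(struct(df.col1, df.col2)))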

As for why your version fails: a UserDefinedFunction cannot take another UDF call as its argument, so UDF calls cannot be nested this way. I would suggest modifying max_udf as below, folding f into it so that the whole computation happens inside one UDF:

# Fold f(x) = x + 1 into the max UDF itself instead of nesting UDF calls.
df = sc.parallelize([(1, 2), (3, 0)]).toDF(["col1", "col2"])

max_udf = udf(lambda x, y: max(x + 1, y + 1), IntegerType())

df2 = df.withColumn("result", max_udf(df.col1, df.col2))
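
As a side note (a minimal sketch, assuming Spark 1.5+), this particular row-wise maximum needs no Python UDF at all: f(x) = x + 1 is plain column arithmetic, and the built-in greatest function takes the row-wise max.

from pyspark.sql.functions import greatest

# No UDF: column arithmetic applies f, greatest() picks the row-wise maximum.
df2 = df.withColumn("result", greatest(df.col1 + 1, df.col2 + 1))

Avoiding the Python UDF keeps the computation inside Spark's optimizer instead of round-tripping each row through Python.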
