

I've successfully created a row_number() window partitioned with partitionBy in Spark, but I would like to sort it in descending order instead of the default ascending. Here is my working code:

from pyspark.sql import HiveContext
from pyspark.sql.types import *
from pyspark.sql import Row, functions as F
from pyspark.sql.window import Window

data_cooccur.select("driver", "also_item", "unit_count",
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count")).alias("rowNum")).show()


That gives me this result:

 +------+---------+----------+------+
 |driver|also_item|unit_count|rowNum|
 +------+---------+----------+------+
 |   s10|      s11|         1|     1|
 |   s10|      s13|         1|     2|
 |   s10|      s17|         1|     3|


And here I add the desc() to order descending:

data_cooccur.select("driver", "also_item", "unit_count", F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count").desc()).alias("rowNum")).show()


And get this error:

AttributeError: 'WindowSpec' object has no attribute 'desc'

1 Answer


desc should be applied to a column, not to a window definition. You can either apply the method on the column:

from pyspark.sql.functions import col  

F.rowNumber().over(Window.partitionBy("driver").orderBy(col("unit_count").desc()))

Or use the standalone desc function:

from pyspark.sql.functions import desc

F.rowNumber().over(Window.partitionBy("driver").orderBy(desc("unit_count")))
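
For a complete, runnable sketch, here is a minimal example assuming a small hypothetical data_cooccur DataFrame with the same column names (also note that F.rowNumber() was renamed to F.row_number() in Spark 2.x and later):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("row-number-desc").getOrCreate()

# Hypothetical stand-in for data_cooccur, only to make the example self-contained
data_cooccur = spark.createDataFrame(
    [("s10", "s11", 1), ("s10", "s13", 3), ("s10", "s17", 2)],
    ["driver", "also_item", "unit_count"],
)

# desc() is called on the column inside orderBy(), not on the WindowSpec
w = Window.partitionBy("driver").orderBy(F.col("unit_count").desc())

data_cooccur.select(
    "driver", "also_item", "unit_count",
    F.row_number().over(w).alias("rowNum"),
).show()

This numbers the rows of each driver partition from the highest unit_count down.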

