Hi Sunil,
In PySpark, you can use the PySpark SQL reader to load CSV or Parquet files into a DataFrame. If you want to combine multiple files from cloud storage into one DataFrame, you can use Spark SQL operations for that.
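As a minimal sketch, loading both formats could look like this (the bucket and file paths below are placeholders, not from your setup):

```python
from pyspark.sql import SparkSession

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName("combine-files-example").getOrCreate()

# Load a single CSV file into a DataFrame (path is a placeholder)
orders_df = spark.read.csv(
    "s3a://my-bucket/data/orders.csv", header=True, inferSchema=True
)

# Parquet files load the same way; the schema comes from the file itself
customers_df = spark.read.parquet("s3a://my-bucket/data/customers.parquet")
```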
First, load a single file into a DataFrame. Then use the Spark SQL join method to join the other files into that DataFrame. After loading all the data into a single DataFrame, you can perform your data wrangling operations on it, as shown below.
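Continuing the sketch above, here is one way the join and a simple wrangling step could look. The key column "customer_id" and the "amount" column are assumed for illustration; substitute the columns your files actually share:

```python
# Join the second DataFrame onto the first via a shared key column
# ("customer_id" is an assumed column name for illustration)
combined_df = orders_df.join(customers_df, on="customer_id", how="inner")

# With everything in one DataFrame, apply your wrangling steps,
# e.g. filter rows and aggregate ("amount" is also an assumed column)
summary_df = (
    combined_df
    .filter(combined_df["amount"] > 0)
    .groupBy("customer_id")
    .count()
)
summary_df.show()
```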
You can check this documentation for more information.
Hope this answer helps!