0 votes
2 views
in Big Data Hadoop & Spark by (11.4k points)

I have a spark job that takes a file with 8 records from hdfs, does a simple aggregation and saves it back to hdfs. I notice there are like hundreds of tasks when I do this.

I also am not sure why there are multiple jobs for this? I thought a job was more like when an action happened. I can speculate as to why - but my understanding was that inside of this code it should be one job and it should be broken down into stages, not multiple jobs. Why doesn't it just break it down into stages, how come it breaks into jobs?

As far as the 200-plus tasks, since the amount of data and the number of nodes is miniscule, it doesn't make sense that there are something like 25 tasks for each row of data when there is only one aggregation and a couple of filters. Why wouldn't it just have one task per partition per atomic operation?

Here is the relevant scala code -

import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object TestProj {
  def main(args: Array[String]) {

    /* set the application name in the SparkConf object */
    val appConf = new SparkConf().setAppName("Test Proj")

    /* env settings that I don't need to set in REPL*/
    val sc = new SparkContext(appConf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val rdd1 = sc.textFile("hdfs://node002:8020/flat_files/miscellaneous/ex.txt")

     /*the below rdd will have the schema defined in the Record class*/
     val rddCase =  sc.textFile("hdfs://node002:8020/flat_files/miscellaneous/ex.txt")
      .map(x=>x.split(" "))    //split each file record into an array of strings on spaces
      .map(x=>Record(
        x(0).toInt,
        x(1),                  //split already returns String, no cast needed
        x(2),
        x(3).toInt))


    /* the below dataframe groups on first letter of first name and counts it*/
    val aggDF = rddCase.toDF()
      .groupBy($"firstName".substr(1,1).alias("firstLetter"))
      .count
      .orderBy($"firstLetter")

    /* save to hdfs*/
    aggDF.write.format("parquet").mode("append").save("/raw/miscellaneous/ex_out_agg")

  }

  case class Record(id: Int,
    firstName: String,
    lastName: String,
    quantity: Int)

}


Below is the screenshot after clicking on the application:

[screenshot]

Below are the stages shown when viewing the specific "job" with id 0:

[screenshot]

Below is the first part of the screen when clicking on the stage with over 200 tasks:

[screenshot]

This is the second part of the screen inside the stage:

[screenshot]

Below is the view after clicking on the "executors" tab:

[screenshot]

As requested, here are the stages for Job ID 1:

[screenshot]

Here are the details for the stage in Job ID 1 with 200 tasks:

[screenshot]

1 Answer

0 votes
by (32.3k points)

The two tasks used for reading (Stage Id 0 in the second figure) come from the defaultMinPartitions setting, which is set to 2. You can check this value in the REPL by reading sc.defaultMinPartitions. It is also clearly visible in the Spark UI under the "Environment" tab.

You can take a look at the code on GitHub to see that this is exactly what is happening. If you want more partitions to be used on reading, just add it as a parameter, e.g., sc.textFile("a.txt", 20).
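For example, a minimal sketch that reuses the HDFS path from the question and compares the default partition count with an explicitly requested one (the object name is made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PartitionCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Partition Check"))

    // With no minPartitions argument, a tiny file gets sc.defaultMinPartitions (2) partitions.
    val defaultRdd = sc.textFile("hdfs://node002:8020/flat_files/miscellaneous/ex.txt")
    println(s"default partitions: ${defaultRdd.partitions.length}")

    // Pass a minPartitions hint explicitly; Spark treats it as a minimum, not an exact count.
    val wideRdd = sc.textFile("hdfs://node002:8020/flat_files/miscellaneous/ex.txt", 20)
    println(s"requested partitions: ${wideRdd.partitions.length}")

    sc.stop()
  }
}
```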

Now the interesting part is the 200 partitions that appear in the second stage (Stage Id 1 in the second figure). Every time there is a shuffle, Spark needs to decide how many partitions the shuffled RDD will have. As you can imagine, the default is 200.

You can change that using:

sqlContext.setConf("spark.sql.shuffle.partitions", "4")

If you run your code with this configuration you will see that the 200 partitions are not going to be there anymore. Choosing this parameter is something of an art; a common starting point is around 2 times the number of cores you have.
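Applied to the code in the question, it is a one-liner placed after the SQLContext is created and before any shuffle-producing operation runs (a sketch; the value 4 is arbitrary for such a tiny file):

```scala
// Set before groupBy/orderBy execute, so the shuffle stages pick it up.
sqlContext.setConf("spark.sql.shuffle.partitions", "4")

val aggDF = rddCase.toDF()
  .groupBy($"firstName".substr(1, 1).alias("firstLetter"))
  .count
  .orderBy($"firstLetter")

// The stages behind groupBy and orderBy should now show 4 tasks instead of 200.
```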

I think Spark 2.0 has an automatic way to infer the best number of partitions for shuffle RDDs. Looking forward to that!

Finally, the number of jobs you get has to do with how many RDD actions the optimized Dataframe code resulted in. Each RDD action triggers one job. When your action involves a Dataframe or SparkSQL, the Catalyst optimizer figures out an execution plan and generates some RDD-based code to execute it. It's hard to say exactly why it uses two actions in your case. You may need to look at the optimized query plan to see exactly what it is doing.
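To inspect that plan, you can call explain on the Dataframe before writing it; with the extended flag it prints the logical and physical plans to stdout:

```scala
// Prints the parsed, analyzed, and optimized logical plans plus the physical plan.
// Each Exchange in the physical plan marks a shuffle (a stage boundary), which
// helps explain where the stages and jobs in the UI come from.
aggDF.explain(true)
```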
