in Big Data Hadoop & Spark by (6.5k points)
I don't wish to use default partitioning parameters. Is there any way to specify the partitions that I want to create in the MR job?

1 Answer

by (11.3k points)

To use a custom partitioner in a Hadoop MapReduce job, follow these steps:

  • Create a class that extends the Partitioner class.
  • In your new class, override the 'getPartition' method with your own partitioning logic.
  • In the driver (the wrapper class that configures and runs the MapReduce job), register your class with the job configuration via 'setPartitionerClass'.
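The steps above can be sketched as below. In a real job the class would extend org.apache.hadoop.mapreduce.Partitioner<KEY, VALUE>; to keep this sketch self-contained and runnable without a Hadoop dependency, a minimal stand-in base class with the same 'getPartition' signature is defined locally. The names WordPartitioner and the routing rule itself are illustrative, not part of the Hadoop API.

```java
// Minimal stand-in for org.apache.hadoop.mapreduce.Partitioner<K, V>,
// declared here only so the example compiles without Hadoop on the classpath.
abstract class Partitioner<K, V> {
    // Mirrors the signature of Hadoop's Partitioner.getPartition.
    public abstract int getPartition(K key, V value, int numPartitions);
}

// Illustrative custom partitioner: keys beginning with a-m always go to
// partition 0; the rest are spread over the remaining partitions by hash.
class WordPartitioner extends Partitioner<String, Integer> {
    @Override
    public int getPartition(String key, Integer value, int numPartitions) {
        if (key.isEmpty()) {
            return 0;
        }
        char first = Character.toLowerCase(key.charAt(0));
        if (first >= 'a' && first <= 'm') {
            return 0;
        }
        // Mask the sign bit so the result is always non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

public class Main {
    public static void main(String[] args) {
        WordPartitioner partitioner = new WordPartitioner();
        // "apple" starts with 'a', so it always lands in partition 0.
        System.out.println(partitioner.getPartition("apple", 1, 3)); // prints 0
        // "zebra" is routed by hash into one of the 3 partitions.
        System.out.println(partitioner.getPartition("zebra", 1, 3));

        // In the real driver you would register the class with:
        //   job.setPartitionerClass(WordPartitioner.class);
        //   job.setNumReduceTasks(3); // number of partitions = number of reducers
    }
}
```

Note that the number of partitions passed to 'getPartition' comes from the number of reduce tasks configured on the job, so the two must be set consistently in the driver.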

This way, you control in your own code exactly how keys are assigned to partitions; since each partition is processed by one reducer, the number of partitions is whatever you set with 'setNumReduceTasks'. To understand the MapReduce workflow in depth, learn how to code MapReduce yourself, and work on some projects to build up your skills, you should definitely go for a Hadoop certification course.
