What is Flume in Hadoop?

1 Answer


Apache Flume in Hadoop is used to move large quantities of streaming data into HDFS. It is typically used to collect log data from web servers' log files and aggregate it in HDFS for analysis. It supports a number of data sources, including:

  • Tail: Pipes data from local files into HDFS, much like the ‘tail’ command in UNIX (see the sample agent configuration after this list).
  • Apache log4j: Lets Java applications write their log events to HDFS files through Flume (see the log4j.properties sketch below).
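
To make the tail-style flow concrete, here is a minimal sketch of a Flume agent configuration that follows a web-server log and writes it to HDFS. The agent and component names (agent, tailSrc, memCh, hdfsSink), the log path, and the NameNode address are illustrative assumptions, not values from the original answer:

    # weblog.conf -- sketch of a Flume agent: exec (tail) source -> memory channel -> HDFS sink
    agent.sources  = tailSrc
    agent.channels = memCh
    agent.sinks    = hdfsSink

    # Source: follow a local web-server log, like the UNIX 'tail' command
    # (the log path is an assumed example)
    agent.sources.tailSrc.type     = exec
    agent.sources.tailSrc.command  = tail -F /var/log/httpd/access_log
    agent.sources.tailSrc.channels = memCh

    # Channel: buffer events in memory between source and sink
    agent.channels.memCh.type     = memory
    agent.channels.memCh.capacity = 10000

    # Sink: write events into a dated HDFS directory
    # (the NameNode address is an assumed example)
    agent.sinks.hdfsSink.type                   = hdfs
    agent.sinks.hdfsSink.hdfs.path              = hdfs://namenode:8020/flume/weblogs/%Y-%m-%d
    agent.sinks.hdfsSink.hdfs.fileType          = DataStream
    agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
    agent.sinks.hdfsSink.channel                = memCh

Such an agent is started with the standard flume-ng launcher, for example: flume-ng agent --conf conf --conf-file weblog.conf --name agent.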
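
For the log4j route, Flume ships a Log4j appender (org.apache.flume.clients.log4jappender.Log4jAppender, in the flume-ng-sdk jar) that forwards a Java application's log events to a Flume Avro source, which can then feed an HDFS sink as above. A minimal sketch, assuming the agent's Avro source listens on localhost port 41414 (both assumed values):

    # log4j.properties -- route application log events to a Flume agent
    log4j.rootLogger = INFO, flume
    log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
    log4j.appender.flume.Hostname = localhost
    log4j.appender.flume.Port = 41414

    # Matching Avro source on the Flume agent side (component names assumed)
    agent.sources.avroSrc.type     = avro
    agent.sources.avroSrc.bind     = 0.0.0.0
    agent.sources.avroSrc.port     = 41414
    agent.sources.avroSrc.channels = memCh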

If you wish to know more about Apache Flume and Hadoop, check out the Hadoop Tutorial.

You can also become a Hadoop developer by joining Intellipaat's Hadoop Online Training.


