FLUME Interview Questions with Answers:-
1. Define the core components of Flume.
The core components of Flume are –
- Event- The single log entry or unit of data that is transported.
- Source- This is the component through which data enters Flume workflows.
- Sink- It is responsible for transporting data to the desired destination.
- Channel- The conduit between the Source and the Sink; it buffers events until the Sink consumes them.
- Agent- Any JVM that runs Flume.
- Client- The component that transmits events to a source operating within the agent.
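To make the wiring concrete, here is a minimal sketch of an agent definition in Flume's properties-file format; the component names a1, r1, c1, and k1 are arbitrary placeholders:

```properties
# Minimal single-agent flow: netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: reads text lines arriving on a TCP port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Channel: buffers events between the source and the sink
a1.channels.c1.type = memory

# Sink: writes events to the log
a1.sinks.k1.type = logger

# Wire the components together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```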
2. What is Flume?
Flume is a distributed service for collecting, aggregating, and moving large amounts of log data.
3. Which is the reliable channel in Flume to ensure that there is no data loss?
FILE Channel is the most reliable of the three built-in channels: JDBC, FILE, and MEMORY.
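As an illustration, a FILE channel is declared like the following sketch (the agent and channel names and the directories are placeholders); events are persisted to disk and therefore survive an agent restart:

```properties
# Durable FILE channel: events are written to disk
a1.channels = c1
a1.channels.c1.type = file
# Placeholder paths for the channel's checkpoint and data files
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data
```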
4. How can Flume be used with HBase?
Apache Flume can be used with HBase using one of the two HBase sinks:
HBaseSink (org.apache.flume.sink.hbase.HBaseSink) supports secure HBase clusters and the new HBase IPC introduced in HBase 0.96.
AsyncHBaseSink (org.apache.flume.sink.hbase.AsyncHBaseSink) has better performance than HBaseSink because it makes non-blocking calls to HBase.
Working of the HBaseSink:
In HBaseSink, a Flume event is converted into HBase increments or puts. The serializer implements HbaseEventSerializer and is instantiated when the sink starts. For every event, the sink calls the serializer's initialize method, which then translates the Flume event into the HBase increments and puts to be sent to the HBase cluster.
Working of the AsyncHBaseSink:
The serializer for AsyncHBaseSink implements AsyncHbaseEventSerializer; its initialize method is called only once, when the sink starts. The sink then invokes the setEvent method and makes calls to the getIncrements and getActions methods, similar to the HBase sink. When the sink stops, the cleanup method is called on the serializer.
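A sketch of an HBaseSink configuration (the agent, channel, table, and column family names are placeholders):

```properties
# HBaseSink: each event is turned into HBase puts/increments by the serializer
a1.sinks = k1
a1.sinks.k1.type = hbase
a1.sinks.k1.table = flume_events
a1.sinks.k1.columnFamily = cf
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
a1.sinks.k1.channel = c1
# For the asynchronous variant, use type = asynchbase with an
# AsyncHbaseEventSerializer implementation instead
```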
5. What is an Agent?
A process that hosts Flume components such as sources, channels, and sinks, and thus has the ability to receive, store, and forward events to their destination.
6. Is it possible to leverage real-time analysis on the big data collected by Flume directly? If yes, then explain how.
Yes. Data from Flume can be extracted, transformed, and loaded in real time into Apache Solr servers using MorphlineSolrSink.
7. What is a channel?
It stores events; events are delivered to the channel via sources operating within the agent. An event stays in the channel until a sink removes it for further transport.
8. Describe the different channel types in Flume. Which channel type is faster?
The three built-in channel types available in Flume are:
- MEMORY Channel – Events are read from the source into memory and passed to the sink.
- JDBC Channel – JDBC Channel stores the events in an embedded Derby database.
- FILE Channel – File Channel writes the contents to a file on the file system after reading the event from a source. The file is deleted only after the contents are successfully delivered to the sink.
MEMORY Channel is the fastest of the three; however, it carries the risk of data loss. Which channel you choose depends entirely on the nature of the big data application and the value of each event.
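For example, a MEMORY channel is typically sized with capacity settings; the values below are illustrative only:

```properties
# MEMORY channel: fastest, but events are lost if the agent process dies
a1.channels = c1
a1.channels.c1.type = memory
# Maximum events held in the channel
a1.channels.c1.capacity = 10000
# Maximum events per transaction
a1.channels.c1.transactionCapacity = 1000
```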
9. What is an Interceptor?
An interceptor can modify or even drop events based on any criteria chosen by the developer.
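For instance, the built-in timestamp and host interceptors can be chained on a source; a sketch assuming an agent a1 with a source r1:

```properties
# Chain two built-in interceptors on source r1
a1.sources.r1.interceptors = i1 i2
# i1 stamps each event with the current time in a "timestamp" header
a1.sources.r1.interceptors.i1.type = timestamp
# i2 adds the agent's host/IP in a "host" header
a1.sources.r1.interceptors.i2.type = host
```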
10. Explain the replicating and multiplexing selectors in Flume.
Channel selectors are used to handle multiple channels. Based on a Flume header value, an event can be written to a single channel or to multiple channels. If no channel selector is specified for the source, the replicating selector is used by default; with it, the same event is written to every channel in the source's channel list. The multiplexing channel selector is used when the application has to send different events to different channels, as in the sketch below.
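A sketch of a multiplexing selector routing on a hypothetical header named state (the agent and channel names are placeholders):

```properties
# Route events to different channels based on the "state" header
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
a1.sources.r1.selector.mapping.CA = c1
a1.sources.r1.selector.mapping.NY = c2
# Events with any other header value fall through to the default channel
a1.sources.r1.selector.default = c1
```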
11. Does Apache Flume provide support for third-party plug-ins?
Yes. Flume has a fully plug-in-based architecture: it can load data from external sources and ship it to external destinations through components packaged separately from the Flume core. This is why most big data analysts use it for streaming data.
12. Differentiate between FileSink and FileRollSink.
The major difference between the HDFS File Sink and the File Roll Sink is that the HDFS File Sink writes events into the Hadoop Distributed File System (HDFS), whereas the File Roll Sink stores events on the local file system.
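A side-by-side sketch of the two sinks (paths and names are placeholders):

```properties
a1.sinks = k1 k2
# HDFS File Sink: events are written into HDFS
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events
a1.sinks.k1.channel = c1
# File Roll Sink: events are written to rolling files on the local file system
a1.sinks.k2.type = file_roll
a1.sinks.k2.sink.directory = /var/log/flume
a1.sinks.k2.channel = c2
```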
13. Can Flume distribute data to multiple destinations?
Yes. It supports fan-out flow: an event flows from one source into multiple channels and on to multiple destinations. This is achieved by defining a flow multiplexer, as in the sketch below.
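A sketch of such a fan-out, where one source feeds two channels (the replicating selector is the default) and each channel is drained by a different destination:

```properties
# One source fans out to two channels and two destinations
a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1 c2

a1.channels.c1.type = memory
a1.channels.c2.type = memory

# Destination 1: HDFS (placeholder path)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events
a1.sinks.k1.channel = c1
# Destination 2: the log
a1.sinks.k2.type = logger
a1.sinks.k2.channel = c2
```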
14. How can a multi-hop agent be set up in Flume?
The Avro RPC bridge mechanism is used to set up a multi-hop agent in Apache Flume: one agent's Avro sink forwards events to the next agent's Avro source.
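A sketch of the two hops (host names and ports are placeholders), where agent a1's Avro sink forwards events to agent a2's Avro source:

```properties
# Hop 1: agent a1 sends events over Avro RPC
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = collector-host
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1

# Hop 2: agent a2 receives those events on its Avro source
a2.sources.r1.type = avro
a2.sources.r1.bind = 0.0.0.0
a2.sources.r1.port = 4545
a2.sources.r1.channels = c1
```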
15. Why do we use Flume?
Hadoop developers most often use this tool to pull log data from social media sites. It was developed by Cloudera for aggregating and moving very large amounts of data. The primary use is to gather log files from different sources and asynchronously persist them in the Hadoop cluster.
16. What is FlumeNG?
A real-time loader for streaming your data into Hadoop. It stores data in HDFS and HBase. You’ll want to get started with FlumeNG, which improves on the original Flume.
17. Can Flume provide 100% reliability to the data flow?
Yes, it provides end-to-end reliability of the flow and by default uses a transactional approach in the data flow.
The source and sink encapsulate events in transactions provided by the channels, and the channels are responsible for passing events reliably from end to end. This is how Flume provides 100% reliability to the data flow.
18. What are sink processors?
Sink processors are the mechanism by which you can create fail-over handling and load balancing across a group of sinks, as in the sketch below.
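A sketch of a failover sink group, assuming sinks k1 and k2 are already defined; k1 is preferred because it has the higher priority:

```properties
# Group two sinks and fail over between them
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# Maximum backoff (ms) for a failed sink before it is retried
a1.sinkgroups.g1.processor.maxpenalty = 10000
# For load balancing instead, set processor.type = load_balance
```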
19. What are the tools used in Big Data?
Tools used in Big Data include:
- Hadoop
- Hive
- Pig
- Flume
- Mahout
- Sqoop
20. Does the agent communicate with other agents?
No, each agent runs independently. Flume can easily scale horizontally, and as a result there is no single point of failure.
21. What are the complicated steps in Flume configuration?
Flume processes streaming data, so once started there is no stop/end to the process; it asynchronously flows data from the source to HDFS via the agent. First of all, the agent must know how the individual components are connected in order to load data, so the configuration is the trigger that loads streaming data. For example, the consumer key, consumer secret, access token, and access token secret are the key parameters for downloading data from Twitter, as in the sketch below.
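A sketch of the (experimental) Twitter source configuration; the four credential values are placeholders that must be obtained from Twitter:

```properties
# Twitter source: requires OAuth credentials (placeholders below)
a1.sources.r1.type = org.apache.flume.source.twitter.TwitterSource
a1.sources.r1.consumerKey = YOUR_CONSUMER_KEY
a1.sources.r1.consumerSecret = YOUR_CONSUMER_SECRET
a1.sources.r1.accessToken = YOUR_ACCESS_TOKEN
a1.sources.r1.accessTokenSecret = YOUR_ACCESS_TOKEN_SECRET
a1.sources.r1.channels = c1
```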
22. What are the Flume core components?
- Source, Channel, and Sink are the core components of Apache Flume.
- When a Flume source receives an event from an external source, it stores the event in one or more channels.
- The Flume channel temporarily stores and keeps the event until it is consumed by the Flume sink; it acts as a Flume repository.
- The Flume sink removes the event from the channel and puts it into an external repository like HDFS, or moves it to the next Flume agent.
23. What are the data extraction tools in Hadoop?
Sqoop can be used to transfer data between an RDBMS and HDFS. Flume can be used to extract streaming data from social media, weblogs, etc. and store it on HDFS.
24. What are the important steps in the configuration?
The configuration file is the heart of Apache Flume’s agents.
- Every source must have at least one channel.
- Every sink must have exactly one channel.
- Every component must have a specific type.
25. What is Apache Spark?
Spark is a fast, easy-to-use and flexible data processing framework. It has an advanced execution engine supporting cyclic data flow and in-memory computing. Spark can run on Hadoop, standalone or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, Cassandra, and others.
26. What is Apache Flume?
Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. Review the Flume use case describing how Mozilla collects and analyses logs using Flume and Hive.
Flume is a framework for populating Hadoop with data. Agents are deployed throughout one’s IT infrastructure – inside web servers, application servers and mobile devices, for example – to collect data and integrate it into Hadoop.
27. What is a Flume agent?
A Flume agent is a JVM process that holds the Flume core components (source, channel, sink) through which events flow from an external source like a web server to a destination like HDFS. The agent is the heart of Apache Flume.
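For example, an agent named a1 defined in a configuration file such as example.conf (a hypothetical file name) is typically launched with the flume-ng command:

```
flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
```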
28. What is a Flume event?
A unit of data with a set of string attributes is called a Flume event. An external source like a web server sends events to the source; internally, Flume has built-in functionality to understand the source format.
Each log entry is considered an event. Every event has header and value sections: the headers carry information as keys, and each header has an appropriate value assigned to it.
29. Does Flume provide 100% reliability to the data flow?
Yes, Apache Flume provides end-to-end reliability because of its transactional approach to the data flow.
30. Why Flume?
Flume is not limited to collecting logs from distributed systems; it is also capable of serving other use cases, such as:
- Collecting readings from an array of sensors
- Collecting impressions from custom apps for an ad network
- Collecting readings from network devices in order to monitor their performance.
Flume aims to preserve reliability, scalability, manageability, and extensibility while serving the maximum number of clients with higher QoS.
31. Can you explain the configuration files?
- The agent configuration is stored in a local configuration file; it comprises each agent’s source, sink, and channel information.
- Each core component, such as a source, sink, or channel, has properties such as a name, a type, and a set of type-specific properties.
32. Name any two features of Flume.
- Flume efficiently collects, aggregates, and moves large amounts of log data from many different sources to a centralized data store.
- Flume is not restricted to log data aggregation; it can transport a massive quantity of event data, including but not limited to network traffic data, social media generated data, email messages, and pretty much any possible data source.