1. What is Hadoop framework?
Hadoop is an open-source framework written in Java by the Apache Software Foundation. It is used to write software applications that need to process vast amounts of data (it can handle multiple terabytes of data). It works in parallel on large clusters, which can have thousands of computers (nodes), and it processes data in a reliable and fault-tolerant manner.
2. On what concept does the Hadoop framework work?
It works on MapReduce, which was devised by Google.
3. What is MapReduce?
MapReduce is a concept, or programming model, for processing huge amounts of data in a faster way. As its name suggests, it is divided into a Map step and a Reduce step. A MapReduce job usually splits the input data set into independent chunks (one big data set into multiple small data sets).
Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks). The framework sorts the outputs of the maps.
Reduce task: takes the sorted map output as its input and produces the final result.
Your business logic is written in the map task and the reduce task. Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
4. What are compute and storage nodes?
Compute node: the computer or machine where your actual business logic is executed.
Storage node: the computer or machine where the file system resides to store the data being processed (in Hadoop, this is HDFS).
In most cases the compute node and the storage node are the same machine.
5. How does the master/slave architecture work in Hadoop?
The MapReduce framework consists of a single master JobTracker and multiple slaves; each cluster node has one TaskTracker.
The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.
6. What does a Hadoop application look like, and what are its basic components?
Minimally, a Hadoop application would have the following components:
=> Input location of the data
=> Output location of processed data.
=> A map task.
=> A reduce task.
=> Job configuration
The Hadoop job client then submits the job (JAR/executable, etc.) and configuration to the JobTracker, which then assumes responsibility for distributing the software/configuration to the slaves, scheduling the tasks, monitoring them, and providing status and diagnostic information to the job client.
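As a rough sketch of how these components are wired together in code, a minimal driver (job client) might look like the following. This assumes the org.apache.hadoop.mapreduce API; WordCountDriver is a hypothetical class name, the input and output locations are taken from the command line, and WordCountMapper/WordCountReducer are user-written classes like the ones sketched under question 9 below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");           // job configuration
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);                // the map task
        job.setReducerClass(WordCountReducer.class);              // the reduce task
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // input location of the data
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output location of the processed data
        System.exit(job.waitForCompletion(true) ? 0 : 1);         // submit the job and wait for completion
    }
}

Calling waitForCompletion() submits the job to the framework, which then handles the scheduling and monitoring described above.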
7. Explain the input and output data formats of the Hadoop framework.
The MapReduce framework operates exclusively on <key, value> pairs; that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types. The flow is:
(input) <k1, v1> -> map -> <k2, v2> -> combine/sort -> <k2, v2> -> reduce -> <k3, v3> (output)
8. What are the restrictions on the key and value classes?
The key and value classes have to be serializable by the framework; to make them serializable, Hadoop provides the Writable interface. As you know from Java itself, the key of a Map should be comparable, hence the key class has to implement one more interface, WritableComparable.
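Built-in types such as Text, IntWritable, and LongWritable already implement these interfaces. As a minimal sketch, a hypothetical custom key class (TextPair is illustrative, not part of the Hadoop API) would implement WritableComparable like this:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical custom key type: write()/readFields() make it serializable,
// compareTo() lets the framework sort it.
public class TextPair implements WritableComparable<TextPair> {
    private String first = "";
    private String second = "";

    public void write(DataOutput out) throws IOException {      // serialize the fields
        out.writeUTF(first);
        out.writeUTF(second);
    }

    public void readFields(DataInput in) throws IOException {   // deserialize the fields
        first = in.readUTF();
        second = in.readUTF();
    }

    public int compareTo(TextPair other) {                      // keys are sorted with this
        int cmp = first.compareTo(other.first);
        return cmp != 0 ? cmp : second.compareTo(other.second);
    }
}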
9. Explain the Word Count implementation via the Hadoop framework.
We will count the words in all the input files; the flow is as below (a code sketch follows this walkthrough).
=> Input: Assume there are two files, each containing a sentence:
Hello World Hello World (In file 1)
Hello World Hello World (In file 2)
=> Mapper: There would be one mapper for each file.
For the given sample input, the first map outputs:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>
The second map output:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>
=> Combiner/Sorting (This is done for each individual map)
So the output looks like this:
The output of the first map:
< Hello, 2>
< World, 2>
The output of the second map:
< Hello, 2>
< World, 2>
=> Reducer:
It sums up the above output and generates the output as below:
< Hello, 4>
< World, 4>
=> Output
Final output would look like
Hello 4 times
World 4 times
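A minimal sketch of the corresponding Mapper and Reducer classes is shown below, assuming the org.apache.hadoop.mapreduce API; the class names WordCountMapper and WordCountReducer are illustrative.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// WordCountMapper.java: emits <word, 1> for every word in its input split
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);                  // e.g. <Hello, 1>, <World, 1>
        }
    }
}

// WordCountReducer.java: sums the counts for each word
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();                          // e.g. <Hello, [2, 2]> -> 4
        }
        result.set(sum);
        context.write(key, result);                    // e.g. <Hello, 4>
    }
}

The same reducer class is typically also registered as the combiner (job.setCombinerClass(WordCountReducer.class)), which is what produces the per-map partial sums such as <Hello, 2> and <World, 2> shown above.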
10. Which interfaces need to be implemented to create a Mapper and a Reducer for Hadoop?
org.apache.hadoop.mapreduce.Mapper
org.apache.hadoop.mapreduce.Reducer
11. What does the Mapper do?
12. What is an InputSplit in MapReduce?
13. What is the InputFormat?
14. Where do you specify the Mapper Implementation?
15. How is the Mapper instantiated in a running job?
16. What are the methods in the Mapper interface?
17. What happens if you don’t override the Mapper methods and keep them as they are?
18. What is the use of the Context object?
19. How can you add arbitrary key-value pairs in your mapper?
20. How does the Mapper’s run() method work?
21. Which object can be used to get the progress of a particular job?
22. What is the next step after the Mapper or MapTask?
23. How can we control which reducer a particular key goes to?
24. What is the use of Combiner?
25. How many maps are there in a particular Job?
26. What is the Reducer used for?
27. Explain the core methods of the Reducer?
28. What are the primary phases of the Reducer?
29. Explain the shuffle?
30. Explain the Reducer’s Sort phase?
31. Explain the Reducer’s reduce phase?
32. How many Reducers should be configured?
33. Is it possible for a job to have 0 reducers?
34. What happens if the number of reducers is 0?
35. How many instances of the JobTracker can run on a Hadoop cluster?
36. What is the JobTracker, and what does it do in a Hadoop cluster?
37. How is a task scheduled by the JobTracker?
38. How many instances of the TaskTracker run on a Hadoop cluster?
39. What are the two main parts of the Hadoop framework?
40. Explain the use of TaskTracker in the Hadoop cluster?
41. What do you mean by TaskInstance?
42. How many daemon processes run on a Hadoop cluster?
43. What is the maximum number of JVMs that can run on a slave node?
44. What is NAS?
45. How does HDFS differ from NFS?
46. How does a NameNode handle the failure of the data nodes?
47. Can Reducers talk to each other?
48. Where will the Mapper’s intermediate data be stored?
49. What is the use of Combiners in the Hadoop framework?
50. What is the Hadoop MapReduce API contract for a key and value Class?
51. What are the IdentityMapper and IdentityReducer in MapReduce?
52. What is the meaning of speculative execution in Hadoop? Why is it important?
53. When are the reducers started in a MapReduce job?
54. What is HDFS? How is it different from traditional file systems?
55. What is HDFS Block size? How is it different from traditional file system block size?
56. What is the full form of fsck?
57. What is a NameNode? How many instances of NameNode run on a Hadoop Cluster?
58. What is a DataNode? How many instances of DataNode run on a Hadoop Cluster?
59. How does the client communicate with HDFS?
60. How are HDFS blocks replicated?