1. How does Hadoop work?

2. How does MapReduce work?

3. What is the storage mechanism of HDFS?

4. Give a simple example of how a MapReduce job runs.

5. The interviewer gives you some problems to solve with MapReduce.

For example: there are 10 folders, each containing 1,000,000 URLs. Find the top 1,000,000 URLs.

6. What is the role of the Combiner in Hadoop?

Src: http://p-x1984.javaeye.com/blog/859843

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?

The following are the most common InputFormats defined in Hadoop:

- TextInputFormat (the default)

- KeyValueInputFormat

- SequenceFileInputFormat

Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?

TextInputFormat: reads lines of text files and provides the byte offset of each line as the key to the Mapper and the line itself as the value.

KeyValueInputFormat: reads a text file and parses each line into a (key, value) pair. Everything up to the first tab character is sent as the key to the Mapper, and the remainder of the line is sent as the value.

Q3. What is an InputSplit in Hadoop?

When a Hadoop job runs, it splits the input files into chunks and assigns each chunk to a mapper to process. Each such chunk is called an InputSplit.

Q4. How is the splitting of files invoked in the Hadoop framework?

It is invoked by the Hadoop framework, which calls the getSplits() method of the InputFormat class configured for the job (for example, FileInputFormat).

Q5. Consider this scenario in an M/R system:

- HDFS block size is 64 MB

- Input format is FileInputFormat

- We have 3 files of size 64 KB, 65 MB and 127 MB

How many input splits will the Hadoop framework make?

Hadoop will make 5 splits, as follows:

- 1 split for the 64 KB file

- 2 splits for the 65 MB file (64 MB + 1 MB)

- 2 splits for the 127 MB file (64 MB + 63 MB)

Q6. What is the purpose of the RecordReader in Hadoop?

The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

Q7. After the Map phase finishes, the Hadoop framework performs "Partitioning, Shuffle and Sort". Explain what happens in this phase.

- Partitioning

Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same

- Shuffle

After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

- Sort

Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer

Q9. If no custom partitioner is defined in Hadoop, how is data partitioned before it is sent to the reducer?

The default partitioner (HashPartitioner) computes a hash value for the key and assigns the partition based on this result.
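
In essence, the default behaviour is to hash the key and take it modulo the number of reduce tasks. A minimal sketch of a partitioner with that logic, written against the old mapred API (the class name HashStylePartitioner and the Text/IntWritable types are just for illustration):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Illustrative partitioner: reproduces the hash-modulo behaviour of the
// default HashPartitioner for Text keys and IntWritable values.
public class HashStylePartitioner implements Partitioner<Text, IntWritable> {

    @Override
    public void configure(JobConf job) {
        // No configuration needed for this sketch.
    }

    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Mask off the sign bit so the result is never negative,
        // then map the hash onto one of the reduce partitions.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```

This guarantees the property described above: any key, from any mapper, always lands in the same partition.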

Q10. What is a Combiner?

The Combiner is a "mini-reduce" process that operates only on data generated by the mappers. The Combiner receives as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
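
To make this concrete, here is a minimal word-count sketch using the old mapred API, in which the reducer (a simple sum, which is associative and commutative) is reused as the combiner. All class names and paths are illustrative, not from the original text.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountWithCombiner {

    // Mapper: emits (word, 1) for every token in the input line.
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, ONE);
            }
        }
    }

    // Reducer: sums the counts for each word. Because summation is associative
    // and commutative, the same class is safe to use as a combiner.
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountWithCombiner.class);
        conf.setJobName("wordcount-with-combiner");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);   // pre-aggregate map output on each node
        conf.setReducerClass(Reduce.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```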

Q11. Give an example scenario where a combiner can be used and one where it cannot be used.

There can be several examples; the following are the most common ones:

- Scenario where you can use a combiner

Getting the list of distinct words in a file

- Scenario where you cannot use a combiner

Calculating the mean of a list of numbers (the mean of partial means is not, in general, the mean of the whole list, so the operation cannot be applied piecewise)

Q12. What is the JobTracker?

The JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.

Q13. What are some typical functions of the JobTracker?

The following are some typical tasks of the JobTracker:

- Accepts jobs from clients

- Talks to the NameNode to determine the location of the data

- Locates TaskTracker nodes with available slots at or near the data

- Submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers

Q14. What is the TaskTracker?

A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.

Q15. What is the relationship between Jobs and Tasks in Hadoop?

One job is broken down into one or more tasks in Hadoop.

Q16. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?

It will restart the task on some other TaskTracker; only if the task fails more than 4 times (the default, configurable via mapred.map.max.attempts and mapred.reduce.max.attempts) will it kill the job.

Q17. Hadoop achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?

Speculative Execution

Q18. How does speculative execution work in Hadoop?

The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.

Q19. Using the command line in Linux, how will you

- see all jobs running in the Hadoop cluster

- kill a job

Answers:

- hadoop job -list

- hadoop job -kill <job-id>

Q20. What is Hadoop Streaming?

Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

Q21. What characteristic of the Streaming API makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, awk, etc.?

Hadoop Streaming allows arbitrary programs to be used for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
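
For example, a typical Streaming invocation looks roughly like the following; the exact jar path varies by Hadoop version, and /bin/cat and /usr/bin/wc are just stand-ins for real mapper and reducer programs:

```
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input  /user/me/input \
    -output /user/me/output \
    -mapper  /bin/cat \
    -reducer /usr/bin/wc
```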

Q22. What is the Distributed Cache in Hadoop?

The Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework copies the necessary files to a slave node before any tasks for the job are executed on that node.
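
A minimal sketch of the old mapred API usage, assuming a lookup file at an illustrative HDFS path: the driver registers the file with the cache, and each task reads the node-local copy in configure().

```java
import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class CacheExample {

    // In the driver: register an HDFS file with the Distributed Cache.
    public static void setupCache(JobConf conf) throws Exception {
        DistributedCache.addCacheFile(new URI("/user/me/lookup.txt"), conf);
    }

    // In a Mapper/Reducer: read the node-local copy once, in configure().
    public static class CacheAwareTask extends MapReduceBase {
        private Path[] localFiles;

        @Override
        public void configure(JobConf conf) {
            try {
                // These paths point at the copies on the local disk of the TaskTracker node.
                localFiles = DistributedCache.getLocalCacheFiles(conf);
            } catch (Exception e) {
                throw new RuntimeException("Could not load cache files", e);
            }
        }
    }
}
```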

Q23. What is the benefit of the Distributed Cache? Why can't we just keep the file in HDFS and have the application read it?

Because the Distributed Cache is much faster: it copies the file to every TaskTracker once, at the start of the job. If a TaskTracker then runs 10 or 100 mappers or reducers, they all use the same local copy. If instead the MR code read the file from HDFS, every mapper would fetch it from HDFS, so a TaskTracker running 100 map tasks would read the file 100 times. HDFS is also not very efficient when used for many small repeated reads like this.

Q24. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during the runtime of the application?

This is a trick question. There is no such mechanism: the Distributed Cache is by design read-only for the duration of job execution.

Q25. Have you ever used Counters in Hadoop? Give an example scenario.

Anybody who claims to have worked on a Hadoop project is expected to have used counters; a typical scenario is counting malformed records that the mappers skipped while parsing the input.
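
A minimal sketch of incrementing counters from a mapper in the old mapred API (the counter names and the tab-separated record format are illustrative assumptions):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CountingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    // Enum-based counters are grouped under the enum's class name in the job UI.
    public enum RecordQuality { GOOD_RECORDS, BAD_RECORDS }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        String[] fields = value.toString().split("\t");
        if (fields.length < 2) {
            // Skip malformed lines, but keep track of how many were seen.
            reporter.incrCounter(RecordQuality.BAD_RECORDS, 1);
            return;
        }
        reporter.incrCounter(RecordQuality.GOOD_RECORDS, 1);
        output.collect(new Text(fields[0]), new LongWritable(1));
    }
}
```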

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to a Hadoop job?

Yes. The input format class (FileInputFormat) provides methods to add multiple directories as input to a Hadoop job.
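
For example, with the old mapred API (the directory names are illustrative), paths can be added one at a time or several at once:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class MultiInputSetup {
    public static void configureInputs(JobConf conf) {
        // Option 1: add input directories one at a time.
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2011-01"));
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2011-02"));

        // Option 2: set several paths in a single call (replaces any paths set so far).
        FileInputFormat.setInputPaths(conf,
                new Path("/data/logs/2011-01"),
                new Path("/data/logs/2011-02"));
    }
}
```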

Q27. Is it possible to have Hadoop job output in multiple directories? If yes, how?

Yes, by using the MultipleOutputs class.
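
A hedged sketch of the old-API usage (the named output "errors" and the key/value types are illustrative): the driver registers a named output, and the reducer routes some records to it so they land in separate output files alongside the regular job output.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class MultiOutputReducer extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {

    private MultipleOutputs mos;

    // In the driver, register the extra named output before submitting the job:
    //   MultipleOutputs.addNamedOutput(conf, "errors",
    //           TextOutputFormat.class, Text.class, LongWritable.class);

    @Override
    public void configure(JobConf conf) {
        mos = new MultipleOutputs(conf);
    }

    public void reduce(Text key, Iterator<LongWritable> values,
                       OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        long sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        if (key.toString().startsWith("ERROR")) {
            // Records sent to the "errors" named output end up in separate files.
            mos.getCollector("errors", reporter).collect(key, new LongWritable(sum));
        } else {
            output.collect(key, new LongWritable(sum));
        }
    }

    @Override
    public void close() throws IOException {
        mos.close();   // flush the extra outputs
    }
}
```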

Q28. What will a Hadoop job do if you try to run it with an output directory that is already present? Will it

- overwrite it

- warn you and continue

- throw an exception and exit

The Hadoop job will throw an exception and exit.

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?

This is a trick question. You cannot set it directly: the number of map tasks is driven by the number of input splits, and setNumMapTasks() is only a hint to the framework.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?

You can either do it programmatically by using the setNumReduceTasks method of the JobConf class, or set it as a configuration property.
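
Both routes in sketch form (the reducer count of 10 is just an example; mapred.reduce.tasks is the pre-YARN property name):

```java
import org.apache.hadoop.mapred.JobConf;

public class ReducerCountSetup {
    public static void configure(JobConf conf) {
        // Programmatically:
        conf.setNumReduceTasks(10);

        // Or equivalently via a configuration property
        // (can also be passed on the command line as -D mapred.reduce.tasks=10):
        conf.setInt("mapred.reduce.tasks", 10);
    }
}
```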
