The build takes a very long time and throws errors at every turn, so be patient!

1. Required environment and software

OS: CentOS 6.4, 64-bit

JDK: jdk-7u80-linux-x64.rpm (do not use 1.8)

Maven: apache-maven-3.3.3-bin.tar.gz

protobuf: protobuf-2.5.0.tar.gz  (a Google project; it is best to download this file ahead of time)

Hadoop source: hadoop-2.5.2-src.tar.gz (download from the official Hadoop website)

Ant: apache-ant-1.9.6-bin.tar.gz (needed when building hadoop-common)

openssl-devel

ncurses-devel

CMake: 2.6 or later

Keep the network connection up throughout the build; additional dependencies are downloaded from external repositories.

2. Environment setup

1) Install the JDK

This is straightforward: install with rpm -ivh jdk-7u80-linux-x64.rpm.
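
The same /etc/profile approach used below for Maven and Ant can also be applied to JAVA_HOME. A minimal sketch, assuming the RPM installs to /usr/java/jdk1.7.0_80 (verify the actual path on your machine):

java -version                              # should report 1.7.0_80
export JAVA_HOME=/usr/java/jdk1.7.0_80     # assumed install path; add to /etc/profile to persist
export PATH=$PATH:$JAVA_HOME/bin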

2) Unpack the Hadoop source archive

tar zxvf hadoop-2.5.2-src.tar.gz

Check BUILDING.txt inside the hadoop-2.5.2-src directory; it lists the environment requirements for the build.

vi BUILDING.txt

3) Install Maven (as root)

Extract it directly into /usr:

tar zxvf apache-maven-3.3.3-bin.tar.gz -C /usr

Set the environment variables:

vi + /etc/profile

export MAVEN_HOME=/usr/apache-maven-3.3.3
export PATH=.:$PATH:$MAVEN_HOME/bin

Run source to make the changes take effect:

source /etc/profile

Verify the installation with mvn -version; it should print the Maven, Java, and OS details.
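
The output should look roughly like the following (the commit hash, dates, and paths on your machine will differ):

Apache Maven 3.3.3
Maven home: /usr/apache-maven-3.3.3
Java version: 1.7.0_80, vendor: Oracle Corporation
OS name: "linux", arch: "amd64", family: "unix"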

4) Install protobuf

Extract it into /usr:

tar zxvf protobuf-2.5.0.tar.gz -C /usr

Change into /usr/protobuf-2.5.0 and run ./configure followed by make:

./configure

gcc and gcc-c++ are required; install them with yum:

yum install gcc
yum install gcc-c++ # prompts for y (yes/no) twice
yum install make

When I first ran ./configure it failed; the error showed that the C++ compiler had not actually been installed, so I ran yum install gcc-c++ once more and the problem went away.

Once the compilers are in place, run ./configure again; when it succeeds, continue with make:

make
make install

These commands take quite a while to run; if the final install step fails with an error, simply run it again.
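
On a multi-core machine the compile step can optionally be run in parallel to save time (make install stays the same):

make -j"$(nproc)"     # optional; uses one job per CPU core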

When the installation finishes, check that it succeeded:

protoc --version
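
The expected output is libprotoc 2.5.0. If protoc instead complains about a missing shared library, a common cause is that /usr/local/lib (the default install prefix) is not on the loader path; a hedged fix, as root:

echo "/usr/local/lib" > /etc/ld.so.conf.d/protobuf.conf   # path assumed from the default prefix
ldconfig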

5) Install CMake and openssl-devel

CMake must be version 2.6 or later. yum will prompt for y/N during installation, and since the packages are fetched over the network, how long it takes depends on your connection speed.

yum install cmake
yum install openssl-devel
yum install ncurses-devel
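
Once the packages are in place, confirm that the installed CMake meets the 2.6 minimum:

cmake --version
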
6) Install Ant

Extract the archive:

tar zxvf apache-ant-1.9.6-bin.tar.gz -C /usr

Set the environment variables:

vi + /etc/profile

export ANT_HOME=/usr/apache-ant-1.9.6
export PATH=.:$PATH:$ANT_HOME/bin

Run source to make the changes take effect:

source /etc/profile

Verify the installation with ant -version; it should print the Ant version information.

3. Build the Hadoop source

Switch to the hadoop user.
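
For example (assuming a hadoop user already exists on the machine):

su - hadoop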

Change into the hadoop-2.5.2-src directory and run:

mvn package -DskipTests -Pdist,native

What follows is a long wait. The build downloads a large number of files along the way (what each is for is not obvious) and often fails because of network problems; just rerun the build command, since files already downloaded are not fetched again.

If the build fails and the log before the last [INFO] output is stuck on a particular download URL, simply retry.
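
If downloads keep stalling, pointing Maven at a nearby repository mirror before retrying may help. A minimal sketch that writes ~/.m2/settings.xml for the user running the build (the URL below is only a placeholder; substitute a mirror you trust):

mkdir -p ~/.m2
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>central-mirror</id>
      <mirrorOf>central</mirrorOf>
      <url>https://repo.example.com/maven2</url>
    </mirror>
  </mirrors>
</settings>
EOF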

A compile/test error can also show up; I never pinned down the cause and simply retried once.

After the build was interrupted unexpectedly and then restarted, it failed with another exception.

To get past this, I retried with the following commands, split into two steps.

Step 1: run

mvn clean install -DskipTests

and wait for it to finish (it automatically downloads a lot of artifacts over the network). With a good connection it completes quickly; with a flaky one it took several attempts.
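
If a retry fails partway through, Maven can also resume from the failing module instead of starting from scratch; the module name below is only illustrative:

mvn clean install -DskipTests -rf :hadoop-hdfs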

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 4.721 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 1.135 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 4.990 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 0.814 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.552 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 4.834 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 4.277 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 5.709 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 2.516 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [ 53.258 s]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 1.175 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.070 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [01:05 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 4.430 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 28.926 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 0.851 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.092 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.063 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [ 4.983 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [01:01 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.092 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 20.415 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [01:24 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 0.619 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 1.173 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 5.231 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 1.075 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 1.853 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.129 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 0.563 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 0.515 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.112 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 0.132 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.151 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 8.638 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 5.152 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 1.165 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 4.623 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 1.714 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 18.468 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 0.856 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 1.955 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 0.096 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 9.471 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [01:08 min]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 0.841 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 1.113 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 2.258 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 0.981 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 1.185 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 0.098 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 1.246 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 0.427 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.336 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 21.003 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 2.185 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.097 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 0.227 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:47 min
[INFO] Finished at: 2016-01-14T09:05:59+08:00
[INFO] Final Memory: 96M/237M
[INFO] ------------------------------------------------------------------------

Step 2: run the packaging build:

mvn package -Pdist,native -DskipTests -Dtar

The build succeeds:

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 6.143 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 3.923 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 6.345 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.466 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 3.457 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 8.004 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 6.740 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 7.214 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 6.145 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [02:51 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 26.889 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.107 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [04:44 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [04:57 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [01:01 min]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 15.992 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.307 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.160 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [02:30 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [01:06 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.111 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 27.082 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 40.111 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 9.432 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 15.236 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 38.984 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 0.889 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 13.748 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.165 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 7.001 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 4.640 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.177 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 13.099 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.141 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 39.011 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 33.940 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 7.769 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 18.003 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 16.896 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 7.284 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 3.377 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 11.164 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 5.954 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 8.246 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 14.047 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 3.657 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 12.067 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 8.998 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 5.860 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 5.582 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 11.003 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 11.071 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 16.352 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.241 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 8.179 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 7.388 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.048 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:07 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 27:05 min
[INFO] Finished at: 2016-01-14T09:56:35+08:00
[INFO] Final Memory: 77M/237M
[INFO] ------------------------------------------------------------------------

Check the compiled native files; the build is complete.
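
A quick way to confirm the result, assuming the default layout produced by the dist profile (paths may differ):

ls hadoop-dist/target/hadoop-2.5.2.tar.gz
file hadoop-dist/target/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0   # should report a 64-bit ELF shared object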
