Hadoop 3.1.2 + HBase 2.2.0: Configuring the LZO Compression Algorithm

A note up front: while configuring HBase to use LZO, I searched through many articles online, and most of them are quite dated. They target old versions, and they generally rely on hadoop-gpl-compression, an old dependency package that has since been replaced by hadoop-lzo. I hope this post helps anyone setting up LZO for Hadoop and HBase.

Part 1: Install the LZO library

1. Download the latest LZO library from: http://www.oberhumer.com/opensource/lzo/download/

2. Extract the archive:

tar -zxvf lzo-2.10.tar.gz

3. Enter the extracted directory and run configure (note that both flags take double dashes):

cd lzo-2.10
./configure --enable-shared --prefix=/usr/local/hadoop/lzo

4. Run make to compile, then make install to install:

make && make install

If the LZO library is not installed, creating a table in HBase with compression set to lzo fails with:

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.RuntimeException: native-lzo library not available Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2314)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:2156)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2048)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:651)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.RuntimeException: native-lzo library not available
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:103)
at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:2384)
at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:2377)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:2154)
... 7 more
Caused by: java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzoCodec.getCompressorType(LzoCodec.java:135)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getCompressor(Compression.java:355)
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:98)
... 10 more

5. With the --prefix above, the libraries are installed to /usr/local/hadoop/lzo/lib (without a prefix they would default to /usr/local/lib). Link them into /usr/lib, or copy them there, so they are on the default library search path:

cd /usr/lib
ln -s /usr/local/hadoop/lzo/lib/* .

6. Download, build, and install lzop:

wget http://www.lzop.org/download/lzop-1.04.tar.gz
tar -zxvf lzop-1.04.tar.gz
cd lzop-1.04
./configure --enable-shared --prefix=/usr/local/hadoop/lzop
make && make install

7. Copy lzop to /usr/bin/ or create a symlink:

ln -s /usr/local/hadoop/lzop/bin/lzop /usr/bin/lzop

Part 2: Install hadoop-lzo

1. Download hadoop-lzo: wget https://github.com/twitter/hadoop-lzo/archive/master.zip (this is a zip archive). To use git instead, clone from: https://github.com/twitter/hadoop-lzo

2. Build hadoop-lzo from source. If Maven is not installed yet, set up a Maven environment first. Unzip master.zip (it extracts to hadoop-lzo-master), enter the directory, update the Hadoop version in pom.xml, and build with Maven:

unzip master.zip
cd hadoop-lzo-master
vim pom.xml
Change hadoop.current.version to match your own Hadoop version; here it is 3.1.2:
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <hadoop.current.version>3.1.2</hadoop.current.version>
  <hadoop.old.version>1.0.4</hadoop.old.version>
</properties>
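If you prefer not to edit pom.xml by hand, the version bump can also be scripted. The sketch below demonstrates the sed substitution in a scratch directory so it is safe to run anywhere; in practice you would run the same sed line against hadoop-lzo-master/pom.xml. It assumes GNU sed (-i with no argument).

```shell
# Demonstrate the in-place version replacement on a sample pom fragment.
workdir=$(mktemp -d)
cat > "$workdir/pom.xml" <<'EOF'
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <hadoop.current.version>2.6.0</hadoop.current.version>
  <hadoop.old.version>1.0.4</hadoop.old.version>
</properties>
EOF
# Rewrite the hadoop.current.version value to the target Hadoop release.
sed -i 's#<hadoop.current.version>[^<]*</hadoop.current.version>#<hadoop.current.version>3.1.2</hadoop.current.version>#' "$workdir/pom.xml"
grep hadoop.current.version "$workdir/pom.xml"
```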

3. Run the following commands in the hadoop-lzo-master directory to build hadoop-lzo:

export CFLAGS=-m64
export CXXFLAGS=-m64
export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include  # the lzo install prefix from Part 1
export LIBRARY_PATH=/usr/local/hadoop/lzo/lib        # the lzo install prefix from Part 1
mvn clean package -Dmaven.test.skip=true

4. After the build finishes, go to target/native/Linux-amd64-64, copy the libgplcompression* files into Hadoop's native directory, and copy hadoop-lzo-xxx.jar into Hadoop's common directory:

cd target/native/Linux-amd64-64
tar -cBf - -C lib . | tar -xBvf - -C ~
cp ~/libgplcompression* $HADOOP_HOME/lib/native/
cp target/hadoop-lzo-0.4.18-SNAPSHOT.jar $HADOOP_HOME/share/hadoop/common/
About the libgplcompression* files: libgplcompression.so and libgplcompression.so.0 are symlinks that point to libgplcompression.so.0.0.0.

Sync the generated libgplcompression* files and target/hadoop-lzo-xxx-SNAPSHOT.jar to the corresponding directories ($HADOOP_HOME/lib/native/ and $HADOOP_HOME/share/hadoop/common/) on every machine in the cluster.
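The distribution step can be scripted. The sketch below is a dry run: it only prints one copy command per node, so the host names (node1, node2, node3 are placeholders) and paths can be checked first; remove the leading echo, or pipe the output to sh, once they match your cluster.

```shell
# Dry run: print the copy commands for each worker node.
HOSTS="node1 node2 node3"              # assumption: your worker hostnames
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}
for h in $HOSTS; do
  echo rsync -a "$HADOOP_HOME/lib/native/" "$h:$HADOOP_HOME/lib/native/"
  echo scp "$HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.18-SNAPSHOT.jar" "$h:$HADOOP_HOME/share/hadoop/common/"
done
```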

Part 3: Configure Hadoop environment variables

1. In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, add:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/hadoop/lzo/lib
# Extra Java CLASSPATH elements. Optional.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:${HADOOP_HOME}/share/hadoop/common
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:$HADOOP_HOME/lib/native

2. Add the following to $HADOOP_HOME/etc/hadoop/core-site.xml:

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
         org.apache.hadoop.io.compress.DefaultCodec,
         com.hadoop.compression.lzo.LzoCodec,
         com.hadoop.compression.lzo.LzopCodec,
         org.apache.hadoop.io.compress.BZip2Codec
  </value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
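Once the codecs are registered and the configuration has been distributed, the setup can be verified from the command line with HBase's built-in CompressionTest tool, the same check HBase runs during table creation (these commands must run on a node with the updated config; the /tmp path is an arbitrary scratch file). It should print SUCCESS:

```
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/lzo-test lzo
```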

If this configuration is missing, creating a table in HBase with compression set to lzo fails with:

Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:103)
at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:2384)
at org.apache.hadoop.hbase.master.HMaster.checkCompression(HMaster.java:2377)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:2154)
... 7 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.buildCodec(Compression.java:128)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.getCodec(Compression.java:114)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getCompressor(Compression.java:353)
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:98)
... 10 more
Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$1.buildCodec(Compression.java:124)
... 13 more

3. Add the following to $HADOOP_HOME/etc/hadoop/mapred-site.xml (the mapred.* keys are older names that Hadoop 3 still accepts through its deprecation mapping; mapreduce.map.output.compress and mapreduce.map.output.compress.codec are the current equivalents):

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
  <name>mapred.child.env</name>
  <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>
</property>

Sync all of the modified configuration files to every machine in the cluster and restart Hadoop. LZO can now be used in Hadoop.

Part 4: Configure LZO in HBase

1. Copy hadoop-lzo-xxx.jar into $HBASE_HOME/lib:

cp target/hadoop-lzo-0.4.18-SNAPSHOT.jar $HBASE_HOME/lib

2. Create a native folder under $HBASE_HOME/lib, and inside $HBASE_HOME/lib/native create a symlink Linux-amd64-64 -> /opt/hadoop/lib/native (i.e. pointing at Hadoop's native library directory):

mkdir -p $HBASE_HOME/lib/native
cd $HBASE_HOME/lib/native
ln -s /opt/hadoop/lib/native Linux-amd64-64


3. Add the following to $HBASE_HOME/conf/hbase-env.sh:

export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/

4. Add the following to $HBASE_HOME/conf/hbase-site.xml:

<property>
  <name>hbase.regionserver.codecs</name>
  <value>lzo</value>
</property>

5. Start HBase; everything should come up normally.
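As a final end-to-end check, a table using LZO can be created from the HBase shell (the table and column family names below are arbitrary examples):

```
create 'lzo_test', {NAME => 'cf', COMPRESSION => 'LZO'}
describe 'lzo_test'
```

If the sanity check passes, describe shows COMPRESSION => 'LZO' on the column family.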

Note on hadoop-gpl-compression:

hadoop-lzo-xxx is the successor to hadoop-gpl-compression-xxx, which was originally hosted on Google Code at http://code.google.com/p/hadoop-gpl-compression/ . Because of licensing issues the project was moved to GitHub, where it now lives as hadoop-lzo: https://github.com/kevinweil/hadoop-lzo . Most online articles about Hadoop LZO compression are based on hadoop-gpl-compression, but that library dates from 2009 and is no longer fully compatible with current Hadoop versions, which leads to a number of problems. I stepped into some of those pitfalls myself; hopefully this writeup spares others the trouble.

With hadoop-gpl-compression-xxx.jar on the classpath, HBase fails to start with:

2019-09-03 11:36:22,771 INFO  [main] lzo.GPLNativeCodeLoader: Loaded native gpl library
2019-09-03 11:36:22,866 WARN [main] lzo.LzoCompressor: java.lang.NoSuchFieldError: lzoCompressLevelFunc
2019-09-03 11:36:22,866 ERROR [main] lzo.LzoCodec: Failed to load/initialize native-lzo library
2019-09-03 11:36:23,169 WARN [main] util.CompressionTest: Can't instantiate codec: lzo
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.RuntimeException: native-lzo library not available
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:103)
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:69)
at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:834)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:565)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:506)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3180)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3198)
Caused by: java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzoCodec.getCompressorType(LzoCodec.java:135)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168)
at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.getCompressor(Compression.java:355)
at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:98)
... 14 more
2019-09-03 11:36:23,183 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.io.IOException: Compression codec lzo not supported, aborting RS construction
at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:835)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:565)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:506)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3180)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3198)
2019-09-03 11:36:23,184 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3187)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3198)
Caused by: java.io.IOException: Compression codec lzo not supported, aborting RS construction
at org.apache.hadoop.hbase.regionserver.HRegionServer.checkCodecs(HRegionServer.java:835)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:565)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:506)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3180)
... 5 more

After deleting hadoop-gpl-compression-xxx.jar and replacing it with hadoop-lzo-xxx.jar, HBase starts normally:

2019-09-03 14:57:43,755 INFO  [main] lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
2019-09-03 14:57:43,758 INFO [main] lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 5dbdddb8cfb544e58b4e0b9664b9d1b66657faf5]
2019-09-03 14:57:43,983 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2019-09-03 14:57:44,088 INFO [main] compress.CodecPool: Got brand-new compressor [.lzo_deflate]

  
