Fixing the annoying warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Problem:
Every Hadoop command prints the WARN below. It does no real harm, but seeing it on every single command is irritating, so it has to go.
[root@master logs]# hdfs dfs -cat /output/part-r-
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
A quick search online shows there are two possible causes for this problem.
Solution 1:
Turn on debug logging:
export HADOOP_ROOT_LOGGER=DEBUG,console
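(Side note: as far as I know this variable is only read by the hadoop launcher scripts and only affects the current shell, so once debugging is done it can be reverted — a minimal sketch:)
export HADOOP_ROOT_LOGGER=INFO,console   # back to the default log level
# or simply: unset HADOOP_ROOT_LOGGER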
Run the command again and look at the highlighted (red) part — the util.NativeCodeLoader lines with the UnsatisfiedLinkError below.
[root@master native]# hdfs dfs -cat /output/part-r-
// :: DEBUG util.Shell: setsid exited with exit code
// :: DEBUG conf.Configuration: parsing URL jar:file:/opt/hadoop/hadoop-2.9./share/hadoop/common/hadoop-common-2.9..jar!/core-default.xml
// :: DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@20e2cbe0
// :: DEBUG conf.Configuration: parsing URL file:/opt/hadoop/hadoop-2.9./etc/hadoop/core-site.xml
// :: DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@a67c67e
// :: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
// :: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
// :: DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
// :: DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
// :: DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
// :: DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
// :: DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
// :: DEBUG security.Groups: Creating new Groups object
// :: DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/12/20 17:20:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0)
18/12/20 17:20:44 DEBUG util.NativeCodeLoader: java.library.path=/opt/hadoop/hadoop-2.9.2/lib:/opt/hadoop/hadoop-2.9.2/lib/native
18/12/20 17:20:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: DEBUG util.PerformanceAdvisory: Falling back to shell based
// :: DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
// :: DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=; warningDeltaMs=
// :: DEBUG core.Tracer: sampler.classes = ; loaded no samplers
// :: DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
// :: DEBUG security.UserGroupInformation: hadoop login
// :: DEBUG security.UserGroupInformation: hadoop login commit
// :: DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
// :: DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
// :: DEBUG security.UserGroupInformation: User entry: "root"
// :: DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
// :: DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
// :: DEBUG core.Tracer: sampler.classes = ; loaded no samplers
// :: DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
// :: DEBUG fs.FileSystem: Loading filesystems
// :: DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: ftp:// = class org.apache.hadoop.fs.ftp.FTPFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
// :: DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
// :: DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
// :: DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
// :: DEBUG fs.FileSystem: hftp:// = class org.apache.hadoop.hdfs.web.HftpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
// :: DEBUG fs.FileSystem: hsftp:// = class org.apache.hadoop.hdfs.web.HsftpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
// :: DEBUG fs.FileSystem: Looking for FS supporting hdfs
// :: DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
// :: DEBUG fs.FileSystem: Looking in service filesystems for implementation class
// :: DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
// :: DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
// :: DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
// :: DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
// :: DEBUG impl.DfsClientConf: dfs.domain.socket.path =
// :: DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to
// :: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
// :: DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@932bc4a
// :: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1b1426f4
// :: DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
// :: DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
// :: DEBUG ipc.Client: The ping interval is ms.
// :: DEBUG ipc.Client: Connecting to master/192.168.102.3:
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root: starting, having connections
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root sending # org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 42ms
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root sending # org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
// :: DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
fileLength=
underConstruction=false
blocks=[LocatedBlock{BP--192.168.102.3-:blk_1073741832_1008; getBlockSize()=; corrupt=false; offset=; locs=[DatanodeInfoWithStorage[192.168.102.4:,DS----a4e3-1517663a515a,DISK], DatanodeInfoWithStorage[192.168.102.5:,DS-ca41aefb-6ecd-48c8-a063-dab5052a96d4,DISK]]}]
lastLocatedBlock=LocatedBlock{BP--192.168.102.3-:blk_1073741832_1008; getBlockSize()=; corrupt=false; offset=; locs=[DatanodeInfoWithStorage[192.168.102.5:,DS-ca41aefb-6ecd-48c8-a063-dab5052a96d4,DISK], DatanodeInfoWithStorage[192.168.102.4:,DS----a4e3-1517663a515a,DISK]]}
isLastBlockComplete=true}
// :: DEBUG hdfs.DFSClient: Connecting to datanode 192.168.102.4:
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root sending # org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root got value #
// :: DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 0ms
// :: DEBUG sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.102.4, datanodeId = DatanodeInfoWithStorage[192.168.102.4:,DS----a4e3-1517663a515a,DISK]
hadoop
hbase
hive
mapreduce
spark
sqoop
storm
// :: DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1b1426f4
// :: DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@1b1426f4
// :: DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@1b1426f4
// :: DEBUG ipc.Client: Stopping client
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root: closed
// :: DEBUG ipc.Client: IPC Client () connection to master/192.168.102.3: from root: stopped, remaining connections
// :: DEBUG util.ShutdownHookManager: Completed shutdown in 0.004 seconds; Timeouts:
// :: DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.
This shows the failure is caused by a mismatch between the system's glibc version and the version libhadoop.so requires.
Check the system's libc version:
[root@master native]# ll /lib64/libc.so.
lrwxrwxrwx. root root 12月 : /lib64/libc.so. -> libc-2.12.so
The system's glibc is older than the version `GLIBC_2.14' required by libhadoop.so.1.0.0.
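To confirm both sides of the mismatch, the required and the provided GLIBC symbol versions can be listed with standard tools — a quick sketch (paths as in the debug output above):
# GLIBC symbol versions the Hadoop native library was linked against
objdump -T /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0 | grep GLIBC_
# GLIBC versions the installed system libc actually provides
strings /lib64/libc.so.6 | grep '^GLIBC_'
# overall libc version
ldd --version | head -1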
How to install gcc 4.8 offline (reference):
https://blog.csdn.net/qq805934132/article/details/82893724
Download glibc
1. Build glibc-2.14 (my cluster sits on an internal LAN, so I had to compile it on another server with Internet access):
[root@jrgc130 ~]# wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
[root@jrgc130 ~]# mv glibc-2.14.tar.gz /opt/software
[root@jrgc130 ~]# cd /opt/software
[root@jrgc130 software]# tar xf glibc-2.14.tar.gz
[root@jrgc130 software]# cd glibc-2.14
[root@jrgc130 glibc-2.14]# mkdir build
[root@jrgc130 glibc-2.14]# cd build
[root@jrgc130 build]# ../configure --prefix=/usr/local/glibc-2.14
[root@jrgc130 build]# make -j4
[root@jrgc130 build]# make install
The build failed here because too many libraries were missing; I'll find a way around that later.
Solution 2:
The other cause: the hadoopXXX.tar.gz downloaded from the Apache Hadoop site was actually built on a 32-bit machine (annoying, right?), while my cluster is 64-bit, so loading the .so files fails. This mostly does not affect using Hadoop, but if you run Mahout or other jobs that depend on the native libraries, the failed load can make the task exit, so the WARN is still worth fixing.
Steps:
1. Download the hadoop-2.9.2-src.tar.gz source: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.9.2/hadoop-2.9.2-src.tar.gz
2. Compile it on a 64-bit machine (my cluster machines are on an internal LAN, so I had to use a server that can reach the Internet).
3. Replace the old $HADOOP_HOME/lib/native with the newly compiled native directory (see the sketch below).
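A minimal sketch of step 3, assuming the freshly built libraries sit in some directory /path/to/new/native (a placeholder — the actual build output location is discussed after the compilation steps):
mv $HADOOP_HOME/lib/native $HADOOP_HOME/lib/native.bak    # keep the old libraries as a backup
cp -r /path/to/new/native $HADOOP_HOME/lib/native         # drop in the newly built ones
hadoop checknative -a                                      # should now report the native libraries as loaded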
Compiling Hadoop from source
Build steps:
1. Install the JDK and configure the environment variables.
2. Install Maven and configure the environment variables:
export MAVEN_HOME=/home/yuany/hadoop/apache-maven-3.6.
export PATH=$MAVEN_HOME/bin:/home/yuany/android-studio/bin:/usr/local/lib/anaconda2/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source ~/.bashrc
mvn -version
3. Install the build dependencies:
sudo apt-get install g++ autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
4. Install protobuf:
- Download the protobuf source: https://github.com/protocolbuffers/protobuf/releases
- Build and install protobuf:
yuany@Mobile238:~/hadoop$ tar xzvf protobuf-all-3.6..tar.gz
yuany@Mobile238:~/hadoop$ cd protobuf-3.6./
yuany@Mobile238:~/hadoop/protobuf-3.6.$ ./configure --prefix=/usr/local/protobuf
yuany@Mobile238:~/hadoop/protobuf-3.6.$ make
yuany@Mobile238:~/hadoop/protobuf-3.6.$ make install
- Installation is done; now configure it: edit ~/.bashrc (vim ~/.bashrc) and add:
export PATH=$PATH:/usr/local/protobuf/bin/
export PKG_CONFIG_PATH=/usr/local/protobuf/lib/pkgconfig/
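After reloading the shell configuration, it is worth checking that the newly installed protoc is the one picked up on the PATH. A quick check (as an aside, the Hadoop 2.x build documentation asks for ProtocolBuffer 2.5.0, so if mvn later complains about the protoc version, installing protobuf 2.5.0 instead may be necessary):
source ~/.bashrc
which protoc        # should point at /usr/local/protobuf/bin/protoc
protoc --version    # the version the Hadoop build will see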
Compiling Hadoop
Copy the source onto the Linux build machine, cd into the source directory /home/yuany/hadoop/hadoop-2.9.2-src, and run:
mvn clean package -Pdist,native -DskipTests -Dtar
Then wait... and wait some more. If Maven eventually finishes with a BUILD SUCCESS summary, the compilation succeeded.
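To close the loop with step 3 above: in my understanding the freshly built native libraries end up under hadoop-dist/target inside the source tree (treat the exact path below as an assumption to verify on your build machine); they can then be copied back over the cluster's old ones:
# expected location of the newly built native libraries (verify on the build machine)
ls /home/yuany/hadoop/hadoop-2.9.2-src/hadoop-dist/target/hadoop-2.9.2/lib/native
# copy them onto the cluster node(s) and check that Hadoop now loads them
cp -r hadoop-dist/target/hadoop-2.9.2/lib/native/* $HADOOP_HOME/lib/native/
hadoop checknative -a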