Hadoop error: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/06/14 10:44:58 WARN common.Util: Path /opt/hadoopdata/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/06/14 10:44:58 WARN common.Util: Path /opt/hadoopdata/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/06/14 10:44:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1 Fixing the NativeCodeLoader warning
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Investigation: first, turn on console DEBUG logging for the hadoop CLI:
[root@hadoop1 conf]# sed -i '$a export HADOOP_ROOT_LOGGER=DEBUG,console' /etc/profile
[root@hadoop1 conf]# source /etc/profile
[hadoop@hadoop1 sbin]$ hadoop fs -ls /
19/06/14 11:04:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxrwx--- - hadoop supergroup 0 2019-05-31 16:26 /tmp
drwxr-xr-x - hadoop supergroup 0 2019-05-31 16:20 /user
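Hadoop 2.x also ships a dedicated diagnostic for exactly this situation. It reports, library by library (hadoop, zlib, snappy, lz4, bzip2, openssl), whether the native implementation could be loaded; the post doesn't show its output, but it is a quicker check than grepping debug logs:
# Prints a per-library native-support checklist; "hadoop: false" in its
# report means libhadoop.so itself is failing to load.
hadoop checknative -a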
## Check whether the native library matches the system architecture
[root@hadoop1 native]# file /opt/hadoop/lib/native/libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
[hadoop@hadoop2 sbin]$ uname -i
x86_64
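Both are x86_64, so the architecture is not the problem. The next thing worth checking is whether the shared library's runtime dependencies actually resolve; ldd surfaces the real cause directly. On a CentOS 6-era system it typically prints a line like "libc.so.6: version `GLIBC_2.14' not found (required by libhadoop.so.1.0.0)" (illustrative output, not captured from this box):
# Resolve libhadoop's shared-library dependencies; a missing GLIBC_2.14
# version symbol shows up directly in this output.
ldd /opt/hadoop/lib/native/libhadoop.so.1.0.0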
Even with DEBUG enabled for the hadoop command, no more detailed log appeared.
At first I followed the steps commonly suggested online:
vim /opt/hadoop/etc/hadoop/hadoop-env.sh   # add the three export lines below
vim /etc/profile                           # and the same exports here
export HADOOP_HOME=/opt/hadoop/
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
source /etc/profile
vim ~/.bashrc                              # optionally add the exports per-user as well
source ~/.bashrc
After all of that, the warning was still there.
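As an aside: if the goal is only to silence the warning rather than actually load the native library, a common workaround (not what this post does) is to drop the NativeCodeLoader logger below WARN in Hadoop's log4j configuration:
# Hides the warning without fixing native loading:
echo 'log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR' >> /opt/hadoop/etc/hadoop/log4j.properties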
So, to find the real cause, I checked which GLIBC version symbols the system libc provides:
[root@hadoop1 build]# strings /lib64/libc.so.6 | grep GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_PRIVATE
The list stops at GLIBC_2.12, so the system libc lacks the GLIBC_2.14 symbols that the prebuilt libhadoop.so requires. Build glibc 2.14 from source:
wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
tar -zxvf glibc-2.14.tar.gz
cd glibc-2.14 && mkdir build && cd build
../configure --prefix=/opt/glibc-2.14
## If configure fails with:
configure: error: in `/opt/glibc-2.14/build':
configure: error: no acceptable C compiler found in $PATH
## install the build toolchain first: yum install -y gcc gcc-c++ make cmake
make -j4
make install
[root@hadoop1 build]# mkdir /opt/glibc-2.14/etc/
[root@hadoop1 build]# cp /etc/ld.so.c* /opt/glibc-2.14/etc/
cp: omitting directory `/etc/ld.so.conf.d'
[root@hadoop1 build]# ln -sf /opt/glibc-2.14/lib/libc-2.14.so /lib64/libc.so.6
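A word of caution on that last step: /lib64/libc.so.6 is used by every dynamically linked program on the machine, so if the symlink ever ends up pointing at a missing or broken file, even ln itself stops working with "No such file or directory". A commonly cited escape hatch, sketched here under the assumption that the freshly built libc in /opt/glibc-2.14 is intact, is to force-load a working libc just for the repair command:
# Recovery sketch if /lib64/libc.so.6 breaks mid-swap (assumes the new
# libc at /opt/glibc-2.14/lib/libc-2.14.so is usable):
LD_PRELOAD=/opt/glibc-2.14/lib/libc-2.14.so ln -sf /opt/glibc-2.14/lib/libc-2.14.so /lib64/libc.so.6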
[root@hadoop1 build]# strings /lib64/libc.so.6 | grep GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_2.13
GLIBC_2.14
GLIBC_PRIVATE
[hadoop@hadoop1 sbin]$ hadoop fs -ls / ### the warning no longer appears
Found 2 items
drwxrwx--- - hadoop supergroup 0 2019-05-31 16:26 /tmp
drwxr-xr-x - hadoop supergroup 0 2019-05-31 16:20 /user
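To double-check that the native library is actually being loaded now, rather than the warning merely disappearing, the checknative diagnostic from earlier can be re-run:
hadoop checknative -a   # "hadoop: true" in the report confirms libhadoop.so now loads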
2 Fixing the hdfs-site.xml path warning
The original hdfs-site.xml (this is where the "Path /opt/hadoopdata/hdfs/name should be specified as a URI" warning comes from):
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop2:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoopdata/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoopdata/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/opt/hadoopdata/hdfs/snn</value>
</property>
<property>
<name>dfs.namenode.checkpoint.period</name>
<value>3600</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- The properties below were added when enabling HA -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>hadoop1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>hadoop2:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.ns1.nn1</name>
<value>hadoop1:8040</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.ns1.nn2</name>
<value>hadoop2:8040</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>hadoop1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>hadoop2:50070</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns1</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/hadoopdata/hdfs/journal</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
</configuration>
The fix: the local storage directories must be given as file:// URIs. Update these properties and apply the same change on the other nodes:
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop2:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoopdata/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoopdata/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
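Since every node reads its own copy of hdfs-site.xml, the edited file has to be pushed out before restarting. A minimal sketch, assuming the same install path on hadoop2 and hadoop3 as on hadoop1:
# Push the fixed hdfs-site.xml to the other nodes:
for h in hadoop2 hadoop3; do
  scp /opt/hadoop/etc/hadoop/hdfs-site.xml $h:/opt/hadoop/etc/hadoop/
done
# Verify the value the daemons will actually see:
hdfs getconf -confKey dfs.namenode.name.dir   # should print file:///opt/hadoopdata/hdfs/name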
Restart HDFS:
[hadoop@hadoop1 hadoop]$ /opt/hadoop/sbin/start-dfs.sh
Starting namenodes on [hadoop1 hadoop2]
hadoop2: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-hadoop2.out
hadoop1: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop2: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop1.out
hadoop3: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-hadoop3.out
Starting journal nodes [hadoop1 hadoop2 hadoop3]
hadoop1: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop1.out
hadoop2: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop2.out
hadoop3: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-hadoop3.out
Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
hadoop1: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-hadoop1.out
hadoop2: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-hadoop2.out
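To confirm the HA cluster actually came up healthy, and that the URI warning is gone from the NameNode logs, a quick sanity check (assuming the passwordless SSH between nodes already configured for fencing above):
# Each node should list its expected daemons
# (NameNode / DataNode / JournalNode / DFSZKFailoverController):
for h in hadoop1 hadoop2 hadoop3; do ssh $h jps; done
# One NameNode should report active, the other standby:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2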