1. Problem

We wrote a shell script to uninstall Cloudera Manager (CM) from a cluster of three Linux servers. After running the script, we reinstalled CM without problems. But when we tried to set up HDFS we hit an error: **failed to format NameNode. Cannot find CDH's bigtop-detect-javahome.**

2. Thinking

  1. Google suggested the problem might be caused by a leftover /dfs folder, but even after running rm -rf /dfs the error still occurred.
  2. The error message pointed to /usr/libexec/bigtop-detect-javahome, so we suspected a problem with the JAVA_HOME variable. First we commented out the JAVA_HOME lines in /etc/profile; to make the change take effect we ran source /etc/profile and reconnected the Xshell session. However, the error still occurred.
  3. Rechecking the uninstall script, we found that it deleted the files /usr/bin/hadoop*, which are important.

The relevant lines in /etc/profile now look like this:
#export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
#export PATH=$JAVA_HOME/bin:$PATH
#export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
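
Before copying anything back, it is worth confirming exactly what the uninstall script removed and what JAVA_HOME currently resolves to. A minimal check sketch (the paths are just the standard CDH locations discussed in this post):

#!/bin/bash
# Sanity check after the uninstall script: do the CDH wrapper, libraries
# and the Bigtop detection script still exist on this server?
for f in /usr/bin/hadoop /usr/lib/hadoop /usr/lib/bigtop-utils/bigtop-detect-javahome; do
    if [ -e "$f" ]; then
        echo "OK      $f"
    else
        echo "MISSING $f"
    fi
done
# Show what JAVA_HOME currently resolves to (empty if commented out in /etc/profile)
echo "JAVA_HOME=${JAVA_HOME:-<not set>}"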

3. Solve

The following steps should be carried out on every server in the cluster.

3.1 Check that the file hadoop is available

cd to the folder /usr/bin and check whether the file hadoop is available. Its content is as follows:

#!/bin/bash
# Autodetect JAVA_HOME if not defined
. /usr/lib/bigtop-utils/bigtop-detect-javahome
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_MAPRED_HOME=/usr/lib/hadoop
export HADOOP_LIBEXEC_DIR=//usr/lib/hadoop/libexec
export HADOOP_CONF_DIR=/etc/hadoop/conf
exec /usr/lib/hadoop/bin/hadoop "$@"

We found that it sources the file bigtop-detect-javahome and sets HADOOP_HOME.

Unfortunately, /usr/bin/hadoop* and /usr/lib/hadoop* had been deleted by our uninstall script, so we copied those files and folders from another cluster to ours.
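
A sketch of how that copy could be done with rsync (healthy-node is a placeholder hostname for a working node in the other cluster; make sure the CDH versions match before copying):

#!/bin/bash
# Restore the deleted Hadoop wrapper and libraries from a working node.
# "healthy-node" is a placeholder hostname; run this on the broken server.
rsync -av 'healthy-node:/usr/bin/hadoop*' /usr/bin/
rsync -av 'healthy-node:/usr/lib/hadoop*' /usr/lib/
# Verify the wrapper works again (also needs bigtop-detect-javahome from section 3.2 and a JDK)
hadoop version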

3.2 Check that the file bigtop-detect-javahome is available

cd to the folder /usr/lib/bigtop-utils and check whether the file bigtop-detect-javahome is available. Its content is as follows:

# attempt to find java
if [ -z "${JAVA_HOME}" ]; then
  for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}; do
    for candidate in `ls -rvd ${candidate_regex}* 2>/dev/null`; do
      if [ -e ${candidate}/bin/java ]; then
        export JAVA_HOME=${candidate}
        break 2
      fi
    done
  done
fi

This file sets the JAVA_HOME variable when the system does not define it in /etc/profile: it scans a list of candidate JDK locations and exports the first one that contains bin/java.
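
To confirm the detection actually finds a JDK on each server, the script can be sourced by hand in a fresh shell. A minimal sketch:

#!/bin/bash
# Source the Bigtop detection script and print what it exports.
unset JAVA_HOME                                  # start from a clean state
. /usr/lib/bigtop-utils/bigtop-detect-javahome
if [ -n "${JAVA_HOME}" ]; then
    echo "Detected JAVA_HOME=${JAVA_HOME}"
    "${JAVA_HOME}/bin/java" -version             # should print the JDK version
else
    echo "No JDK detected; install a JDK or set JAVA_HOME explicitly"
fi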

Finally, we found that HDFS could be installed regardless of whether /etc/profile defined the JAVA_HOME variable or not.

3.3 Going further

Going through the first two steps is enough to solve the problem, but further searching showed that /etc/default/ is the real configuration folder!

Reference: http://blog.sina.com.cn/s/blog_5d9aca630101pxr1.html

Add export JAVA_HOME=path to /etc/default/bigtop-utils, then run source /etc/default/bigtop-utils.

You only need to do this once, and all CDH components will use that variable to figure out where the JDK is.
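
A minimal sketch of that change, using the JDK path from our /etc/profile above as the example value (substitute your own JDK path):

#!/bin/bash
# Persist JAVA_HOME for all CDH components via Bigtop's config file.
# The JDK path below is the one from our /etc/profile; adjust it to your installation.
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera' >> /etc/default/bigtop-utils
. /etc/default/bigtop-utils
echo "JAVA_HOME is now ${JAVA_HOME}"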

4. Conclusion

  1. Always pay attention to the log file content and how it relates to the error.
  2. Make the best use of your time and don't hesitate to ask others for advice; otherwise, the problem will just keep existing.
