Formatting HDFS
Working on Hadoop, especially on test clusters, I have managed to break my HDFS layer, sometimes beyond redemption, or at least beyond any redemption I was willing to invest time in. And for whatever other reason, sometimes you simply want to scrap your HDFS and start anew.
Without going into too much detail, which is outside the scope of this blog post, HDFS is mainly composed of two types of elements:
- Namenode: At a high level, the namenode stores the HDFS namespace; think of it as your file system tree.
- Datanode: This is where your data is actually stored.
The Namenode: /hadoop/hdfs/namenode/current
All new edits are written to the edit log and regularly merged into an fsimage file for more compact storage. An fsimage file represents the file system state after all modifications up to a specific transaction ID. The seen_txid file holds the ID of the last seen transaction, and the VERSION file contains the cluster and HDFS identifiers.
For a more detailed explanation: HDFS metadata
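As a concrete illustration, this is roughly what a namenode's current directory and VERSION file look like; all IDs, timestamps, and transaction numbers below are made-up placeholders, yours will differ:

```bash
$ ls /hadoop/hdfs/namenode/current
edits_0000000000000000001-0000000000000000007
edits_inprogress_0000000000000000008
fsimage_0000000000000000000
fsimage_0000000000000000000.md5
seen_txid
VERSION

$ cat /hadoop/hdfs/namenode/current/VERSION
#Thu Nov 05 10:00:00 UTC 2019
namespaceID=1027591049
clusterID=CID-7f2a0e1b-3c4d-4e5f-8a9b-0c1d2e3f4a5b
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1234567890-192.168.1.10-1572948000000
layoutVersion=-63
```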
The Datanode: /hadoop/hdfs/data/current
In our example we will only focus on the VERSION file, which is very close to the namenode's VERSION file.
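For comparison, here is an illustrative datanode VERSION file (again, all IDs are made up); the key point is that its clusterID must match the namenode's, which is exactly what goes out of sync when only part of the cluster is formatted:

```bash
$ cat /hadoop/hdfs/data/current/VERSION
#Thu Nov 05 10:00:05 UTC 2019
storageID=DS-2d3f4a5b-6c7d-8e9f-0a1b-2c3d4e5f6a7b
clusterID=CID-7f2a0e1b-3c4d-4e5f-8a9b-0c1d2e3f4a5b
cTime=0
datanodeUuid=9a8b7c6d-5e4f-3a2b-1c0d-e9f8a7b6c5d4
storageType=DATA_NODE
layoutVersion=-56
```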
HDFS non-HA formatting
In non-HA mode, everything is simple enough:
- Stop the HDFS service
- Run hadoop namenode -format (as the hdfs user)
- Clear the data directory on all datanodes
- Restart HDFS
At this point your HDFS layer is empty, and if you check the VERSION files of the namenode and the datanodes, their cluster IDs should coincide.
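Put together, a minimal sketch of the non-HA procedure looks like the commands below; the data directory path and the way services are stopped and started (Ambari, Cloudera Manager, or the daemon scripts) are assumptions you will need to adapt to your cluster:

```bash
# 1. Stop the HDFS service (via your cluster manager or the daemon scripts)

# 2. On the namenode, format the metadata as the hdfs user
#    (hdfs namenode -format is the current form of hadoop namenode -format)
sudo -u hdfs hdfs namenode -format -force

# 3. On every datanode, clear the data directory
#    (adjust the path to your dfs.datanode.data.dir setting)
rm -rf /hadoop/hdfs/data/*

# 4. Restart HDFS, then check that the cluster IDs coincide
grep clusterID /hadoop/hdfs/namenode/current/VERSION   # on the namenode
grep clusterID /hadoop/hdfs/data/current/VERSION       # on each datanode
```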
HDFS HA formatting
In HA, things get a little more complicated. The standby and active namenodes have a shared edit log managed by the journal node service. HA relies on a failover mechanism to swap the standby namenode to active, and as with many other systems in Hadoop, this uses ZooKeeper (through the ZKFailoverController). As you can see, a couple more pieces need to be made aware of a formatting action.
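Before touching anything, it can be useful to check which namenode is currently active; the service IDs nn1 and nn2 below are assumptions and must match the dfs.ha.namenodes.<nameservice> entries in your hdfs-site.xml:

```bash
# List the configured namenode hosts for this cluster
hdfs getconf -namenodes

# Query the HA state (active / standby) of each namenode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```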
The initial steps are very close to the non-HA case:
- Stop the HDFS service
- Start only the journal nodes (as they will need to be made aware of the formatting)
- On the first namenode (as the hdfs user):
  - hadoop namenode -format
  - hdfs namenode -initializeSharedEdits -force (to reinitialize the journal nodes)
  - hdfs zkfc -formatZK -force (to reinitialize the HA state in ZooKeeper)
- Restart that first namenode
- On the second namenode:
  - hdfs namenode -bootstrapStandby -force (to force a sync with the first namenode)
- On every datanode, clear the data directory
- Restart the HDFS service
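As with the non-HA case, here is a minimal sketch of the HA sequence as shell commands; how you stop and start individual components (journal nodes, namenodes, the full service) depends on your cluster manager, and the data directory path and the nn1/nn2 service IDs are again assumptions:

```bash
# 1. Stop the HDFS service, then start only the journal nodes

# 2. On the first namenode, as the hdfs user
sudo -u hdfs hdfs namenode -format -force
sudo -u hdfs hdfs namenode -initializeSharedEdits -force   # reinitialize the journal nodes
sudo -u hdfs hdfs zkfc -formatZK -force                    # reinitialize the HA state in ZooKeeper

# 3. Restart the first namenode, then on the second namenode
sudo -u hdfs hdfs namenode -bootstrapStandby -force        # sync metadata from the first namenode

# 4. On every datanode, clear the data directory
#    (adjust the path to your dfs.datanode.data.dir setting)
rm -rf /hadoop/hdfs/data/*

# 5. Restart the full HDFS service and verify one namenode is active,
#    the other standby (nn1/nn2 must match your configuration)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```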
This was a very simple step-by-step guide to formatting. In a later article we will cover actually repairing common errors in HDFS.