1. Notable features of HDFS

  • Hadoop, including HDFS, is well suited for distributed storage and distributed processing using commodity hardware. It is fault tolerant, scalable, and extremely simple to expand. MapReduce, well known for its simplicity and applicability to a large set of distributed applications, is an integral part of Hadoop. (Distributed storage and processing.)
  • HDFS is highly configurable, with a default configuration well suited for many installations. Most of the time, configuration needs to be tuned only for very large clusters. (Sensible defaults.)
  • Hadoop is written in Java and is supported on all major platforms. (Platform portability.)
  • Hadoop supports shell-like commands to interact with HDFS directly. (Shell-like command interface.)
  • The NameNode and DataNodes have built-in web servers that make it easy to check the current status of the cluster. (Built-in web UIs for monitoring the cluster.)
  • New features and improvements are regularly implemented in HDFS. The following is a subset of useful features in HDFS:
    • File permissions and authentication.
    • Rack awareness: to take a node's physical location into account while scheduling tasks and allocating storage.
    • Safemode: an administrative mode for maintenance.
    • fsck: a utility to diagnose health of the file system, to find missing files or blocks.
    • fetchdt: a utility to fetch DelegationToken and store it in a file on the local system.
    • Balancer: tool to balance the cluster when the data is unevenly distributed among DataNodes.
    • Upgrade and rollback: after a software upgrade, it is possible to rollback to HDFS' state before the upgrade in case of unexpected problems.
    • Secondary NameNode: performs periodic checkpoints of the namespace and helps keep the size of file containing log of HDFS modifications within certain limits at the NameNode.
    • Checkpoint node: performs periodic checkpoints of the namespace and helps minimize the size of the log stored at the NameNode containing changes to the HDFS. Replaces the role previously filled by the Secondary NameNode, though it is not yet battle hardened. The NameNode allows multiple Checkpoint nodes simultaneously, as long as there are no Backup nodes registered with the system.
    • Backup node: An extension to the Checkpoint node. In addition to checkpointing it also receives a stream of edits from the NameNode and maintains its own in-memory copy of the namespace, which is always in sync with the active NameNode namespace state. Only one Backup node may be registered with the NameNode at once.
 
2. Web UI
    The NameNode web UI listens on port 50070 by default.
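    A quick way to poke at it from a shell (namenode-host is a placeholder for your NameNode's hostname; the /jmx endpoint on the NameNode HTTP server returns the same metrics as JSON, which is handy for scripts):

curl http://namenode-host:50070/       # HTML status page of the NameNode
curl http://namenode-host:50070/jmx    # NameNode metrics as JSON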
 
3. Basic HDFS administration commands
bin/hdfs dfsadmin -<option>   (example invocations follow the list below)
  • -report: reports basic statistics of HDFS. Some of this information is also available on the NameNode front page.
  • -safemode: though usually not required, an administrator can manually enter or leave Safemode.
  • -finalizeUpgrade: removes the previous backup of the cluster made during the last upgrade.
  • -refreshNodes: updates the NameNode with the set of DataNodes allowed to connect to it. The NameNode re-reads the DataNode hostnames in the files defined by dfs.hosts and dfs.hosts.exclude. Hosts defined in dfs.hosts are the DataNodes that are part of the cluster; if there are entries in dfs.hosts, only the hosts in it are allowed to register with the NameNode. Entries in dfs.hosts.exclude are DataNodes that need to be decommissioned. DataNodes complete decommissioning when all replicas from them have been replicated to other DataNodes. Decommissioned nodes are not automatically shut down and are not chosen as targets for new replicas.
  • -printTopology: prints the topology of the cluster as a tree of racks and the DataNodes attached to them, as viewed by the NameNode.
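    A few typical invocations, run as the HDFS administrator (treat these as a sketch; the exact option set varies a little across Hadoop versions):

bin/hdfs dfsadmin -report           # capacity, remaining space, and per-DataNode statistics
bin/hdfs dfsadmin -safemode get     # query whether the NameNode is in Safemode
bin/hdfs dfsadmin -refreshNodes     # re-read dfs.hosts / dfs.hosts.exclude after editing them
bin/hdfs dfsadmin -printTopology    # racks and the DataNodes attached to them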
 
4. Secondary NameNode
    The NameNode records modifications to the file system by appending them to an edits log in its local file system. When the NameNode starts, it first reads the HDFS state from an image file, applies the edits from the log to that image, and then opens a new edits log for subsequent changes. Because the NameNode merges the image and the edits log only at startup, the log can grow very large, and the next restart can take a long time while all the accumulated edits are merged.
    The Secondary NameNode periodically merges the edits log with the image on the NameNode's behalf, keeping the log size within a limit. It normally runs on a different machine from the primary NameNode, but that machine should be provisioned like the NameNode.
    The checkpoint process on the Secondary NameNode is controlled by two configuration parameters:
  • dfs.namenode.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints, and
  • dfs.namenode.checkpoint.txns, set to 1 million by default, defines the number of uncheckpointed transactions on the NameNode which will force an urgent checkpoint, even if the checkpoint period has not been reached.
dfs.namenode.checkpoint.period: the maximum interval between two consecutive checkpoints.
dfs.namenode.checkpoint.txns: the number of uncheckpointed transactions that forces a checkpoint even if the period has not elapsed; the default is 1 million (for example, if 1 million edits accumulate within 10 minutes, a checkpoint runs after those 10 minutes instead of waiting the full hour).
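    To see the values in effect on a node, hdfs getconf can read them from the loaded configuration (the outputs shown in the comments are the defaults, for illustration only):

bin/hdfs getconf -confKey dfs.namenode.checkpoint.period   # 3600 (seconds, i.e. 1 hour)
bin/hdfs getconf -confKey dfs.namenode.checkpoint.txns     # 1000000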
 
5. Checkpoint node
    Very similar to the Secondary NameNode. The difference is that the Checkpoint node downloads the current image and edits log from the NameNode, merges them locally, and then uploads the new image back to the running NameNode.
dfs.namenode.backup.address       address of the node
dfs.namenode.backup.http-address  HTTP host:port of the node
dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns control checkpointing here in the same way.
The Checkpoint node and the Secondary NameNode are essentially the same thing under different names.
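    A minimal sketch of starting one (the addresses are placeholders; set them in hdfs-site.xml on the checkpoint machine):

# hdfs-site.xml on the checkpoint machine (illustrative values):
#   dfs.namenode.backup.address      = checkpoint-host:50100
#   dfs.namenode.backup.http-address = checkpoint-host:50105
bin/hdfs namenode -checkpoint    # start the Checkpoint node process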
 
6. Backup node
    The Backup node provides the same checkpointing function as the Checkpoint node, but in addition it receives the stream of namespace edits from the NameNode as they happen and applies them to its own in-memory copy of the namespace (note that the NameNode itself only merges edits when it restarts), so the Backup node is always an up-to-date replica of the NameNode's namespace.
    Currently only one Backup node per cluster is supported; multiple Backup nodes may be allowed in the future. Once a Backup node is registered, no Checkpoint node can register with the cluster. The Backup node uses the same configuration as the Checkpoint node (dfs.namenode.backup.address and dfs.namenode.backup.http-address) and is started with bin/hdfs namenode -backup.
 
7. Import checkpoint
    If the image and edits files are lost, the latest checkpoint produced by a Checkpoint node can be imported instead. Three things are involved:
dfs.namenode.name.dir        the NameNode metadata directory
dfs.namenode.checkpoint.dir  the directory holding the image uploaded by the Checkpoint node
Start the NameNode with the -importCheckpoint option.
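    A sketch of the procedure (the paths are placeholders, not recommendations):

# hdfs-site.xml (illustrative values):
#   dfs.namenode.name.dir       = /data/dfs/name            <- directory to populate
#   dfs.namenode.checkpoint.dir = /data/dfs/namesecondary   <- contains the saved checkpoint
bin/hdfs namenode -importCheckpoint    # load the checkpoint and save it into dfs.namenode.name.dir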
 
8. Balancer
    Data in HDFS is not always placed uniformly across the DataNodes. Block placement has to weigh the following considerations (an example balancer run follows the list):
 
  • Policy to keep one of the replicas of a block on the same node as the node that is writing the block.
  • Need to spread different replicas of a block across the racks so that the cluster can survive the loss of a whole rack.
  • One of the replicas is usually placed on the same rack as the node writing to the file so that cross-rack network I/O is reduced.
  • Spread HDFS data uniformly across the DataNodes in the cluster.
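    When DataNode utilization drifts too far apart (for example after adding new nodes), the balancer moves blocks until every DataNode is close to the cluster average. The threshold is a percentage of capacity; 10 is just an illustrative value:

bin/hdfs balancer -threshold 10    # rebalance until each DataNode's utilization is within 10% of the cluster average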
 
9. Rack awareness (omitted)
 
10. Safemode
    When the cluster restarts, the NameNode loads the image and edits log and then waits for the DataNodes to report their blocks, so it does not bring the file system fully online right away. During this time the NameNode is in Safemode and the cluster is effectively read-only. Once enough block reports have arrived, the NameNode leaves Safemode automatically. Safemode can also be entered and left manually.
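    Manual control goes through dfsadmin:

bin/hdfs dfsadmin -safemode get     # show whether Safemode is on
bin/hdfs dfsadmin -safemode enter   # put the NameNode into Safemode
bin/hdfs dfsadmin -safemode leave   # leave Safemode manually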
 
11. fsck
    The fsck command checks files for inconsistencies such as missing blocks. Unlike a traditional fsck, it does not repair the problems it finds, and by default it skips open files. fsck is not a Hadoop shell command; it is run as bin/hdfs fsck.
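    For example (the path is a placeholder; the extra flags are commonly used reporting options):

bin/hdfs fsck /                                   # check the whole namespace
bin/hdfs fsck /user -files -blocks -locations     # also list files, their blocks, and where the blocks live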
 
12. fetchdt
    HDFS provides the fetchdt command to fetch a delegation token and store it in a file on the local file system. A client that cannot authenticate with Kerberos can later use this token to talk to a secure server (the NameNode, for example). (Details omitted.)
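    A rough sketch (exact flags vary slightly between Hadoop versions, so check the usage printed by bin/hdfs fetchdt on yours; the token path is a placeholder):

bin/hdfs fetchdt /tmp/my.delegation.token                     # fetch a token from the configured NameNode
export HADOOP_TOKEN_FILE_LOCATION=/tmp/my.delegation.token    # subsequent HDFS commands use the token instead of Kerberos tickets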
 
13. Recovery mode
    If the only copy of the NameNode metadata is damaged or lost, recovery mode may let you salvage part of it. Start the NameNode with namenode -recover and follow the prompts for each problem it finds; the -force option skips the prompts and lets HDFS pick the default action by itself.
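    The invocation looks like this (stop the NameNode first, and back up the metadata directory before attempting recovery):

bin/hdfs namenode -recover          # interactive: prompts at each problem it encounters
bin/hdfs namenode -recover -force   # non-interactive: always takes the first (default) choice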
 
14. Upgrade and rollback
    (Omitted.)
 
15. File permissions and security
    File permissions in HDFS are similar to those in Linux. The user that starts the NameNode is treated as the HDFS superuser.
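    The usual POSIX-style operations are available through the file system shell (the paths, user, and group below are placeholders):

bin/hdfs dfs -ls /user/alice                              # shows owner, group, and rwx permission bits
bin/hdfs dfs -chmod 750 /user/alice/project               # change permissions
bin/hdfs dfs -chown alice:analytics /user/alice/project   # change owner and group (superuser only)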
 
16. Scalability
    HDFS scales to clusters with thousands of nodes. Each cluster has a single NameNode, so the NameNode's memory is the limiting factor on how large the cluster can grow.