Copied from the web:

| name | value | description |
| --- | --- | --- |
| dfs.namenode.logging.level | info | The logging level for the DFS namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over-replication and block creations/deletions), or "all". |
| dfs.secondary.http.address | 0.0.0.0:50090 | The secondary namenode HTTP server address and port. If the port is 0 then the server will start on a free port. |
| dfs.datanode.address | 0.0.0.0:50010 | The address the datanode server listens on. If the port is 0 then the server will start on a free port. |
| dfs.datanode.http.address | 0.0.0.0:50075 | The datanode HTTP server address and port. If the port is 0 then the server will start on a free port. |
| dfs.datanode.ipc.address | 0.0.0.0:50020 | The datanode IPC server address and port. If the port is 0 then the server will start on a free port. |
| dfs.datanode.handler.count | 3 | The number of server threads for the datanode. |
| dfs.http.address | 0.0.0.0:50070 | The address and base port on which the DFS namenode web UI listens. If the port is 0 then the server will start on a free port. |
| dfs.https.enable | false | Decides whether HTTPS (SSL) is supported on HDFS. |
| dfs.https.need.client.auth | false | Whether SSL client certificate authentication is required. |
| dfs.https.server.keystore.resource | ssl-server.xml | Resource file from which SSL server keystore information will be extracted. |
| dfs.https.client.keystore.resource | ssl-client.xml | Resource file from which SSL client keystore information will be extracted. |
| dfs.datanode.https.address | 0.0.0.0:50475 | |
| dfs.https.address | 0.0.0.0:50470 | |
| dfs.datanode.dns.interface | default | The name of the network interface from which a datanode should report its IP address. |
| dfs.datanode.dns.nameserver | default | The host name or IP address of the name server (DNS) which a datanode should use to determine the host name used by the namenode for communication and display purposes. |
| dfs.replication.considerLoad | true | Decides whether chooseTarget considers the target's load. |
| dfs.default.chunk.view.size | 32768 | The number of bytes of a file to view in the browser. |
| dfs.datanode.du.reserved | 0 | Reserved space in bytes per volume. Always leave this much space free for non-DFS use. |
| dfs.name.dir | ${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS namenode should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. |
| dfs.name.edits.dir | ${dfs.name.dir} | Determines where on the local filesystem the DFS namenode should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Defaults to the same value as dfs.name.dir. |
| dfs.web.ugi | webuser,webgroup | The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2,... |
| dfs.permissions | true | If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one value to the other does not change the mode, owner or group of files or directories. |
| dfs.permissions.supergroup | supergroup | The name of the group of super-users. |
| dfs.data.dir | ${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS datanode should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. |
| dfs.replication | 3 | Default block replication. The actual number of replicas can be specified when the file is created. The default is used if replication is not specified at creation time. |
| dfs.replication.max | 512 | Maximal block replication. |
| dfs.replication.min | 1 | Minimal block replication. |
| dfs.block.size | 67108864 | The default block size for new files. |
| dfs.df.interval | 60000 | Disk usage statistics refresh interval in msec. |
| dfs.client.block.write.retries | 3 | The number of retries for writing blocks to the datanodes before we signal failure to the application. |
| dfs.blockreport.intervalMsec | 3600000 | Determines the block reporting interval in milliseconds. |
| dfs.blockreport.initialDelay | 0 | Delay for the first block report in seconds. |
| dfs.heartbeat.interval | 3 | Determines the datanode heartbeat interval in seconds. |
| dfs.namenode.handler.count | 10 | The number of server threads for the namenode. |
| dfs.safemode.threshold.pct | 0.999f | Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent. |
| dfs.safemode.extension | 30000 | Determines the extension of safe mode in milliseconds after the threshold level is reached. |
| dfs.balance.bandwidthPerSec | 1048576 | Specifies the maximum amount of bandwidth that each datanode can utilize for balancing, in bytes per second. |
| dfs.hosts | | Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. |
| dfs.hosts.exclude | | Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. |
| dfs.max.objects | 0 | The maximum number of files, directories and blocks DFS supports. A value of zero indicates no limit to the number of objects. |
| dfs.namenode.decommission.interval | 30 | Namenode periodicity in seconds to check whether decommission is complete. |
| dfs.namenode.decommission.nodes.per.interval | 5 | The number of nodes the namenode checks for decommission completion in each dfs.namenode.decommission.interval. |
| dfs.replication.interval | 3 | The periodicity in seconds with which the namenode computes replication work for datanodes. |
| dfs.access.time.precision | 3600000 | The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS. |
| dfs.support.append | false | Does HDFS allow appends to files? This is currently set to false because there are bugs in the append code, and it is not supported in any production cluster. |
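These defaults come from hdfs-default.xml; a cluster overrides them with matching name/value properties in hdfs-site.xml. A minimal sketch of that override mechanism (the property names are real, but the site-file values here are purely illustrative, and the loader is a simplification of what Hadoop's Configuration class does):

```python
# Sketch: how hdfs-site.xml name/value pairs override the defaults above.
import xml.etree.ElementTree as ET

# A few defaults from the table (illustrative subset).
DEFAULTS = {
    "dfs.replication": "3",
    "dfs.block.size": "67108864",
    "dfs.safemode.threshold.pct": "0.999f",
}

# Hypothetical site override file.
SITE_XML = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>"""

def load_config(site_xml: str, defaults: dict) -> dict:
    """Apply <property> name/value pairs from the site XML over the defaults."""
    conf = dict(defaults)
    root = ET.fromstring(site_xml)
    for prop in root.findall("property"):
        conf[prop.findtext("name")] = prop.findtext("value")
    return conf

conf = load_config(SITE_XML, DEFAULTS)
print(conf["dfs.replication"])   # overridden by the site file -> 2
print(conf["dfs.block.size"])    # falls back to the default -> 67108864
```

Any property not mentioned in the site file keeps its default, which is why the table above is worth knowing even for a tuned cluster.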

docs/hdfs-default.html
This page documents the meaning of the HDFS parameters. Two are worth highlighting:
dfs.replication.min
minimum block replication
dfs.safemode.threshold.pct
safe-mode threshold percentage

"Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent."
That is: it sets the fraction of blocks that must meet the minimum replication requirement; a value of 0 or less means the namenode does not enter safe mode at startup, and a value greater than 1 keeps it in safe mode permanently.

dfs.replication.min defines the minimum number of replicas a block must have.

dfs.safemode.threshold.pct defines the threshold: while fewer than this fraction of blocks meet the minimum replication requirement, the system stays in safe mode. The value you set here should therefore be between 0 and 1, representing the smallest replication shortfall you consider safe. If you set a value greater than 1, the system remains in safe mode forever and cannot serve clients. If you set it too low, you should ask whether your data is actually safely replicated. It is best not to change this value at all and to keep the default of 0.999 (99.9%).
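If you do need to change these two properties, they go into hdfs-site.xml like any other override. A minimal sketch (the values shown are simply the defaults restated, not a tuning recommendation):

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: site-specific overrides of hdfs-default.xml -->
<configuration>
  <property>
    <name>dfs.replication.min</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0.999f</value>
  </property>
</configuration>
```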
