This covers only the most basic configuration; for AIX, Kerberos, and other setups, see http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html

After the NFSv3 export is mounted locally, the following operations are supported:

  • Users can browse the HDFS file system through their local file system on NFSv3 client compatible operating systems.
  • Users can download files from the HDFS file system on to their local file system.
  • Users can upload files from their local file system directly to the HDFS file system.
  • Users can stream data directly to HDFS through the mount point. File append is supported but random write is not supported. A few example commands are sketched below.
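As a quick illustration of these operations, here is a minimal shell sketch; the mount point /hdfs_nfs and the paths under it are assumptions for this example, not part of the official docs.

# Browse HDFS through the local mount point (hypothetical mount point /hdfs_nfs)
[root]> ls -l /hdfs_nfs/user

# Download a file from HDFS to the local file system
[root]> cp /hdfs_nfs/user/hadoop/part-00000 /tmp/part-00000

# Upload a local file directly into HDFS
[root]> cp /var/log/app.log /hdfs_nfs/user/hadoop/app.log

# Stream data through the mount point; appending works, but seeking
# back into an existing file (random write) will fail
[root]> gzip -c /var/log/big.log > /hdfs_nfs/user/hadoop/big.log.gz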

I. Official configuration

1. Update the relevant settings in core-site.xml

<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>root,users-group1,users-group2</value>
  <description>The 'nfsserver' user is allowed to proxy all members of the 'users-group1' and 'users-group2' groups. Note that in most cases you will need to include the group "root" because the user "root" (which usually belongs to the "root" group) will generally be the user that initially executes the mount on the NFS client system. Set this to '*' to allow the nfsserver user to proxy any group.
  Note: the NFS gateway uses a proxy user to access HDFS on behalf of all NFS users. In non-secure mode, the user that runs the gateway is the proxy user, so replace 'nfsserver' in the property name with the name of the user that starts nfs3.
  </description>
</property>

<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-client-host1.com</value>
  <description>This is the host where the nfs gateway is running. Set this to '*' to allow requests from any hosts to be proxied.
  Note: these are the hostnames from which mounting is allowed.
  </description>
</property>
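After editing core-site.xml, one way to confirm the values took effect is to query the running configuration. A minimal sketch, assuming the proxy user is literally 'nfsserver' (substitute your own user name in the keys):

[hdfs]$ $HADOOP_HOME/bin/hdfs getconf -confKey hadoop.proxyuser.nfsserver.groups
[hdfs]$ $HADOOP_HOME/bin/hdfs getconf -confKey hadoop.proxyuser.nfsserver.hosts

# if you change proxy user settings on a running cluster, they can be
# reloaded on the NameNode without a restart:
[hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration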

2. Update the relevant settings in hdfs-site.xml

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for an HDFS file is precise up to this value.
  The default value is 1 hour. Setting a value of 0 disables
  access times for HDFS. This is the default; if you do not need to change it, it can be omitted.
  </description>
</property>
<property>
  <name>nfs.dump.dir</name>
  <value>/tmp/.hdfs-nfs</value>
  <description>Users are expected to update the file dump directory. NFS clients often reorder writes,
  especially when the export is not mounted with the "sync" option. Sequential writes can arrive at the NFS
  gateway in random order. This directory is used to temporarily save out-of-order writes before writing
  to HDFS. For each file, the out-of-order writes are dumped after they accumulate to exceed a certain
  threshold (e.g., 1MB) in memory. Make sure the directory has enough space. For example, if
  an application uploads 10 files of 100MB each, it is recommended that this directory have
  roughly 1GB of space in case a worst-case write reorder happens to every file. Only the NFS gateway needs to
  be restarted after this property is updated.
  </description>
</property>
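Since the gateway process must be able to write to this directory and the volume needs headroom, a small preparation sketch; the 'hadoop' user here is an assumption standing in for whatever user runs nfs3:

[root]> mkdir -p /tmp/.hdfs-nfs
[root]> chown hadoop:hadoop /tmp/.hdfs-nfs    # owned by the user that runs nfs3
[root]> df -h /tmp                            # confirm enough free space for out-of-order writes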
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>
<property>
  <name>nfs.superuser</name>
  <value>the_name_of_hdfs_superuser</value>
  <description>The user that runs the NameNode process. Unset by default. If set, this user on any NFS client permitted by nfs.exports.allowed.hosts can access any file on HDFS.
  </description>
</property>
<property>
  <name>nfs.metrics.percentiles.intervals</name>
  <value>100</value>
  <description>Enable the latency histograms for read, write and
  commit requests. The time unit is 100 seconds in this example.
  </description>
</property>
Export point. One can specify the NFS export point of HDFS. Exactly one export point is supported.
The full path is required when configuring the export point. By default, the export point is the root directory "/".

<property>
  <name>nfs.export.point</name>
  <value>/</value>
</property>
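For instance, to export only part of the namespace instead of the whole file system, a hedged sketch; the /user path here is an illustrative assumption:

<property>
  <name>nfs.export.point</name>
  <value>/user</value>
</property>

The client then mounts that exported path instead of "/", e.g. mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $server:/user $mount_point.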

II. Hands-on practice

1. Update core-site.xml

<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
  <description>The 'nfsserver' user is allowed to proxy all members of the 'users-group1' and 'users-group2' groups. Note that in most cases you will need to include the group "root" because the user "root" (which usually belongs to the "root" group) will generally be the user that initially executes the mount on the NFS client system. Set this to '*' to allow the nfsserver user to proxy any group. Here the proxy user is 'hadoop' and it may proxy any group.</description>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
  <description>This is the host where the nfs gateway is running. Set this to '*' to allow requests from any hosts to be proxied.</description>
</property>

2. Update hdfs-site.xml

<property>
  <name>nfs.dump.dir</name>
  <value>/home/hadoop/data/.hdfs-nfs</value>
</property>
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>* rw</value>
</property>

3. JVM and log configuration

Log (add the following to log4j.properties to enable NFS debug tracing):

log4j.logger.org.apache.hadoop.hdfs.nfs=DEBUG

log4j.logger.org.apache.hadoop.oncrpc=DEBUG

JVM (in hadoop-env.sh):

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"

export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
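The first export above keeps the nfs3 defaults. If the gateway will handle many concurrent writes, you may want to give it a larger heap; a hedged sketch, where the 2 GB figure is an assumption for illustration, not an official recommendation:

# give the nfs3 daemon a larger heap (size is illustrative)
export HADOOP_NFS3_OPTS="-Xmx2048m $HADOOP_NFS3_OPTS"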

4. Start nfs3 and portmap

1) Stop the system's NFSv3 and rpcbind/portmap services

[root]> service nfs stop

[root]> service rpcbind stop

2) Start Hadoop's portmap

[root]> $HADOOP_HOME/bin/hdfs --daemon start portmap

3) Start nfs3

[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start nfs3
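To confirm both daemons actually came up before moving on, a quick hedged check with jps (the process names correspond to the daemons' main classes; output format may vary by JDK):

[root]> jps | grep -i portmap    # expect a Portmap process
[hdfs]$ jps | grep -i nfs3       # expect an Nfs3 process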

5. Verify NFS service availability

1) Confirm all services have started and are running

[root]> rpcinfo -p $nfs_server_ip

Output similar to the following indicates success:

program  vers  proto  port
100005   1     tcp    4242  mountd
100005   2     udp    4242  mountd
100005   2     tcp    4242  mountd
100000   2     tcp    111   portmapper
100000   2     udp    111   portmapper
100005   3     udp    4242  mountd
100005   1     udp    4242  mountd
100003   3     tcp    2049  nfs
100005   3     tcp    4242  mountd

2) Verify the HDFS namespace is exported and can be mounted

[root]> showmount -e $nfs_server_ip

Output like the following indicates success:

        Exports list on $nfs_server_ip:
        / (everyone)

6. Mount the export "/"

[root]> mkdir -p $mount_point

[root]> mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $server:/ $mount_point

Done!
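If you want the mount to come back after a reboot, a hedged /etc/fstab sketch; $server and the /hdfs_nfs mount point are placeholders to replace with real values, and the option set simply mirrors the mount command above:

# /etc/fstab entry (illustrative; substitute a real hostname and path)
$server:/   /hdfs_nfs   nfs   vers=3,proto=tcp,nolock,noacl,sync   0 0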

7. The HDFS file system can also be mounted on a remote node, even one that is not part of the Hadoop cluster:

Run the commands from step 6 on the remote machine.

Prerequisite: the remote machine and the NFSv3 server can reach each other (e.g., they can ping each other).
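Before attempting the remote mount, it can save time to verify connectivity and that the RPC services are visible from the remote side; a minimal sketch using the same commands as steps 5 and 6:

[root]> ping -c 3 $nfs_server_ip             # basic reachability
[root]> rpcinfo -p $nfs_server_ip            # portmapper/mountd/nfs should be listed
[root]> showmount -e $nfs_server_ip          # the "/" export should appear
[root]> mkdir -p $mount_point
[root]> mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $nfs_server_ip:/ $mount_point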
