Hadoop FileSystem explained (reposted from http://hi.baidu.com/270460591/item/0efacd8accb7a1d7ef083d05)
The Hadoop file system
For the basic file system command-line operations, run hadoop fs -help to get detailed help for every command.
The Java abstract class org.apache.hadoop.fs.FileSystem defines Hadoop's file system interface. Because the class is abstract, a FileSystem instance is obtained through one of the following two static factory methods:
public static FileSystem get(Configuration conf) throws IOException
public static FileSystem get(URI uri, Configuration conf) throws IOException
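A minimal sketch of both factory forms (the NameNode URI is only a placeholder, and java.net.URI must be imported):
Configuration conf = new Configuration();
// uses the default file system named by fs.default.name in the configuration
FileSystem fs = FileSystem.get(conf);
// selects a file system explicitly by URI (replace with your NameNode address)
FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);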
The main methods:
1. public boolean mkdirs(Path f) throws IOException
Creates the directory and all missing parent directories in one call; f is the full directory path.
2. public FSDataOutputStream create(Path f) throws IOException
Creates a file at the given Path and returns an output stream for writing data to it.
create() has several overloads that let you specify whether to overwrite an existing file, the replication factor, the write buffer size, the block size, and the file permissions.
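As a sketch, one of the overloads (the numbers are illustrative, the path is a placeholder, and fs is assumed to be a FileSystem instance obtained as above):
FSDataOutputStream out = fs.create(
        new Path("/test/data.txt"),
        true,               // overwrite an existing file
        4096,               // write buffer size in bytes
        (short) 3,          // replication factor
        64 * 1024 * 1024,   // block size in bytes
        null);              // optional Progressable progress callback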
3. public void copyFromLocalFile(Path src, Path dst) throws IOException
Copies a local file into the file system.
4. public boolean exists(Path f) throws IOException
Checks whether the file or directory exists.
5. public boolean delete(Path f, boolean recursive) throws IOException
Permanently deletes the given file or directory. If f is a file or an empty directory, the value of recursive is ignored; a non-empty directory and its contents are deleted only when recursive is true.
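For example (the paths are placeholders and fs is a FileSystem instance):
fs.delete(new Path("/test/word.txt"), false);  // a file: the recursive flag is ignored
fs.delete(new Path("/test"), true);            // a non-empty directory: recursive must be true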
6. The FileStatus class encapsulates the metadata of files and directories in the file system, including file length, block size, replication, modification time, owner, and permissions.
Using FileStatus.getPath(), you can list all files under a given HDFS directory.
package hdfsTest;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OperatingFiles {
    // initialization
    static Configuration conf = new Configuration();
    static FileSystem hdfs;
    static {
        String path = "/usr/java/hadoop-1.0.3/conf/";
        conf.addResource(new Path(path + "core-site.xml"));
        conf.addResource(new Path(path + "hdfs-site.xml"));
        conf.addResource(new Path(path + "mapred-site.xml"));
        path = "/usr/java/hbase-0.90.3/conf/";
        conf.addResource(new Path(path + "hbase-site.xml"));
        try {
            hdfs = FileSystem.get(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // create a directory
    public void createDir(String dir) throws IOException {
        Path path = new Path(dir);
        hdfs.mkdirs(path);
        System.out.println("new dir \t" + conf.get("fs.default.name") + dir);
    }

    // copy a local file to HDFS
    public void copyFile(String localSrc, String hdfsDst) throws IOException {
        Path src = new Path(localSrc);
        Path dst = new Path(hdfsDst);
        hdfs.copyFromLocalFile(src, dst);

        // list all the files in the destination directory
        FileStatus files[] = hdfs.listStatus(dst);
        System.out.println("Upload to \t" + conf.get("fs.default.name") + hdfsDst);
        for (FileStatus file : files) {
            System.out.println(file.getPath());
        }
    }

    // create a new file and write a string to it
    public void createFile(String fileName, String fileContent) throws IOException {
        Path dst = new Path(fileName);
        byte[] bytes = fileContent.getBytes();
        FSDataOutputStream output = hdfs.create(dst);
        output.write(bytes);
        output.close();   // close the stream so the data is flushed to HDFS
        System.out.println("new file \t" + conf.get("fs.default.name") + fileName);
    }

    // list all files under a directory
    public void listFiles(String dirName) throws IOException {
        Path f = new Path(dirName);
        FileStatus[] status = hdfs.listStatus(f);
        System.out.println(dirName + " has all files:");
        for (int i = 0; i < status.length; i++) {
            System.out.println(status[i].getPath().toString());
        }
    }

    // check whether a file exists, and delete it if so
    public void deleteFile(String fileName) throws IOException {
        Path f = new Path(fileName);
        boolean isExists = hdfs.exists(f);
        if (isExists) { // if it exists, delete it
            boolean isDel = hdfs.delete(f, true);
            System.out.println(fileName + " delete? \t" + isDel);
        } else {
            System.out.println(fileName + " exist? \t" + isExists);
        }
    }

    public static void main(String[] args) throws IOException {
        OperatingFiles ofs = new OperatingFiles();
        System.out.println("\n=======create dir=======");
        String dir = "/test";
        ofs.createDir(dir);
        System.out.println("\n=======copy file=======");
        String src = "/home/ictclas/Configure.xml";
        ofs.copyFile(src, dir);
        System.out.println("\n=======create a file=======");
        String fileContent = "Hello, world! Just a test.";
        ofs.createFile(dir + "/word.txt", fileContent);
    }
}
1. Creating a configuration object: To read from or write to HDFS, you need to create a Configuration object and pass configuration parameters to it using the Hadoop configuration files.
// The conf object will read the HDFS configuration parameters from these XML
// files. You may also set the parameters yourself if you want.
Configuration conf = new Configuration();
conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
conf.addResource(new Path("/opt/hadoop-0.20.0/conf/hdfs-site.xml"));
If you do not assign the configuration to the conf object (using the Hadoop XML files), your HDFS operations will be performed on the local file system rather than on HDFS.
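If you prefer not to depend on the XML files, the NameNode address can also be set on the Configuration directly; a sketch (the URI is a placeholder, and on newer Hadoop releases the property is named fs.defaultFS):
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://namenode:9000");
FileSystem fileSystem = FileSystem.get(conf);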
2. Adding a file to HDFS: Create a FileSystem object and use an output stream to write the file.
FileSystem fileSystem = FileSystem.get(conf);
// Check if the file already exists
Path path = new Path("/path/to/file.ext");
if (fileSystem.exists(path)) {
    System.out.println("File " + path + " already exists");
    return;
}
// Create a new file and write data to it.
FSDataOutputStream out = fileSystem.create(path);
InputStream in = new BufferedInputStream(new FileInputStream(
        new File(source)));   // source is the path of the local file to upload
byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
    out.write(b, 0, numBytes);
}
// Close all the file descriptors
in.close();
out.close();
fileSystem.close();
3. Reading a file from HDFS: Open an input stream to a file in HDFS and read from it.
FileSystem fileSystem = FileSystem.get(conf);
String file = "/path/to/file.ext";
Path path = new Path(file);
if (!fileSystem.exists(path)) {
    System.out.println("File does not exist");
    return;
}
FSDataInputStream in = fileSystem.open(path);
// strip the directory part to get the local file name
String filename = file.substring(file.lastIndexOf('/') + 1,
        file.length());
OutputStream out = new BufferedOutputStream(new FileOutputStream(
        new File(filename)));
byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
    out.write(b, 0, numBytes);
}
in.close();
out.close();
fileSystem.close();
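The manual read/write loop can also be replaced by Hadoop's org.apache.hadoop.io.IOUtils helper; a sketch, reusing the fileSystem and path above (the local file name is a placeholder):
FSDataInputStream in = fileSystem.open(path);
OutputStream out = new BufferedOutputStream(
        new FileOutputStream(new File("file.ext")));
IOUtils.copyBytes(in, out, 4096, true);  // the final argument closes both streams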
4. Deleting a file from HDFS: Check that the file exists and delete it.
FileSystem fileSystem = FileSystem.get(conf);
Path path = new Path("/path/to/file.ext");
if (!fileSystem.exists(path)) {
    System.out.println("File does not exist");
    return;
}
// Delete the file; the second argument enables recursive deletion for directories
fileSystem.delete(path, true);
fileSystem.close();
5. Creating a directory in HDFS: Check that the directory does not already exist and create it.
FileSystem fileSystem = FileSystem.get(conf);
Path path = new Path(dir);   // dir is the HDFS directory path to create
if (fileSystem.exists(path)) {
    System.out.println("Dir " + dir + " already exists");
    return;
}
// Create the directory (including any missing parents)
fileSystem.mkdirs(path);
fileSystem.close();
The complete HDFSClient class below combines these operations into a small command-line utility:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HDFSClient {

    public HDFSClient() {
    }

    public void addFile(String source, String dest) throws IOException {
        Configuration conf = new Configuration();
        // The conf object will read the HDFS configuration parameters from these
        // XML files.
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/hdfs-site.xml"));
        FileSystem fileSystem = FileSystem.get(conf);

        // Get the filename out of the file path
        String filename = source.substring(source.lastIndexOf('/') + 1,
                source.length());

        // Create the destination path including the filename.
        if (dest.charAt(dest.length() - 1) != '/') {
            dest = dest + "/" + filename;
        } else {
            dest = dest + filename;
        }
        // System.out.println("Adding file to " + dest);

        // Check if the file already exists
        Path path = new Path(dest);
        if (fileSystem.exists(path)) {
            System.out.println("File " + dest + " already exists");
            return;
        }

        // Create a new file and write data to it.
        FSDataOutputStream out = fileSystem.create(path);
        InputStream in = new BufferedInputStream(new FileInputStream(
                new File(source)));
        byte[] b = new byte[1024];
        int numBytes = 0;
        while ((numBytes = in.read(b)) > 0) {
            out.write(b, 0, numBytes);
        }

        // Close all the file descriptors
        in.close();
        out.close();
        fileSystem.close();
    }

    public void readFile(String file) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(file);
        if (!fileSystem.exists(path)) {
            System.out.println("File " + file + " does not exist");
            return;
        }

        FSDataInputStream in = fileSystem.open(path);
        // Copy the HDFS file to a local file with the same name
        String filename = file.substring(file.lastIndexOf('/') + 1,
                file.length());
        OutputStream out = new BufferedOutputStream(new FileOutputStream(
                new File(filename)));
        byte[] b = new byte[1024];
        int numBytes = 0;
        while ((numBytes = in.read(b)) > 0) {
            out.write(b, 0, numBytes);
        }

        in.close();
        out.close();
        fileSystem.close();
    }

    public void deleteFile(String file) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(file);
        if (!fileSystem.exists(path)) {
            System.out.println("File " + file + " does not exist");
            return;
        }

        fileSystem.delete(new Path(file), true);
        fileSystem.close();
    }

    public void mkdir(String dir) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/opt/hadoop-0.20.0/conf/core-site.xml"));
        FileSystem fileSystem = FileSystem.get(conf);

        Path path = new Path(dir);
        if (fileSystem.exists(path)) {
            System.out.println("Dir " + dir + " already exists");
            return;
        }

        fileSystem.mkdirs(path);
        fileSystem.close();
    }

    public static void main(String[] args) throws IOException {
        if (args.length < 1) {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir" +
                    " [<local_path> <hdfs_path>]");
            System.exit(1);
        }

        HDFSClient client = new HDFSClient();
        if (args[0].equals("add")) {
            if (args.length < 3) {
                System.out.println("Usage: hdfsclient add <local_path> " +
                        "<hdfs_path>");
                System.exit(1);
            }
            client.addFile(args[1], args[2]);
        } else if (args[0].equals("read")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient read <hdfs_path>");
                System.exit(1);
            }
            client.readFile(args[1]);
        } else if (args[0].equals("delete")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient delete <hdfs_path>");
                System.exit(1);
            }
            client.deleteFile(args[1]);
        } else if (args[0].equals("mkdir")) {
            if (args.length < 2) {
                System.out.println("Usage: hdfsclient mkdir <hdfs_path>");
                System.exit(1);
            }
            client.mkdir(args[1]);
        } else {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir" +
                    " [<local_path> <hdfs_path>]");
            System.exit(1);
        }

        System.out.println("Done!");
    }
}