Hadoop fs -put with bandwidth limiting (brute-force version)
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
 */
// scalastyle:off println
package com.weibo.tools

import java.io.{BufferedInputStream, FileInputStream}
import java.net.URI
import java.util.concurrent.TimeUnit

import org.apache.hadoop.conf.{Configuration => hdfsConfig}
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}
import org.apache.hadoop.io.IOUtils

import org.apache.spark.{SparkConf, SparkContext}

object Bandwidthlimited_local2HDFS_Writer {
  val kiloByte = 1024

  // Uploads a single buffer (at most bandwidth KB) and sleeps ~1 s; returns the new cumulative byte count.
  def upload_one_buffer(inStream : java.io.BufferedInputStream,
                        outputStream : org.apache.hadoop.fs.FSDataOutputStream,
                        log_buffer : Array[Byte],
                        pre_buffer_sum : Long,
                        totalSize : Long
                       ) : Long = {
    val readSize = inStream.read(log_buffer)
    val buffer_sum = pre_buffer_sum + readSize
    outputStream.write(log_buffer.splitAt(readSize)._1)
    outputStream.flush
    TimeUnit.MILLISECONDS.sleep(999)
    // println(s"${inStream} uploading. ${buffer_sum} uploaded. readSize : ${readSize}. ${buffer_sum * 100 / totalSize}% finished. ")
    buffer_sum
  }
  def LocalLog2HDFS_Writer(sc : SparkContext,
                           localSrcPath : String,
                           remoteTarPath : String,
                           bandwidth : String
                          ) : Long = {
    // Enable append so the target file can be created empty and then written to in throttled chunks.
    sc.hadoopConfiguration.setBoolean("dfs.support.append", true)
    val hdfs = FileSystem.get(new URI("/"), sc.hadoopConfiguration)
    val filePath = new Path(remoteTarPath)
    val inStream = new BufferedInputStream(new FileInputStream(localSrcPath))
    // available() returns an Int, which is why files beyond ~2 GB are not handled correctly (see the notes below).
    val totalSize = inStream.available
    hdfs.exists(filePath) match {
      case false => hdfs.create(filePath).close
      case true => println(hdfs.getFileStatus(filePath).toString)
    }
    val outputStream = hdfs.append(filePath)
    // Throttling: each loop iteration writes at most bandwidth KB and then sleeps ~1 s,
    // so the average throughput is roughly `bandwidth` KB/s (e.g. bandwidth = 10 gives ~10 KB/s).
    val buffer_size = kiloByte * bandwidth.toInt
    val log_buffer = new Array[Byte](buffer_size)
    var buffer_sum = 0L
    try {
      while(inStream.available >= buffer_size) {
        val readSize = inStream.read(log_buffer)
        buffer_sum += readSize
        outputStream.write(log_buffer.splitAt(readSize)._1)
        outputStream.flush
        outputStream.hflush
        println(s"${localSrcPath} uploading. ${buffer_sum} uploaded. readSize : ${readSize}. ${buffer_sum * 100 / totalSize}% finished. ")
        TimeUnit.MILLISECONDS.sleep(999)
      }
      // Last partial buffer: the remaining data fits in a single write, so no sleep is needed.
      if(inStream.available > 0) {
        val readSize = inStream.read(log_buffer)
        buffer_sum += readSize
        outputStream.write(log_buffer.splitAt(readSize)._1)
        outputStream.flush
        println(s"${localSrcPath} uploading. ${buffer_sum} uploaded. readSize : ${readSize}. ${buffer_sum * 100 / totalSize}% finished. ")
      }
    } finally {
      inStream.close
      outputStream.close
    }
    buffer_sum
  }
  def Local2HDFS_Writer(sc : SparkContext, args: Array[String]) : Long = {
    val helper_info = """ the file at localSrcPath is limited to 1.999 GB
    usage: Bandwidthlimited_local2HDFS_Writer localSrcPath remoteTarPath bandwidth (in KB/s, e.g. 10)"""
    println(helper_info)
    require(args.size >= 3, helper_info)
    val localSrcPath = args(0)
    val remoteTarPath = args(1)
    val bandwidth = args(2)
    LocalLog2HDFS_Writer(sc, localSrcPath, remoteTarPath, bandwidth)
  }
  def LocalLogReducer2HDFS(sc : SparkContext, taskList : List[(String, String)], bandwidth : String) : Int = {
    var sum = 0
    // foreach rather than a lazy iterator.map, so every upload actually runs and the counter is updated.
    taskList.foreach {
      case (localSrcPath, remoteTarPath) =>
        LocalLog2HDFS_Writer(sc, localSrcPath, remoteTarPath, bandwidth)
        sum += 1
    }
    sum
  }
  def LocalLogReducer(sc : SparkContext, srcParentPath : String, bandwidth : String) = {}

  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Bandwidthlimited_local2HDFS_Writer")
      .setMaster("local[1]")
    val sc = new SparkContext(conf)
    Local2HDFS_Writer(sc, args)
    sc.stop()
  }
}
https://github.com/Suanec/Betn_repo/blob/32d56acd3b57efc15573389619ed7793efdf298c/joyCodes/assembly_lib/src/main/scala/Bandwidthlimited_local2HDFS_Writer.scala
Brute-force version: to get the feature working first, this uses Spark + Scala on top of the Hadoop API to implement a bandwidth-limited upload. Known issues:
1. HDFS itself describes append as unsafe, and it is not recommended for production use.
2. The bandwidth limit is enforced by throttling reads and writes on the stream, so the instantaneous rate may oscillate, but the average matches the target.
3. The bandwidth limit is specified in KB (per second); keep that in mind.
4. Because of how the input stream is read (available() returns an Int), only files up to 1.999 GB are currently guaranteed to work; beyond that, progress reporting can break and duplicate or corrupted uploads may occur. A possible workaround is sketched below.
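For issue 4, here is a minimal sketch (not part of the original tool; the names ThrottledCopySketch and throttledCopy are made up for illustration) of one way around the ~2 GB ceiling: take the total size from java.io.File.length(), which returns a Long, instead of InputStream.available(), which returns an Int, and loop until read returns -1 instead of polling available():

object ThrottledCopySketch {
  import java.io.{BufferedInputStream, File, FileInputStream}
  import java.util.concurrent.TimeUnit
  import org.apache.hadoop.fs.FSDataOutputStream

  // Hypothetical replacement for the upload loop that is not limited to ~2 GB files.
  def throttledCopy(localSrcPath: String,
                    outputStream: FSDataOutputStream,
                    bandwidthKB: Int): Long = {
    val totalSize = new File(localSrcPath).length()   // Long, so sizes past 2 GB stay correct
    val inStream  = new BufferedInputStream(new FileInputStream(localSrcPath))
    val buffer    = new Array[Byte](bandwidthKB * 1024)
    var written   = 0L
    try {
      var readSize = inStream.read(buffer)
      while (readSize != -1) {                        // EOF is signalled by -1, not by available()
        outputStream.write(buffer, 0, readSize)       // write only the bytes actually read
        outputStream.hflush()
        written += readSize
        println(s"$localSrcPath: $written / $totalSize bytes, ${written * 100 / totalSize}% done")
        TimeUnit.SECONDS.sleep(1)                     // keeps the average rate near bandwidthKB KB/s
        readSize = inStream.read(buffer)
      }
    } finally {
      inStream.close()
    }
    written
  }
}

The caller would still create or append the HDFS file and close the output stream, as LocalLog2HDFS_Writer does above.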