webUploader: chunked upload of large files with resumable transfer
Problem:
Our current business requirement is uploading very large files. In a previous post I built a simple uploader that handled single, large files.
This post optimizes that uploader to add resumable uploads and instant upload (skipping files the server already has).
Previous post: [http://www.cnblogs.com/hackxiyu/p/8194066.html]
Analysis:
This article draws on another author's project, available at: [https://github.com/Fourwenwen/Breakpoint-http]
Prerequisites:
1. For local testing you need Redis, which stores each file's MD5 digest and backs the instant-upload check.
2. The jQuery, Bootstrap, and webUploader JS/CSS assets.
3. The backend is Spring Boot. The page below is embedded in an index page, so it has no html/body tags; adapt it to your own setup.
Solution:
1. The page HTML and the business JS are mixed together here; feel free to split them for clarity.
- <!-- CSS includes -->
- <link rel="stylesheet" type="text/css" href="static/html/bigFileUpload/assets/bootstrap-3.3.7-dist/css/bootstrap.css">
- <link rel="stylesheet" type="text/css" href="static/html/bigFileUpload/assets/webuploader.css">
- <div id="uploader" class="wu-example">
- <div id="thelist" class="uploader-list"></div>
- <div class="btns">
- <div id="picker">Choose a large file</div>
- <button id="ctlBtn" class="btn btn-default">Start upload</button>
- </div>
- </div>
- <!-- JS includes (jQuery is already loaded) -->
- <script type="text/javascript" src="static/html/bigFileUpload/assets/webuploader.js"></script>
- <script type="text/javascript" src="static/html/bigFileUpload/assets/bootstrap-3.3.7-dist/js/bootstrap.js"></script>
- <!-- business JS -->
- <script>
- var $btn = $('#ctlBtn');
- var $thelist = $('#thelist');
- var chunkSize = 5 * 1024 * 1024;
- // HOOK: these hooks must be registered before the uploader is instantiated
- WebUploader.Uploader.register({
- 'before-send-file': 'beforeSendFile',
- 'before-send': 'beforeSend'
- }, {
- beforeSendFile: function (file) {
- console.log("beforeSendFile");
- // A Deferred lets the hook wait for async work (here: MD5 + server check) before the upload proceeds.
- var task = new $.Deferred();
- // compute the MD5 of the file contents
- uploader.md5File(file).progress(function (percentage) { // report progress as it runs
- console.log('MD5 progress:', percentage);
- getProgressBar(file, percentage, "MD5", "MD5");
- }).then(function (val) { // done
- console.log('md5 result:', val);
- file.md5 = val;
- // mock a user id
- // file.uid = new Date().getTime() + "_" + Math.random() * 100;
- file.uid = WebUploader.Base.guid();
- // ask the server what it knows about this MD5
- $.post("break/checkFileMd5", {uid: file.uid, md5: file.md5, "Authorization": localStorage.token},
- function (data) {
- console.log(data.status);
- var status = data.status.value;
- if (status == 101) {
- // the server has never seen this file: proceed with a normal upload
- } else if (status == 100) {
- // the file already exists: skip the upload and mark it as done (instant upload)
- uploader.skipFile(file);
- file.pass = true;
- } else if (status == 102) {
- // the file is partially uploaded; data.data lists the missing chunks
- file.missChunks = data.data;
- }
- // resolve only after the status is handled, so skipFile/missChunks take effect before uploading starts
- task.resolve();
- });
- });
- return $.when(task);
- },
- beforeSend: function (block) {
- console.log("block");
- var task = new $.Deferred();
- var file = block.file;
- var missChunks = file.missChunks;
- var blockChunk = block.chunk;
- console.log("current chunk: " + blockChunk);
- console.log("missChunks: " + missChunks);
- if (missChunks !== null && missChunks !== undefined && missChunks !== '') {
- var skip = true;
- for (var i = 0; i < missChunks.length; i++) {
- if (blockChunk == missChunks[i]) {
- console.log(file.name + ": chunk " + blockChunk + " is missing on the server; uploading it now.");
- skip = false;
- break;
- }
- }
- if (skip) {
- task.reject(); // rejecting skips this chunk: the server already has it
- } else {
- task.resolve();
- }
- } else {
- task.resolve();
- }
- return $.when(task);
- }
- });
- // instantiate the uploader
- var uploader = WebUploader.create({
- pick: {
- id: '#picker',
- label: 'Click to choose a file'
- },
- formData: {
- uid: 0,
- md5: '',
- chunkSize: chunkSize,
- "Authorization": localStorage.token
- },
- //dnd: '#dndArea',
- //paste: '#uploader',
- swf: 'static/html/bigFileUpload/assets/Uploader.swf',
- chunked: true,
- chunkSize: chunkSize, // in bytes; 5 MB per chunk here
- threads: 3,
- server: 'break/fileUpload',
- auto: false,
- // disable global drag & drop so a file dragged onto the page is not opened by the browser
- disableGlobalDnd: true,
- fileNumLimit: 1024,
- fileSizeLimit: 1024 * 1024 * 1024, // 1 GB total
- fileSingleSizeLimit: 1024 * 1024 * 1024 // 1 GB per file
- });
- // fired when a file is added to the queue
- uploader.on('fileQueued', function (file) {
- console.log("fileQueued");
- $thelist.append('<div id="' + file.id + '" class="item">' +
- '<h4 class="info">' + file.name + '</h4>' +
- '<p class="state">Waiting to upload...</p>' +
- '</div>');
- });
- // Fired before each chunk is sent, mainly to attach extra parameters. With chunking enabled this fires once per chunk.
- uploader.onUploadBeforeSend = function (obj, data) {
- console.log("onUploadBeforeSend");
- var file = obj.file;
- data.md5 = file.md5 || '';
- data.uid = file.uid;
- };
- // upload progress
- uploader.on('uploadProgress', function (file, percentage) {
- getProgressBar(file, percentage, "FILE", "Upload progress");
- });
- // upload result
- uploader.on('uploadSuccess', function (file) {
- var text = 'Uploaded';
- if (file.pass) {
- text = "Instant upload: the file already existed on the server.";
- }
- $('#' + file.id).find('p.state').text(text);
- });
- uploader.on('uploadError', function (file) {
- $('#' + file.id).find('p.state').text('Upload failed');
- });
- uploader.on('uploadComplete', function (file) {
- // hide the progress bars
- fadeOutProgress(file, 'MD5');
- fadeOutProgress(file, 'FILE');
- });
- // start the upload
- $btn.on('click', function () {
- console.log("upload started...");
- uploader.upload();
- // note: upload() is asynchronous; completion is reported by the uploadSuccess/uploadError events above
- });
- /**
- * Create (or update) a progress bar
- * @param file the file
- * @param percentage progress value (0..1)
- * @param id_Prefix id prefix
- * @param titleName title label
- */
- function getProgressBar(file, percentage, id_Prefix, titleName) {
- var $li = $('#' + file.id), $percent = $li.find('#' + id_Prefix + '-progress-bar');
- // avoid creating the bar twice
- if (!$percent.length) {
- $percent = $('<div id="' + id_Prefix + '-progress" class="progress progress-striped active">' +
- '<div id="' + id_Prefix + '-progress-bar" class="progress-bar" role="progressbar" style="width: 0%">' +
- '</div>' +
- '</div>'
- ).appendTo($li).find('#' + id_Prefix + '-progress-bar');
- }
- var progressPercentage = percentage * 100 + '%';
- $percent.css('width', progressPercentage);
- $percent.html(titleName + ':' + progressPercentage);
- }
- /**
- * Hide a progress bar
- * @param file the file
- * @param id_Prefix id prefix
- */
- function fadeOutProgress(file, id_Prefix) {
- $('#' + file.id).find('#' + id_Prefix + '-progress').fadeOut();
- }
- </script>
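The resolve/reject decision in the `beforeSend` hook above boils down to one rule: on resume, re-send a chunk only if the server listed it as missing; with no resume info, send everything. Restated as a small Java helper (the name `ChunkSkip` is illustrative; the real check lives in the JS hook):

```java
import java.util.List;

// Mirrors the beforeSend hook: decide whether a chunk must be (re-)uploaded.
public class ChunkSkip {
    // missChunks: chunk indices the server reported as missing (strings, matching the JSON payload)
    public static boolean shouldUpload(int chunkIndex, List<String> missChunks) {
        if (missChunks == null || missChunks.isEmpty()) {
            return true; // no resume info: upload every chunk
        }
        // upload only the chunks the server does not have yet
        return missChunks.contains(String.valueOf(chunkIndex));
    }
}
```

In the hook, `shouldUpload == false` corresponds to `task.reject()`, which makes WebUploader skip that chunk.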
2. The API controller
- package org.triber.portal.breakPoint;
- import org.apache.commons.io.FileUtils;
- import org.apache.tomcat.util.http.fileupload.servlet.ServletFileUpload;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
- import org.springframework.beans.factory.annotation.Autowired;
- import org.springframework.data.redis.core.StringRedisTemplate;
- import org.springframework.http.ResponseEntity;
- import org.springframework.stereotype.Controller;
- import org.springframework.web.bind.annotation.RequestMapping;
- import org.springframework.web.bind.annotation.RequestMethod;
- import org.springframework.web.bind.annotation.ResponseBody;
- import javax.servlet.http.HttpServletRequest;
- import java.io.File;
- import java.io.IOException;
- import java.util.LinkedList;
- import java.util.List;
- /**
- * Controller for resumable large-file uploads
- */
- @Controller
- @RequestMapping(value = "/break")
- public class BreakPointController {
- private Logger logger = LoggerFactory.getLogger(BreakPointController.class);
- @Autowired
- private StringRedisTemplate stringRedisTemplate;
- @Autowired
- private StorageService storageService;
- /**
- * Instant-upload check and resume check
- *
- * @return the file's upload status on the server
- */
- @RequestMapping(value = "checkFileMd5", method = RequestMethod.POST)
- @ResponseBody
- public Object checkFileMd5(String md5) throws IOException {
- Object processingObj = stringRedisTemplate.opsForHash().get(Constants.FILE_UPLOAD_STATUS, md5);
- if (processingObj == null) {
- return new ResultVo(ResultStatus.NO_HAVE);
- }
- String processingStr = processingObj.toString();
- // "true" in the status hash means the file has finished uploading
- boolean processing = Boolean.parseBoolean(processingStr);
- String value = stringRedisTemplate.opsForValue().get(Constants.FILE_MD5_KEY + md5);
- if (processing) {
- return new ResultVo(ResultStatus.IS_HAVE, value);
- } else {
- // not finished yet: the .conf file has one byte per chunk; Byte.MAX_VALUE marks a finished chunk
- File confFile = new File(value);
- byte[] completeList = FileUtils.readFileToByteArray(confFile);
- List<String> missChunkList = new LinkedList<>();
- for (int i = 0; i < completeList.length; i++) {
- if (completeList[i] != Byte.MAX_VALUE) {
- missChunkList.add(i + "");
- }
- }
- return new ResultVo<>(ResultStatus.ING_HAVE, missChunkList);
- }
- }
- /**
- * Upload one chunk of a file
- *
- * @param param the chunk parameters
- * @param request the multipart request
- * @return an HTTP 200 response
- */
- @RequestMapping(value = "/fileUpload", method = RequestMethod.POST)
- @ResponseBody
- public ResponseEntity fileUpload(MultipartFileParam param, HttpServletRequest request) {
- boolean isMultipart = ServletFileUpload.isMultipartContent(request);
- if (isMultipart) {
- logger.info("File chunk upload: start.");
- try {
- // option 1
- //storageService.uploadFileRandomAccessFile(param);
- // option 2: slightly faster
- storageService.uploadFileByMappedByteBuffer(param);
- } catch (IOException e) {
- logger.error("File upload failed. {}", param.toString(), e);
- }
- logger.info("File chunk upload: end.");
- }
- return ResponseEntity.ok().body("Upload succeeded.");
- }
- }
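The status-102 branch above is where resuming happens: the controller reads the `.conf` file, in which each byte stands for one chunk, and collects the indices whose byte is not `Byte.MAX_VALUE`. The same scan, isolated as a stand-alone sketch (the class name `MissingChunks` is hypothetical):

```java
import java.util.LinkedList;
import java.util.List;

// Derives the missing-chunk list from a .conf byte array (the controller's status-102 path).
// Each byte represents one chunk; Byte.MAX_VALUE marks a chunk the server already received.
public class MissingChunks {
    public static List<String> find(byte[] completeList) {
        List<String> missChunkList = new LinkedList<>();
        for (int i = 0; i < completeList.length; i++) {
            if (completeList[i] != Byte.MAX_VALUE) {
                missChunkList.add(Integer.toString(i)); // string indices, matching the JSON payload
            }
        }
        return missChunkList;
    }
}
```

The resulting list is exactly what the front end stores as `file.missChunks` and checks in `beforeSend`.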
3. The storage service
- package org.triber.portal.breakPoint;
- import java.io.IOException;
- /**
- * Storage service interface
- * Created by 超文 on 2017/5/2.
- */
- public interface StorageService {
- /**
- * Delete all stored data
- */
- void deleteAll();
- /**
- * Initialization
- */
- void init();
- /**
- * Upload method 1: writes each chunk via a RandomAccessFile
- *
- * @param param the chunk parameters
- * @throws IOException
- */
- void uploadFileRandomAccessFile(MultipartFileParam param) throws IOException;
- /**
- * Upload method 2: writes each chunk via a MappedByteBuffer
- *
- * @param param the chunk parameters
- * @throws IOException
- */
- void uploadFileByMappedByteBuffer(MultipartFileParam param) throws IOException;
- }
Implementation:
- package org.triber.portal.breakPoint;
- import org.apache.commons.io.FileUtils;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
- import org.springframework.beans.factory.annotation.Autowired;
- import org.springframework.beans.factory.annotation.Value;
- import org.springframework.data.redis.core.StringRedisTemplate;
- import org.springframework.stereotype.Service;
- import org.springframework.util.FileSystemUtils;
- import java.io.File;
- import java.io.IOException;
- import java.io.RandomAccessFile;
- import java.nio.MappedByteBuffer;
- import java.nio.channels.FileChannel;
- import java.nio.file.FileAlreadyExistsException;
- import java.nio.file.Files;
- import java.nio.file.Path;
- import java.nio.file.Paths;
- /**
- * Created by 超文 on 2017/5/2.
- */
- @Service
- public class StorageServiceImpl implements StorageService {
- private final Logger logger = LoggerFactory.getLogger(StorageServiceImpl.class);
- // root directory where uploads are stored
- private Path rootPath;
- @Autowired
- private StringRedisTemplate stringRedisTemplate;
- // must match the chunkSize configured on the front end
- @Value("${breakpoint.upload.chunkSize}")
- private long CHUNK_SIZE;
- @Value("${breakpoint.upload.dir}")
- private String finalDirPath;
- @Autowired
- public StorageServiceImpl(@Value("${breakpoint.upload.dir}") String location) {
- this.rootPath = Paths.get(location);
- }
- @Override
- public void deleteAll() {
- logger.info("Dev-mode cleanup: start");
- FileSystemUtils.deleteRecursively(rootPath.toFile());
- stringRedisTemplate.delete(Constants.FILE_UPLOAD_STATUS);
- stringRedisTemplate.delete(Constants.FILE_MD5_KEY);
- logger.info("Dev-mode cleanup: end");
- }
- @Override
- public void init() {
- try {
- Files.createDirectory(rootPath);
- } catch (FileAlreadyExistsException e) {
- logger.info("Upload directory already exists; nothing to create.");
- } catch (IOException e) {
- logger.error("Failed to initialize the root upload directory.", e);
- }
- }
- @Override
- public void uploadFileRandomAccessFile(MultipartFileParam param) throws IOException {
- String fileName = param.getName();
- String tempDirPath = finalDirPath + param.getMd5();
- String tempFileName = fileName + "_tmp";
- File tmpDir = new File(tempDirPath);
- File tmpFile = new File(tempDirPath, tempFileName);
- if (!tmpDir.exists()) {
- tmpDir.mkdirs();
- }
- RandomAccessFile accessTmpFile = new RandomAccessFile(tmpFile, "rw");
- long offset = CHUNK_SIZE * param.getChunk();
- // seek to this chunk's offset
- accessTmpFile.seek(offset);
- // write the chunk data
- accessTmpFile.write(param.getFile().getBytes());
- // release the file handle
- accessTmpFile.close();
- boolean isOk = checkAndSetUploadProgress(param, tempDirPath);
- if (isOk) {
- boolean flag = renameFile(tmpFile, fileName);
- logger.info("upload complete: {}, name={}", flag, fileName);
- }
- }
- @Override
- public void uploadFileByMappedByteBuffer(MultipartFileParam param) throws IOException {
- String fileName = param.getName();
- String uploadDirPath = finalDirPath + param.getMd5();
- String tempFileName = fileName + "_tmp";
- File tmpDir = new File(uploadDirPath);
- File tmpFile = new File(uploadDirPath, tempFileName);
- if (!tmpDir.exists()) {
- tmpDir.mkdirs();
- }
- RandomAccessFile tempRaf = new RandomAccessFile(tmpFile, "rw");
- FileChannel fileChannel = tempRaf.getChannel();
- // write the chunk data at its offset
- long offset = CHUNK_SIZE * param.getChunk();
- byte[] fileData = param.getFile().getBytes();
- MappedByteBuffer mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
- mappedByteBuffer.put(fileData);
- // release the mapping and the channel (closing the channel also closes the RandomAccessFile)
- FileMD5Util.freedMappedByteBuffer(mappedByteBuffer);
- fileChannel.close();
- boolean isOk = checkAndSetUploadProgress(param, uploadDirPath);
- if (isOk) {
- boolean flag = renameFile(tmpFile, fileName);
- logger.info("upload complete: {}, name={}", flag, fileName);
- }
- }
- /**
- * Record this chunk as done and check whether the whole file is complete
- *
- * @param param the chunk parameters
- * @param uploadDirPath directory holding the temp file and its .conf file
- * @return true if all chunks have been uploaded
- * @throws IOException
- */
- private boolean checkAndSetUploadProgress(MultipartFileParam param, String uploadDirPath) throws IOException {
- String fileName = param.getName();
- File confFile = new File(uploadDirPath, fileName + ".conf");
- RandomAccessFile accessConfFile = new RandomAccessFile(confFile, "rw");
- // mark this chunk as done: the .conf file has one byte per chunk, Byte.MAX_VALUE = done
- logger.info("set part {} complete", param.getChunk());
- accessConfFile.setLength(param.getChunks());
- accessConfFile.seek(param.getChunk());
- accessConfFile.write(Byte.MAX_VALUE);
- // check whether every chunk is done, i.e. every byte equals Byte.MAX_VALUE
- byte[] completeList = FileUtils.readFileToByteArray(confFile);
- byte isComplete = Byte.MAX_VALUE;
- for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
- // AND the bytes together: isComplete stays Byte.MAX_VALUE only while every chunk byte is Byte.MAX_VALUE
- isComplete = (byte) (isComplete & completeList[i]);
- }
- }
- accessConfFile.close();
- if (isComplete == Byte.MAX_VALUE) {
- stringRedisTemplate.opsForHash().put(Constants.FILE_UPLOAD_STATUS, param.getMd5(), "true");
- stringRedisTemplate.opsForValue().set(Constants.FILE_MD5_KEY + param.getMd5(), uploadDirPath + "/" + fileName);
- return true;
- } else {
- if (!stringRedisTemplate.opsForHash().hasKey(Constants.FILE_UPLOAD_STATUS, param.getMd5())) {
- stringRedisTemplate.opsForHash().put(Constants.FILE_UPLOAD_STATUS, param.getMd5(), "false");
- }
- // point the MD5 key at the .conf file so checkFileMd5 can report the missing chunks; note the
- // negation: the key must be created the first time through, or the resume check cannot find it
- if (!stringRedisTemplate.hasKey(Constants.FILE_MD5_KEY + param.getMd5())) {
- stringRedisTemplate.opsForValue().set(Constants.FILE_MD5_KEY + param.getMd5(), uploadDirPath + "/" + fileName + ".conf");
- }
- return false;
- }
- }
- /**
- * Rename a file
- *
- * @param toBeRenamed the file to rename
- * @param toFileNewName the new name
- * @return true if the rename succeeded
- */
- public boolean renameFile(File toBeRenamed, String toFileNewName) {
- // check that the file to rename exists and is a regular file
- if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
- logger.info("File does not exist: " + toBeRenamed.getName());
- return false;
- }
- String p = toBeRenamed.getParent();
- File newFile = new File(p + File.separatorChar + toFileNewName);
- // perform the rename
- return toBeRenamed.renameTo(newFile);
- }
- }
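The `.conf` bookkeeping driven by `checkAndSetUploadProgress` can be exercised on its own: `setLength(chunks)` pre-sizes the file to one zero byte per chunk, each finished chunk writes `Byte.MAX_VALUE` at its index, and the upload is complete once every byte is `Byte.MAX_VALUE`. A minimal stand-alone sketch under those assumptions (class and method names are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;

// Stand-alone sketch of the one-byte-per-chunk .conf progress file.
public class ConfProgress {
    public static void markDone(File confFile, int chunk, int chunks) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(confFile, "rw")) {
            raf.setLength(chunks);     // one status byte per chunk (zero-filled on first call)
            raf.seek(chunk);
            raf.write(Byte.MAX_VALUE); // mark this chunk as received
        }
    }

    public static boolean isComplete(File confFile) throws IOException {
        byte[] bytes = Files.readAllBytes(confFile.toPath());
        for (byte b : bytes) {
            if (b != Byte.MAX_VALUE) return false;
        }
        return bytes.length > 0;
    }
}
```

Because the status lives in a file rather than in memory, a restarted server (or a second node sharing the directory) can still answer the resume check correctly.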
4. The MD5 utility class
- package org.triber.portal.breakPoint;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
- import java.io.File;
- import java.io.FileInputStream;
- import java.io.FileNotFoundException;
- import java.io.IOException;
- import java.lang.reflect.Method;
- import java.math.BigInteger;
- import java.nio.MappedByteBuffer;
- import java.nio.channels.FileChannel;
- import java.security.AccessController;
- import java.security.MessageDigest;
- import java.security.PrivilegedAction;
- /**
- * Computes a file's MD5 digest
- * Created by 超文 on 2016/10/10.
- * version 1.0
- */
- public class FileMD5Util {
- private final static Logger logger = LoggerFactory.getLogger(FileMD5Util.class);
- public static String getFileMD5(File file) throws FileNotFoundException {
- String value = null;
- FileInputStream in = new FileInputStream(file);
- MappedByteBuffer byteBuffer = null;
- try {
- byteBuffer = in.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, file.length());
- MessageDigest md5 = MessageDigest.getInstance("MD5");
- md5.update(byteBuffer);
- BigInteger bi = new BigInteger(1, md5.digest());
- // left-pad to the full 32 hex digits (a digest can have more than one leading zero)
- value = String.format("%032x", bi);
- } catch (Exception e) {
- e.printStackTrace();
- } finally {
- if (null != in) {
- try {
- in.getChannel().close();
- in.close();
- } catch (IOException e) {
- logger.error("get file md5 error!!!", e);
- }
- }
- if (null != byteBuffer) {
- freedMappedByteBuffer(byteBuffer);
- }
- }
- return value;
- }
- /**
- * Reading a MappedByteBuffer after it has been released crashes the JVM, which is easy to trigger
- * under concurrency: one thread releases the buffer while another starts reading. For stability,
- * make sure no thread is still reading or writing before releasing.
- *
- * @param mappedByteBuffer the buffer to unmap
- */
- public static void freedMappedByteBuffer(final MappedByteBuffer mappedByteBuffer) {
- try {
- if (mappedByteBuffer == null) {
- return;
- }
- mappedByteBuffer.force();
- AccessController.doPrivileged(new PrivilegedAction<Object>() {
- @Override
- public Object run() {
- try {
- Method getCleanerMethod = mappedByteBuffer.getClass().getMethod("cleaner", new Class[0]);
- getCleanerMethod.setAccessible(true);
- sun.misc.Cleaner cleaner = (sun.misc.Cleaner) getCleanerMethod.invoke(mappedByteBuffer,
- new Object[0]);
- cleaner.clean();
- } catch (Exception e) {
- logger.error("clean MappedByteBuffer error!!!", e);
- }
- logger.info("clean MappedByteBuffer completed!!!");
- return null;
- }
- });
- } catch (Exception e) {
- e.printStackTrace();
- }
- }
- }
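Mapping the whole file forces the reflective `sun.misc.Cleaner` hack above (which is JDK 8 specific), and the original `"0" + value` padding only covers a single leading zero in the digest. A streaming variant avoids both issues; this is a sketch of an alternative approach, not the project's code:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Streams the file through a DigestInputStream: no MappedByteBuffer, nothing to unmap,
// and String.format("%032x", ...) left-pads the hex digest to all 32 digits.
public class StreamingMD5 {
    public static String md5Of(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream in = new DigestInputStream(new FileInputStream(file), md5)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // reading through the DigestInputStream feeds the digest
            }
        }
        return String.format("%032x", new BigInteger(1, md5.digest()));
    }
}
```

Memory stays bounded by the buffer size, so this works for files of any length.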
5. The chunk parameter entity
- package org.triber.portal.breakPoint;
- import org.springframework.web.multipart.MultipartFile;
- /**
- * Created by wenwen on 2017/4/16.
- * version 1.0
- */
- public class MultipartFileParam {
- // user id
- private String uid;
- // task id
- private String id;
- // total number of chunks
- private int chunks;
- // index of the current chunk
- private int chunk;
- // size of the current chunk
- private long size = 0L;
- // file name
- private String name;
- // the chunk payload
- private MultipartFile file;
- // MD5 of the whole file
- private String md5;
- public String getUid() {
- return uid;
- }
- public void setUid(String uid) {
- this.uid = uid;
- }
- public String getId() {
- return id;
- }
- public void setId(String id) {
- this.id = id;
- }
- public int getChunks() {
- return chunks;
- }
- public void setChunks(int chunks) {
- this.chunks = chunks;
- }
- public int getChunk() {
- return chunk;
- }
- public void setChunk(int chunk) {
- this.chunk = chunk;
- }
- public long getSize() {
- return size;
- }
- public void setSize(long size) {
- this.size = size;
- }
- public String getName() {
- return name;
- }
- public void setName(String name) {
- this.name = name;
- }
- public MultipartFile getFile() {
- return file;
- }
- public void setFile(MultipartFile file) {
- this.file = file;
- }
- public String getMd5() {
- return md5;
- }
- public void setMd5(String md5) {
- this.md5 = md5;
- }
- @Override
- public String toString() {
- return "MultipartFileParam{" +
- "uid='" + uid + '\'' +
- ", id='" + id + '\'' +
- ", chunks=" + chunks +
- ", chunk=" + chunk +
- ", size=" + size +
- ", name='" + name + '\'' +
- ", file=" + file +
- ", md5='" + md5 + '\'' +
- '}';
- }
- }
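The `chunk` and `chunks` fields above drive the arithmetic in both upload methods: a chunk's write position is `CHUNK_SIZE * chunk`, and WebUploader derives the chunk count by rounding the file size up to a whole number of chunks. The arithmetic as a sketch (the class name `ChunkMath` is illustrative):

```java
// Chunk arithmetic used by the storage service and implied by WebUploader's chunking.
public class ChunkMath {
    // byte offset where chunk i starts; keeping chunkSize a long avoids int overflow past 2 GB
    public static long offsetOf(long chunkSize, int chunk) {
        return chunkSize * chunk;
    }

    // total number of chunks for a file of the given size (last chunk may be short)
    public static int chunksFor(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }
}
```

With the 5 MB chunk size used here, chunk 3 of a file starts at byte 15728640, and a 5 MB + 1 byte file needs two chunks.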
6. The response status enum
- package org.triber.portal.breakPoint;
- import com.fasterxml.jackson.annotation.JsonFormat;
- /**
- * Result status enum
- * Created by 超文 on 2017/5/2.
- * version 1.0
- */
- @JsonFormat(shape = JsonFormat.Shape.OBJECT)
- public enum ResultStatus {
- /**
- * Codes starting with 1xx describe the file's status on the server
- */
- IS_HAVE(100, "The file already exists."),
- NO_HAVE(101, "The file has never been uploaded."),
- ING_HAVE(102, "The file has been partially uploaded.");
- private final int value;
- private final String reasonPhrase;
- ResultStatus(int value, String reasonPhrase) {
- this.value = value;
- this.reasonPhrase = reasonPhrase;
- }
- public int getValue() {
- return value;
- }
- public String getReasonPhrase() {
- return reasonPhrase;
- }
- }
7. The response entity
- package org.triber.portal.breakPoint;
- /**
- * Common result wrapper POJO
- * Created by wenwen on 2017/4/23.
- * version 1.0
- */
- public class ResultVo<T> {
- private ResultStatus status;
- private String msg;
- private T data;
- public ResultVo(ResultStatus status) {
- this(status, status.getReasonPhrase(), null);
- }
- public ResultVo(ResultStatus status, T data) {
- this(status, status.getReasonPhrase(), data);
- }
- public ResultVo(ResultStatus status, String msg, T data) {
- this.status = status;
- this.msg = msg;
- this.data = data;
- }
- public ResultStatus getStatus() {
- return status;
- }
- public void setStatus(ResultStatus status) {
- this.status = status;
- }
- public String getMsg() {
- return msg;
- }
- public void setMsg(String msg) {
- this.msg = msg;
- }
- public T getData() {
- return data;
- }
- public void setData(T data) {
- this.data = data;
- }
- @Override
- public String toString() {
- return "ResultVo{" +
- "status=" + status +
- ", msg='" + msg + '\'' +
- ", data=" + data +
- '}';
- }
- }
8. The constants class
- package org.triber.portal.breakPoint;
- import java.util.HashMap;
- import java.util.Map;
- /**
- * Constants table
- * Created by 超文 on 2017/05/02.
- * version 1.0
- */
- public interface Constants {
- /**
- * Common header for exception messages<br>
- * We regret to inform you that the program blew up
- */
- String EXCEPTION_HEAD = "boom. It blew up.";
- /**
- * cache key map
- */
- Map<Class<?>, String> cacheKeyMap = new HashMap<>();
- /**
- * Redis key prefix for a stored file's path, e.g. FILE_MD5:1243jkalsjflkwaejklgjawe
- */
- String FILE_MD5_KEY = "FILE_MD5:";
- /**
- * Redis hash key holding each file's upload status
- */
- String FILE_UPLOAD_STATUS = "FILE_UPLOAD_STATUS";
- }
9. Local Redis configuration
- # dev environment
- breakpoint:
-   upload:
-     dir: E:/data0/uploads/
-     # 5 MB = 5 * 1024 * 1024 = 5242880 bytes; must match the front-end chunkSize
-     chunkSize: 5242880
- spring:
-   redis:
-     host: 127.0.0.1
-     port: 6379
-     # password:  # my local Redis has no password, so it is not set
-     pool:
-       max-active: 30
-       max-idle: 10
-       max-wait: 10000
-     timeout: 0
-   http:
-     multipart:
-       max-file-size: 10MB      # adjust these limits to your needs; must be at least one chunkSize
-       max-request-size: 100MB
Summary:
The essential pieces are the page's JS, the backend controller and service, and the MD5 utility class.