Find and delete duplicate files
Purpose: search the specified directory or directories (including subdirectories) for all duplicate files, list them in groups, and delete the redundant copies either by manual selection or automatically at random, keeping exactly one file per group. (File names containing spaces, e.g. "file name", are supported.)
Implementation: find walks the specified directories to collect every file, an MD5 checksum is computed for each file, and duplicates are grouped and handled by comparing the MD5 values.
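For comparison, the core grouping step can also be sketched as a GNU coreutils pipeline. This one-liner only lists the duplicate groups and deletes nothing; "some_dir" is a placeholder, not part of the script below:
# List duplicate groups only (requires GNU uniq; an MD5 hash is 32 hex characters wide).
find some_dir -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate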
Limitations: walking the directory tree with find takes time;
MD5-checksumming large files is slow;
checksumming and comparing every single file is expensive (a first pass that compares file sizes could weed out obvious non-duplicates, which pays off noticeably for directories holding many large files; this script does not use it, but a sketch of the idea follows this list).
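A rough sketch of that size-based first pass (not part of the script below; GNU find/xargs are assumed, and the /tmp file name is only an example): record every file's size, keep only the sizes that occur more than once, and checksum just those candidates.
#Sketch only: checksum just the files whose size occurs more than once.
size_list="/tmp/size_list.$$"
find "$@" -type f -printf '%s %p\n' | sort -n > "$size_list"
for dup_size in $(awk '{print $1}' "$size_list" | uniq -d)
do
    #Only files sharing a size can possibly be identical.
    awk -v s="$dup_size" '$1 == s {sub(/^[0-9]+ /, ""); print}' "$size_list"
done | xargs -r -d '\n' md5sum
The md5sum output produced this way could then feed the same grouping logic as the full script; files with a unique size are never checksummed at all.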
Demo:

Notes:
While the script runs it displays the MD5 checksum progress; when it finishes, the statistics are reported as follows:
Files: total number of files checksummed
Groups: number of duplicate-file groups
Size: the total size of the redundant files, i.e. the duplicate copies about to be deleted; in other words, the disk space that will be freed once the duplicates are removed.
At the "Show detailed information ?" prompt, press "y" to review the duplicate-file groups for confirmation, or skip straight to the menu for choosing the deletion mode:
There are two deletion modes. The first is manual selection (the default): the script lists one group of duplicates at a time and you choose the file to keep; the other files in the group are deleted, and if you make no selection the first file in the list is kept by default. Demo:

The other mode is automatic selection: the first file of each group is kept and the remaining duplicates are deleted automatically. (To avoid deleting important files, the manual mode is recommended.) Demo:

File names containing spaces are supported, as shown in the demo below:

Code:
#!/bin/bash
#Author: LingYi
#Date:
#Func: Delete duplicate files
#EG  : $0 [ DIR1 DIR2 ... DIRn ]
#Define the temporary log file; make sure you have write permission for it.
md5sum_result_log="/tmp/$(date +%Y%m%d%H%M%S)"
echo -e "\033[1;31mMd5suming ...\033[0m"
find "$@" -type f | xargs -I {} md5sum {} | tee -a $md5sum_result_log
files_sum=$(cat $md5sum_result_log | wc -l)
#Define an array that uses the md5 value as index and the filename(s) as element.
#Declare it first to make sure associative arrays are supported by this bash.
declare -A md5sum_value_arry
while read md5sum_value md5sum_filename
do
    #To support spaces in file names, '+' is used as the separator character.
    #So if '+' appears in a file name there will be problems; in that case the
    #user should choose the manual mode to delete redundant files.
    md5sum_value_arry[$md5sum_value]="${md5sum_value_arry[$md5sum_value]}+$md5sum_filename"
    #Count how many files share this md5 value in a variable named _<md5>.
    (( _${md5sum_value}+=1 ))
done <$md5sum_result_log
#Count the duplicate file groups and the size of the redundant files in this loop.
groups_sum=0
repfiles_size=0
for md5sum_value_index in ${!md5sum_value_arry[@]}
do
    if [[ $(eval echo \$_$md5sum_value_index) -gt 1 ]]; then
        let groups_sum++
        need_print_indexes="$need_print_indexes $md5sum_value_index"
        #Number of redundant copies in this group (all but the one kept).
        eval repfile_sum=\$\(\( \$_$md5sum_value_index - 1 \)\)
        #Size of one copy, taken from the 5th column of ls -l.
        repfile_size=$( ls -lS "`echo ${md5sum_value_arry[$md5sum_value_index]}|awk -F'+' '{print $2}'`" | awk '{print $5}')
        repfiles_size=$(( repfiles_size + repfile_sum*repfile_size ))
    fi
done
#Output the statistical information.
echo -e "\033[1;31mFiles: $files_sum  Groups: $groups_sum \
Size: ${repfiles_size}B $((repfiles_size/1024))K $((repfiles_size/1024/1024))M\033[0m"
[[ $groups_sum -eq 0 ]] && exit
#The user chooses whether to check the file grouping or not.
#The -n/-t values (read one key, 30s timeout) are assumed; adjust them as needed.
read -n 1 -s -t 30 -p 'Show detailed information ?' user_ch
[[ $user_ch == 'n' ]] && echo || {
    [[ $user_ch == 'q' ]] && exit
    for print_value_index in $need_print_indexes
    do
        echo -ne "\n\033[1;35m$((++i)) \033[0m"
        eval echo -ne "\\\033[1\;34m$print_value_index [ \$_${print_value_index} ]:\\\033[0m"
        echo ${md5sum_value_arry[$print_value_index]} | tr '+' '\n'
    done | more
}
#The user can choose the way of deleting files here.
echo -e "\n\nManual Selection by default !"
echo -e " 1 Manual selection\n 2 Random selection"
echo -ne "\033[1;31m"
read -t 30 USER_CH
echo -ne "\033[0m"
[[ $USER_CH == 'q' ]] && exit
[[ $USER_CH -ne 2 ]] && USER_CH=1 || {
    echo -ne "\033[31mWARNING: you have chosen the Random Selection mode, files will be deleted at random !\nAre you sure ?\033[0m"
    read -t 30 yn
    [[ $yn == 'q' ]] && exit
    [[ $yn != 'y' ]] && USER_CH=1
}
#Handle files according to the user's selection.
echo -e "\033[31m\nWarn: keep the first file by default.\033[0m"
for exec_value_index in $need_print_indexes
do
    #Reset the array, then fill it with the files of this duplicate group.
    #The leading '+' makes awk field 1 empty, so the file names start at field 2.
    unset file_choices_arry
    for ((i=0,j=2; i<$(echo ${md5sum_value_arry[$exec_value_index]} | grep -o '+' | wc -l); i++,j++))
    do
        file_choices_arry[i]="$(echo ${md5sum_value_arry[$exec_value_index]}|awk -F'+' -v J=$j '{print $J}')"
    done
    eval file_sum=\$_$exec_value_index
    if [[ $USER_CH -eq 1 ]]; then
        #Manual mode: handle the duplicate file groups one by one.
        echo -e "\033[1;34m$exec_value_index\033[0m"
        for ((j=0; j<${#file_choices_arry[@]}; j++))
        do
            echo "[ $j ] ${file_choices_arry[j]}"
        done
        read -p "Number of the file you want to keep: " num_ch
        [[ $num_ch == 'q' ]] && exit
        #Keep the first file if the input is not a valid index of the list above.
        seq 0 $((${#file_choices_arry[@]}-1)) | grep -qxF -- "$num_ch" || num_ch=0
    else
        #Automatic mode: keep the first file of every group.
        num_ch=0
    fi
    #Delete every file in the group except the one chosen to be kept.
    for ((n=0; n<${#file_choices_arry[@]}; n++))
    do
        [[ $n -ne $num_ch ]] && {
            echo -ne "\033[1mDeleting file \" ${file_choices_arry[n]} \" ... \033[0m"
            rm -f "${file_choices_arry[n]}"
            [[ $? -eq 0 ]] && echo -e "\033[1;32mOK" || echo -e "\033[1;31mFAIL"
            echo -ne "\033[0m"
        }
    done
done
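Assuming the script above is saved as delete_dup_files.sh (the file name and the directories below are just examples), a typical run looks like this; quoting a path is only needed when it contains spaces:
chmod +x delete_dup_files.sh
./delete_dup_files.sh /data/photos "/backup/photo backup"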