Given a list of directory info including directory path, and all the files with contents in this directory, you need to find out all the groups of duplicate files in the file system in terms of their paths.

A group of duplicate files consists of at least two files that have exactly the same content.

A single directory info string in the input list has the following format:

"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"

It means there are n files (f1.txt, f2.txt, ..., fn.txt with content f1_content, f2_content, ..., fn_content, respectively) in directory root/d1/d2/.../dm. Note that n >= 1 and m >= 0. If m = 0, it means the directory is just the root directory.

The output is a list of group of duplicate file paths. For each group, it contains all the file paths of the files that have the same content. A file path is a string that has the following format:

"directory_path/file_name.txt"

Example 1:

Input:
["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(abcd)", "root/c/d 4.txt(efgh)", "root 4.txt(efgh)"]
Output:
[["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
import java.util.*;
import java.util.stream.Collectors;

class Solution {
    public static List<List<String>> findDuplicate(String[] paths) {
        // Map each file content to the list of full paths that contain it.
        Map<String, List<String>> map = new HashMap<>();
        for (String path : paths) {
            String[] tokens = path.split(" ");
            // tokens[0] is the directory; tokens[1..n] are "name.txt(content)" entries.
            for (int i = 1; i < tokens.length; i++) {
                String file = tokens[i].substring(0, tokens[i].indexOf('('));
                String content = tokens[i].substring(tokens[i].indexOf('(') + 1, tokens[i].indexOf(')'));
                map.putIfAbsent(content, new ArrayList<>());
                map.get(content).add(tokens[0] + "/" + file);
            }
        }
        // Only contents shared by at least two files form a duplicate group.
        return map.values().stream().filter(e -> e.size() > 1).collect(Collectors.toList());
    }
}
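
For reference, here is a minimal driver (not from the original post) that runs the snippet above on Example 1; the Main class name is just for illustration:

public class Main {
    public static void main(String[] args) {
        String[] paths = {
            "root/a 1.txt(abcd) 2.txt(efgh)",
            "root/c 3.txt(abcd)",
            "root/c/d 4.txt(efgh)",
            "root 4.txt(efgh)"
        };
        // Expected groups (order may vary, since HashMap iteration order is unspecified):
        // [[root/a/2.txt, root/c/d/4.txt, root/4.txt], [root/a/1.txt, root/c/3.txt]]
        System.out.println(Solution.findDuplicate(paths));
    }
}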

Follow-up questions:

  1. Imagine you are given a real file system, how will you search files? DFS or BFS?

The answer depends on the tree structure. If the branching factor (b) and depth (d) are both large, BFS will take up a lot of memory, since its frontier can hold O(b^d) nodes. For DFS, the space complexity is proportional to the height of the tree, O(d).

  2. If the file content is very large (GB level), how will you modify your solution?
  3. If you can only read the file 1 KB at a time, how will you modify your solution?
  4. What is the time complexity of your modified solution? What is the most time-consuming part and memory-consuming part of it? How to optimize?
  5. How to make sure the duplicate files you find are not false positives?

Question 1:

core idea: DFS

Reason: as long as the directory tree is not extremely deep, DFS is the better fit compared with BFS, since it uses far less memory and is simpler to implement.

Question 2:

If the file content is very large (GB level), how will you modify your solution?

Answer:

core idea: make use of metadata, such as the file size, before actually reading the large content.

Two steps:

  • DFS to map each file size to the set of paths that have that size: Map<Long, Set<String>>.
  • For each size, if more than one file has it, compute the MD5 hash of every such file; if any files with the same size also have the same hash, they are identical files: Map<String, Set<String>>, mapping each hash to the set of file paths + filenames (a sketch of these two steps follows this answer). These hash values are very large numbers, so they can be kept as hex strings or as java.math.BigInteger.

To optimize step 2: in GFS, a large file is stored in multiple "chunks" (one chunk is 64 MB), and we keep metadata including the file size, file name, and the index of the different chunks, along with each chunk's checksum (the XOR of the content). For step 2, we then just compare each file's chunk checksums.

Disadvantage: there might be false positive duplicates, because two different files might share the same checksum.
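
A minimal sketch of the two-step idea against a real file system, assuming we already have a flat list of java.nio.file.Path objects collected by the DFS; the class and method names are illustrative, and MD5 is used only as a content fingerprint:

import java.io.IOException;
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;

class DuplicateFinder {
    // Step 1: group candidate files by size so we only hash files that share a size.
    static Map<Long, List<Path>> groupBySize(List<Path> files) throws IOException {
        Map<Long, List<Path>> bySize = new HashMap<>();
        for (Path p : files) {
            bySize.computeIfAbsent(Files.size(p), k -> new ArrayList<>()).add(p);
        }
        return bySize;
    }

    // Step 2: within each size group of two or more files, group by MD5 of the content.
    static List<List<Path>> findDuplicates(List<Path> files) throws Exception {
        List<List<Path>> result = new ArrayList<>();
        for (List<Path> sameSize : groupBySize(files).values()) {
            if (sameSize.size() < 2) continue;
            Map<String, List<Path>> byHash = new HashMap<>();
            for (Path p : sameSize) {
                // readAllBytes is fine for a sketch; the 1 KB streaming version under
                // Question 3 avoids loading GB-level files into memory.
                byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(p));
                String hex = new java.math.BigInteger(1, digest).toString(16);
                byHash.computeIfAbsent(hex, k -> new ArrayList<>()).add(p);
            }
            for (List<Path> group : byHash.values()) {
                if (group.size() >= 2) result.add(group);
            }
        }
        return result;
    }
}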

Question 3:

If you can only read the file 1 KB at a time, how will you modify your solution?

Answer:

  • A makeHashQuick-style function that hashes a whole file in one read is quick but memory hungry; it would likely need to run with an increased heap (e.g. java -Xmx2G) if enough RAM is available.
  • We might instead need to play with the size defined by "buffSize" and hash incrementally to make it memory efficient (a sketch follows).
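
A sketch of that incremental hashing under a 1 KB-per-read constraint; the hashFile helper and the buffSize value are illustrative, not part of the original post:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class StreamingHash {
    // Hash a file by reading at most 1 KB per call, so memory stays O(buffSize)
    // regardless of file size.
    static String hashFile(Path file) throws IOException, NoSuchAlgorithmException {
        int buffSize = 1024;                       // 1 KB read granularity
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[buffSize];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);              // fold each 1 KB block into the digest
            }
        }
        return new java.math.BigInteger(1, md.digest()).toString(16);
    }
}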

Question 4:

What is the time complexity of your modified solution? What is the most time-consuming part and memory consuming part of it? How to optimize?

Answer:

  • The hashing step is the most time-consuming and memory-consuming part.
  • It can be optimized as mentioned above (filter by size first, then compare chunk checksums), but that optimization introduces the false-positive issue.

Question 5:

How to make sure the duplicated files you find are not false positives?

Answer:

The first answer to Question 2 (hashing the full content) avoids it.
Otherwise, we need to compare the content chunk by chunk whenever we find two "duplicates" via their checksums.

In preparing for my Dropbox interview, I came across this problem and really wanted to find the ideas behind the follow-up questions (as these were the questions that the interviewer was most interested in, not the code itself). Since this is the only post with the follow-up discussion, I'll comment here! @yujun gave a great solution above and I just wanted to add a bit more to help future interviewees.

Finding duplicate files given the input String array is quite easy. Loop through each String and keep a HashMap from file content to a Set/Collection of Strings, mapping the content of each file to the set of paths (directory plus concatenated filename).

For me, instead of being given a list of paths, I was given a Directory and asked to return a List of Lists of duplicate files for everything under it. I chose to represent a Directory like:

class Directory {
    List<Directory> subDirectories;
    List<File> files;
}

Given a directory, you are asked how you can find duplicate files when the files are very large. The idea here is that you cannot hold the contents in memory, so you keep the file contents on disk: hash each file's content and store the hash as a metadata field of the file. Then, as you perform your search, you keep the hash rather than the file's contents in memory. So the idea is to do a DFS from the root directory and build a HashMap<String, Set<String>> mapping each hash to the set of file paths + filenames that correspond to that hash's content.

(Note: You can choose BFS or DFS to traverse the directory tree. I chose DFS as it is more memory efficient and quicker to code up.)
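
A rough sketch of that DFS over the Directory structure; the nested File type here is hypothetical and assumes each file node carries its path and a precomputed content hash stored as metadata:

import java.util.*;

class DuplicateSearch {
    // Hypothetical file node: path plus a content hash stored as metadata.
    static class File {
        String path;
        String contentHash;
    }

    static class Directory {
        List<Directory> subDirectories = new ArrayList<>();
        List<File> files = new ArrayList<>();
    }

    // DFS the directory tree, mapping each content hash to the paths that share it.
    static void dfs(Directory dir, Map<String, Set<String>> hashToPaths) {
        for (File f : dir.files) {
            hashToPaths.computeIfAbsent(f.contentHash, k -> new HashSet<>()).add(f.path);
        }
        for (Directory sub : dir.subDirectories) {
            dfs(sub, hashToPaths);
        }
    }

    static List<List<String>> findDuplicates(Directory root) {
        Map<String, Set<String>> hashToPaths = new HashMap<>();
        dfs(root, hashToPaths);
        List<List<String>> result = new ArrayList<>();
        for (Set<String> group : hashToPaths.values()) {
            if (group.size() > 1) result.add(new ArrayList<>(group));
        }
        return result;
    }
}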

Follow Up: This is great, but it requires you to compute the hash for every single file once, which can be expensive for large files. Is there any way you can avoid computing the hash for a file?

One approach is to also maintain a metadata field for each file's size on disk. Then you can take a 2-pass approach:

  1. DFS to map each size to a set of paths that have that size.
  2. For each size, if more than one file has it, compute the hash of every such file; if any files with the same size have the same hash, then they are identical files.

This way, you only compute hashes when multiple files share the same size. So when you do the DFS, you create a HashMap<Long, Set<String>> mapping each file's size to the set of file paths that have that size. Then, for each size, loop through its paths, compute each file's hash, and group the paths by hash; any hash that ends up with more than one path is a group of duplicates, which you append to your List<List<String>> result.

Just want to share my humble opinions for discussion:
If anyone has a better solution, I would appreciate it if you'd like to correct and enlighten me:-)
Question 2:
In a real-world file system, we usually store a large file in multiple "chunks" (in GFS, one chunk is 64 MB), so we have metadata recording the file size, file name, and index of the different chunks, along with each chunk's checksum (the XOR of the content).
So when we upload a file, we record the metadata as mentioned above.
When we need to check for duplicates, we could simply check the metadata:

  1. Check if the files are of the same size;
  2. If step 1 passes, compare the first chunk's checksum;
  3. If step 2 passes, compare the second chunk's checksum;

... and so on.

There might be false positive duplicates, because two different files might share the same checksum.
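
A sketch of that progressive metadata comparison; the FileMeta record is hypothetical and just mirrors the metadata described above (file size plus one checksum per chunk):

import java.util.List;

class ChunkMetaCompare {
    // Hypothetical metadata record: file size plus one checksum per 64 MB chunk.
    static class FileMeta {
        long size;
        List<Long> chunkChecksums;
    }

    // Compare metadata only: size first, then chunk checksums in order.
    // A match here is still only a candidate duplicate (checksums can collide).
    static boolean probablyDuplicate(FileMeta a, FileMeta b) {
        if (a.size != b.size) return false;                       // step 1
        for (int i = 0; i < a.chunkChecksums.size(); i++) {       // steps 2, 3, ...
            if (!a.chunkChecksums.get(i).equals(b.chunkChecksums.get(i))) return false;
        }
        return true;                                              // may be a false positive
    }
}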

Question 3:
Using the approach mentioned above, we could read the metadata instead of the entire file, and compare the information KB by KB.

Question 5:
Using checksums, we can quickly and accurately filter out the non-duplicated files. But to completely rule out false positives, we need to compare the content chunk by chunk when we find two "duplicates" via checksums.
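
A sketch of that final chunk-by-chunk verification, assuming the candidate pair has already matched on size and checksums; it streams both files and compares them buffer by buffer so memory stays bounded (readNBytes and the six-argument Arrays.equals require Java 9+):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;
import java.util.Arrays;

class ContentVerify {
    // Byte-for-byte comparison, reading both files one chunk at a time.
    static boolean sameContent(Path a, Path b, int chunkSize) throws IOException {
        if (Files.size(a) != Files.size(b)) return false;
        try (InputStream ia = Files.newInputStream(a);
             InputStream ib = Files.newInputStream(b)) {
            byte[] bufA = new byte[chunkSize];
            byte[] bufB = new byte[chunkSize];
            int nA;
            while ((nA = ia.read(bufA)) != -1) {
                int nB = ib.readNBytes(bufB, 0, nA);   // read the same amount from b
                if (nB != nA || !Arrays.equals(bufA, 0, nA, bufB, 0, nB)) return false;
            }
            return true;    // no mismatching chunk found, so this is a true duplicate
        }
    }
}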
