【Paper Reading】Deep Supervised Hashing for Fast Image Retrieval
What has been done:
This paper proposes Deep Supervised Hashing (DSH), a method that learns compact similarity-preserving binary codes for massive image data.
Data sets:
CIFAR-10: 60,000 32×32 images belonging to 10 mutually exclusive categories (6,000 images per category)
NUS-WIDE: 269,648 images from Flickr, warped to 64×64
Content-based image retrieval: return images that are visually or semantically similar to the query.
Traditional method: compute the distance between the query image and every database image.
Problem: prohibitive time and memory cost on large databases.
Solution: hashing methods (map images to compact binary codes that approximately preserve the data structure of the original space).
Problem: performance depends on the features used; such methods are more suitable for visual similarity search than for semantic similarity search.
Solution: CNNs. The successful application of CNNs to various tasks implies that features learned by CNNs can capture the underlying semantic structure of images in spite of significant appearance variations.
Related work:
Locality-Sensitive Hashing (LSH): uses random projections to produce hash bits
cons: requires long codes to achieve satisfactory performance (hence a large memory footprint).
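For concreteness, a minimal sketch of random-projection LSH (my own illustration, not from the paper): each bit is the sign of the projection onto a random hyperplane.

```python
import numpy as np

def lsh_hash(X, n_bits=48, seed=0):
    """Random-projection LSH: one sign bit per random hyperplane."""
    rng = np.random.default_rng(seed)
    # One random projection direction per output bit.
    W = rng.standard_normal((X.shape[1], n_bits))
    return (X @ W > 0).astype(np.uint8)  # shape: (n_samples, n_bits)

# Usage: hash 1,000 GIST-like 512-D descriptors to 48-bit codes.
X = np.random.randn(1000, 512)
codes = lsh_hash(X)
```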
data-dependent hashing methods: unsupervised vs supervised
unsupervised methods: only make use of unlabelled training data to learn hash functions
- Spectral Hashing (SH): minimizes the weighted Hamming distance of image pairs
- Iterative Quantization (ITQ): minimizes the quantization error on projected image descriptors so as to alleviate the information loss (see the sketch after this list)
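For intuition, a rough NumPy sketch of ITQ's alternating minimization (my own illustration, assuming the input V is zero-centered and already PCA-reduced to n_bits dimensions):

```python
import numpy as np

def itq(V, n_bits=48, n_iter=50, seed=0):
    """Alternately fix codes B and rotation R to minimize ||B - V R||_F^2."""
    rng = np.random.default_rng(seed)
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R: binarize the rotated data
        U, _, Vt = np.linalg.svd(V.T @ B)  # fix B: solve orthogonal Procrustes
        R = U @ Vt
    return (V @ R > 0).astype(np.uint8), R
```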
supervised methods: take advantage of label information and can thus preserve semantic similarity
- CCA-ITQ: a supervised extension of Iterative Quantization
- Predictable Discriminative Binary Codes: look for hyperplanes that separate categories with a large margin to serve as hash functions.
- Minimal Loss Hashing (MLH): optimizes an upper bound of a hinge-like loss to learn the hash functions
Problem: the above methods use linear projections as hash functions and can only deal with linearly separable data.
Solution: Supervised Hashing with Kernels (KSH) and Binary Reconstructive Embedding (BRE).
Deep Hashing: exploits non-linear deep networks to produce binary codes.
Problem: most hashing methods relax the binary codes to real values during optimization and quantize the model outputs to produce binary codes; however, there is no guarantee that the optimal real-valued codes remain optimal after quantization.
Solution: Discrete Graph Hashing (DGH) and Supervised Discrete Hashing (SDH) are proposed to directly optimize the binary codes.
Problem: these methods use hand-crafted features and cannot capture semantic information.
Solution: CNN-based hashing methods.
Our goal: similar images should be encoded to similar binary codes, and the binary codes should be computed efficiently.
Loss function: for an image pair with similarity label y (y = 0 if similar, y = 1 if dissimilar), the loss pulls the codes of similar pairs together and pushes the codes of dissimilar pairs apart until their distance exceeds a margin m (a contrastive-style pairwise loss).
Relaxation: the sign function makes back-propagation intractable, so the binary constraint is relaxed: the network outputs real values, and an L1 regularizer on ||b| - 1| (weighted by α) pulls the outputs towards ±1, keeping the quantization loss small.
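A minimal PyTorch sketch of this relaxed pairwise loss (my reconstruction from the formulation above; the margin m is assumed proportional to the code length, and α is a hyperparameter):

```python
import torch

def dsh_loss(b1, b2, y, m=2 * 48, alpha=0.01):
    """Relaxed pairwise loss.
    b1, b2: real-valued network outputs, shape (batch, n_bits)
    y: 0 for similar pairs, 1 for dissimilar pairs"""
    d = ((b1 - b2) ** 2).sum(dim=1)                    # squared Euclidean distance
    sim_term = 0.5 * (1 - y) * d                       # pull similar pairs together
    dis_term = 0.5 * y * torch.clamp(m - d, min=0.0)   # push dissimilar pairs apart
    reg = alpha * ((b1.abs() - 1).abs().sum(dim=1) +   # drive outputs towards ±1
                   (b2.abs() - 1).abs().sum(dim=1))
    return (sim_term + dis_term + reg).mean()
```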
Implementation details:
Network structure:
3 × convolutional layers
3 × pooling layers
2 × fully-connected layers
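A hedged PyTorch sketch of a network in this spirit (the exact filter counts, kernel sizes, and hidden width are my assumptions, not an exact reproduction of the paper's architecture):

```python
import torch.nn as nn

class DSHNet(nn.Module):
    """3 conv + 3 pooling + 2 fully-connected layers; the last layer
    emits one real value per hash bit, binarized by sign() at test time."""
    def __init__(self, n_bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(), nn.AvgPool2d(3, 2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.AvgPool2d(3, 2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 3, 500), nn.ReLU(),  # 3x3 spatial size for 32x32 input
            nn.Linear(500, n_bits),
        )

    def forward(self, x):
        return self.fc(self.features(x))
```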
Training methodology:
- generate image pairs online by exploiting all the image pairs within each mini-batch; this alleviates the need to store the whole pairwise similarity matrix, making the method scalable to large-scale datasets (see the sketch after this list)
- Fine-tune vs Train from scratch
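A minimal sketch of online pair generation (illustrative; it enumerates all distinct pairs in the batch and derives the similarity label from class labels):

```python
import torch

def batch_pairs(outputs, labels):
    """All distinct pairs inside one mini-batch.
    outputs: (B, n_bits) network outputs; labels: (B,) class labels.
    Returns b1, b2 and y (0 = same class, 1 = different class)."""
    i, j = torch.triu_indices(len(labels), len(labels), offset=1)
    y = (labels[i] != labels[j]).float()
    return outputs[i], outputs[j], y

# Usage together with the loss sketched above:
# b1, b2, y = batch_pairs(net(images), labels)
# loss = dsh_loss(b1, b2, y)
```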
Experiment:
CIFAR-10
GIST descriptors for conventional hashing methods
NUS-WIDE
225-D normalized block-wise color moment features
Evaluation Metrics
mAP: mean Average Precision
precision-recall curves (48-bit codes)
mean precision within Hamming radius 2 for different code lengths
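A small sketch of retrieval with Hamming distance and precision within radius 2 (my own illustration of the metric, not the paper's evaluation code):

```python
import numpy as np

def hamming_dist(query, db):
    """Hamming distances between one query code (n_bits,)
    and all database codes (N, n_bits); codes are 0/1 arrays."""
    return (query[None, :] != db).sum(axis=1)

def precision_at_radius(query, q_label, db, db_labels, radius=2):
    """Precision among database items within the given Hamming radius."""
    hit = hamming_dist(query, db) <= radius
    if hit.sum() == 0:
        return 0.0  # convention: no retrieved items counts as zero precision
    return float((db_labels[hit] == q_label).mean())
```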
Network ensembles?
Comparison with state-of-the-art methods:
CNNH: trains the model to fit pre-computed discriminative binary codes; binary code generation and network learning are isolated.
CLBHC: trains the model with a binary-like hidden layer whose activations serve as features for classification; encoding dissimilar images to similar binary codes is not penalized.
DNNH: uses triplet-based constraints to describe more complex semantic relations, but training its network is more difficult due to the sigmoid non-linearity and the parameterized piece-wise threshold function used in the output layer.
DSH combines binary code generation with network learning, avoiding these issues.
Comparison of Encoding Time
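Binary codes are fast at search time because Hamming distance reduces to XOR plus popcount; a hedged NumPy sketch (illustrative, not a benchmark from the paper):

```python
import numpy as np

def pack_codes(bits):
    """Pack 0/1 codes (N, n_bits) into bytes for compact storage."""
    return np.packbits(bits, axis=1)

def hamming_packed(q, db):
    """Hamming distance via XOR + bit counting over packed codes."""
    xor = np.bitwise_xor(q[None, :], db)
    return np.unpackbits(xor, axis=1).sum(axis=1)

# 48-bit codes for one million images fit in ~6 MB.
db = pack_codes(np.random.randint(0, 2, size=(1_000_000, 48), dtype=np.uint8))
q = pack_codes(np.random.randint(0, 2, size=(1, 48), dtype=np.uint8))[0]
d = hamming_packed(q, db)  # ranks the whole database in one vectorized pass
```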