Perceptual Generative Adversarial Networks for Small Object Detection

This is a freshly published CVPR 2017 paper on small object detection. It uses a Perceptual GAN (PGAN) to improve performance on the small object detection task.

I have not been working on object detection recently; someone recommended this paper, and since the abstract seemed easy to follow I kept reading. In the end I still did not fully understand it, only the PGAN model itself. If anything below is misunderstood, please point it out.

Back to the point: why is PGAN effective for small objects? The idea is that small objects are hard to detect while large objects are easy, so PGAN has the generator learn a mapping that transforms small-object features into large-object-like features, after which detection becomes easier. The key component of PGAN is therefore its generator.

In a conventional GAN, the generator learns a mapping from random noise to images, i.e., it can turn a noise vector into a picture. PGAN instead has the generator turn small-object representations into large-object-like ones, which benefits detection. Here is how the paper itself describes the generator:

  1. we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to “super-resolved” ones, achieving similar characteristics as large objects
  2. Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection through narrowing representation difference of small objects from the large ones.
  3. generator learns to transfer perceived poor representations of the small objects to super-resolved ones
  4. The Perceptual GAN aims to enhance the representations of small objects to be similar to those of large object
  5. the generator is a deep residual based feature generative model which transforms the original poor features of small objects to highly discriminative ones by introducing fine-grained details from lower-level layers, achieving “super-resolution” on the intermediate representations
  6. In a traditional GAN, G represents a generator that learns to map data z from the noise distribution p_z(z) to the distribution p_data(x) over data x; in PGAN's generator, x and z are instead the representations for large objects and small objects
  7. The generator network aims to generate super-resolved representations for small objects to improve detection accuracy
  8. the generator as a deep residual learning network that augments the representations of small objects to super-resolved ones by introducing more fine-grained details absent from the small objects through residual learning

The paper repeats the same point in several places: the generator learns a mapping, and that mapping turns the "fake" (small-object) representations into "real" (large-object) ones. A sketch of the corresponding objective is given below.
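To make that reinterpretation concrete, here is the standard GAN min-max objective (Goodfellow et al., 2014), annotated with PGAN's reading of x and z from quote 6 above; the annotation is my paraphrase, not a formula copied from the paper:

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
% In PGAN: x = representation of a large object, z = representation of a small object,
% so G(z) is the "super-resolved" small-object feature that D should accept as large-object-like.
```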

Now let's see what the generator looks like.

The generator has two parts; I did not really understand what this means, probably it ties into the object detection pipeline. The final output is the Super-Resolved Features, which look very much like the large-object features. In the paper's figure, the bottom-left features are generated by G and the top-left ones are the real large-object features.
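Based on quotes 5 and 8 above, the generator is essentially a residual branch: it takes fine-grained lower-level features and adds a learned residual onto the original small-object features to produce the super-resolved ones. Below is a minimal PyTorch-style sketch of that idea; the class name, channel count, and two-conv block depth are my illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ResidualFeatureGenerator(nn.Module):
    """Sketch of PGAN's generator idea: learn a residual that lifts poor
    small-object features toward large-object-like ("super-resolved") features.
    Channel count and block depth are illustrative assumptions."""

    def __init__(self, n_channels=512):
        super().__init__()
        # residual branch fed by lower-level (fine-grained) features
        self.residual_branch = nn.Sequential(
            nn.Conv2d(n_channels, n_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_channels, n_channels, kernel_size=3, padding=1),
        )

    def forward(self, small_obj_feat, low_level_feat):
        # residual learned from the fine-grained lower-level details
        residual = self.residual_branch(low_level_feat)
        # super-resolved representation = original small-object features + learned residual
        return small_obj_feat + residual
```

The sketch assumes both feature maps have already been brought to the same channel count and spatial size; it only captures the residual-addition structure, with the discriminator then pushing this branch to add exactly the detail that small objects lack.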

With the generator covered, we come to the discriminator, which also differs from the one in a traditional GAN.

Here a new loss is added, called the perceptual loss, which is presumably where PGAN gets its name (my guess, but it seems obvious). I did not fully understand this loss either, so I quote the paper directly (if you understand this part, please explain it in the comments so everyone can learn):

1. justify the detection accuracy benefiting from the generated super-resolved features with a perceptual loss
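My reading of that sentence: besides the usual adversarial term, the generator is also penalized by a detection loss computed on the super-resolved features, so the generated features must actually help classification and localization rather than merely fool the discriminator. A hedged PyTorch-style sketch of such a combined generator loss follows; the function name, argument names, and the weight lambda_p are illustrative assumptions, not the paper's notation:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_score_fake, cls_logits, cls_targets,
                   bbox_pred, bbox_targets, lambda_p=1.0):
    """Illustrative combined loss for the generator.

    d_score_fake : discriminator scores for super-resolved (generated) features
    cls_logits / cls_targets : detection classification outputs and labels
    bbox_pred / bbox_targets : bounding-box regression outputs and targets
    lambda_p : weight of the perceptual (detection) term -- an assumption here
    """
    # adversarial term: fool the discriminator into judging the features as "large object"
    adv_loss = F.binary_cross_entropy_with_logits(
        d_score_fake, torch.ones_like(d_score_fake))

    # perceptual term: the super-resolved features must still support detection
    perceptual_loss = (F.cross_entropy(cls_logits, cls_targets)
                       + F.smooth_l1_loss(bbox_pred, bbox_targets))

    return adv_loss + lambda_p * perceptual_loss
```

As far as I can tell, "perceptual" here refers to this detection-oriented term rather than to the VGG-feature perceptual loss used in image super-resolution work.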

After reading the paper, I do not feel the authors state explicitly which prior work inspired PGAN, though GAN (Goodfellow et al., 2014) is clearly the foundation.
