Xie C, Tan M, Gong B, et al. Adversarial Examples Improve Image Recognition[J]. arXiv: Computer Vision and Pattern Recognition, 2019.

@article{xie2019adversarial,
  title={Adversarial Examples Improve Image Recognition},
  author={Xie, Cihang and Tan, Mingxing and Gong, Boqing and Wang, Jiang and Yuille, Alan L and Le, Quoc V},
  journal={arXiv: Computer Vision and Pattern Recognition},
  year={2019}
}

To make the network more robust, the authors focus on

\[
\arg \min_{\theta} \, \mathbb{E}_{(x, y)\sim \mathbb{D}} \Big[ L(\theta, x, y) + \max_{\epsilon \in \mathbb{S}} L(\theta, x+\epsilon, y) \Big],
\]

which is essentially adversarial training. However, simply training on a mixture of ordinary samples and the corresponding adversarial samples does not work well. The authors attribute this to batch normalization: it suffices to give the ordinary samples and the adversarial samples separate batchnorm modules during training.

Main idea

The authors argue that ordinary training samples and adversarial training samples come from different distributions, so sharing a single batchnorm between them works poorly. They therefore propose adding an auxiliary batchnorm during training, used exclusively for the adversarial training samples; outside of training, only the ordinary batchnorm is used.

Each training step proceeds as follows (a PyTorch sketch of one step follows the list):

  1. Sample a batch \(x^c\) of ordinary training samples together with the labels \(y\);
  2. Generate adversarial samples \(x^a\) with some attack algorithm (PGD in this paper), using the auxiliary batchnorm;
  3. Compute the loss \(L^c(\theta, x^c, y)\) using the ordinary batchnorm;
  4. Compute the loss \(L^a(\theta, x^a, y)\) using the auxiliary batchnorm;
  5. Backward on \(L^c(\theta, x^c, y) + L^a(\theta, x^a, y)\) and update \(\theta\).
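A minimal sketch of one such step, assuming the network accepts an `adv` flag that routes inputs through the auxiliary batchnorm (as the Mixturenorm modules in the Code section below do) and that `pgd_attack` is a placeholder attack generator; neither name comes from the paper's released code:

import torch.nn as nn

def advprop_step(net, optimizer, pgd_attack, x_c, y):
    # one AdvProp training step (steps 2-5 above)
    criterion = nn.CrossEntropyLoss()
    x_a = pgd_attack(net, x_c, y)               # step 2: adversarial batch
    loss_c = criterion(net(x_c, adv=False), y)  # step 3: ordinary batchnorm
    loss_a = criterion(net(x_a, adv=True), y)   # step 4: auxiliary batchnorm
    optimizer.zero_grad()
    (loss_c + loss_a).backward()                # step 5: update theta
    optimizer.step()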

Experiments overview

Datasets: ImageNet-A, ImageNet-C, Stylized-ImageNet.

AdvProp reaches 85.2% top-1 accuracy on ImageNet (Fig. 4);

Evaluation on the corrupted ImageNet variants (Table 4; mCE, mean corruption error, lower is better);

How the strength of the adversarial attacks affects classification accuracy: when a network's "capacity" is weak, milder attacks actually work better; when its "capacity" is strong, stronger attacks work better (Table 2);

Comparison of AdvProp with ordinary adversarial training (Fig. 5);

The stronger a network's "capacity", the smaller the gain from AdvProp;

Comparison between AutoAugment and AdvProp (Table 6);

The influence of different adversarial attacks (Table 7).

Code

The code has not been tested.



"""
white-box attacks:
iFGSM
PGD
""" import torch
import torch.nn as nn class WhiteBox: def __init__(self, net, epsilon:float, times:int, criterion=None):
self.net = net
self.epsilon = epsilon
self.times = times
if not criterion:
self.criterion = nn.CrossEntropyLoss()
else:
self.criterion = criterion pass @staticmethod
def calc_jacobian(loss, inp):
jacobian = torch.autograd.grad(loss, inp, retain_graph=True)[0]
return jacobian @staticmethod
def sgn(matrix):
return torch.sign(matrix) @staticmethod
def pre(out):
return torch.argmax(out, dim=1) def fgsm(self, inp, y):
inp.requires_grad_(True)
out = self.net(inp)
loss = self.criterion(out, y)
delta = self.sgn(self.calc_jacobian(loss, inp))
flag = False
inp_new = inp.data
for i in range(self.times):
inp_new = inp_new + self.epsilon * delta
out_new = self.net(inp_new)
if self.pre(out_new) != y:
flag = True
break
return flag, inp_new def ifgsm(self, inps, ys):
N = len(inps)
adversarial_samples = []
for i in range(N):
flag, inp_new = self.fgsm(
inps[[i]], ys[[i]]
)
if flag:
adversarial_samples.append(inp_new) return torch.cat(adversarial_samples), \
len(adversarial_samples) / N def pgd(self, inp, y, perturb):
boundary_low = inp - perturb
boundary_up = inp + perturb
inp.requires_grad_(True)
out = self.net(inp)
loss = self.criterion(out, y)
delta = self.sgn(self.calc_jacobian(loss, inp))
flag = False
inp_new = inp.data
for i in range(self.times):
inp_new = torch.clamp(
inp_new + delta,
boundary_low,
boundary_up
)
out_new = self.net(inp_new)
if self.pre(out_new) != y:
flag = True
break
return flag, inp_new def ipgd(self, inps, ys, perturb):
N = len(inps)
adversarial_samples = []
for i in range(N):
flag, inp_new = self.pgd(
inps[[i]], ys[[i]],
perturb
)
if flag:
adversarial_samples.append(inp_new) return torch.cat(adversarial_samples), \
len(adversarial_samples) / N
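A hypothetical usage sketch; the toy classifier, the random "images", and the hyperparameters below are placeholders, not anything from the paper:

# placeholders: a toy classifier on flattened 28x28 inputs
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
attacker = WhiteBox(net, epsilon=0.01, times=20)
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
adv, ratio = attacker.ipgd(x, y, perturb=0.1)  # ratio = fraction successfully attacked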


"""
black-box attack
see Practical Black-Box Attacks against Machine Learning.
""" import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader class Synthetic(Dataset): def __init__(self, data, labels):
self.data = data
self.labels = labels def __len__(self):
return len(self.data) def __getitem__(self, index):
return self.data[index], self.labels[index] class Blackbox: def __init__(self, oracle, substitute, data, trainer, lamb):
self.oracle = oracle
self.substitute = substitute
self.data = []
self.trainer = trainer
self.lamb = lamb
self.update(data) def update(self, data):
labels = self.oracle(data)
self.data.append(Synthetic(data, labels)) def train(self):
self.trainer(self.substitute, self.data, self.lamb) class Trainer: def __init__(self,
lr, weight_decay,
batch_size, shuffle=True, **kwargs):
"""
:param lr: learning rate
:param weight_decay:
:param batch_size: batch_size for dataloader
:param shuffle: shuffle for dataloader
:param kwargs: other configs for dataloader
"""
self.kwargs = {"batch_size":batch_size,
"shuffle":shuffle}
self.kwargs.update(kwargs)
self.criterion = nn.CrossEntropyLoss
self.opti = self.optim(lr=lr, weight_decay=weight_decay) @quireone
def optim(self, parameters, **kwargs):
"""
quireone is decorator defined below
:param parameters: net.parameteres()
:param kwargs: other configs
:return:
"""
return torch.optim.SGD(parameters, **kwargs) def dataloader(self, dataset):
return DataLoader(dataset, **self.kwargs) @staticmethod
def calc_jacobian(out, inp):
jacobian = torch.autograd.grad(out, inp, retain_graph=True)[0]
return jacobian @staticmethod
def sgn(matrix):
return torch.sign(matrix) def newdata(self, outs, inps, labels, lamb):
data = inps.data
for i in range(len(labels)):
out = outs[i, labels[i]]
data += lamb * self.sgn(self.calc_jacobian(out, inps)) return data def train(self, net, criterion, opti, dataloader, lamb=None,
update=False):
"""
:param net:
:param criterion:
:param opti:
:param dataloader:
:param lamb: lambda for update S
:param update: if True, train will return the new data
:return:
"""
if update:
assert lamb is not None, "lamb needed when updating"
newd = torch.tensor([])
for i, data in enumerate(dataloader):
inps, labels = data
inps.requires_grad_(True)
outs = net(inps)
loss = criterion(outs, labels) if update:
new_samples = self.newdata(outs, inps, labels, lamb)
newd = torch.cat((newd, new_samples)) opti.zerograd()
loss.backward()
opti.step()
if update:
return newd def __call__(self, substitute, data, lamb):
N = len(data)
opti = self.opti(substitute.parameters())
for i, item in enumerate(data):
dataloader = self.dataloader(data)
if i is N-1:
return self.train(substitute, self.criterion,
opti, dataloader, lamb, True)
else:
self.train(substitute, self.criterion,
opti, dataloader) def quireone(func): #a decorator, for easy to define optimizer
def wrapper1(*args, **kwargs):
def wrapper2(arg):
result = func(arg, *args, **kwargs)
return result
wrapper2.__doc__ = func.__doc__
wrapper2.__name__ = func.__name__
return wrapper2
return wrapper1
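A hypothetical driver, just to show how the pieces fit together; the oracle, substitute, and data below are placeholders rather than anything from the referenced paper:

# placeholders: a random "oracle" and a tiny substitute on 28x28 inputs
substitute = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
oracle = lambda x: torch.randint(0, 10, (len(x),))
trainer = Trainer(lr=0.01, weight_decay=5e-4, batch_size=32)
attack = Blackbox(oracle, substitute, torch.rand(64, 1, 28, 28), trainer, lamb=0.1)
for _ in range(3):  # substitute-training rounds
    attack.train()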


"""
Adversarial Examples Improve Image Recognition
""" import torch
import torch.nn as nn class Mixturenorm1d(nn.Module): def __init__(self, rel, num_features:int, *args, **kwargs):
super(Mixturenorm1d, self).__init__()
self.norm1 = nn.BatchNorm1d(num_features,
*args, **kwargs)
self.norm2 = nn.BatchNorm1d(num_features,
*args, **kwargs)
self.rel = rel def forward(self, x):
if self.rel.adv and self.rel.training:
return self.norm2(x)
else:
return self.norm1(x) def __setattr__(self, name, value):
"""
we should redefine the setattr method,
or self.rel will be regard as a child of Mixturenorm.
Hence, if we call instance.modules() or instance.children(),
RecursionError: maximum recursion depth exceeded will be raised.
:param name:
:param value:
:return:
"""
if name is "rel":
object.__setattr__(self, name, value)
else:
super(Mixturenorm1d, self).__setattr__(name, value) class Mixturenorm2d(nn.Module): def __init__(self, rel, num_features:int, *args, **kwargs):
super(Mixturenorm2d, self).__init__()
self.norm1 = nn.BatchNorm2d(num_features,
*args, **kwargs)
self.norm2 = nn.BatchNorm2d(num_features,
*args, **kwargs)
self.rel = rel
def forward(self, x):
if self.rel.adv and self.rel.training:
return self.norm2(x)
else:
return self.norm1(x) def __setattr__(self, name, value):
if name is "rel":
object.__setattr__(self, name, value)
else:
super(Mixturenorm2d, self).__setattr__(name, value) if __name__ == "__main__": class Testnet(nn.Module): def __init__(self):
super(Testnet, self).__init__()
self.flag = False
self.dense = nn.Sequential(
nn.Linear(10, 20),
Mixturenorm1d(self, 20),
nn.ReLU(),
nn.Linear(20, 1)
) def forward(self, x, adv=False):
self.adv = adv
return self.dense(x) x = torch.rand((3, 10))
test = Testnet()
out = test(x, True)
