TVM量化小结手册
Table of Contents
- Official References
- TVM quantization roadmap
- INT8 quantization proposal
- Quantization Story - 2019-09
- Quantization Development
- Quantization Framework supported by TVM
- TF Quantization Related
- Pytorch Quantization Related
- MXNet related
- Tensor Core Related
- Related Commit
- Speed Up
- Comparison
- Automatic Integer Quantization
- Accepting Pre-quantized Integer models
- Speed Profile Tools
- Devices Attributes
- Copartner
- Alibaba
There is a great deal of material on quantization in TVM. It is valuable, but extremely scattered and not very friendly to newcomers, so this post collects it in one place.
OFFICIAL REFERENCES
TVM QUANTIZATION ROADMAP
INT8 QUANTIZATION PROPOSAL
- INT8 quantization proposal - 2018-07
- This document presents the high-level overview of quantization process, and presents a proposal for implementing that in TVM.
- introduce background on quantization
- INT8 Quantization - Code generation for backends - 2018-07
- This thread only focuses on implementation of quantized layers in TVM.
QUANTIZATION STORY - 2019-09
QUANTIZATION DEVELOPMENT
- [RFC] Search-based Automated Quantization - 2020-01-22
- Proposes a new quantization framework that brings hardware and learning methods into the loop.
- Borrowing ideas from existing quantization frameworks, it adopts a three-phase annotation-calibration-realization design:
- Annotation: The annotation pass rewrites the graph and inserts simulated quantize operations according to each operator's rewrite function. The simulated quantize operation simulates the rounding and saturating error of quantizing from float to integer.
- Calibration: The calibration pass adjusts the thresholds of the simulated quantize operations to reduce the accuracy drop.
- Realization: The realization pass transforms the simulation graph, which actually computes in float32, into a real low-precision integer graph.
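The rounding-and-saturation behavior that the simulated quantize operation models can be sketched in plain NumPy. This is a conceptual illustration under an assumed symmetric, per-tensor quantization scheme; the function name and signature are this post's own, not TVM's actual implementation:

```python
import numpy as np

def simulated_quantize(x, threshold, nbit=8):
    """Simulate nbit integer quantization error while staying in float32.

    Maps [-threshold, threshold] onto the signed nbit integer range,
    rounds and saturates, then rescales back to float. This mimics the
    rounding/saturating error the annotation pass inserts into the graph.
    """
    qmax = 2 ** (nbit - 1) - 1            # 127 for int8
    scale = threshold / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return (q * scale).astype(np.float32)

# Values beyond the threshold saturate; in-range values pick up rounding error.
x = np.array([0.1, 0.5, 2.0], dtype=np.float32)
y = simulated_quantize(x, threshold=1.0)
```

Because the output stays in float32, the calibration pass can still run the graph with ordinary float kernels while searching for thresholds; only the realization pass switches to real integer arithmetic.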
QUANTIZATION FRAMEWORK SUPPORTED BY TVM
TF QUANTIZATION RELATED
- TVM supports all pre-quantized TFLite hosted models
- Performance is evaluated on a C5.12xlarge Cascade Lake machine with Intel VNNI support
- The models have not been auto-tuned yet
PYTORCH QUANTIZATION RELATED
- How to convert the model to a quantized one through relay?
- explains how to set qconfig via torch.quantization.get_default_qconfig('fbgemm')
- Quantized model accuracy benchmark: PyTorch vs TVM
- explains how to convert a quantized PyTorch model to a TVM model
- compares accuracy and speed for resnet18, resnet50, mobilenet-v2, mobilenet-v3, inception_v3 and googlenet
- includes STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH, PyTorch's quantization tutorial
- gap_quantization
- Placeholder for GAP8 export and quantization module for PyTorch
- includes squeezenet-v1.1's quantization file
MXNET RELATED
- Model Quantization for Production-Level Neural Network Inference
- The CPU performance below is from an AWS EC2 C5.24xlarge instance with custom 2nd-generation Intel Xeon Scalable processors (Cascade Lake).
- Model quantization delivers a stable speedup across all models, such as 3.66X for ResNet 50 v1, 3.82X for ResNet 101 v1 and 3.77X for SSD-VGG16, which is very close to the theoretical 4X speedup of INT8.
- The accuracy of the Apache/MXNet quantization solution is very close to that of the FP32 models, without requiring retraining. In Figure 8, MXNet shows only a small reduction in accuracy, less than 0.5%.
TENSOR CORE RELATED
- [RFC][Tensor Core] Optimization of CNNs on Tensor Core
- [Perf] Enhance cudnn and cublas backend and enable TensorCore
RELATED COMMIT
- [OPT] Low-bit Quantization #2116
- Benchmarking Quantization on Intel CPU
- [RFC][Quantization] Support quantized models from TensorflowLite#2351
- After initial investigation and effort, on the MobileNet V1 model, INT8 achieves about a 30% speedup over FP32 on ARM CPU.
- [TFLite] Support TFLite FP32 Relay frontend. #2365
- This is the first PR of #2351, toward supporting the import of existing quantized int8 TFLite models. The base version of TensorFlow / TFLite is 1.12.
- [Strategy] Support for Int8 schedules - CUDA/x86 #5031
- The recently introduced op strategy has some issues with task extraction in AutoTVM. This PR fixes them for x86/CUDA.
- [Torch, QNN] Add support for quantized models via QNN #4977
SPEED UP
COMPARISON
AUTOMATIC INTEGER QUANTIZATION
- The inference time is longer after int8 quantization
- TVM-relay.quantize vs quantization of other Framework
- Compares TVM FP32, TVM int8, TVM int8 quantization + AutoTVM, and MXNet
Quantization int8 slower than int16 on skylake CPU
- int8 is always slower than int16, both before and after auto-tuning
- Target: llvm -mcpu=skylake-avx512
- The problem is solved by creating the int8 task explicitly:
- create the task topi_x86_conv2d_NCHWc_int8
- set the output dtype to int32, the input dtype to uint8, and the weight dtype to int8
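The dtype combination above (uint8 input, int8 weight, int32 output) matters because 8-bit products accumulated over many channels overflow 8- and 16-bit accumulators. A minimal NumPy sketch of why int32 accumulation is needed (illustrative only, not the TVM schedule itself):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)      # uint8 activations
weight = rng.integers(-128, 128, size=(16, 8), dtype=np.int8)  # int8 weights

# Widen to int32 before the multiply-accumulate: the worst-case
# product 255 * 127 summed over just 16 reduction terms already
# exceeds the int16 range (32767), so int16 accumulation would wrap.
acc = data.astype(np.int32) @ weight.astype(np.int32)
```

On VNNI-capable hardware this widening is done by the instruction itself, which is why the explicitly created int8 task has to declare the int32 output dtype.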
- Compares TVM FP32, TVM int8, TVM int8 quantization, MXNet, and TF1.13
- includes test code
8bit@Cuda: AutoTVM vs TensorRT vs MXNet
- This post shows how to use TVM to automatically optimize quantized deep learning models on CUDA.
ACCEPTING PRE-QUANTIZED INTEGER MODELS
- Is there any speed comparison of quantization on cpu
- extensive discussion of speed comparisons among torch-fp32, torch-int8, tvm-fp32, tvm-int16 and tvm-int8
SPEED PROFILE TOOLS
| Node Name | Ops | Time(us) | Time(%) | Start Time | End Time | Shape | Inputs | Outputs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1_NCHW1c | fuse___layout_transform___4 | 56.52 | 0.02 | 15:24:44.177475 | 15:24:44.177534 | (1, 1, 224, 224) | 1 | 1 |
| _contrib_conv2d_nchwc0 | fuse__contrib_conv2d_NCHWc | 12436.11 | 3.4 | 15:24:44.177549 | 15:24:44.189993 | (1, 1, 224, 224, 1) | 2 | 1 |
| relu0_NCHW8c | fuse___layout_transform___broadcast_add_relu___layout_transform__ | 4375.43 | 1.2 | 15:24:44.190027 | 15:24:44.194410 | (8, 1, 5, 5, 1, 8) | 2 | 1 |
| _contrib_conv2d_nchwc1 | fuse__contrib_conv2d_NCHWc_1 | 213108.6 | 58.28 | 15:24:44.194440 | 15:24:44.407558 | (1, 8, 224, 224, 8) | 2 | 1 |
| relu1_NCHW8c | fuse___layout_transform___broadcast_add_relu___layout_transform__ | 2265.57 | 0.62 | 15:24:44.407600 | 15:24:44.409874 | (64, 1, 1) | 2 | 1 |
| _contrib_conv2d_nchwc2 | fuse__contrib_conv2d_NCHWc_2 | 104623.15 | 28.61 | 15:24:44.409905 | 15:24:44.514535 | (1, 8, 224, 224, 8) | 2 | 1 |
| relu2_NCHW2c | fuse___layout_transform___broadcast_add_relu___layout_transform___1 | 2004.77 | 0.55 | 15:24:44.514567 | 15:24:44.516582 | (8, 8, 3, 3, 8, 8) | 2 | 1 |
| _contrib_conv2d_nchwc3 | fuse__contrib_conv2d_NCHWc_3 | 25218.4 | 6.9 | 15:24:44.516628 | 15:24:44.541856 | (1, 8, 224, 224, 8) | 2 | 1 |
| reshape1 | fuse___layout_transform___broadcast_add_reshape_transpose_reshape | 1554.25 | 0.43 | 15:24:44.541893 | 15:24:44.543452 | (64, 1, 1) | 2 | 1 |
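As a sanity check on the profile above, the Time(%) column is simply each node's Time(us) as a share of the total run time. Recomputing it from the rows (values copied from the table) reproduces the reported percentages:

```python
# Per-node times in microseconds, copied from the profiler table above.
times_us = {
    "1_NCHW1c": 56.52,
    "_contrib_conv2d_nchwc0": 12436.11,
    "relu0_NCHW8c": 4375.43,
    "_contrib_conv2d_nchwc1": 213108.6,
    "relu1_NCHW8c": 2265.57,
    "_contrib_conv2d_nchwc2": 104623.15,
    "relu2_NCHW2c": 2004.77,
    "_contrib_conv2d_nchwc3": 25218.4,
    "reshape1": 1554.25,
}
total = sum(times_us.values())
percent = {k: round(100 * v / total, 2) for k, v in times_us.items()}
```

The two big conv2d nodes alone account for roughly 87% of total time, which is where tuning effort pays off.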
DEVICES ATTRIBUTES
COPARTNER
Please see tvmai/meetup-slides for more recent information on what other copartners have done with TVM.
ALIBABA
- A look back at 2019
- Describes Alibaba's journey with TVM
- "In April of this year (2019), I came back to work with colleagues on ARM CPU quantization optimization, because a business team needed it. After a stretch of hard work, we are happy to say we are now faster than QNNPACK: on MobileNet V1 we reach 1.61x TFLite and 1.27x QNNPACK, and on MobileNet V2, 2x TFLite and 1.34x QNNPACK."
- TVM@AliOS