Notable AI Works Explained (Part 3): Synthetic Data in Deep Learning
Below are some resources for investigating synthetic datasets:
1. Synthetic datasets vs. real images for computer vision algorithm evaluation?
2. Synthetic Computer Vision (up to 2017)
https://github.com/unrealcv/synthetic-computer-vision
(Recommended)
This repository collects research progress, up to 2017, on using synthetic/artificial image data for computer vision; it is well worth a look!
3. Deep learning with synthetic data will make AI accessible to the masses (July 2018)
In the world of AI, data is king. It’s what powers the deep learning machines that have become the go-to method for solving many challenging real-world AI problems. The more high quality data we have, the better our deep learning models perform.
Tech’s big five, Google, Amazon, Microsoft, Apple, and Facebook, are all in an amazing position to capitalize on this. They can collect data more efficiently and at a larger scale than anyone else, simply due to their abundant resources and powerful infrastructure. These tech behemoths are using the data collected from you and nearly everyone you know who uses their services to train their AI. The rich keep getting richer!
The massive data sets of images and videos amassed by these companies have become a strong competitive advantage, a moat that keeps smaller businesses from breaking into their market. It’s hard for a startup or individual, with significantly fewer resources, to get enough data to compete even if their product is great. High quality data is always expensive in both time and money to acquire, two resources that smaller organizations can’t afford to spend liberally.
This advantage will be overturned by the advent of synthetic data: anyone can now create and leverage synthetic data to train computers across many use cases, including retail, robotics, autonomous vehicles, commerce and much more.
Synthetic data is computer-generated data that mimics real data; in other words, data that is created by a computer, not a human. Software algorithms can be designed to create realistic simulated, or “synthetic,” data. You may have seen Unity or Unreal Engine before, game engines which make it easy to create video games and virtual simulations. These game engines can be used to create large synthetic data sets. The synthetic data can then be used to train our AI models in the same way we normally do with real-world data.
A recent paper from Nvidia showcases how to do this: they generate synthetic images by randomizing every possible variable, including the scene layout, objects, lighting position and intensity, textures, shapes, and scale.
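As a rough sketch of that idea (not Nvidia's actual pipeline; the parameter lists and ranges below are made up, and a real system would feed each sampled configuration into a renderer such as Unity or Unreal Engine), domain randomization boils down to sampling every scene variable independently for each image:

```python
import random

# Hypothetical parameter ranges; a real pipeline would drive a renderer
# (e.g. Unity or Unreal Engine) with values like these.
TEXTURES = ["wood", "metal", "checker", "noise"]
SHAPES = ["cube", "sphere", "cylinder", "cone"]

def sample_scene():
    """Sample one randomized scene configuration (domain randomization)."""
    return {
        "object_shape": random.choice(SHAPES),
        "object_texture": random.choice(TEXTURES),
        "object_scale": random.uniform(0.5, 2.0),
        "light_position": [random.uniform(-5, 5) for _ in range(3)],
        "light_intensity": random.uniform(0.2, 3.0),
        "camera_distance": random.uniform(1.0, 4.0),
    }

# Each sampled config would be passed to the renderer to produce one
# labeled training image; repeating this yields an arbitrarily large set.
dataset_configs = [sample_scene() for _ in range(10_000)]
print(dataset_configs[0])
```

Because no single sample needs to look realistic, the model is pushed to learn features that survive all of this variation.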
Being able to create high quality data so quickly and easily puts the little guys back in the game. Many early-stage startups can now solve their cold start problem (i.e. starting out with little or no data) by creating data simulators to generate contextually relevant data with quality labels in order to train their algorithms.
The flexibility and versatility of simulation make it especially valuable and much safer to train and test autonomous vehicles in these highly variable conditions. Simulated data can also be more easily labeled as it is created by computers, therefore saving a lot of time. It’s inexpensive and even allows one to explore niche applications where data would normally be extremely challenging to acquire, such as the health or satellite imaging fields.
The challenge and opportunity for startups competing against incumbents with inherent data advantage is to leverage the best visual data with correct labels to train computers accurately for diverse use cases. Simulating data will level the playing field between large technology companies and startups. Over time, large companies will probably also create synthetic data to augment their real data, and one day this may tilt the playing field again. In either case, technology is advancing more rapidly than ever before and the future of AI is bright.
4. At a Glance: Generative Models & Synthetic Data
https://mighty.ai/blog/at-a-glance-generative-models-synthetic-data/
5. Deep learning with synthetic data will democratize the tech industry (May 2018)
https://techcrunch.com/2018/05/11/deep-learning-with-synthetic-data-will-democratize-the-tech-industry/
The visual data sets of images and videos amassed by the most powerful tech companies have been a competitive advantage, a moat that keeps the advances of machine learning out of reach for many. This advantage will be overturned by the advent of synthetic data.
The world’s most valuable technology companies, such as Google, Facebook, Amazon and Baidu, among others, are applying computer vision and artificial intelligence to train their computers. They harvest immense visual data sets of images, videos and other visual data from their consumers.
These data sets have been a competitive advantage for major tech companies, keeping the advances of machine learning, and the processes that allow computers and algorithms to learn faster, out of reach for many.
Now, this advantage is being disrupted by the ability for anyone to create and leverage synthetic data to train computers across many use cases, including retail, robotics, autonomous vehicles, commerce and much more.
Synthetic data is computer-generated data that mimics real data; in other words, data that is created by a computer, not a human. Software algorithms can be designed to create realistic simulated, or “synthetic,” data.
This synthetic data then assists in teaching a computer how to react to certain situations or criteria, replacing real-world-captured training data. One of the most important aspects of real or synthetic data is to have accurate labels so computers can translate visual data to have meaning.
Since 2012, we at LDV Capital have been investing in deep technical teams that leverage computer vision, machine learning and artificial intelligence to analyze visual data across any business sector, such as healthcare, robotics, logistics, mapping, transportation, manufacturing and much more. Many startups we encounter have the “cold start” problem of not having enough quality labelled data to train their computer algorithms. A system cannot draw any inferences for users or items about which it hasn’t yet gathered sufficient information.
Startups can gather their own contextually relevant data or partner with others to gather relevant data, such as retailers for data of human shopping behaviors or hospitals for medical data. Many early-stage startups are solving their cold start problem by creating data simulators to generate contextually relevant data with quality labels in order to train their algorithms.
Big tech companies do not have the same challenge gathering data, and they exponentially expand their initiatives to gather more unique and contextually relevant data.
Cornell Tech professor Serge Belongie, who has been doing research in computer vision for more than 25 years, says,
In the past, our field of computer vision cast a wary eye on the use of synthetic data, since it was too fake in appearance. Despite the obvious benefits of getting perfect ground truth annotations for free, our worry was that we’d train a system that worked great in simulation but would fail miserably in the wild. Now the game has changed: the simulation-to-reality gap is rapidly disappearing. At the very minimum, we can pre-train very deep convolutional neural networks on near-photorealistic imagery and fine-tune them on carefully selected real imagery.
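A minimal PyTorch sketch of that pre-train-then-fine-tune recipe (the `FakeData` datasets below are stand-ins for real synthetic and real-world image folders, and the class count and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(num_classes=10).to(device)  # 10 classes: placeholder

# FakeData stands in for the two image folders a real project would use:
# a large synthetic set with free labels and a small curated real set.
tfm = transforms.ToTensor()
synthetic_loader = DataLoader(
    datasets.FakeData(size=512, num_classes=10, transform=tfm), batch_size=32)
real_loader = DataLoader(
    datasets.FakeData(size=64, num_classes=10, transform=tfm), batch_size=32)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Stage 1: pre-train on abundant, cheaply labeled synthetic images.
train(model, synthetic_loader, epochs=5, lr=1e-3)

# Stage 2: fine-tune on the small real set at a lower learning rate so
# the synthetic pre-training is refined rather than overwritten.
train(model, real_loader, epochs=2, lr=1e-4)
```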
AiFi is an early-stage startup building a computer vision and artificial intelligence platform to deliver a more efficient checkout-free solution to both mom-and-pop convenience stores and major retailers. They are building a checkout-free store solution similar to Amazon Go.
![](https://techcrunch.com/wp-content/uploads/2018/05/dims.jpg?w=680)
Amazon.com Inc. employees shop at the Amazon Go store in Seattle. ©Amazon Go; Photographer: Mike Kane/Bloomberg via Getty Images
As a startup, AiFi had the typical cold start challenge with a lack of visual data from real-world situations to start training their computers, versus Amazon, which likely gathered real-life data to train its algorithms while Amazon Go was in stealth mode.
![](https://techcrunch.com/wp-content/uploads/2018/05/aifi-avatar-final.jpg?w=680)
Avatars help train AiFi shopping algorithms. ©AiFI
AiFi’s solution of creating synthetic data has also become one of their defensible and differentiated technology advantages. Through AiFi’s system, shoppers will be able to come into a retail store and pick up items without having to use cash, a card or scan barcodes.
These smart systems will need to continuously track hundreds or thousands of shoppers in a store and recognize or “re-identify” them throughout a complete shopping session.
![](https://techcrunch.com/wp-content/uploads/2018/05/aifi2_image.jpg?w=680)
AiFi store simulation with synthetic data. ©AiFi
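That re-identification step is commonly framed as embedding matching: a network maps each detected shopper to a feature vector, and a new detection is assigned to the stored identity with the most similar vector. A toy sketch of the matching logic, using random vectors as stand-ins for the output of a trained re-ID network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gallery: one stored embedding per known shopper (random stand-ins for
# the output of a trained re-identification network).
gallery = {f"shopper_{i}": rng.normal(size=128) for i in range(5)}

def reidentify(query, gallery, threshold=0.5):
    """Match a query embedding to the most similar known identity."""
    best_id, best_sim = None, -1.0
    for identity, emb in gallery.items():
        sim = np.dot(query, emb) / (np.linalg.norm(query) * np.linalg.norm(emb))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else "new_shopper"

# A slightly perturbed copy of shopper_2's embedding should match back.
query = gallery["shopper_2"] + rng.normal(scale=0.1, size=128)
print(reidentify(query, gallery))  # -> shopper_2
```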
Ying Zheng, co-founder and chief science officer at AiFi, previously worked at Apple and Google. She says,
The world is vast, and can hardly be described by a small sample of real images and labels. Not to mention that acquiring high-quality labels is both time-consuming and expensive, and sometimes infeasible. With synthetic data, we can fully capture a small but relevant aspect of the world in perfect detail. In our case, we create large-scale store simulations and render high-quality images with pixel-perfect labels, and use them to successfully train our deep learning models. This enables AiFi to create superior checkout-free solutions at massive scale.
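Those pixel-perfect labels come essentially for free because the simulator knows the exact scene geometry, so annotations are computed rather than drawn by hand. A minimal sketch of one such computation, deriving a ground-truth bounding box by projecting known 3D object corners through a pinhole camera (the intrinsics and corner coordinates below are made up; a real store simulation would get the same information from its renderer):

```python
import numpy as np

# Hypothetical pinhole camera intrinsics for a 640x480 render.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Known 3D corners of a product box in the camera frame (meters).
box_corners = np.array([
    [-0.1, -0.05, 2.0], [0.1, -0.05, 2.0],
    [-0.1,  0.05, 2.0], [0.1,  0.05, 2.0],
])

# The 2D bounding-box label falls out of the projection exactly.
px = project(box_corners)
x0, y0 = px.min(axis=0)
x1, y1 = px.max(axis=0)
print(f"ground-truth bbox: ({x0:.0f}, {y0:.0f}) to ({x1:.0f}, {y1:.0f})")
```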
Robotics is another sector leveraging synthetic data to train robots for various activities in factories, warehouses and across society.
Josh Tobin is a research scientist at OpenAI, a nonprofit artificial intelligence research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole. Tobin is part of a team working on building robots that learn. They have trained a system entirely with simulated data and deployed it on a physical robot, which, amazingly, can now learn a new task after seeing an action performed once.
They developed and deployed a new algorithm called one-shot imitation learning, allowing a human to communicate how to do a new task by performing it in virtual reality. Given a single demonstration, the robot is able to solve the same task from an arbitrary starting point and then continue the task.
![](https://techcrunch.com/wp-content/uploads/2018/05/stacking_demo.gif?w=680)
©Open AI
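A heavily simplified sketch of the conditioning idea behind one-shot imitation learning (the architecture and dimensions here are illustrative, not OpenAI's published model): the policy consumes both the current observation and an encoding of a single demonstration, so supplying a new demonstration changes the behavior without any retraining.

```python
import torch
import torch.nn as nn

class OneShotPolicy(nn.Module):
    """Toy demonstration-conditioned policy (dimensions are illustrative)."""

    def __init__(self, obs_dim=32, act_dim=7):
        super().__init__()
        # Encode a whole demonstration (a sequence of observations)
        # into a single task embedding.
        self.demo_encoder = nn.GRU(obs_dim, 64, batch_first=True)
        # Map (current observation, task embedding) to an action.
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + 64, 128), nn.ReLU(), nn.Linear(128, act_dim)
        )

    def forward(self, obs, demo):
        _, h = self.demo_encoder(demo)  # h: (1, batch, 64)
        task = h.squeeze(0)             # task embedding per batch item
        return self.policy(torch.cat([obs, task], dim=-1))

policy = OneShotPolicy()
demo = torch.randn(1, 50, 32)   # one demonstration of a new task
obs = torch.randn(1, 32)        # robot's current observation
action = policy(obs, demo)      # behavior conditioned on the demo
print(action.shape)             # torch.Size([1, 7])
```

Training such a policy across many simulated tasks is what lets it generalize from a single demonstration of an unseen task.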
Their goal was to learn behaviors in simulation and then transfer these learnings to the real world. The hypothesis was that a robot could do precise things just as well when trained on simulated data. They started with 100 percent simulated data, expecting it to fall short of training on real data. However, training robotic tasks on simulated data worked much better than they expected.
Tobin says,
Creating an accurate synthetic data simulator is really hard. There is a factor of 3-10x in accuracy between a well-trained model on synthetic data versus real-world data. There is still a gap. For a lot of tasks the performance works well, but for extreme precision it will not fly — yet.
Many large technology companies, auto manufacturers and startups are racing toward delivering the autonomous vehicle revolution. Developers have realized there aren’t enough hours in a day to gather enough real data of driven miles needed to teach cars how to drive themselves.
One solution that some are using is synthetic data from video games such as Grand Theft Auto; unfortunately, some say that the game’s parent company Rockstar is not happy about driverless cars learning from their game.
![](https://techcrunch.com/wp-content/uploads/2018/05/gta5_capture.jpg?w=680)
A street in GTA V (left) and its reconstruction through capture data (right). ©Intel Labs, Technische Universität Darmstadt
May Mobility is a startup building a self-driving microtransit service. Their CEO and founder, Edwin Olson, says,
One of our uses of synthetic data is in evaluating the performance and safety of our systems. However, we don’t believe that any reasonable amount of testing (real or simulated) is sufficient to demonstrate the safety of an autonomous vehicle. Functional safety plays an important role.
The flexibility and versatility of simulation make it especially valuable and much safer to train and test autonomous vehicles in these highly variable conditions. Simulated data can also be more easily labeled as it is created by computers, therefore saving a lot of time.
Jan Erik Solem is the CEO and co-founder of Mapillary, helping create better maps for smarter cities, geospatial services and automotive. According to Solem,
Having a database and an understanding of what places look like all over the world will be an increasingly important component for simulation engines. As the accuracy of the trained algorithms improves, the level of detail and diversity of the data used to power the simulation matters more and more.
Neuromation is building a distributed synthetic data platform for deep learning applications. Their CEO, Yashar Behzadi, says,
To date, the major platform companies have leveraged data moats to maintain their competitive advantage. Synthetic data is a major disruptor, as it significantly reduces the cost and speed of development, allowing small, agile teams to compete and win.
The challenge and opportunity for startups competing against incumbents with inherent data advantage is to leverage the best visual data with correct labels to train computers accurately for diverse use cases. Simulating data will level the playing field between large technology companies and startups. Over time, large companies will probably also create synthetic data to augment their real data, and one day this may tilt the playing field again. Many speakers at the annual LDV Vision Summit in May in NYC will enlighten us as to how they are using simulated data to train algorithms to solve business problems and help computers get closer to general artificial intelligence.
6. Synthetic Datasets for Training AI
http://www.immersivelimit.com/blog/synthetic-datasets-for-training-ai
Training an AI to Recognize Cigarette Butts
http://www.immersivelimit.com/blog/training-an-ai-to-recognize-cigarette-butts
(Recommended)
7. Using synthetic data for deep learning video recognition
Summary:
Data Created to Mimic Reality
Synthetic data is digitally created data that mimics real-world sensory input. Objects, backgrounds, camera attributes, and environments can be combined to create millions of images for training modern AI algorithms.
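The scale of "millions of images" comes from combinatorics: even short lists of scene ingredients multiply quickly. A tiny illustration with made-up ingredient lists:

```python
from itertools import product

# Made-up ingredient lists; real pipelines have far more of each.
objects = ["bottle", "box", "can", "bag"]
backgrounds = ["shelf", "counter", "floor"]
camera_angles = range(0, 360, 15)  # 24 yaw angles
lighting = ["dim", "bright", "backlit"]

combos = list(product(objects, backgrounds, camera_angles, lighting))
print(len(combos))  # 4 * 3 * 24 * 3 = 864 distinct scenes, before any
                    # continuous randomization of pose, scale, or texture
```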
Advantages of AI Models Built Using Synthetic Data
Source: Neuromation