The picture above is funny.

But for me it is also one of those examples that make me sad about the outlook for AI and for Computer Vision. What would it take for a computer to understand this image as you or I do? I challenge you to think explicitly of all the pieces of knowledge that have to fall into place for it to make sense. Here is my short attempt:

  • You recognize it is an image of a bunch of people, and you understand they are in a hallway.
  • You recognize that there are 3 mirrors in the scene, so some of those people are "fake" replicas seen from different viewpoints.
  • You recognize Obama from the few pixels that make up his face. It helps that he is in his suit and that he is surrounded by other people with suits.
  • You recognize that there's a person standing on a scale, even though the scale occupies only a few white pixels that blend into the background. You've used the person's pose and your knowledge of how people interact with objects to figure it out.
  • You recognize that Obama has his foot positioned just slightly on top of the scale. Notice the language I'm using: it is in terms of the 3D structure of the scene, not the position of the leg in the 2D coordinate system of the image.
  • You know how physics works: Obama is leaning on the scale, which applies an extra force to it. A scale measures the force applied to it, so it will over-estimate the weight of the person standing on it (made precise in the short force balance after this list).
  • The person measuring his weight is not aware of Obama doing this. You infer this from his pose, from the fact that a person's field of view is finite, and from the fact that he is unlikely to feel the slight push of Obama's foot.
  • You understand that people are self-conscious about their weight. You also understand that he is reading off the scale measurement, and that shortly the over-estimated weight will confuse him because it will probably be much higher than what he expects. In other words, you reason about the implications of events that are about to unfold in the seconds after this photo was taken, and especially about how the thoughts inside people's heads will develop. You also reason about what pieces of information are available to each person.
  • There are people in the back who find the person's imminent confusion funny. In other words, you are reasoning about the state of mind of people, and about their view of the state of mind of another person. That's getting frighteningly meta.
  • Finally, the fact that the perpetrator here is the president makes it maybe even a little funnier. You understand which actions are more or less likely to be undertaken by different people based on their status and identity.
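
To make the physics step concrete (my own gloss, not part of the original post): a scale reports the normal force it supplies, divided by the gravitational acceleration g. If the person on it has mass M and the foot press adds an extra downward force F, the scale reads

$$ W_{\text{read}} = \frac{Mg + F}{g} = M + \frac{F}{g} > M, $$

an over-estimate of exactly F/g.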

I could go on, but the point here is that you've used a HUGE amount of information in that half second when you look at the picture and laugh. Information about the 3D structure of the scene; confounding visual elements like mirrors; the identities of people; affordances and how people interact with objects; physics (how a particular instrument works, and what leaning on it does); people and their tendency to be insecure about their weight. You've reasoned about the situation from the point of view of the person on the scale: what he is aware of, what his intents are, and what information is available to him. You've reasoned about people reasoning about people. You've also thought about the dynamics of the scene and made guesses about how the situation will unfold in the next few seconds, both visually and in the thoughts of the people involved, and you've reasoned about how likely or unlikely it is for people of a particular identity/status to carry out some action. Somehow all of these things come together to "make sense" of the scene.

It is mind-boggling that all of the above inferences unfold from a brief glance at a 2D array of R,G,B values. The core issue is that the pixel values are just the tip of a huge iceberg, and deriving the entire shape and size of the iceberg from prior knowledge is the most difficult task ahead of us. How can we even begin to write an algorithm that can reason about the scene like I did? Forget for a moment an inference algorithm capable of putting all of this together; how do we even begin to gather data that can support these inferences (for example, how a scale works)? How do we go about even giving the computer a chance?

Now consider that the state-of-the-art techniques in Computer Vision are tested on things like ImageNet (the task of assigning 1-of-k labels to entire images) or the Pascal VOC detection challenge (which adds bounding boxes). There is also quite a bit of work on pose estimation, action recognition, etc., but it is all specific, disconnected, and only half works. I hate to say it, but the state of CV and AI is pathetic when we consider the task ahead, and when we think about how we could ever get from here to there. The road ahead is long, uncertain and unclear.

I've seen arguments that all we need is lots more data from images, video, and maybe text, plus some clever learning algorithm: maybe a better objective function, run SGD, maybe anneal the step size, use Adagrad, or slap an L1 here and there and everything will just pop out. If only we had a few more tricks up our sleeves! But to me, examples like this illustrate that we are missing many crucial pieces of the puzzle, and that a central problem will be as much about obtaining the right training data in the right form to support these inferences as it will be about making them.
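
For concreteness, here is a minimal sketch (mine, not from the post) of what that bag of tricks looks like in code: plain SGD with an Adagrad-style per-parameter step size and an L1 penalty, applied to a toy least-squares problem. Every name and number in it is illustrative.

```python
import numpy as np

def sgd_adagrad_l1(grad_fn, w, lr=0.1, l1=1e-4, eps=1e-8, steps=1000):
    """Minimize a loss given its gradient, with Adagrad scaling and an L1 penalty."""
    cache = np.zeros_like(w)  # running sum of squared gradients (Adagrad)
    for _ in range(steps):
        g = grad_fn(w) + l1 * np.sign(w)           # loss gradient + L1 subgradient
        cache += g * g
        w = w - lr * g / (np.sqrt(cache) + eps)    # per-parameter annealed step
    return w

# Toy usage: recover a sparse weight vector from noiseless linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ true_w
grad = lambda w: 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
print(np.round(sgd_adagrad_l1(grad, np.zeros(5)), 2))  # ~ [1. -2. 0. 0.5 0.]
```

Useful machinery, all of it, but nothing in it touches the kind of knowledge the list above describes.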

Thinking further about the complexity and scale of the problem, a seemingly inescapable conclusion for me is that we may also need embodiment: the only way to build computers that can interpret scenes like we do may be to expose them to all the years of (structured, temporally coherent) experience we have, give them the ability to interact with the world, and equip them with some magical active learning/inference architecture that I can barely even imagine when I think backwards from what it should be capable of.

In any case, we are very, very far away, and this depresses me. What is the way forward? :( Maybe I should just do a startup. I have a really cool idea for a mobile local social iPhone app.

from: http://karpathy.github.io/2012/10/22/state-of-computer-vision/
