Editor's note: The 2014 edition of ACM SIGCHI, the premier conference on human-computer interaction, has drawn to a close. Microsoft Research ranked second in total accepted papers at the conference, behind only Carnegie Mellon University. Which innovative ideas and technologies at this year's conference caught the eye? Darren Edge, Lead Researcher at Microsoft Research Asia, walks us through his highlights.

Author: Darren Edge

Lead Researcher, Human-Computer Interaction Group, Microsoft Research Asia

If you are reading this article, chances are it has been published on a web page. You are probably scrolling up and down that page with a physical mouse or a touch screen; the page is displayed in a browser, which runs on an operating system, which depends on hardware, which in turn is packaged inside a physical "computer" (a PC, tablet, or mobile phone, for example). But have you ever thought about how such technologies are designed based on theories about users' knowledge, skills, needs, and values? In other words, have you ever wondered how researchers conduct studies, both in the laboratory and in the wild, to understand how people use and respond to technology in real-world settings? If you have been pondering these questions, then you have already been thinking about Human-Computer Interaction (HCI), and I am sure you are not the only one.

Every year, up to 3,500 researchers from around the world gather for the premier international conference on human-computer interaction, known as CHI (pronounced "kai"). This year, CHI 2014 was held in Toronto, Ontario, Canada. The six-day event included two days of workshops, with as many as fifteen parallel tracks to choose from, so deciding which sessions to attend was a real dilemma for attendees. Since not everyone interested in HCI gets the chance to travel to CHI, and those who do attend can only experience a small fraction of the program, I would like to take this opportunity to share my personal highlights.

More Efficient Desk Work

Workshop

My first engagement at this year's CHI was a workshop on the theme of "Peripheral Interaction: Shaping the Research and Design Space". This workshop was especially exciting for me because I coined the term "peripheral interaction" in my 2008 PhD dissertation, "Tangible User Interfaces for Peripheral Interaction". In my position paper, I revisited my earlier definition of peripheral interaction as fast, frequent interactions with objects on the periphery of the user's workspace and attention, and proposed a general framework for describing the qualities of peripheral interaction. These qualities matter even more in the context of a desktop workspace: despite all the advances in mobile and ubiquitous computing, we still spend a great deal of our working time at desks using conventional PCs and laptops.

On the work front, several projects presented at CHI try to make desktop interaction more fluid and efficient, and two in particular offer ideas for increasing the utility of the ordinary keyboard. The first, Type-Hover-Swipe, is a modified mechanical keyboard that can recognize hand gestures performed on or above the keys. This project, by Stuart Taylor and colleagues at Microsoft Research Cambridge, had the distinction of winning a Best Paper Award. The second keyboard project, GestKeyboard, recognizes stroking gestures across the keys of an unmodified keyboard in a way that combines seamlessly with regular typing. Its first author, Haimo Zhang, was an intern in the Microsoft Research Asia HCI Group in 2011.

More Embodied Play

Screenshot from the Exercise Tracking project's demo video

Desk work is a traditional area of HCI research, but in recent years many researchers have turned their attention to interaction design beyond the desktop and beyond work. One of the most exciting current trends in HCI is exertion gaming (or exergaming for short), which combines physical exercise, gaming, and social interaction in a single activity.

The Exercise Tracking system from Dan Morris and other colleagues at Microsoft Research Redmond uses a wearable sensor to find, recognize, and count repetitions of exercises. Another interesting movement-based system was the winner of the undergraduate Student Research Competition, which I had the pleasure (and pressure) of judging. Kyongwon Seo's autonomy-based rehabilitation design used the Microsoft Kinect together with elements of gamification to encourage stroke patients to continue their rehabilitation at home. It was one of many innovative uses of Kinect showcased at the conference.

Another system of particular interest to me explored more playful, embodied interaction. The project, VacuumTouch, is the outcome of Taku Hachisu's 2013 internship at Microsoft Research Asia with HCI Group researcher Masaaki Fukumoto. Their system is more attractive than a regular touch surface in one specific way: it uses an air pump and solenoid air valves to move and hold the user's finger with the power of suction.

More Contextual Learning

Screenshot from the Smart Subtitles project's demo video

As a researcher, I constantly track the state of the art in HCI and related fields so that I can keep sharpening my toolkit of models, methods, concepts, theories, and frameworks. At a conference like CHI this is easy, because all you have to do is listen and learn from the top experts in the room. In everyday working life, however, it is much harder to find the time and motivation to learn, because learning is hard. In my previous research, I explored how "microlearning" during the short, sparse fragments of free time scattered through the day could help people tackle the daunting challenge of learning a second language. At CHI 2014 I saw several promising projects that address the problem of lifelong learning in real-world contexts.

Two papers in particular address the challenge of contextual vocabulary learning. The first, Smart Subtitles, provides interactive video subtitles designed for language learners. The second, WADE, is an Integrated Development Environment (IDE) that can automatically modify the user interface of existing software applications, for example to translate UI labels and text into another language. Among the papers on communication between native and non-native speakers, one investigated the effects of sharing automated transcripts during real-time multiparty conversations. This technology would be handy not only in meetings but also in my day-to-day interactions as a foreigner in Beijing, since my British accent is often hard for others to understand.

Beyond domain-specific knowledge and skills, it is also important to learn how to manage your time and attention. In one project funded by the joint program between Microsoft Research Asia and the Korean Ministry of Science, ICT, and Future Planning (MSIP), MSRA HCI researcher Koji Yatani and colleagues from KAIST explored how college students get "hooked on smartphones". They found that students at risk of addiction used their smartphones for a daily average of 111 sessions, totaling four hours. Not all of the participants felt that this time was spent productively or with a clear purpose. This reminded me of a project that offers an engaging way to claim back some of that lost time for more meaningful activities: the ongoing work on Selfsourcing Personal Tasks by Jaime Teevan and colleagues at Microsoft Research Redmond, which helps people apply crowdsourcing methods to themselves by decomposing large personal information tasks into manageable "microtasks". Just like microlearning, this can help sustain users' motivation and engagement in both the short and the long term. By lowering the barrier to productive and purposeful phone interaction, these approaches could help people feel that they are in control of their smartphone use, rather than being controlled by their smartphones.

More Engaging Presentations

As I mentioned earlier, one of the biggest challenges of getting the most out of CHI is deciding which of the many parallel tracks to attend. The decision becomes even harder when talks that interest you appear in parallel sessions, forcing you to devise an intricate session-switching plan that inconveniences both you and the presenters in each session.

Researchers have long struggled with the problem of scheduling conference sessions so that sessions made up of closely related talks do not overlap in time.

Now Lydia Chilton and her collaborators have helped streamline this process by harnessing the power of the crowd, in particular the crowd of Program Committee (PC) members at the PC meeting. Her Frenzy system, which won an Honorable Mention Award at CHI 2014, was used to group papers into sessions and form the final CHI 2014 conference program. I experienced the benefits of Frenzy directly in the "Presentation Technologies" paper session, because the system helped PC members successfully place my two presentation-related papers alongside a closely related paper from our colleagues at Microsoft Research Redmond.

These three related papers cover the challenging activities of planning the narrative structure of a presentation (TurningPoint), preparing to deliver it through structured preparation and rehearsal (PitchPerfect), and performing a software demonstration to a live audience (DemoWiz). Clearly, a lot of presentation-related work is happening within Microsoft, pointing to interesting new directions for products like PowerPoint. Stay tuned!

The last project I want to recommend introduced an excellent piece of work in a playful, engaging way that everyone could learn from, and indeed it won a People's Choice Best Talk Award. The Panelrama project proposes a new development model for cross-device Web applications, demonstrated using a presentation application and the latest wearable technologies, and it represents a significant step forward in how we think about cross-device experiences. As the presenter, Jishuo Yang, connected additional devices to his presentation application, its components immediately and dynamically redistributed themselves to ensure the best fit between interface panels and interaction devices. By the end of the demo, Jishuo was reading timing information and controlling slide progression from his watch, reading his speaker notes on a head-mounted display, viewing the slide list on his phone, and showing the current slide on a laptop connected to the projection screen. Overall, it was a very impressive demonstration of powerful cross-device interaction, enabled by relatively simple HTML extensions.

Closing Thoughts

These are the cutting-edge HCI results I observed and learned about at CHI 2014. Researchers in this field are attending to every aspect of how people work, live, play, and learn, yet this is still only a glimpse at the tip of the iceberg of a fascinating research area; many more wonderful technologies await discovery by innovative and daring researchers.

__________________________________________________________________________________________________________________

English Original

If you are reading this blog post, there is
a very good chance that you are Human. There is also a good chance
that you are reading this post using a physical mouse or touch
screen to scroll down a Web page, which is displayed in a Web
browser, which runs on an Operating System, which performs
computation in hardware, which is packaged inside a physical
“Computer” (for example, a PC, tablet, or mobile phone). Have you
ever thought about how such technologies are designed based on
theories about future users’ knowledge, skills, needs, and values?
Or how researchers conduct studies both in the laboratory and “in
the wild” to understand when, where, how, and why people use and
respond to technology in practice? If you have thought about these
questions, then you have been thinking about Human-Computer
Interaction (HCI). And you are not alone.

Each year, up to 3500 people from around
the world gather together for the premiere international conference
on Human-Computer Interaction – the ACM SIGCHI Conference on Human
Factors in Computing Systems. In the spirit of simplification,
this is normally just abbreviated to CHI (pronounced “kai”). Having
your work accepted as a paper or note at CHI is a significant
achievement – only 23% of submissions to CHI 2014 were accepted
(465 out of 2036). Each year, Best Paper Awards and Honorable
Mention Awards are also given to the top-rated 1% and 5% of papers
respectively. At the conference, one of the co-authors of each
paper then presents the work to an audience of people eager to
learn about the latest and greatest HCI research. However, as the
saying goes, all work and no play makes for a dull conference!
Fortunately at CHI, the evening schedule is just as packed as the
daytime program, with more receptions, events, and parties to
attend than there are hours in the night. This year the Korea HCI
party was particularly good, with excellent location, atmosphere,
and people. The free drinks and snacks also helped! All of this
gives me very high hopes for next year’s CHI 2015 in
Seoul.

This year though, CHI was held in the city
of Toronto in Ontario, Canada. CHI 2014 was the 32nd CHI
conference since its establishment in 1982, and it was larger and more
impressive than ever. Spanning six days including two days of
workshops, with up to fifteen parallel tracks to choose among, it
is always hard to decide which sessions to attend. Since not
everybody who is interested in HCI gets the chance to travel to
CHI, and not everybody who attends CHI gets to see more than a
small fraction of the overall program, I would like to share my
personal highlights, or CHIlights!

As I have already explained, you need to
work hard and play hard to make the most of CHI. It is also
important to learn from the experience of being in the audience, as
well as give your best effort if you are responsible for
presenting. However, working, playing, learning, and presenting are
not just things researchers do at conferences – they are
fundamental activities of human life. Since I have a special
interest in all four of these activities, I naturally seek out HCI
research that aims to transform these activities for the better.
This also makes these activities an appropriate framework with
which to present my personal experience of CHI 2014, which I’ll now
do in 20 projects.

More Efficient Desk Work

My first engagement at CHI 2014 was in a
workshop on the theme of “Peripheral
Interaction: Shaping the Research and Design
Space
”. This was especially exciting for me since
I coined the term peripheral interaction with my 2008 PhD
dissertation on “Tangible
User Interfaces for Peripheral Interaction
”. In
my position paper [1], I refer back to my earlier
definition of peripheral interaction as one in which users perform
fast, frequent interactions with objects on the periphery of their
workspace and attention, and propose a framework for describing the
qualities of peripheral interaction in general. These qualities are
more relevant than ever when considered in the context of a desktop
workspace; despite all the advances in mobile and ubiquitous
computing, we still spend a great deal of time working with
conventional PCs and laptops at desks and tables.

Several CHI projects attempt to make
desktop interaction more fluid and efficient, with two in
particular thinking about how to increase the utility of regular
keyboards. The first, Type-Hover-Swipe
[2], is a modified mechanical keyboard that can recognize hand
gestures both on and above the keys. This work by Stuart Taylor and
other colleagues from Microsoft Research Cambridge also has the
distinction of winning a Best Paper Award. The second keyboard
project, GestKeyboard
[3], can recognize stroking gestures across the keys of an
unmodified keyboard, in a way that can be seamlessly combined with
regular typing. The first author of this work, Haimo Zhang, was an
intern in the Microsoft Research Asia (MSRA) HCI Group in
2011.
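
To make the keyboard-gesture idea a little more concrete, here is a minimal sketch (entirely my own, not GestKeyboard's published algorithm) of one way to tell a stroking gesture apart from ordinary typing on an unmodified keyboard: a stroke shows up as a rapid burst of presses on spatially adjacent keys, while typing produces longer gaps and jumps around the layout.

```python
# Hypothetical illustration only: distinguish a finger stroked across the keys
# from normal typing, using inter-key timing and key adjacency on a QWERTY layout.

KEY_POS = {c: (0, i) for i, c in enumerate("qwertyuiop")}
KEY_POS.update({c: (1, i) for i, c in enumerate("asdfghjkl")})
KEY_POS.update({c: (2, i) for i, c in enumerate("zxcvbnm")})

def looks_like_stroke(events, max_gap=0.08, max_jump=2):
    """events: list of (key, timestamp) pairs in press order."""
    if len(events) < 4:
        return False  # too few keys to call it a stroke
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        if t2 - t1 > max_gap:  # too slow for a continuous stroke
            return False
        (r1, c1), (r2, c2) = KEY_POS[k1], KEY_POS[k2]
        if abs(r1 - r2) + abs(c1 - c2) > max_jump:  # keys not adjacent enough
            return False
    return True

# A left-to-right swipe across the home row vs. typing the word "hello".
swipe = [(k, 0.03 * i) for i, k in enumerate("asdfghjkl")]
typing = [("h", 0.0), ("e", 0.21), ("l", 0.45), ("l", 0.58), ("o", 0.83)]
print(looks_like_stroke(swipe), looks_like_stroke(typing))  # True False
```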

Jumping from the keyboard to the mouse,
Phillip Pasqual and Jacob Wobbrock have investigated how
Kinematic
Template Matching
[4] can be used to predict the
endpoint of a mouse pointing operation and make target selection
even easier. Phillip was an intern in the MSRA HCI Group in 2012.
Finally, Stephen Fitchett, an MSRA Fellowship winner and MSRA HCI
intern in 2010, conducted a longitudinal field evaluation of his
Finder
Highlights
[5] system. The resulting paper won an
Honorable Mention Award for demonstrating improved desktop file
retrieval in real-world use. It is fair to say that past MSRA HCI
interns, along with our Microsoft Research colleagues in Cambridge,
are playing a significant role in inventing the desktop of the
future.
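
The endpoint-prediction idea behind kinematic template matching can be sketched very roughly as follows. This is not the authors' implementation, just a toy illustration under my own assumptions: record the velocity profiles of past pointing movements, match the profile of the movement in progress against those templates, and borrow the total distance of the closest match as the predicted endpoint.

```python
# Toy sketch of endpoint prediction from partial velocity profiles; the real
# kinematic template matching work is considerably more sophisticated.
import numpy as np

def predict_endpoint(partial_velocities, templates):
    """templates: list of (velocity_profile, total_distance) from past movements."""
    n = len(partial_velocities)
    observed = np.asarray(partial_velocities, dtype=float)
    best_err, best_distance = float("inf"), None
    for profile, total_distance in templates:
        if len(profile) < n:
            continue  # template too short to compare against
        err = np.linalg.norm(observed - np.asarray(profile[:n], dtype=float))
        if err < best_err:
            best_err, best_distance = err, total_distance
    return best_distance

# Hypothetical velocity templates (pixels per frame) and a movement in progress.
templates = [([2, 6, 9, 7, 3, 1], 300.0), ([1, 3, 4, 3, 1], 120.0)]
print(predict_endpoint([2, 5, 8], templates))  # closest template ends at 300.0
```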

More Embodied Play

While desk work is a traditional area of
interest for HCI, more recently there has been a shift towards the
design of interactions beyond the desktop context and for purposes
other than work. One of the most exciting trends in HCI right now
is exertion gaming, or exergaming, in which the benefits of
exercise, gaming, and social interaction are all combined into a
single activity. There were three paper sessions at CHI 2014
dedicated to exergaming, as well as two workshops, a panel, and a
Special Interest Group (SIG). Much of the original and current work
in the area was conducted by Florian ‘Floyd’ Mueller, another past
MSRA Fellowship winner and MSRA HCI intern all the way back in
2009. Floyd and I have continued collaborating over these past five
years, with our CHI 2014 paper on Exertion
Cards
[6] helping to support the creative game
design process in workshop settings. The cards can help support the
design of concepts like the LumaHelm – an interactive bicycle
helmet expertly
demonstrated
in the Interactivity section of the
CHI 2014 program. This was a definite CHIlight!

Towards the exertion end of the exergaming
spectrum, the RecoFit
[7] system from Dan Morris and other colleagues at Microsoft
Research Redmond uses a wearable sensor to find, recognize, and
count repetitive exercises. This would make for a great exergaming
platform! The presentation by Dan and accompanying live demo by
coauthor Scott Saponas was also the most energized talk of the
conference, and rightly won a People’s Choice Best Talk Award.
Another movement-based system was also the winner of the
undergraduate Student Research Competition, which I had the
pleasure (and pressure) of judging. Kyongwon Seo’s project on
Autonomy-Based Rehabilitation Design [8] used the Microsoft Kinect
device along with elements of gamification to encourage people
recovering from stroke to continue their rehabilitation at home.
This was one of many innovative uses of Kinect to be showcased at
the conference.
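
As a hedged illustration of what "find, recognize, and count repetitive exercises" can involve at its simplest, the sketch below (my own toy version, not the RecoFit pipeline) counts repetitions by smoothing the accelerometer magnitude from a wearable sensor and counting prominent peaks.

```python
# Minimal rep-counting sketch: smooth the accelerometer magnitude and count
# peaks above a threshold, spaced at least one plausible repetition apart.
import numpy as np
from scipy.signal import find_peaks

def count_reps(accel_xyz, fs=50, min_period_s=1.0):
    """accel_xyz: (N, 3) array of accelerometer samples captured at fs Hz."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    window = max(1, int(0.2 * fs))                      # ~0.2 s moving average
    smooth = np.convolve(magnitude, np.ones(window) / window, mode="same")
    threshold = smooth.mean() + 0.5 * smooth.std()
    peaks, _ = find_peaks(smooth, height=threshold, distance=int(min_period_s * fs))
    return len(peaks)

# Toy usage: ten simulated repetitions at 0.5 Hz on top of gravity.
t = np.arange(0, 20, 1 / 50)
z = 9.8 + 2.0 * np.sin(2 * np.pi * 0.5 * t)
accel = np.column_stack([np.zeros_like(t), np.zeros_like(t), z])
print(count_reps(accel))  # roughly 10
```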

Two further systems of particular interest
looked at more playful and embodied interaction, with the hands and
feet respectively. The first, VacuumTouch
[9], was the outcome of Taku Hachisu’s 2013 internship with MSRA
HCI researcher Masaaki Fukumoto. Their system is more attractive
than regular touch surfaces in one specific way – it uses an air
pump and solenoid air valves to move and immobilize the user’s
finger using the power of suction. Another playful twist on an
established interaction modality comes from
Dominik Schmidt (MSRA HCI intern 2011) and collaborators – their
Kickables
[10] system is a tangible interface operated with your feet. Even
after 32 years, HCI researchers are still inventing new ways for us
to use our bodily skills to act with technology, as well as new ways
for technology to act back at us through our full range of bodily
senses. After all, isn’t this what Human-Computer Interaction is
all about?

More Contextual Learning

As a researcher, I am constantly tracking
the state of the art in HCI and related fields so that I can
continue building my mental toolkit of models, methods, concepts,
theories, and frameworks. At conferences like CHI this is easy,
because all you have to do (during the paper sessions, at least) is
listen and learn from the top experts in the field. However, during
regular working life it is more difficult to find the time and
motivation to learn, because learning is hard. This doesn’t just
apply to esoteric academic theories, but to many areas of knowledge
and skill development. For example, in my previous research, I have
explored how “microlearning” in short, sparse fragments of free
time throughout the day could help people tackle the daunting but
desirable challenge of learning a second language. I saw several
promising projects at CHI 2014 that address the issue of lifelong
learning in real-world contexts.

Two papers in particular address the
challenge of contextual vocabulary learning. The first,
Smart
Subtitles
[11], provides interactive video
subtitles designed for language learners. The second,
WADE
[12], is an Integrated Development Environment (IDE) that can
automatically modify the user interface of existing software
applications, e.g., to translate UI labels and text into another
language. The first authors of these respective papers, Geza Kovacs
(first year PhD student, Stanford) and Xiaojun Meng (second year
PhD student, NUS), will both be joining me for internships in the
MSRA HCI Group this summer. We will be working hard on some
exciting projects that I hope you will see at CHI 2015 in Seoul!
This will also be the first time CHI is located in Asia, opening up
a whole new range of cultural and linguistic experiences for CHI
attendees. However, there are also likely to be occasions for
each attendee when conversations are impeded by language
differences between native and non-native speakers. In one of three
related papers, 2010 MSRA HCI intern Ge Gao (now at Cornell
University) investigated the effects of sharing Automated
Transcripts
[13] on real-time multiparty
conversations. This could come in very handy not just in Seoul, but
in my day-to-day interactions in Beijing, since my British accent
is often hard for others to understand (although not as hard as my
Chinese!).
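
The common thread in Smart Subtitles and WADE is annotating text the learner is already looking at. A deliberately tiny sketch of that idea, with a made-up glossary and none of the real systems' interaction design, might look like this:

```python
# Hypothetical glossary lookup: attach glosses to dictionary words found in a
# subtitle line (or, in WADE's case, a UI label) that the learner is viewing.
GLOSSARY = {"会议": "conference; meeting", "研究": "research", "学习": "to learn; to study"}

def annotate(line, glossary=GLOSSARY):
    """Return (word, gloss) pairs for every glossary word appearing in the line."""
    return [(word, gloss) for word, gloss in glossary.items() if word in line]

print(annotate("我在会议上学习最新的研究"))
# [('会议', 'conference; meeting'), ('研究', 'research'), ('学习', 'to learn; to study')]
```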

In addition to learning domain-specific
knowledge and skills, it is also important to learn more general
strategies for managing your time and attention. In one project
funded by MSRA’s collaboration program with the Korean Ministry of
Science, ICT, and Future Planning (MSIP), MSRA HCI researcher Koji
Yatani collaborated with colleagues from KAIST to explore how
college students were Hooked on
Smartphones
[14]. They found that students at
risk of addiction used their Smartphones for a daily average of 111
sessions totaling four hours of use. Not all of the participants
felt that they spent this time productively or with a clear purpose
in mind. One particular project that I thought offered an
interesting way to claim back some of this lost time for more
meaningful activities was the Work-In-Progress (WIP) on
Selfsourcing
Personal Tasks
[15] from Jaime Teevan and other
colleagues at Microsoft Research Redmond. This project helps people
to apply the methods of crowdsourcing to themselves by decomposing
large personal information tasks into manageable microtasks. Just
like microlearning, this can help to sustain user motivation and
engagement both throughout the day and over the long term. By
lowering the barrier to productive and purposeful mobile
interaction, these approaches could help make people feel like they
are more in control of their smartphone use, rather than feeling
like their smartphone has control over them.
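
The usage figures above are worth unpacking with a quick back-of-the-envelope calculation, and the selfsourcing idea can be pictured as a simple decomposition of one large task into session-sized pieces. The snippet below does both; the example task and microtasks are hypothetical, not taken from the paper.

```python
# 111 sessions totaling four hours implies very short individual sessions.
sessions_per_day = 111
minutes_per_day = 4 * 60
print(f"average session: {minutes_per_day / sessions_per_day:.1f} minutes")  # ~2.2

# A large personal information task decomposed into microtasks, each small
# enough to finish within one of those two-minute sessions (illustrative only).
trip_planning = {
    "task": "plan a week-long holiday",
    "microtasks": [
        "shortlist three destinations",
        "check flight prices for destination 1",
        "check flight prices for destination 2",
        "check flight prices for destination 3",
        "pick dates that avoid work deadlines",
    ],
}
print(len(trip_planning["microtasks"]), "microtasks remaining")
```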

More Engaging Presentations

As I mentioned earlier, one of the biggest
challenges of getting the most out of CHI is deciding which of the
many parallel tracks to attend. The decision-making process is
complicated further when talks relating to your interests are shown
in parallel sessions, meaning that you have to create an intricate
session-switching plan that inconveniences both you and the
presenters in each session. Researchers have long struggled with
the problem of scheduling conference sessions such that each
session has closely related talks and does not overlap with closely
related sessions. Now, Lydia Chilton and collaborators have
helped streamline this process by harnessing the power of the crowd
– in particular, the crowd of Program Committee (PC) members at the
PC meeting in which papers are discussed and accepted for
publication (or not). Her Frenzy
[16] system won an Honorable Mention Award at CHI 2014 and builds
on earlier crowdsourcing work we collaborated on during her
internship in the MSRA HCI Group in 2011. It was also used to
create the grouping of papers into the sessions that formed the
final CHI 2014 conference program. I experienced the benefits of
Frenzy directly in the “Presentation Technologies” paper session,
since the system helped PC members to successfully group my two
presentation-based papers alongside a closely related paper from
our colleagues in Microsoft Research Redmond.

These three related papers cover the
challenging activities of planning the narrative structure of a
presentation (TurningPoint
[17]), preparing to deliver a presentation through structured
preparation and rehearsal (PitchPerfect
[18]), and performing a software demonstration to a live audience
(DemoWiz
[19]). The presenters and first authors of the first two projects,
Larissa Pschetz and Ha Trinh respectively, were both MSRA HCI
interns in 2013 working with both Koji Yatani and me. Both papers
also won Honorable Mention Awards – well done Ha and Larissa! The
presenter and first author of the third project, Pei-Yi Chi, was
also a Microsoft Research intern working with Bongshin Lee and
Steven Drucker in the Redmond lab. Clearly, there is a lot of
presentation-related work happening within Microsoft that suggests
interesting new directions for products like PowerPoint. Stay
tuned!

It is only fitting that the 20th
and final project represents great work that we can all learn from,
communicated through a playful and engaging presentation that won a
People’s Choice Best Talk Award. Proposing a new development model
for cross-device Web applications, and demonstrating this using a
presentation application and the latest wearable technologies, the
Panelrama
[20] project represents a significant step forward in how we think
about cross-device experiences. As the presenter Jishuo Yang
connected additional devices to his presentation application, the
components dynamically redistributed to ensure the best fit between
interface panels and interaction devices. In the end, Jishuo was
presenting with timing information and slide control on his watch,
speaking notes on his head-mounted display, the presentation slide
list on his mobile phone, and the current presentation slide on his
laptop connected to the projection screen. Overall, it was a very
impressive demonstration of powerful cross-device interaction
capabilities, enabled by relatively simple HTML extensions. This
work was conducted by Jishuo in collaboration with Microsoft
Research alumnus Daniel Wigdor at the University of Toronto,
meaning that they didn’t have to travel far to share their
far-reaching ideas.
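
Panelrama itself is implemented as relatively simple HTML extensions; to convey the redistribution idea in a language-neutral way, here is a small sketch (with made-up fit scores and names, not the Panelrama API) that re-assigns interface panels to whichever connected devices suit them best each time the device set changes.

```python
# Hypothetical panel/device fit scores on a 0-1 scale.
FIT = {
    ("slide-controls", "watch"): 0.9, ("slide-controls", "phone"): 0.6,
    ("speaker-notes", "hmd"): 0.9,    ("speaker-notes", "phone"): 0.5,
    ("slide-list", "phone"): 0.8,     ("slide-list", "laptop"): 0.4,
    ("current-slide", "laptop"): 1.0, ("current-slide", "phone"): 0.3,
}

def assign_panels(panels, devices):
    """Give each panel to the best-fitting device; a device may host several panels."""
    return {p: max(devices, key=lambda d: FIT.get((p, d), 0.1)) for p in panels}

panels = ["slide-controls", "speaker-notes", "slide-list", "current-slide"]
print(assign_panels(panels, ["phone", "laptop"]))                   # before wearables join
print(assign_panels(panels, ["watch", "hmd", "phone", "laptop"]))   # after they connect
```

Re-running the assignment whenever a device joins or leaves mirrors the behavior described in the talk: the slide controls migrate to the watch and the speaker notes to the head-mounted display as soon as those devices connect.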

--

This has been a summary of my CHI 2014
experience in 20 projects. When I started assembling this list I
was unsure whether I would be able to select 20 projects of
personal interest, all with at least some connection to Microsoft
Research. As it happens, I was able to choose 19, and the remaining
paper still used the Microsoft Kinect technology! Of these 20
projects, eight were archival Papers or Notes coauthored by
Microsoft Researchers. Between them, these eight papers received one
of the seven Best Paper Awards and two of the five Honorable
Mention Awards given to Microsoft Research papers at CHI 2014.
These eight papers also represent just under a quarter of the 34 papers
coauthored by Microsoft Researchers in total, which represents a
substantial 7.5% of the final Papers and Notes program. This made
Microsoft Research the second-ranked institution in terms of total
papers at CHI 2014, narrowly surpassed by Carnegie Mellon
University with 38 papers. This isn’t too bad given that we divide
our time between advancing the state-of-the-art in research and
contributing to future generations of Microsoft products. And we
can always aim for the top spot at CHI 2015!

Finally, of these 20 projects, I am pleased
to say that 14 were from current members and past and future
interns of the MSRA HCI Group. It is always a pleasure to work with
outstanding interns on projects that make a contribution to
Microsoft, but it is especially rewarding to see these interns
growing as HCI researchers within the CHI community at large. I
would now like to conclude my CHI 2014 report by thanking all of
our past interns for their great work. I would also like to thank
you, the reader, for using your Human-Computer Interaction skills
to make it to the end of this rather lengthy blog post. I hope you
found it interesting. Now, time to get back to work on projects for
CHI 2015.

About the Author

Darren Edge is a Lead Researcher in the Human-Computer Interaction Group at Microsoft Research Asia. His research is design-led, grounded in qualitative and abstract analysis of important human activities, with the aim of developing interactive systems that help transform these activities for the better. His interests center on how technology can support learning and communication across diverse domains, such as second-language learning, exertion gaming, desk work, and live presentations.

Darren joined Microsoft Research Asia in 2008. Before that, he studied at the University of Cambridge, where he earned a BA in Computer Science and Management Studies and a PhD in Computer Science.

____________________________________________________________________________

Related Reading

CHI 2013: Exciting New Technologies in Human-Computer Interaction

Follow Us

Microsoft Research Asia on Renren: http://page.renren.com/600674137

Microsoft Research Asia on Weibo: http://t.sina.com.cn/msra
