Spatial Sound Research

What are our goals?

The basic goal of our research is to develop cost-effective methods for synthesizing fully three-dimensional spatial sound. Our approach is based on measuring, understanding, and modeling the effects of the human body on incident sound waves. To that end, we have developed a unique facility for high-spatial-resolution HRTF measurement, a variety of tools for HRTF analysis and display, and a family of physically-based structural HRTF models that can be customized to individual listeners.

Measuring the HRTF of a KEMAR manikin

Support for our research comes from the National Science Foundation and from several industrial affiliates. We are currently collaborating with colleagues at the University of Maryland and Duke University on an NSF-sponsored research program whose goal is to use computer vision techniques to obtain accurate models of the body, which will in turn be used to provide the boundary conditions for computing individualized HRTFs numerically.

What is the problem?

The sizes and shapes of torsos, heads, and particularly the pinnae vary substantially from person to person. Since these factors contribute significantly to the HRTF, individualized or custom HRTFs are needed to obtain a faithful perception of spatial location.

Size and shape of pinnae vary from person to person

One of the problems that we are currently addressing is the development of a parameterized HRTF model that can be easily customized for individual listeners. By providing the acoustic cues with which the listener is familiar, such a model will produce significantly more realistic and convincing spatial sound.

What is our approach?

Our research is based on the belief that the HRTF can be modeled by a physically-based model employing a small number of free parameters. We anticipate that these parameters can be adapted or customized to individual listeners by correlation with a small number of properly chosen anthropometric measurements.
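To make the customization idea concrete, here is a minimal sketch of fitting a linear map from anthropometric measurements to a single model parameter by least squares. All numbers are invented for illustration; the actual measurements, parameters, and fitting procedure used in our work differ.

```python
import numpy as np

# Hypothetical data: three anthropometric measurements per subject
# (head width, height, depth, in cm) and a model parameter obtained
# acoustically for each subject (e.g. an effective head radius, cm).
X = np.array([[15.2, 21.0, 19.5],
              [14.1, 20.2, 18.8],
              [16.0, 22.1, 20.3],
              [15.5, 21.5, 19.9],
              [14.8, 20.8, 19.1]])
y = np.array([8.9, 8.4, 9.3, 9.1, 8.7])

# Least-squares fit of: parameter = w . measurements + bias.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the parameter for a new subject from a tape measure alone,
# with no acoustic measurement at all.
new_subject = np.array([15.0, 21.2, 19.4, 1.0])
print(round(float(new_subject @ w), 2))
```

Once such a map is validated, the expensive acoustic fitting step is needed only for the training subjects.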

Measuring the response of an isolated pinna
Left: the measurement system Right: closeup view of a pinna mold

Based on these premises, we are proceeding to develop and validate HRTF models using a combination of the physical and mathematical approaches. Since our models have to provide the proper sound localization cues to human listeners, we perform psychoacoustical experiments to validate their performance.

What have we accomplished?

First, we have shown that structural models can be effective in synthesizing spatial sound (Brown and Duda 98). We have shown that a spherical model of the head provides strong range cues for close sources (Duda and Martens 98), and that the parameters for this model can be accurately estimated from anthropometry (Algazi, Avendano and Duda 01). We have demonstrated that an ellipsoidal head model can account for the variations of the interaural time difference with elevation (Duda, Avendano and Algazi 99), and that an ellipsoidal torso model can provide additional elevation cues (Avendano, Algazi and Duda 99). Furthermore, this modeling work has revealed the existence of previously unrecognized, low-frequency binaural cues for elevation (Algazi, Avendano and Duda 01). Finally, we have shown that the complex behavior of the contralateral pinna need not be reproduced in detail, but can be effectively approximated by applying head shadow and delay to the transfer function for the ipsilateral pinna (Avendano, Duda and Algazi 99). In general, our progress is documented in more than fifteen publications.
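The interaural-time-difference cue delivered by a spherical head model can be sketched with the classic Woodworth ray-tracing approximation. This is a standard textbook formula, not code from our work; the 8.75 cm head radius is a common nominal value, not a measurement from this article.

```python
import math

def woodworth_itd(azimuth_rad: float, head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Woodworth ray-tracing approximation of the interaural time
    difference (ITD) for a rigid spherical head.

    azimuth_rad: source azimuth measured from the median plane,
    in the range 0..pi/2. Returns the ITD in seconds.
    """
    theta = azimuth_rad
    # Path difference: a*sin(theta) in free air plus a*theta
    # traveled around the sphere's surface.
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# ITD is zero straight ahead and largest directly to the side.
print(woodworth_itd(0.0))                          # → 0.0
print(round(woodworth_itd(math.pi / 2) * 1e6))     # → 656 (microseconds)
```

A model that replaces the sphere with an ellipsoid lets this delay vary with elevation as well as azimuth, which is the effect noted above.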

We have also built a measurement facility that has enabled us to obtain accurate, high-resolution HRTF measurements. Small loudspeakers are attached at 5° intervals in azimuth around a computer-controlled rotating hoop. The hoop can be rotated about the interaural axis in 5.625° increments in elevation over a range of 270°. The HRTF data are collected by measuring the head-related impulse responses (HRIRs), using either Golay-code-based hardware (Crystal River Engineering's Snapshot™ system) or maximum-length sequences generated by Tucker-Davis Technologies' System II.
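The Golay-code technique exploits the fact that a complementary pair of codes has autocorrelations whose sidelobes cancel when summed, so an impulse response can be recovered exactly from two playbacks. A minimal sketch follows; the `system` function here is a stand-in for the real loudspeaker-room-microphone chain, and the toy 3-tap response is invented for illustration.

```python
import numpy as np

def golay_pair(order: int):
    """Recursively build a complementary Golay pair of length 2**order."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def measure_ir(system, order: int = 10):
    """Estimate a system's impulse response with a Golay pair.

    `system` maps an excitation array to the recorded response.
    Cross-correlating each response with its own code and summing
    cancels the codes' autocorrelation sidelobes, leaving 2N * h.
    """
    a, b = golay_pair(order)
    n = len(a)
    ya, yb = system(a), system(b)
    h = (np.correlate(ya, a, mode="full") +
         np.correlate(yb, b, mode="full")) / (2 * n)
    return h[n - 1:]  # keep the causal (lag >= 0) part

# Toy check: a known 3-tap "room" is recovered from the two playbacks.
true_h = np.array([1.0, 0.5, 0.25])
system = lambda x: np.convolve(x, true_h)
est = measure_ir(system)
print(np.round(est[:3], 3))
```

Maximum-length sequences work on the same deconvolution principle, trading the two-playback requirement for a single sequence whose autocorrelation is nearly impulsive.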

Measuring the HRTF of a human subject

We have used this facility to measure HRTFs for more than 50 different subjects. These measurements are being organized as an HRTF database that includes anthropometric data extracted from digital photographs. This database, which will soon be made available to interested researchers, is providing us with the information needed for systematic study of individual differences in HRTFs. We believe that this will provide us with the basis for replacing the time-consuming process of measuring HRTFs acoustically with the ability to compute HRTFs from imagery.
