This is essentially taken from the comments in audio_processing.h.
APM should be placed in the signal chain as close to the audio hardware abstraction layer (HAL) as possible.
APM accepts only 16-bit linear PCM audio data in frames of 10 ms. Multiple channels should be interleaved.
1. AudioProcessing instantiation and configuration:
AudioProcessing* apm = AudioProcessing::Create(0);
apm->level_estimator()->Enable(true); // Enable the level estimation component.
apm->echo_cancellation()->Enable(true); // Enable the echo cancellation module.
apm->echo_cancellation()->enable_metrics(true); // Enable echo metrics reporting.
apm->echo_cancellation()->enable_drift_compensation(true); // Enable clock-drift compensation (the capture and playback devices may run at different clock rates).
apm->gain_control()->Enable(true); // Enable the gain control module; clients must enable this.
apm->high_pass_filter()->Enable(true); // Enable the high-pass filter component, which removes DC offset and low-frequency noise; clients must enable this.
apm->noise_suppression()->Enable(true); // Enable the noise suppression component; clients must enable this.
apm->voice_detection()->Enable(true); // Enable the voice detection component, which detects whether speech is present.
apm->voice_detection()->set_likelihood(VoiceDetection::kModerateLikelihood); // Set the voice detection threshold: the larger the threshold, the less likely speech is to be ignored, but some noise may also be classified as speech.
apm->Initialize(); // Re-initializes apm's internal state while preserving all user settings, to start processing a new audio stream. This call is not strictly necessary after creating the first stream.
2. AudioProcessing workflow:
AudioProcessing is event-driven. Events fall into three types: initialization events, capture events, and render events.
Initialization events:
apm->set_sample_rate_hz(sample_rate_hz); // Set the sample rate of the local and far-end audio streams.
apm->echo_cancellation()->set_device_sample_rate_hz(); // Set the sample rate of the audio device, assuming the capture and playback devices use the same rate. (Must be called when the drift-compensation component is enabled.)
apm->set_num_channels(num_capture_input_channels, num_capture_output_channels); // Set the number of input and output channels of the local (capture) stream.
Render events:
apm->AnalyzeReverseStream(&far_frame); // Analyze a 10 ms frame of the far-end audio stream; this data serves as the reference for echo suppression. (Needs to be called when echo suppression is enabled.)
Capture events:
apm->gain_control()->set_stream_analog_level(capture_level);
apm->set_stream_delay_ms(delay_ms + extra_delay_ms); // Set the delay in milliseconds between the local and far-end audio streams, i.e. the time difference between the far-end stream and the local stream, computed as:
delay = (t_render - t_analyze) + (t_process - t_capture)
where
t_analyze is the time a far-end frame is passed to AnalyzeReverseStream();
t_render is the time just before that same far-end frame is played out;
t_capture is the time a local frame is captured;
t_process is the time that same local frame is passed to ProcessStream().
apm->echo_cancellation()->set_stream_drift_samples(drift_samples); // Set the difference, in samples, between the capture and playback device sample rates. (Must be called when the drift-compensation component is enabled.)
int err = apm->ProcessStream(&near_frame); // Process the capture stream: gain adjustment, echo cancellation, noise suppression, voice activity detection, and so on. No decoding is involved; the processing operates directly on PCM data.
capture_level = apm->gain_control()->stream_analog_level(); // In adaptive analog mode, this must be called after ProcessStream to obtain the new analog level recommended for the audio HAL.
stream_has_voice = apm->voice_detection()->stream_has_voice(); // Whether the frame contains speech; must be called after ProcessStream.
ns_speech_prob = apm->noise_suppression()->speech_probability(); // Returns the internally computed a priori speech probability of the current frame.
3. AudioProcessing release:
AudioProcessing :: Destroy (apm);
apm = NULL;
Another example:
AudioProcessing* apm = AudioProcessing::Create(0);

apm->set_sample_rate_hz(32000); // Super-wideband processing.

// Mono capture and stereo render.
apm->set_num_channels(1, 1);
apm->set_num_reverse_channels(2);

apm->high_pass_filter()->Enable(true);

apm->echo_cancellation()->enable_drift_compensation(false);
apm->echo_cancellation()->Enable(true);

apm->noise_suppression()->set_level(NoiseSuppression::kHigh);
apm->noise_suppression()->Enable(true);

apm->gain_control()->set_analog_level_limits(0, 255);
apm->gain_control()->set_mode(GainControl::kAdaptiveAnalog);
apm->gain_control()->Enable(true);

apm->voice_detection()->Enable(true);

// Start a voice call...

// ... Render frame arrives bound for the audio HAL ...
apm->AnalyzeReverseStream(render_frame);

// ... Capture frame arrives from the audio HAL ...
// Call required set_stream_ functions.
apm->set_stream_delay_ms(delay_ms);
apm->gain_control()->set_stream_analog_level(analog_level);

apm->ProcessStream(capture_frame);

// Call required stream_ functions.
analog_level = apm->gain_control()->stream_analog_level();
has_voice = apm->voice_detection()->stream_has_voice();

// Repeat render and capture processing for the duration of the call...

// Start a new call...
apm->Initialize();

// Close the application...
AudioProcessing::Destroy(apm);
apm = NULL;