The following walks through the use of VoiceEngine in detail by implementing a basic audio call; a download link for the corresponding source code is given at the end of the article. The reference here is voiceengine\voe_cmd_test.

The first step is to create the VoiceEngine and the related sub-APIs.

// Create VoiceEngine related instance
webrtc::VoiceEngine* ptrVoE = NULL;
ptrVoE = webrtc::VoiceEngine::Create();

webrtc::VoEBase* ptrVoEBase = NULL;
ptrVoEBase = webrtc::VoEBase::GetInterface(ptrVoE);

webrtc::VoECodec* ptrVoECodec = NULL;
ptrVoECodec = webrtc::VoECodec::GetInterface(ptrVoE);

webrtc::VoEAudioProcessing* ptrVoEAp = NULL;
ptrVoEAp = webrtc::VoEAudioProcessing::GetInterface(ptrVoE);

webrtc::VoEVolumeControl* ptrVoEVolume = NULL;
ptrVoEVolume = webrtc::VoEVolumeControl::GetInterface(ptrVoE);

webrtc::VoENetwork* ptrVoENetwork = NULL;
ptrVoENetwork = webrtc::VoENetwork::GetInterface(ptrVoE);

webrtc::VoEFile* ptrVoEFile = NULL;
ptrVoEFile = webrtc::VoEFile::GetInterface(ptrVoE);

webrtc::VoEHardware* ptrVoEHardware = NULL;
ptrVoEHardware = webrtc::VoEHardware::GetInterface(ptrVoE);

Next, you can optionally set the path of the trace file. Since we will also record the microphone input and the playout audio, those file paths are specified here as well.

//Set Trace File and Record File
const std::string trace_filename = "webrtc_trace.txt";
VoiceEngine::SetTraceFilter(kTraceAll);
error = VoiceEngine::SetTraceFile(trace_filename.c_str());
if (error != 0)
{
printf("ERROR in VoiceEngine::SetTraceFile\n");
return error;
}
error = VoiceEngine::SetTraceCallback(NULL);
if (error != 0)
{
printf("ERROR in VoiceEngine::SetTraceCallback\n");
return error;
}
const std::string play_filename = "recorded_playout.wav";
const std::string mic_filename = "recorded_mic.wav";
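
Passing NULL to SetTraceCallback simply means that trace output only goes to the trace file. If you would rather route it into your own logging, you can register an object that implements webrtc::TraceCallback instead. Below is a minimal sketch; it assumes the Print(level, message, length) signature declared in common_types.h of the WebRTC revision used here, so check your own headers before copying it.

// Minimal trace sink; forwards VoiceEngine trace lines to stdout.
class MyTraceCallback : public webrtc::TraceCallback
{
public:
virtual void Print(webrtc::TraceLevel level, const char* message, int length)
{
// The message is not necessarily zero-terminated, so respect length.
printf("[trace %d] %.*s\n", level, length, message);
}
};

// Usage: MyTraceCallback trace_sink;
//        VoiceEngine::SetTraceCallback(&trace_sink);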

Next comes initialization and querying the VoiceEngine version string.

//Init
error = ptrVoEBase->Init();
if (error != 0)
{
printf("ERROR in VoEBase::Init\n");
return error;
}
error = ptrVoEBase->RegisterVoiceEngineObserver(my_observer);
if (error != 0)
{
printf("ERROR in VoEBase::RegisterVoiceEngineObserver\n");
return error;
}
printf("Version\n");
char tmp[1024];
error = ptrVoEBase->GetVersion(tmp);
if (error != 0)
{
printf("ERROR in VoEBase::GetVersion\n");
return error;
}
printf("%s\n", tmp);

A VoiceEngineObserver is also registered here; it can print a message for each error code it receives, for example a warning when keyboard typing noise is detected. The class is defined as follows:

class MyObserver : public VoiceEngineObserver
{
public:
virtual void CallbackOnError(int channel, int err_code);
};

void MyObserver::CallbackOnError(int channel, int err_code)
{
// Add printf for other error codes here
if (err_code == VE_TYPING_NOISE_WARNING)
{
printf(" TYPING NOISE DETECTED \n");
}
else if (err_code == VE_TYPING_NOISE_OFF_WARNING)
{
printf(" TYPING NOISE OFF DETECTED \n");
}
else if (err_code == VE_RECEIVE_PACKET_TIMEOUT)
{
printf(" RECEIVE PACKET TIMEOUT \n");
}
else if (err_code == VE_PACKET_RECEIPT_RESTARTED)
{
printf(" PACKET RECEIPT RESTARTED \n");
}
else if (err_code == VE_RUNTIME_PLAY_WARNING)
{
printf(" RUNTIME PLAY WARNING \n");
}
else if (err_code == VE_RUNTIME_REC_WARNING)
{
printf(" RUNTIME RECORD WARNING \n");
}
else if (err_code == VE_SATURATION_WARNING)
{
printf(" SATURATION WARNING \n");
}
else if (err_code == VE_RUNTIME_PLAY_ERROR)
{
printf(" RUNTIME PLAY ERROR \n");
}
else if (err_code == VE_RUNTIME_REC_ERROR)
{
printf(" RUNTIME RECORD ERROR \n");
}
else if (err_code == VE_REC_DEVICE_REMOVED)
{
printf(" RECORD DEVICE REMOVED \n");
}
}

That completes the preparation work; the next step is the network setup. When testing on a single machine, simply use 127.0.0.1 as the IP address, and note that the remote port and the local port must be the same.

//Network Settings
int audiochannel;
audiochannel = ptrVoEBase->CreateChannel();
if (audiochannel < 0)
{
printf("ERROR in VoEBase::CreateChannel\n");
return audiochannel;
}
VoiceChannelTransport* voice_channel_transport = new VoiceChannelTransport(ptrVoENetwork, audiochannel);
char ip[] = "127.0.0.1";
int rPort = 1234; // remote port (placeholder value; any free port works)
int lPort = 1234; // local port (must equal the remote port for a loopback test)
error = voice_channel_transport->SetSendDestination(ip, rPort);
if (error != 0)
{
printf("ERROR in set send ip and port\n");
return error;
}
error = voice_channel_transport->SetLocalReceiver(lPort);
if (error != 0)
{
printf("ERROR in set receiver and port\n");
return error;
}

The VoiceChannelTransport class used above is defined as follows:

// Helper class for VoiceEngine tests.
class VoiceChannelTransport : public webrtc::test::UdpTransportData
{
public:
VoiceChannelTransport(VoENetwork* voe_network, int channel);
virtual ~VoiceChannelTransport();

// Start implementation of UdpTransportData.
void IncomingRTPPacket(const int8_t* incoming_rtp_packet,
const size_t packet_length,
const char* /*from_ip*/,
const uint16_t /*from_port*/) override;
void IncomingRTCPPacket(const int8_t* incoming_rtcp_packet,
const size_t packet_length,
const char* /*from_ip*/,
const uint16_t /*from_port*/) override;
// End implementation of UdpTransportData.

// Specifies the port to receive RTP packets on.
int SetLocalReceiver(uint16_t rtp_port);
// Specifies the destination port and IP address for a specified channel.
int SetSendDestination(const char* ip_address, uint16_t rtp_port);

private:
int channel_;
VoENetwork* voe_network_;
webrtc::test::UdpTransport* socket_transport_;
};

VoiceChannelTransport::VoiceChannelTransport(VoENetwork* voe_network, int channel)
: channel_(channel),
voe_network_(voe_network)
{
uint8_t socket_threads = 1;
socket_transport_ = webrtc::test::UdpTransport::Create(channel, socket_threads);
int registered = voe_network_->RegisterExternalTransport(channel, *socket_transport_);
#if !defined(WEBRTC_ANDROID) && !defined(WEBRTC_IOS)
if (registered != 0)
return;
#else
assert(registered == 0);
#endif
}

VoiceChannelTransport::~VoiceChannelTransport()
{
voe_network_->DeRegisterExternalTransport(channel_);
webrtc::test::UdpTransport::Destroy(socket_transport_);
}

void VoiceChannelTransport::IncomingRTPPacket(const int8_t* incoming_rtp_packet,
const size_t packet_length,
const char* /*from_ip*/,
const uint16_t /*from_port*/)
{
voe_network_->ReceivedRTPPacket(channel_, incoming_rtp_packet, packet_length, PacketTime());
}

void VoiceChannelTransport::IncomingRTCPPacket(const int8_t* incoming_rtcp_packet,
const size_t packet_length,
const char* /*from_ip*/,
const uint16_t /*from_port*/)
{
voe_network_->ReceivedRTCPPacket(channel_, incoming_rtcp_packet, packet_length);
}

int VoiceChannelTransport::SetLocalReceiver(uint16_t rtp_port)
{
static const int kNumReceiveSocketBuffers = 500;
int return_value = socket_transport_->InitializeReceiveSockets(this, rtp_port);
if (return_value == 0)
{
return socket_transport_->StartReceiving(kNumReceiveSocketBuffers);
}
return return_value;
}

int VoiceChannelTransport::SetSendDestination(const char* ip_address, uint16_t rtp_port)
{
return socket_transport_->InitializeSendSockets(ip_address, rtp_port);
}

With the network configured, the next step is codec setup. Here the user simply selects which codec to use from the printed list; the codec parameters can of course be configured in more detail, as shown in the sketch after the code below.

//Setup Codecs
CodecInst codec_params;
CodecInst cinst;
for (int i = 0; i < ptrVoECodec->NumOfCodecs(); ++i)
{
int error = ptrVoECodec->GetCodec(i, codec_params);
if (error != 0)
{
printf("ERROR in VoECodec::GetCodec\n");
return error;
}
printf("%2d. %3d %s/%d/%d \n", i, codec_params.pltype, codec_params.plname, codec_params.plfreq, codec_params.channels);
}
printf("Select send codec: ");
int codec_selection;
scanf("%i", &codec_selection);
ptrVoECodec->GetCodec(codec_selection, cinst);
error = ptrVoECodec->SetSendCodec(audiochannel, cinst);
if (error != 0)
{
printf("ERROR in VoECodec::SetSendCodec\n");
return error;
}
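
As mentioned above, the codec parameters can also be set explicitly rather than taking a GetCodec entry unchanged. The sketch below fills a CodecInst by hand and is only illustrative: the payload type, packet size and bitrate shown are the usual values for iSAC wideband, but they should be verified against the codec list printed by the loop above (strncpy requires <cstring>).

// Fill a CodecInst by hand instead of using an entry from GetCodec().
// The concrete numbers are illustrative (iSAC wideband); verify them
// against the codec list printed above for your WebRTC revision.
CodecInst my_codec = { 0 };
my_codec.pltype = 103;                                    // RTP payload type
strncpy(my_codec.plname, "ISAC", sizeof(my_codec.plname) - 1);
my_codec.plfreq = 16000;                                  // sampling rate in Hz
my_codec.pacsize = 480;                                   // samples per packet (30 ms at 16 kHz)
my_codec.channels = 1;                                    // mono
my_codec.rate = 32000;                                    // target bitrate in bit/s
error = ptrVoECodec->SetSendCodec(audiochannel, my_codec);
if (error != 0)
{
printf("ERROR in VoECodec::SetSendCodec\n");
return error;
}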

Next, the recording device and the playout device are selected.

//Setup Devices
int rd(-1), pd(-1);
error = ptrVoEHardware->GetNumOfRecordingDevices(rd);
if (error != 0)
{
printf("ERROR in VoEHardware::GetNumOfRecordingDevices\n");
return error;
}
error = ptrVoEHardware->GetNumOfPlayoutDevices(pd);
if (error != 0)
{
printf("ERROR in VoEHardware::GetNumOfPlayoutDevices\n");
return error;
}

char dn[128] = { 0 };
char guid[128] = { 0 };
printf("\nPlayout devices (%d): \n", pd);
for (int j = 0; j < pd; ++j)
{
error = ptrVoEHardware->GetPlayoutDeviceName(j, dn, guid);
if (error != 0)
{
printf("ERROR in VoEHardware::GetPlayoutDeviceName\n");
return error;
}
printf(" %d: %s \n", j, dn);
}

printf("Recording devices (%d): \n", rd);
for (int j = 0; j < rd; ++j)
{
error = ptrVoEHardware->GetRecordingDeviceName(j, dn, guid);
if (error != 0)
{
printf("ERROR in VoEHardware::GetRecordingDeviceName\n");
return error;
}
printf(" %d: %s \n", j, dn);
}

printf("Select playout device: ");
scanf("%d", &pd);
error = ptrVoEHardware->SetPlayoutDevice(pd);
if (error != 0)
{
printf("ERROR in VoEHardware::SetPlayoutDevice\n");
return error;
}
printf("Select recording device: ");
scanf("%d", &rd);
getchar();
error = ptrVoEHardware->SetRecordingDevice(rd);
if (error != 0)
{
printf("ERROR in VoEHardware::SetRecordingDevice\n");
return error;
}
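
The ptrVoEVolume sub-API obtained at the beginning is not otherwise used in this walkthrough, but right after device selection is a natural place for it. The following is a minimal sketch assuming the 0 to 255 volume range used by VoEVolumeControl; consult voe_volume_control.h for the exact semantics.

// Optional: adjust speaker and microphone levels via VoEVolumeControl.
// The valid range is assumed to be 0..255 (check voe_volume_control.h).
unsigned int vol = 0;
if (ptrVoEVolume->GetSpeakerVolume(vol) == 0)
{
printf("Current speaker volume: %u\n", vol);
}
if (ptrVoEVolume->SetSpeakerVolume(200) != 0)
{
printf("ERROR in VoEVolumeControl::SetSpeakerVolume\n");
}
if (ptrVoEVolume->SetMicVolume(200) != 0)
{
printf("ERROR in VoEVolumeControl::SetMicVolume\n");
}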

Then the audio processing features are configured; as an example, all of them are enabled here.

//Audio Processing
error = ptrVoECodec->SetVADStatus(0, true); // FIX: why not use audiochannel?
if (error != 0)
{
printf("ERROR in VoECodec::SetVADStatus\n");
return error;
}
error = ptrVoEAp->SetAgcStatus(true);
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetAgcStatus\n");
return error;
}
error = ptrVoEAp->SetEcStatus(true);
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetEcStatus\n");
return error;
}
error = ptrVoEAp->SetNsStatus(true);
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetNsStatus\n");
return error;
}
error = ptrVoEAp->SetRxAgcStatus(audiochannel, true);
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetRxAgcStatus\n");
return error;
}
error = ptrVoEAp->SetRxNsStatus(audiochannel, true);
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetRxNsStatus\n");
return error;
}
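
Each of these Set*Status calls also accepts an explicit mode argument, so the defaults can be overridden. The following is a hedged sketch using the mode enums from common_types.h; the exact enum names may vary slightly between WebRTC revisions.

// Enable the processing components with explicit modes instead of the defaults.
// The enum values are assumed to come from common_types.h of this revision.
error = ptrVoEAp->SetAgcStatus(true, kAgcAdaptiveDigital); // digital AGC
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetAgcStatus\n");
return error;
}
error = ptrVoEAp->SetEcStatus(true, kEcAec); // full AEC (kEcAecm is the mobile variant)
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetEcStatus\n");
return error;
}
error = ptrVoEAp->SetNsStatus(true, kNsHighSuppression); // stronger noise suppression
if (error != 0)
{
printf("ERROR in VoEAudioProcess::SetNsStatus\n");
return error;
}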

At this point sending, receiving, and recording can be started.

//Start Receive
error = ptrVoEBase->StartReceive(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StartReceive\n");
return error;
}
//Start Playout
error = ptrVoEBase->StartPlayout(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StartPlayout\n");
return error;
}
//Start Send
error = ptrVoEBase->StartSend(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StartSend\n");
return error;
}
//Start Record
error = ptrVoEFile->StartRecordingMicrophone(mic_filename.c_str());
if (error != 0)
{
printf("ERROR in VoEFile::StartRecordingMicrophone\n");
return error;
}
error = ptrVoEFile->StartRecordingPlayout(audiochannel, play_filename.c_str());
if (error != 0)
{
printf("ERROR in VoEFile::StartRecordingPlayout\n");
return error;
}
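
At this point audio is flowing in both directions. In a console test such as voe_cmd_test, the simplest way to keep the call alive is to block until the user presses a key, for example:

// Keep the call running until the user presses Enter.
printf("Call started, press Enter to hang up...\n");
getchar();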

After the call ends, the corresponding stop/release calls are required.

//Stop Record
error = ptrVoEFile->StopRecordingMicrophone();
if (error != 0)
{
printf("ERROR in VoEFile::StopRecordingMicrophone\n");
return error;
}
error = ptrVoEFile->StopRecordingPlayout(audiochannel);
if (error != 0)
{
printf("ERROR in VoEFile::StopRecordingPlayout\n");
return error;
}
//Stop Receive
error = ptrVoEBase->StopReceive(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StopReceive\n");
return error;
}
//Stop Send
error = ptrVoEBase->StopSend(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StopSend\n");
return error;
}
//Stop Playout
error = ptrVoEBase->StopPlayout(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::StopPlayout\n");
return error;
}
//Delete Channel
error = ptrVoEBase->DeleteChannel(audiochannel);
if (error != 0)
{
printf("ERROR in VoEBase::DeleteChannel\n");
return error;
}
delete voice_channel_transport;

ptrVoEBase->DeRegisterVoiceEngineObserver();
error = ptrVoEBase->Terminate();
if (error != 0)
{
printf("ERROR in VoEBase::Terminate\n");
return error;
}

int remainingInterfaces = 0;
remainingInterfaces += ptrVoEBase->Release();
remainingInterfaces += ptrVoECodec->Release();
remainingInterfaces += ptrVoEVolume->Release();
remainingInterfaces += ptrVoEFile->Release();
remainingInterfaces += ptrVoEAp->Release();
remainingInterfaces += ptrVoEHardware->Release();
remainingInterfaces += ptrVoENetwork->Release();
/*if (remainingInterfaces > 0)
{
printf("ERROR: Could not release all interfaces\n");
return -1;
}*/

bool deleted = webrtc::VoiceEngine::Delete(ptrVoE);
if (deleted == false)
{
printf("ERROR in VoiceEngine::Delete\n");
return -1;
}

Note that remainingInterfaces will not end up as 0 here, because we did not use all of VoiceEngine's sub-APIs.

This completes a basic audio call. The source code of this project is available for download (GitHub link).

Reposted from http://blog.csdn.net/nonmarking/article/details/50577860
