(2) Audio Subsystem: new AudioRecord() (Android 4.4)
The previous article, "(1) Audio Subsystem: AudioRecord.getMinBufferSize", covered how AudioRecord obtains its minimum buffer size. This article continues with the implementation of new AudioRecord(). It is based on Android 4.4; for the Android 5.1 version, see here.
Note: this article is intended as working notes. Where it is wrong, or where it differs from the Android 5.1 analysis, the Android 5.1 version takes precedence.
Prototype:
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes) throws IllegalArgumentException
Purpose:
Creates an AudioRecord object and establishes the recording channel.
Parameters:
audioSource: the recording source; here MediaRecorder.AudioSource.MIC. See MediaRecorder.AudioSource for the other source definitions, such as MediaRecorder.AudioSource.FM_TUNER;
sampleRateInHz: the sample rate in Hz; here 44100. 44100 Hz is currently the only rate guaranteed to work on all devices;
channelConfig: the audio channel configuration; here AudioFormat.CHANNEL_CONFIGURATION_MONO, which is guaranteed to work on all devices;
audioFormat: the format in which the audio data is returned; here AudioFormat.ENCODING_PCM_16BIT;
bufferSizeInBytes: the total size in bytes of the buffer that audio data is written to during recording, i.e. the value returned by getMinBufferSize().
Exceptions:
IllegalArgumentException is thrown when a parameter is invalid or unsupported.
Next, let's step into the framework and trace the implementation.
base\media\java\android\media\AudioRecord.java
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
int bufferSizeInBytes)
throws IllegalArgumentException {
mRecordingState = RECORDSTATE_STOPPED;
// remember which looper is associated with the AudioRecord instantiation
if ((mInitializationLooper = Looper.myLooper()) == null) {
mInitializationLooper = Looper.getMainLooper();
}
audioParamCheck(audioSource, sampleRateInHz, channelConfig, audioFormat);
audioBuffSizeCheck(bufferSizeInBytes);
// native initialization
int[] session = new int[1];
session[0] = 0;
//TODO: update native initialization when information about hardware init failure
// due to capture device already open is available.
int initResult = native_setup( new WeakReference<AudioRecord>(this), mRecordSource, mSampleRate, mChannelMask, mAudioFormat, mNativeBufferSizeInBytes, session);
if (initResult != SUCCESS) {
loge("Error code "+initResult+" when initializing native AudioRecord object.");
return; // with mState == STATE_UNINITIALIZED
}
mSessionId = session[0];
mState = STATE_INITIALIZED;
}
The constructor first marks the recording state as stopped, then calls audioParamCheck() to validate the parameters and audioBuffSizeCheck() to validate the buffer size. If everything checks out, it calls native_setup() to perform the native-side initialization, passing down a WeakReference to itself.
base\core\jni\android_media_AudioRecord.cpp
static int
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
jint source, jint sampleRateInHertz, jint channelMask,
// Java channel masks map directly to the native definition
jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
if (!audio_is_input_channel(channelMask)) {
ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
return AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
}
uint32_t nbChannels = popcount(channelMask);

// compare the format against the Java constants
if ((audioFormat != ENCODING_PCM_16BIT)
&& (audioFormat != ENCODING_PCM_8BIT)) {
ALOGE("Error creating AudioRecord: unsupported audio format.");
return AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
}

int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;

if (buffSizeInBytes == 0) {
ALOGE("Error creating AudioRecord: frameCount is 0.");
return AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
}
int frameSize = nbChannels * bytesPerSample;
size_t frameCount = buffSizeInBytes / frameSize;

if ((uint32_t(source) >= AUDIO_SOURCE_CNT) && (uint32_t(source) != AUDIO_SOURCE_HOTWORD)) {
ALOGE("Error creating AudioRecord: unknown source.");
return AUDIORECORD_ERROR_SETUP_INVALIDSOURCE;
}

jclass clazz = env->GetObjectClass(thiz);
if (clazz == NULL) {
ALOGE("Can't find %s when setting up callback.", kClassPathName);
return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

if (jSession == NULL) {
ALOGE("Error creating AudioRecord: invalid session ID pointer");
return AUDIORECORD_ERROR;
}

jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
if (nSession == NULL) {
ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
return AUDIORECORD_ERROR;
}
int sessionId = nSession[0];
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
nSession = NULL;

// create an uninitialized AudioRecord object
sp<AudioRecord> lpRecorder = new AudioRecord();

// create the callback information:
// this data will be passed with every AudioRecord callback
audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
// we use a weak reference so the AudioRecord object can be garbage collected.
lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
lpCallbackData->busy = false;

lpRecorder->set((audio_source_t) source,
sampleRateInHertz,
format, // word length, PCM
channelMask,
frameCount,
recorderCallback,// callback_t
lpCallbackData,// void* user
0, // notificationFrames,
true, // threadCanCallJava
sessionId);

if (lpRecorder->initCheck() != NO_ERROR) {
ALOGE("Error creating AudioRecord instance: initialization check failed.");
goto native_init_failure;
}

nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
if (nSession == NULL) {
ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
goto native_init_failure;
}
// read the audio session ID back from AudioRecord in case a new session was created during set()
nSession[0] = lpRecorder->getSessionId();
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
nSession = NULL;

{ // scope for the lock
Mutex::Autolock l(sLock);
sAudioRecordCallBackCookies.add(lpCallbackData);
}
// save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
// of the Java object
setAudioRecord(env, thiz, lpRecorder);

// save our newly created callback information in the "nativeCallbackCookie" field
// of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, (int)lpCallbackData);

return AUDIORECORD_SUCCESS;

// failure:
native_init_failure:
env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
delete lpCallbackData;
env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

static JNINativeMethod gMethods[] = {
// name, signature, funcPtr
{"native_start", "(II)I", (void *)android_media_AudioRecord_start},
{"native_stop", "()V", (void *)android_media_AudioRecord_stop},
{"native_setup", "(Ljava/lang/Object;IIIII[I)I",
(void *)android_media_AudioRecord_setup},
{"native_finalize", "()V", (void *)android_media_AudioRecord_finalize},
{"native_release", "()V", (void *)android_media_AudioRecord_release},
{"native_read_in_byte_array",
"([BII)I", (void *)android_media_AudioRecord_readInByteArray},
{"native_read_in_short_array",
"([SII)I", (void *)android_media_AudioRecord_readInShortArray},
{"native_read_in_direct_buffer","(Ljava/lang/Object;I)I",
(void *)android_media_AudioRecord_readInDirectBuffer},
{"native_set_marker_pos","(I)I", (void *)android_media_AudioRecord_set_marker_pos},
{"native_get_marker_pos","()I", (void *)android_media_AudioRecord_get_marker_pos},
{"native_set_pos_update_period",
"(I)I", (void *)android_media_AudioRecord_set_pos_update_period},
{"native_get_pos_update_period",
"()I", (void *)android_media_AudioRecord_get_pos_update_period},
{"native_get_min_buff_size",
"(III)I", (void *)android_media_AudioRecord_get_min_buff_size},
};
The JNI layer goes on to call the native AudioRecord's set() function, wiring up the callback at the same time. set() is declared in frameworks\av\include\media\AudioRecord.h:
status_t set(audio_source_t inputSource,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
int frameCount = 0,
callback_t cbf = NULL,
void* user = NULL,
int notificationFrames = 0,
bool threadCanCallJava = false,
int sessionId = 0,
transfer_type transferType = TRANSFER_DEFAULT,
audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE);
Note that transfer_type is left at its default (TRANSFER_DEFAULT) here.
frameworks\av\media\libmedia\AudioRecord.cpp
status_t AudioRecord::set(
audio_source_t inputSource,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
int frameCountInt,
callback_t cbf,
void* user,
int notificationFrames,
bool threadCanCallJava,
int sessionId,
transfer_type transferType,
audio_input_flags_t flags)
{
switch (transferType) {
case TRANSFER_DEFAULT:
    if (cbf == NULL || threadCanCallJava) {
        transferType = TRANSFER_SYNC;
    } else {
        transferType = TRANSFER_CALLBACK;
    }
break;
case TRANSFER_CALLBACK:
if (cbf == NULL) {
ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
return BAD_VALUE;
}
break;
case TRANSFER_OBTAIN:
case TRANSFER_SYNC:
break;
default:
ALOGE("Invalid transfer type %d", transferType);
return BAD_VALUE;
}
mTransfer = transferType;

// FIXME "int" here is legacy and will be replaced by size_t later
if (frameCountInt < 0) {
ALOGE("Invalid frame count %d", frameCountInt);
return BAD_VALUE;
}
size_t frameCount = frameCountInt;

ALOGV("set(): sampleRate %u, channelMask %#x, frameCount %u", sampleRate, channelMask,
        frameCount);

AutoMutex lock(mLock);

if (mAudioRecord != 0) {
ALOGE("Track already in use");
return INVALID_OPERATION;
}

if (inputSource == AUDIO_SOURCE_DEFAULT) {
inputSource = AUDIO_SOURCE_MIC;
}
mInputSource = inputSource;

if (sampleRate == 0) {
ALOGE("Invalid sample rate %u", sampleRate);
return BAD_VALUE;
}
mSampleRate = sampleRate;

// these below should probably come from the audioFlinger too...
if (format == AUDIO_FORMAT_DEFAULT) {
format = AUDIO_FORMAT_PCM_16_BIT;
}

// validate parameters
if (!audio_is_valid_format(format)) {
ALOGE("Invalid format %d", format);
return BAD_VALUE;
}
// Temporary restriction: AudioFlinger currently supports 16-bit PCM only
if (format != AUDIO_FORMAT_PCM_16_BIT) {
ALOGE("Format %d is not supported", format);
return BAD_VALUE;
}
mFormat = format;

if (!audio_is_input_channel(channelMask)) {
ALOGE("Invalid channel mask %#x", channelMask);
return BAD_VALUE;
}
mChannelMask = channelMask;
uint32_t channelCount = popcount(channelMask);
mChannelCount = channelCount;

// Assumes audio_is_linear_pcm(format), else sizeof(uint8_t)
mFrameSize = channelCount * audio_bytes_per_sample(format);

// validate framecount
size_t minFrameCount = 0;
status_t status = AudioRecord::getMinFrameCount(&minFrameCount,
sampleRate, format, channelMask);
if (status != NO_ERROR) {
ALOGE("getMinFrameCount() failed; status %d", status);
return status;
}
ALOGV("AudioRecord::set() minFrameCount = %d", minFrameCount);

if (frameCount == 0) {
frameCount = minFrameCount;
} else if (frameCount < minFrameCount) {
ALOGE("frameCount %u < minFrameCount %u", frameCount, minFrameCount);
return BAD_VALUE;
}
mFrameCount = frameCount;

mNotificationFramesReq = notificationFrames;
mNotificationFramesAct = 0;

if (sessionId == 0) {
mSessionId = AudioSystem::newAudioSessionId();
} else {
mSessionId = sessionId;
}
ALOGV("set(): mSessionId %d", mSessionId);

mFlags = flags;

// create the IAudioRecord
status = openRecord_l(0 /*epoch*/);
if (status) {
return status;
}

if (cbf != NULL) {
mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
}

mStatus = NO_ERROR;

// Update buffer size in case it has been limited by AudioFlinger during track creation
mFrameCount = mCblk->frameCount_;

mActive = false;
mCbf = cbf;
mRefreshRemaining = true;
mUserData = user;
// TODO: add audio hardware input latency here
mLatency = (1000*mFrameCount) / sampleRate;
mMarkerPosition = 0;
mMarkerReached = false;
mNewPosition = 0;
mUpdatePeriod = 0;
AudioSystem::acquireAudioSessionId(mSessionId);
mSequence = 1;
mObservedSequence = mSequence;
mInOverrun = false;

return NO_ERROR;
}
set() starts by normalizing the parameters: AUDIO_SOURCE_DEFAULT is mapped to AUDIO_SOURCE_MIC, and AUDIO_FORMAT_DEFAULT to AUDIO_FORMAT_PCM_16_BIT. It then calls getMinFrameCount() to obtain the minimum frame count, calls openRecord_l() to create the IAudioRecord object, and finally starts an AudioRecordThread. The key function here is openRecord_l():
status_t AudioRecord::openRecord_l(size_t epoch)
{
status_t status;
const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
if (audioFlinger == 0) {
ALOGE("Could not get audioflinger");
return NO_INIT;
}

IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
pid_t tid = -1;

// Client can only express a preference for FAST. Server will perform additional tests.
// The only supported use case for FAST is callback transfer mode.
if (mFlags & AUDIO_INPUT_FLAG_FAST) {
if ((mTransfer != TRANSFER_CALLBACK) || (mAudioRecordThread == 0)) {
ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
// once denied, do not request again if IAudioRecord is re-created
mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
} else {
trackFlags |= IAudioFlinger::TRACK_FAST;
tid = mAudioRecordThread->getTid();
}
}

mNotificationFramesAct = mNotificationFramesReq;

if (!(mFlags & AUDIO_INPUT_FLAG_FAST)) {
// Make sure that application is notified with sufficient margin before overrun
if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
mNotificationFramesAct = mFrameCount/2;
}
}

audio_io_handle_t input = AudioSystem::getInput(mInputSource, mSampleRate, mFormat,
mChannelMask, mSessionId);
if (input == 0) {
ALOGE("Could not get audio input for record source %d", mInputSource);
return BAD_VALUE;
}

int originalSessionId = mSessionId;
sp<IAudioRecord> record = audioFlinger->openRecord(input,
mSampleRate, mFormat,
mChannelMask,
mFrameCount,
&trackFlags,
tid,
&mSessionId,
&status);
ALOGE_IF(originalSessionId != 0 && mSessionId != originalSessionId,
        "session ID changed from %d to %d", originalSessionId, mSessionId);

if (record == 0 || status != NO_ERROR) {
ALOGE("AudioFlinger could not create record track, status: %d", status);
AudioSystem::releaseInput(input);
return status;
}
sp<IMemory> iMem = record->getCblk();
if (iMem == 0) {
ALOGE("Could not get control block");
return NO_INIT;
}
void *iMemPointer = iMem->pointer();
if (iMemPointer == NULL) {
ALOGE("Could not get control block pointer");
return NO_INIT;
}
if (mAudioRecord != 0) {
mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
mDeathNotifier.clear();
}
mInput = input;
mAudioRecord = record;
mCblkMemory = iMem;
audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
mCblk = cblk;
// FIXME missing fast track frameCount logic
mAwaitBoost = false;
if (mFlags & AUDIO_INPUT_FLAG_FAST) {
if (trackFlags & IAudioFlinger::TRACK_FAST) {
ALOGV("AUDIO_INPUT_FLAG_FAST successful; frameCount %u", mFrameCount);
mAwaitBoost = true;
// double-buffering is not required for fast tracks, due to tighter scheduling
if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount) {
mNotificationFramesAct = mFrameCount;
}
} else {
ALOGV("AUDIO_INPUT_FLAG_FAST denied by server; frameCount %u", mFrameCount);
// once denied, do not request again if IAudioRecord is re-created
mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
mNotificationFramesAct = mFrameCount/2;
}
}
}

// starting address of buffers in shared memory
void *buffers = (char*)cblk + sizeof(audio_track_cblk_t);

// update proxy
mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
mProxy->setEpoch(epoch);
mProxy->setMinimum(mNotificationFramesAct);

mDeathNotifier = new DeathNotifier(this);
mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);

return NO_ERROR;
}
Let's first outline what this function does:
1. AudioSystem::get_audio_flinger() obtains the AudioFlinger service;
2. AudioSystem::getInput() obtains the input stream handle;
3. audioFlinger->openRecord() obtains the IAudioRecord object;
4. the recording data is shared with the server through IMemory shared memory;
5. an AudioRecordClientProxy is created on the client side to manage that shared buffer.
Now let's walk through these five steps in detail.
1. First, obtain the AudioFlinger service via AudioSystem::get_audio_flinger()
frameworks\av\media\libmedia\AudioSystem.cpp
const sp<IAudioFlinger>& AudioSystem::get_audio_flinger()
{
Mutex::Autolock _l(gLock);
if (gAudioFlinger == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
binder = sm->getService(String16("media.audio_flinger"));
if (binder != 0)
break;
ALOGW("AudioFlinger not published, waiting...");
usleep(500000); // 0.5 s
} while (true);
if (gAudioFlingerClient == NULL) {
gAudioFlingerClient = new AudioFlingerClient();
} else {
if (gAudioErrorCallback) {
gAudioErrorCallback(NO_ERROR);
}
}
binder->linkToDeath(gAudioFlingerClient);
gAudioFlinger = interface_cast<IAudioFlinger>(binder);
gAudioFlinger->registerClient(gAudioFlingerClient);
}
ALOGE_IF(gAudioFlinger==0, "no AudioFlinger!?");

return gAudioFlinger;
}
2. Next, obtain the input device via AudioSystem::getInput():
frameworks\av\media\libmedia\AudioSystem.cpp
audio_io_handle_t AudioSystem::getInput(audio_source_t inputSource,
uint32_t samplingRate,
audio_format_t format,
audio_channel_mask_t channelMask,
int sessionId)
{
const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
if (aps == 0) return 0;
return aps->getInput(inputSource, samplingRate, format, channelMask, sessionId);
}
This in turn obtains the policy service via AudioSystem::get_audio_policy_service():
const sp<IAudioPolicyService>& AudioSystem::get_audio_policy_service()
{
gLock.lock();
if (gAudioPolicyService == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
binder = sm->getService(String16("media.audio_policy"));
if (binder != 0)
break;
ALOGW("AudioPolicyService not published, waiting...");
usleep(500000); // 0.5 s
} while (true);
if (gAudioPolicyServiceClient == NULL) {
gAudioPolicyServiceClient = new AudioPolicyServiceClient();
}
binder->linkToDeath(gAudioPolicyServiceClient);
gAudioPolicyService = interface_cast<IAudioPolicyService>(binder);
gLock.unlock();
} else {
gLock.unlock();
}
return gAudioPolicyService;
}
Then AudioPolicyService::getInput() is called:
frameworks\av\services\audioflinger\AudioPolicyService.cpp
audio_io_handle_t AudioPolicyService::getInput(audio_source_t inputSource,
uint32_t samplingRate,
audio_format_t format,
audio_channel_mask_t channelMask,
int audioSession)
{
if (mpAudioPolicy == NULL) {
return 0;
}
// already checked by client, but double-check in case the client wrapper is bypassed
if (inputSource >= AUDIO_SOURCE_CNT && inputSource != AUDIO_SOURCE_HOTWORD) {
return 0;
}

if ((inputSource == AUDIO_SOURCE_HOTWORD) && !captureHotwordAllowed()) {
return 0;
}

Mutex::Autolock _l(mLock);
// the audio_in_acoustics_t parameter is ignored by get_input()
audio_io_handle_t input = mpAudioPolicy->get_input(mpAudioPolicy, inputSource, samplingRate,
        format, channelMask, (audio_in_acoustics_t) 0);

if (input == 0) {
return input;
}
// create audio pre processors according to input source
audio_source_t aliasSource = (inputSource == AUDIO_SOURCE_HOTWORD) ?
        AUDIO_SOURCE_VOICE_RECOGNITION : inputSource;

ssize_t index = mInputSources.indexOfKey(aliasSource);
if (index < 0) {
return input;
}
ssize_t idx = mInputs.indexOfKey(input);
InputDesc *inputDesc;
if (idx < 0) {
inputDesc = new InputDesc(audioSession);
mInputs.add(input, inputDesc);
} else {
inputDesc = mInputs.valueAt(idx);
}

Vector <EffectDesc *> effects = mInputSources.valueAt(index)->mEffects;
for (size_t i = 0; i < effects.size(); i++) {
EffectDesc *effect = effects[i];
sp<AudioEffect> fx = new AudioEffect(NULL, &effect->mUuid, -1, 0, 0, audioSession, input);
status_t status = fx->initCheck();
if (status != NO_ERROR && status != ALREADY_EXISTS) {
ALOGW("Failed to create Fx %s on input %d", effect->mName, input);
// fx goes out of scope and strong ref on AudioEffect is released
continue;
}
for (size_t j = 0; j < effect->mParams.size(); j++) {
fx->setParameter(effect->mParams[j]);
}
inputDesc->mEffects.add(fx);
}
setPreProcessorEnabled(inputDesc, true);
return input;
}
This continues into mpAudioPolicy->get_input(). mpAudioPolicy is a struct audio_policy *, defined in <hardware/audio_policy.h>, which means we are now crossing into the HAL.
hardware\libhardware_legacy\audio\audio_policy_hal.cpp
static int create_legacy_ap(const struct audio_policy_device *device,
struct audio_policy_service_ops *aps_ops,
void *service,
struct audio_policy **ap)
{
struct legacy_audio_policy *lap;
int ret;

if (!service || !aps_ops)
    return -EINVAL;

lap = (struct legacy_audio_policy *)calloc(1, sizeof(*lap));
if (!lap)
return -ENOMEM;

lap->policy.set_device_connection_state = ap_set_device_connection_state;
lap->policy.get_device_connection_state = ap_get_device_connection_state;
lap->policy.set_phone_state = ap_set_phone_state;
lap->policy.set_ringer_mode = ap_set_ringer_mode;
lap->policy.set_force_use = ap_set_force_use;
lap->policy.get_force_use = ap_get_force_use;
lap->policy.set_can_mute_enforced_audible =
ap_set_can_mute_enforced_audible;
lap->policy.init_check = ap_init_check;
lap->policy.get_output = ap_get_output;
lap->policy.start_output = ap_start_output;
lap->policy.stop_output = ap_stop_output;
lap->policy.release_output = ap_release_output;
lap->policy.get_input = ap_get_input;
lap->policy.start_input = ap_start_input;
lap->policy.stop_input = ap_stop_input;
lap->policy.release_input = ap_release_input;
lap->policy.init_stream_volume = ap_init_stream_volume;
lap->policy.set_stream_volume_index = ap_set_stream_volume_index;
lap->policy.get_stream_volume_index = ap_get_stream_volume_index;
lap->policy.set_stream_volume_index_for_device = ap_set_stream_volume_index_for_device;
lap->policy.get_stream_volume_index_for_device = ap_get_stream_volume_index_for_device;
lap->policy.get_strategy_for_stream = ap_get_strategy_for_stream;
lap->policy.get_devices_for_stream = ap_get_devices_for_stream;
lap->policy.get_output_for_effect = ap_get_output_for_effect;
lap->policy.register_effect = ap_register_effect;
lap->policy.unregister_effect = ap_unregister_effect;
lap->policy.set_effect_enabled = ap_set_effect_enabled;
lap->policy.is_stream_active = ap_is_stream_active;
lap->policy.is_stream_active_remotely = ap_is_stream_active_remotely;
lap->policy.is_source_active = ap_is_source_active;
lap->policy.dump = ap_dump;
lap->policy.is_offload_supported = ap_is_offload_supported;

lap->service = service;
lap->aps_ops = aps_ops;
lap->service_client =
new AudioPolicyCompatClient(aps_ops, service);
if (!lap->service_client) {
ret = -ENOMEM;
goto err_new_compat_client;
}

lap->apm = createAudioPolicyManager(lap->service_client);
if (!lap->apm) {
ret = -ENOMEM;
goto err_create_apm;
}

*ap = &lap->policy;
return 0;

err_create_apm:
delete lap->service_client;
err_new_compat_client:
free(lap);
*ap = NULL;
return ret;
}
The ap_get_input() function is implemented as follows:
static audio_io_handle_t ap_get_input(struct audio_policy *pol, audio_source_t inputSource,
uint32_t sampling_rate,
audio_format_t format,
audio_channel_mask_t channelMask,
audio_in_acoustics_t acoustics)
{
struct legacy_audio_policy *lap = to_lap(pol);
return lap->apm->getInput((int) inputSource, sampling_rate, (int) format, channelMask,
(AudioSystem::audio_in_acoustics)acoustics);
}
First look at the legacy_audio_policy structure:
struct legacy_audio_policy {
struct audio_policy policy;

void *service;
struct audio_policy_service_ops *aps_ops;
AudioPolicyCompatClient *service_client;
AudioPolicyInterface *apm;
};
Then getInput() is called:
hardware\libhardware_legacy\audio\AudioPolicyManagerBase.cpp
audio_io_handle_t AudioPolicyManagerBase::getInput(int inputSource,
uint32_t samplingRate,
uint32_t format,
uint32_t channelMask,
AudioSystem::audio_in_acoustics acoustics)
{
audio_io_handle_t input = 0;
audio_devices_t device = getDeviceForInputSource(inputSource);

ALOGV("getInput() inputSource %d, samplingRate %d, format %d, channelMask %x, acoustics %x",
        inputSource, samplingRate, format, channelMask, acoustics);

if (device == AUDIO_DEVICE_NONE) {
ALOGW("getInput() could not find device for inputSource %d", inputSource);
return 0;
}

// adapt channel selection to input source
switch(inputSource) {
case AUDIO_SOURCE_VOICE_UPLINK:
channelMask = AudioSystem::CHANNEL_IN_VOICE_UPLINK;
break;
case AUDIO_SOURCE_VOICE_DOWNLINK:
channelMask = AudioSystem::CHANNEL_IN_VOICE_DNLINK;
break;
case AUDIO_SOURCE_VOICE_CALL:
channelMask = (AudioSystem::CHANNEL_IN_VOICE_UPLINK | AudioSystem::CHANNEL_IN_VOICE_DNLINK);
break;
default:
break;
}

IOProfile *profile = getInputProfile(device,
samplingRate,
format,
channelMask);
if (profile == NULL) {
ALOGW("getInput() could not find profile for device %04x, samplingRate %d, format %d,"
"channelMask %04x",
device, samplingRate, format, channelMask);
return 0;
}

if (profile->mModule->mHandle == 0) {
ALOGE("getInput(): HW module %s not opened", profile->mModule->mName);
return 0;
}

AudioInputDescriptor *inputDesc = new AudioInputDescriptor(profile);

inputDesc->mInputSource = inputSource;
inputDesc->mDevice = device;
inputDesc->mSamplingRate = samplingRate;
inputDesc->mFormat = (audio_format_t)format;
inputDesc->mChannelMask = (audio_channel_mask_t)channelMask;
inputDesc->mRefCount = 0;
input = mpClientInterface->openInput(profile->mModule->mHandle,
&inputDesc->mDevice,
&inputDesc->mSamplingRate,
&inputDesc->mFormat,
        &inputDesc->mChannelMask);

// only accept input with the exact requested set of parameters
if (input == 0 ||
(samplingRate != inputDesc->mSamplingRate) ||
(format != inputDesc->mFormat) ||
(channelMask != inputDesc->mChannelMask)) {
ALOGV("getInput() failed opening input: samplingRate %d, format %d, channelMask %d",
samplingRate, format, channelMask);
if (input != 0) {
mpClientInterface->closeInput(input);
}
delete inputDesc;
return 0;
}
mInputs.add(input, inputDesc);
return input;
}
This function calls getDeviceForInputSource() to select an input device, getInputProfile() to find a matching configuration, and finally opens the input device via mpClientInterface->openInput().
hardware\libhardware_legacy\audio\AudioPolicyCompatClient.cpp
audio_io_handle_t AudioPolicyCompatClient::openInput(audio_module_handle_t module,
audio_devices_t *pDevices,
uint32_t *pSamplingRate,
audio_format_t *pFormat,
audio_channel_mask_t *pChannelMask)
{
return mServiceOps->open_input_on_module(mService, module, pDevices,
pSamplingRate, pFormat, pChannelMask);
}
Here mServiceOps is a struct audio_policy_service_ops *, the table of audio I/O control callbacks. Following the call leads back into the frameworks:
frameworks\av\services\audioflinger\AudioPolicyService.cpp
namespace {
struct audio_policy_service_ops aps_ops = {
open_output : aps_open_output,
open_duplicate_output : aps_open_dup_output,
close_output : aps_close_output,
suspend_output : aps_suspend_output,
restore_output : aps_restore_output,
open_input : aps_open_input,
close_input : aps_close_input,
set_stream_volume : aps_set_stream_volume,
set_stream_output : aps_set_stream_output,
set_parameters : aps_set_parameters,
get_parameters : aps_get_parameters,
start_tone : aps_start_tone,
stop_tone : aps_stop_tone,
set_voice_volume : aps_set_voice_volume,
move_effects : aps_move_effects,
load_hw_module : aps_load_hw_module,
open_output_on_module : aps_open_output_on_module,
open_input_on_module : aps_open_input_on_module,
};
}; // namespace <unnamed>
That is, open_input_on_module maps to the aps_open_input_on_module() function:
static audio_io_handle_t aps_open_input_on_module(void *service,
audio_module_handle_t module,
audio_devices_t *pDevices,
uint32_t *pSamplingRate,
audio_format_t *pFormat,
audio_channel_mask_t *pChannelMask)
{
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
ALOGW("%s: could not get AudioFlinger", __func__);
return 0;
}

return af->openInput(module, pDevices, pSamplingRate, pFormat, pChannelMask);
}
This in turn calls AudioFlinger's openInput():
frameworks\av\services\audioflinger\AudioFlinger.cpp
audio_io_handle_t AudioFlinger::openInput(audio_module_handle_t module,
audio_devices_t *pDevices,
uint32_t *pSamplingRate,
audio_format_t *pFormat,
audio_channel_mask_t *pChannelMask)
{
status_t status;
RecordThread *thread = NULL;
struct audio_config config;
config.sample_rate = (pSamplingRate != NULL) ? *pSamplingRate : 0;
config.channel_mask = (pChannelMask != NULL) ? *pChannelMask : 0;
config.format = (pFormat != NULL) ? *pFormat : AUDIO_FORMAT_DEFAULT;

uint32_t reqSamplingRate = config.sample_rate;
audio_format_t reqFormat = config.format;
audio_channel_mask_t reqChannels = config.channel_mask;
audio_stream_in_t *inStream = NULL;
AudioHwDevice *inHwDev;

if (pDevices == NULL || *pDevices == 0) {
return 0;
}

Mutex::Autolock _l(mLock);

inHwDev = findSuitableHwDev_l(module, *pDevices);
if (inHwDev == NULL)
    return 0;

audio_hw_device_t *inHwHal = inHwDev->hwDevice();
audio_io_handle_t id = nextUniqueId();

status = inHwHal->open_input_stream(inHwHal, id, *pDevices, &config,
&inStream);
ALOGV("openInput() openInputStream returned input %p, SamplingRate %d, Format %d, Channels %x, "
"status %d",
inStream,
config.sample_rate,
config.format,
config.channel_mask,
        status);

// If the input could not be opened with the requested parameters and we can handle the
// conversion internally, try to open again with the proposed parameters. The AudioFlinger can
// resample the input and do mono to stereo or stereo to mono conversions on 16 bit PCM inputs.
if (status == BAD_VALUE &&
reqFormat == config.format && config.format == AUDIO_FORMAT_PCM_16_BIT &&
(config.sample_rate <= 2 * reqSamplingRate) &&
(popcount(config.channel_mask) <= FCC_2) && (popcount(reqChannels) <= FCC_2)) {
ALOGV("openInput() reopening with proposed sampling rate and channel mask");
inStream = NULL;
status = inHwHal->open_input_stream(inHwHal, id, *pDevices, &config, &inStream);
}

if (status == NO_ERROR && inStream != NULL) {

#ifdef TEE_SINK
// Try to re-use most recently used Pipe to archive a copy of input for dumpsys,
// or (re-)create if current Pipe is idle and does not match the new format
sp<NBAIO_Sink> teeSink;
enum {
TEE_SINK_NO, // don't copy input
TEE_SINK_NEW, // copy input using a new pipe
TEE_SINK_OLD, // copy input using an existing pipe
} kind;
NBAIO_Format format = Format_from_SR_C(inStream->common.get_sample_rate(&inStream->common),
popcount(inStream->common.get_channels(&inStream->common)));
if (!mTeeSinkInputEnabled) {
kind = TEE_SINK_NO;
} else if (format == Format_Invalid) {
kind = TEE_SINK_NO;
} else if (mRecordTeeSink == 0) {
kind = TEE_SINK_NEW;
} else if (mRecordTeeSink->getStrongCount() != 1) {
kind = TEE_SINK_NO;
} else if (format == mRecordTeeSink->format()) {
kind = TEE_SINK_OLD;
} else {
kind = TEE_SINK_NEW;
}
switch (kind) {
case TEE_SINK_NEW: {
Pipe *pipe = new Pipe(mTeeSinkInputFrames, format);
size_t numCounterOffers = 0;
const NBAIO_Format offers[1] = {format};
ssize_t index = pipe->negotiate(offers, 1, NULL, numCounterOffers);
ALOG_ASSERT(index == 0);
PipeReader *pipeReader = new PipeReader(*pipe);
numCounterOffers = 0;
index = pipeReader->negotiate(offers, 1, NULL, numCounterOffers);
ALOG_ASSERT(index == 0);
mRecordTeeSink = pipe;
mRecordTeeSource = pipeReader;
teeSink = pipe;
}
break;
case TEE_SINK_OLD:
teeSink = mRecordTeeSink;
break;
case TEE_SINK_NO:
default:
break;
}
#endif

AudioStreamIn *input = new AudioStreamIn(inHwDev, inStream);

// Start record thread
// RecordThread requires both input and output device indication to forward to audio
// pre processing modules
thread = new RecordThread(this,
input,
reqSamplingRate,
reqChannels,
id,
primaryOutputDevice_l(),
*pDevices
#ifdef TEE_SINK
, teeSink
#endif
);
mRecordThreads.add(id, thread);
ALOGV("openInput() created record thread: ID %d thread %p", id, thread);
if (pSamplingRate != NULL) {
*pSamplingRate = reqSamplingRate;
}
if (pFormat != NULL) {
*pFormat = config.format;
}
if (pChannelMask != NULL) {
*pChannelMask = reqChannels;
}

// notify client processes of the new input creation
thread->audioConfigChanged_l(AudioSystem::INPUT_OPENED);
return id;
}

return 0;
}
First, a suitable device is found: inHwDev = findSuitableHwDev_l()
AudioFlinger::AudioHwDevice* AudioFlinger::findSuitableHwDev_l(
audio_module_handle_t module,
audio_devices_t devices)
{
// if module is 0, the request comes from an old policy manager and we should load
// well known modules
if (module == 0) {
ALOGW("findSuitableHwDev_l() loading well know audio hw modules");
for (size_t i = 0; i < ARRAY_SIZE(audio_interfaces); i++) {
loadHwModule_l(audio_interfaces[i]);
}
// then try to find a module supporting the requested device.
for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
AudioHwDevice *audioHwDevice = mAudioHwDevs.valueAt(i);
audio_hw_device_t *dev = audioHwDevice->hwDevice();
if ((dev->get_supported_devices != NULL) &&
(dev->get_supported_devices(dev) & devices) == devices)
return audioHwDevice;
}
} else {
// check a match for the requested module handle
AudioHwDevice *audioHwDevice = mAudioHwDevs.valueFor(module);
if (audioHwDevice != NULL) {
return audioHwDevice;
}
}
return NULL;
}
Then the input stream is opened via inHwHal->open_input_stream, which takes us back into the HAL:
hardware\libhardware_legacy\audio\audio_hw_hal.cpp
static int adev_open_input_stream(struct audio_hw_device *dev,
audio_io_handle_t handle,
audio_devices_t devices,
struct audio_config *config,
struct audio_stream_in **stream_in)
{
struct legacy_audio_device *ladev = to_ladev(dev);
status_t status;
struct legacy_stream_in *in;
int ret;

in = (struct legacy_stream_in *)calloc(1, sizeof(*in));
if (!in)
return -ENOMEM;

devices = convert_audio_device(devices, HAL_API_REV_2_0, HAL_API_REV_1_0);

in->legacy_in = ladev->hwif->openInputStream(devices, (int *) &config->format,
&config->channel_mask, &config->sample_rate,
&status, (AudioSystem::audio_in_acoustics)0);
if (!in->legacy_in) {
ret = status;
goto err_open;
}

in->stream.common.get_sample_rate = in_get_sample_rate;
in->stream.common.set_sample_rate = in_set_sample_rate;
in->stream.common.get_buffer_size = in_get_buffer_size;
in->stream.common.get_channels = in_get_channels;
in->stream.common.get_format = in_get_format;
in->stream.common.set_format = in_set_format;
in->stream.common.standby = in_standby;
in->stream.common.dump = in_dump;
in->stream.common.set_parameters = in_set_parameters;
in->stream.common.get_parameters = in_get_parameters;
in->stream.common.add_audio_effect = in_add_audio_effect;
in->stream.common.remove_audio_effect = in_remove_audio_effect;
in->stream.set_gain = in_set_gain;
in->stream.read = in_read;
in->stream.get_input_frames_lost = in_get_input_frames_lost;

*stream_in = &in->stream;
return 0;

err_open:
free(in);
*stream_in = NULL;
return ret;
}
The openInputStream function is declared in hardware\libhardware_legacy\include\hardware_legacy\AudioHardwareInterface.h, and both libhardware_legacy\audio\AudioHardwareGeneric.cpp and libhardware_legacy\audio\AudioHardwareStub.cpp inherit that interface and implement it.
AudioHardwareStub.cpp:
AudioStreamIn* AudioHardwareStub::openInputStream(
uint32_t devices, int *format, uint32_t *channels, uint32_t *sampleRate,
status_t *status, AudioSystem::audio_in_acoustics acoustics)
{
// check for valid input source
if (!AudioSystem::isInputDevice((AudioSystem::audio_devices)devices)) {
return 0;
}

AudioStreamInStub* in = new AudioStreamInStub();
status_t lStatus = in->set(format, channels, sampleRate, acoustics);
if (status) {
*status = lStatus;
}
if (lStatus == NO_ERROR)
return in;
delete in;
return 0;
}
AudioHardwareGeneric.cpp:
AudioStreamIn* AudioHardwareGeneric::openInputStream(
uint32_t devices, int *format, uint32_t *channels, uint32_t *sampleRate,
status_t *status, AudioSystem::audio_in_acoustics acoustics)
{
// check for valid input source
if (!AudioSystem::isInputDevice((AudioSystem::audio_devices)devices)) {
return 0;
}

AutoMutex lock(mLock);

// only one input stream allowed
if (mInput) {
if (status) {
*status = INVALID_OPERATION;
}
return 0;
} // create new output stream
AudioStreamInGeneric* in = new AudioStreamInGeneric();
status_t lStatus = in->set(this, mFd, devices, format, channels, sampleRate, acoustics);
if (status) {
*status = lStatus;
}
if (lStatus == NO_ERROR) {
mInput = in;
} else {
delete in;
}
return mInput;
}