Android 4.4 KitKat AudioTrack Flow Analysis
The main pieces of the Android Audio system:
- AudioManager: manages the Audio system. It is concerned with system-wide sound policy, such as incoming-call ringtones and SMS tones; it mostly deals with policy questions.
- AudioTrack: used to play audio.
- AudioRecord: used to record audio.
AudioTrack is the most widely analyzed of the three, so we take it as our first example.
The Java-layer AudioTrack class lives in framework/base/media/java/android/media/AudioTrack.java.
An example of how AudioTrack is used:
// Compute the minimum buffer size from the sample rate, sample precision, and mono/stereo setting.
int bufsize = AudioTrack.getMinBufferSize(8000,// 8K samples per second
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,// stereo
        AudioFormat.ENCODING_PCM_16BIT);// 16 bits = 2 bytes per sample
// Note: this is the minimum buffer size required for playback, not one full second of audio.
// Create the AudioTrack.
AudioTrack trackplayer = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, bufsize, AudioTrack.MODE_STREAM);
trackplayer.play();// start playback
trackplayer.write(bytes_pkg, 0, bytes_pkg.length);// write data into the track
....
trackplayer.stop();// stop playback
trackplayer.release();// release the underlying resources
AudioTrack.MODE_STREAM: AudioTrack supports two modes, MODE_STATIC and MODE_STREAM. In STREAM mode the application writes data into the AudioTrack with repeated write() calls, much like sending data over a socket: the application obtains data from somewhere (e.g. PCM produced by a decoder) and writes it to the AudioTrack. The drawback is the constant crossing between the Java layer and the native layer, which costs efficiency. In STATIC mode, all the audio data is placed in a single fixed buffer up front and handed to the AudioTrack once; no further write() calls are needed, and the AudioTrack plays the buffer's contents by itself. This works well for short, low-latency sounds such as ringtones that occupy little memory.
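As a quick illustration of the STATIC path (STREAM mode is shown in the example at the top), here is a minimal sketch; loadRingtonePcm() is a hypothetical helper that returns a complete 16-bit PCM clip, and the parameters are illustrative:
// Hedged sketch of MODE_STATIC: write the whole clip once, then play it.
byte[] clip = loadRingtonePcm();   // hypothetical helper returning the full clip
AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_NOTIFICATION,
        8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        clip.length,                 // the buffer holds the entire clip
        AudioTrack.MODE_STATIC);
staticTrack.write(clip, 0, clip.length); // one-time copy into the static buffer
staticTrack.play();                      // plays without any further write() calls
Since the data lives in the track's own buffer, the same clip can be replayed without writing it again.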
StreamType: this is the first parameter of the AudioTrack constructor. It ties in with Android's AudioManager and the phone's audio-management policy. Android divides system sounds into several categories, the common ones being (not an exhaustive list):
- STREAM_ALARM: alarms
- STREAM_MUSIC: music playback
- STREAM_RING: ringtones
- STREAM_SYSTEM: system sounds
- STREAM_VOICE_CALL: phone calls
Why so many types? On desktop platforms we rarely see this many sound categories, but on reflection it makes sense. Suppose a call comes in while you are listening to music: the music must stop and you hear only the call, and if you adjust the volume, the adjustment should affect only the call. When the call ends and the music resumes, its volume should be unchanged. The system manages the data of these sound categories separately, so for AudioTrack this parameter simply tells the system which kind of sound we are about to produce, allowing the system to manage it accordingly.
Let's analyze the methods used in the AudioTrack example one by one, starting with getMinBufferSize:
getMinBufferSize
/**
* Returns the minimum buffer size required for the successful creation of an AudioTrack
* object to be created in the {@link #MODE_STREAM} mode. Note that this size doesn't
* guarantee a smooth playback under load, and higher values should be chosen according to
* the expected frequency at which the buffer will be refilled with additional data to play.
* For example, if you intend to dynamically set the source sample rate of an AudioTrack
* to a higher value than the initial source sample rate, be sure to configure the buffer size
* based on the highest planned sample rate.
* @param sampleRateInHz the source sample rate expressed in Hz.
* @param channelConfig describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_OUT_MONO} and
* {@link AudioFormat#CHANNEL_OUT_STEREO}
* @param audioFormat the format in which the audio data is represented.
* See {@link AudioFormat#ENCODING_PCM_16BIT} and
* {@link AudioFormat#ENCODING_PCM_8BIT}
* @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
* or {@link #ERROR} if unable to query for output properties,
* or the minimum buffer size expressed in bytes.
*/
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
    int channelCount = 0;
    switch(channelConfig) {
    case AudioFormat.CHANNEL_OUT_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        channelCount = 1;
        break;
    case AudioFormat.CHANNEL_OUT_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        channelCount = 2;
        break;
    default:
        if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {
            // input channel configuration features unsupported channels
            loge("getMinBufferSize(): Invalid channel configuration.");
            return ERROR_BAD_VALUE;
        } else {
            channelCount = Integer.bitCount(channelConfig);
        }
    }
    // Only PCM8 and PCM16 sample formats are supported for now.
    if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
        && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
        loge("getMinBufferSize(): Invalid audio format.");
        return ERROR_BAD_VALUE;
    }
    // The sample rate is also constrained: too low or too high is rejected
    // (human hearing spans roughly 20 Hz to 20 kHz).
    // sample rate, note these values are subject to change
    if ( (sampleRateInHz < SAMPLE_RATE_HZ_MIN) || (sampleRateInHz > SAMPLE_RATE_HZ_MAX) ) {
        loge("getMinBufferSize(): " + sampleRateInHz + " Hz is not a supported sample rate.");
        return ERROR_BAD_VALUE;
    }
    // Call the native function.
    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
    if (size <= 0) {
        loge("getMinBufferSize(): error querying hardware");
        return ERROR;
    }
    else {
        return size;
    }
}
native_get_min_buff_size enters android_media_AudioTrack_get_min_buff_size in framework/base/core/jni/android_media_AudioTrack.cpp:
// returns the minimum required size for the successful creation of a streaming AudioTrack
// returns -1 if there was an error querying the hardware.
static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env, jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {
    size_t frameCount = 0;
    if (AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_DEFAULT,
            sampleRateInHertz) != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
}
The minimum buffer size is derived from the minimum frame count. The frame is the most common unit in audio: one frame is the number of bytes per sample point multiplied by the channel count. Why introduce frames at all? Because for multi-channel audio the byte count of a single sample point does not capture everything: playback must emit the data of every channel together. So for convenience we count frames, e.g. how many frames per second, which expresses the full data rate while abstracting away the channel count. After getMinBufferSize() returns, we have a buffer size that meets the minimum requirement, which gives the user a basis for allocating a buffer.
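To make the units concrete, here is the arithmetic for the 8000 Hz, 16-bit stereo example at the top; the minimum frame count below is a made-up figure, since the real value comes from the hardware via getMinFrameCount():
// 16-bit stereo: one frame = 2 bytes/sample * 2 channels = 4 bytes.
int bytesPerSample = 2;             // ENCODING_PCM_16BIT
int channelCount   = 2;             // stereo
int frameSize      = bytesPerSample * channelCount; // 4 bytes
int minFrameCount  = 1600;          // hypothetical value from the native layer
int minBufSize     = minFrameCount * frameSize;     // 6400 bytes
// At 8000 frames per second, 1600 frames is only 0.2 s of audio,
// so the minimum buffer is typically far less than one second.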
Creating the AudioTrack object
With the buffer size settled, the next step is to create the AudioTrack object. First look at the constructor in AudioTrack.java:
/**
* Class constructor with audio session. Use this constructor when the AudioTrack must be
* attached to a particular audio session. The primary use of the audio session ID is to
* associate audio effects to a particular instance of AudioTrack: if an audio session ID
* is provided when creating an AudioEffect, this effect will be applied only to audio tracks
* and media players in the same session and not to the output mix.
* When an AudioTrack is created without specifying a session, it will create its own session
* which can be retrieved by calling the {@link #getAudioSessionId()} method.
* If a non-zero session ID is provided, this AudioTrack will share effects attached to this
* session
* with all other media players or audio tracks in the same session, otherwise a new session
* will be created for this track if none is supplied.
* @param streamType the type of the audio stream. See
* {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
* {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
* {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
* @param sampleRateInHz the initial source sample rate expressed in Hz.
* @param channelConfig describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_OUT_MONO} and
* {@link AudioFormat#CHANNEL_OUT_STEREO}
* @param audioFormat the format in which the audio data is represented.
* See {@link AudioFormat#ENCODING_PCM_16BIT} and
* {@link AudioFormat#ENCODING_PCM_8BIT}
* @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
* from for playback. If using the AudioTrack in streaming mode, you can write data into
* this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
* this is the maximum size of the sound that will be played for this instance.
* See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
* for the successful creation of an AudioTrack instance in streaming mode. Using values
* smaller than getMinBufferSize() will result in an initialization failure.
* @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
* @param sessionId Id of audio session the AudioTrack must be attached to
* @throws java.lang.IllegalArgumentException
*/
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode, int sessionId) throws IllegalArgumentException {
    // mState already == STATE_UNINITIALIZED
    // remember which looper is associated with the AudioTrack instantiation
    Looper looper;
    // Grab the caller's Looper, falling back to the main Looper (see the MediaScanner analysis).
    if ((looper = Looper.myLooper()) == null) {
        looper = Looper.getMainLooper();
    }
    mInitializationLooper = looper;
    // Validate the parameters; we can skip the details here.
    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
    audioBuffSizeCheck(bufferSizeInBytes);
    if (sessionId < 0) {
        throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
    }
    int[] session = new int[1];
    session[0] = sessionId;
    // native initialization
    // Call native_setup in the native layer, passing a WeakReference to this object.
    int initResult = native_setup(new WeakReference<AudioTrack>(this),
            mStreamType, mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    if (initResult != SUCCESS) {
        loge("Error code "+initResult+" when initializing AudioTrack.");
        return; // with mState == STATE_UNINITIALIZED
    }
    mSessionId = session[0];
    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}
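The javadoc above points out that the main use of the session ID is tying audio effects to one particular track. A hedged sketch using android.media.audiofx.Equalizer (a real framework API; the priority value 0 is arbitrary, and bufsize is assumed to come from getMinBufferSize() as in the first example):
// Share this track's audio session with an effect so it applies only to this track.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufsize, AudioTrack.MODE_STREAM);
Equalizer eq = new Equalizer(0 /* priority */, track.getAudioSessionId());
eq.setEnabled(true);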
native_setup enters android_media_AudioTrack_native_setup in framework/base/core/jni/android_media_AudioTrack.cpp:
static int android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint streamType, jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
{
    ALOGV("sampleRate=%d, audioFormat(from Java)=%d, channel mask=%x, buffSize=%d",
        sampleRateInHertz, audioFormat, javaChannelMask, buffSizeInBytes);
    uint32_t afSampleRate;
    size_t afFrameCount;
    if (AudioSystem::getOutputFrameCount(&afFrameCount, (audio_stream_type_t) streamType) != NO_ERROR) {
        ALOGE("Error creating AudioTrack: Could not get AudioSystem frame count.");
        return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
    }
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, (audio_stream_type_t) streamType) != NO_ERROR) {
        ALOGE("Error creating AudioTrack: Could not get AudioSystem sampling rate.");
        return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
    }
    // Java channel masks don't map directly to the native definition, but it's a simple shift
    // to skip the two deprecated channel configurations "default" and "mono".
    uint32_t nativeChannelMask = ((uint32_t)javaChannelMask) >> 2;
    if (!audio_is_output_channel(nativeChannelMask)) {
        ALOGE("Error creating AudioTrack: invalid channel mask %#x.", javaChannelMask);
        return AUDIOTRACK_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    // popcount counts the number of bits set to 1 in an integer.
    int nbChannels = popcount(nativeChannelMask);
    // check the stream type
    audio_stream_type_t atStreamType;
    switch (streamType) {
    case AUDIO_STREAM_VOICE_CALL:
    case AUDIO_STREAM_SYSTEM:
    case AUDIO_STREAM_RING:
    case AUDIO_STREAM_MUSIC:
    case AUDIO_STREAM_ALARM:
    case AUDIO_STREAM_NOTIFICATION:
    case AUDIO_STREAM_BLUETOOTH_SCO:
    case AUDIO_STREAM_DTMF:
        atStreamType = (audio_stream_type_t) streamType;
        break;
    default:
        ALOGE("Error creating AudioTrack: unknown stream type.");
        return AUDIOTRACK_ERROR_SETUP_INVALIDSTREAMTYPE;
    }
    // check the format.
    // This function was called from Java, so we compare the format against the Java constants
    if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
        ALOGE("Error creating AudioTrack: unsupported audio format.");
        return AUDIOTRACK_ERROR_SETUP_INVALIDFORMAT;
    }
    // for the moment 8bitPCM in MODE_STATIC is not supported natively in the AudioTrack C++ class
    // so we declare everything as 16bitPCM, the 8->16bit conversion for MODE_STATIC will be handled
    // in android_media_AudioTrack_native_write_byte()
    if ((audioFormat == ENCODING_PCM_8BIT) && (memoryMode == MODE_STATIC)) {
        ALOGV("android_media_AudioTrack_native_setup(): requesting MODE_STATIC for 8bit \
            buff size of %dbytes, switching to 16bit, buff size of %dbytes",
            buffSizeInBytes, 2*buffSizeInBytes);
        audioFormat = ENCODING_PCM_16BIT;
        // we will need twice the memory to store the data
        buffSizeInBytes *= 2;
    }
    // compute the frame count
    int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
    audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;
    // Derive the frame count from the buffer size and the size of one frame.
    int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);
    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
    }
    if (jSession == NULL) {
        ALOGE("Error creating AudioTrack: invalid session ID pointer");
        return AUDIOTRACK_ERROR;
    }
    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
        return AUDIOTRACK_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;
    // create the native AudioTrack object
    // This is the real (C++) AudioTrack object.
    sp<AudioTrack> lpTrack = new AudioTrack();
    // initialize the callback information:
    // this data will be passed with every AudioTrack callback
    // AudioTrackJniStorage is a helper that stores some per-track data; it hides some
    // useful machinery that we examine in detail below.
    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
    lpJniStorage->mStreamType = atStreamType;
    lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioTrack object can be garbage collected.
    lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
    lpJniStorage->mCallbackData.busy = false;
    // initialize the native AudioTrack object
    switch (memoryMode) {
    case MODE_STREAM:
        lpTrack->set(
            atStreamType,// stream type
            sampleRateInHertz,
            format,// word length, PCM
            nativeChannelMask,
            frameCount,
            AUDIO_OUTPUT_FLAG_NONE,
            audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
            0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            0,// shared mem
            true,// thread can call Java
            sessionId);// audio session ID
        break;
    case MODE_STATIC:
        // AudioTrack is using shared memory
        // In static mode the user writes all the data up front and the AudioTrack later reads it
        // back, so shared memory is required. "Shared" here means shared between the C++
        // AudioTrack and AudioFlinger, because the actual playback work is done by AudioFlinger.
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        lpTrack->set(
            atStreamType,// stream type
            sampleRateInHertz,
            format,// word length, PCM
            nativeChannelMask,
            frameCount,
            AUDIO_OUTPUT_FLAG_NONE,
            audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
            0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            lpJniStorage->mMemBase,// shared mem
            true,// thread can call Java
            sessionId);// audio session ID
        break;
    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
    if (lpTrack->initCheck() != NO_ERROR) {
        ALOGE("Error initializing AudioTrack");
        goto native_init_failure;
    }
    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioTrack in case we create a new session
    nSession[0] = lpTrack->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;
    { // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioTrackCallBackCookies.add(&lpJniStorage->mCallbackData);
    }
    // save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
    // of the Java object (in mNativeTrackInJavaObj)
    setAudioTrack(env, thiz, lpTrack);
    // Storing the C++ AudioTrack pointer in a field of the Java object ties the native-layer
    // AudioTrack to its Java-layer counterpart.
    // save the JNI resources so we can free them later
    //ALOGV("storing lpJniStorage: %x\n", (int)lpJniStorage);
    env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);
    return AUDIOTRACK_SUCCESS;
    // failures:
native_init_failure:
    if (nSession != NULL) {
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    }
    env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_class);
    env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_ref);
    delete lpJniStorage;
    env->SetIntField(thiz, javaAudioTrackFields.jniData, 0);
    return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}
AudioTrackJniStorage
This class is really just a helper, but it contains some important machinery, in particular Android's shared-memory mechanism. Once this is clear, copying memory between two processes becomes straightforward.
class AudioTrackJniStorage {
public:
    sp<MemoryHeapBase> mMemHeap;
    sp<MemoryBase> mMemBase;
    audiotrack_callback_cookie mCallbackData;
    audio_stream_type_t mStreamType;

    AudioTrackJniStorage() {
        mCallbackData.audioTrack_class = 0;
        mCallbackData.audioTrack_ref = 0;
        mStreamType = AUDIO_STREAM_DEFAULT;
    }

    ~AudioTrackJniStorage() {
        mMemBase.clear();
        mMemHeap.clear();
    }

    bool allocSharedMem(int sizeInBytes) {
        mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
        if (mMemHeap->getHeapID() < 0) {
            return false;
        }
        mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
        // Note the pattern: first create a MemoryHeapBase, then wrap it in a MemoryBase.
        return true;
    }
};
MemoryHeapBase and MemoryBase form a set of Binder-based classes Android provides for working with memory across processes. Since they are Binder classes, each has a server side (BnXXX) and a proxy side (BpXXX).
Roughly, MemoryXXX is used as follows:
- The BnXXX side allocates a BnMemoryHeapBase and a BnMemoryBase;
- the BnMemoryBase is then passed over Binder to the BpXXX side;
- the BpXXX side can then use its BpMemoryBase to access the shared memory that the BnXXX side allocated.
Note that because this is memory shared between processes, the Bp side will manipulate it with plain functions like memcpy, which carry no synchronization protection, and Android cannot add such protection inside the system for this kind of shared memory. So some cross-process synchronization mechanism must exist wherever this shared memory is actually operated on; we will run into it later when we discuss actual playback.
Also, this shared buffer is ultimately used on the Bp side, that is, in AudioFlinger.
play and write
Once the Java layer reaches this point it calls play() and write(). Neither Java method contains much logic; both go straight to the native layer. First, the JNI function behind play():
static void
android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
{
    // Fetch the C++ AudioTrack pointer previously stored in the Java AudioTrack object.
    // It is cast straight from an int to a pointer; when ARM moves to 64-bit,
    // it will be interesting to see how Google changes this!
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    if (lpTrack == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for start()");
        return;
    }
    lpTrack->start();
}
Now look at write(). We are writing a short array:
static jint android_media_AudioTrack_native_write_short(JNIEnv *env, jobject thiz,
        jshortArray javaAudioData,
        jint offsetInShorts, jint sizeInShorts,
        jint javaAudioFormat) {
    jint written = android_media_AudioTrack_native_write_byte(env, thiz,
            (jbyteArray) javaAudioData,
            offsetInShorts*2, sizeInShorts*2,
            javaAudioFormat);
    if (written > 0) {
        written /= 2;
    }
    return written;
}
static jint android_media_AudioTrack_native_write_byte(JNIEnv *env, jobject thiz,
        jbyteArray javaAudioData,
        jint offsetInBytes, jint sizeInBytes,
        jint javaAudioFormat) {
    //ALOGV("android_media_AudioTrack_native_write_byte(offset=%d, sizeInBytes=%d) called",
    //    offsetInBytes, sizeInBytes);
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    if (lpTrack == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for write()");
        return 0;
    }
    // get the pointer for the audio data from the java array
    // NOTE: We may use GetPrimitiveArrayCritical() when the JNI implementation changes in such
    // a way that it becomes much more efficient. When doing so, we will have to prevent the
    // AudioSystem callback to be called while in critical section (in case of media server
    // process crash for instance)
    jbyte* cAudioData = NULL;
    if (javaAudioData) {
        cAudioData = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
        if (cAudioData == NULL) {
            ALOGE("Error retrieving source of audio data to play, can't play");
            return 0; // out of memory or no data to load
        }
    } else {
        ALOGE("NULL java array of audio data to play, can't play");
        return 0;
    }
    jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes);
    env->ReleaseByteArrayElements(javaAudioData, cAudioData, 0);
    //ALOGV("write wrote %d (tried %d) bytes in the native AudioTrack with offset %d",
    //    (int)written, (int)(sizeInBytes), (int)offsetInBytes);
    return written;
}
jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, jbyte* data,
        jint offsetInBytes, jint sizeInBytes) {
    // give the data to the native AudioTrack object (the data starts at the offset)
    ssize_t written = 0;
    // regular write() or copy the data to the AudioTrack's shared memory?
    if (track->sharedBuffer() == 0) {
        // The track was created in streaming mode, so it has no shared buffer.
        written = track->write(data + offsetInBytes, sizeInBytes);
        // for compatibility with earlier behavior of write(), return 0 in this case
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {
        if (audioFormat == ENCODING_PCM_16BIT) {
            // writing to shared memory, check for capacity
            if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size();
            }
            // STATIC mode: copy the data straight into the shared memory.
            memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
            written = sizeInBytes;
        } else if (audioFormat == ENCODING_PCM_8BIT) {
            // PCM8 data must first be expanded to PCM16.
            // data contains 8bit data we need to expand to 16bit before copying
            // to the shared memory
            // writing to shared memory, check for capacity,
            // note that input data will occupy 2X the input space due to 8 to 16bit conversion
            if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
                sizeInBytes = track->sharedBuffer()->size() / 2;
            }
            int count = sizeInBytes;
            int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
            const int8_t *src = (const int8_t *)(data + offsetInBytes);
            while (count--) {
                // XOR with 0x80 converts unsigned 8-bit PCM to signed;
                // the shift by 8 scales it up to 16 bits.
                *dst++ = (int16_t)(*src++^0x80) << 8;
            }
            // even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
            // the 8bit mixer restriction from the user of this function
            written = sizeInBytes;
        }
    }
    return written;
}
Up to this point it all looks simple: the Java-layer AudioTrack essentially just calls write(), and the actual writing of data is carried out by the JNI layer's C++ AudioTrack.
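To tie the whole flow together, here is a hedged end-to-end sketch of the streaming path: getMinBufferSize(), construction, play(), and a write() loop feeding generated samples (the 440 Hz tone and chunk count are purely illustrative):
int sampleRate = 8000;
int minSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minSize, AudioTrack.MODE_STREAM);
player.play();
short[] chunk = new short[minSize / 2];                  // 16-bit samples
double phase = 0, step = 2 * Math.PI * 440 / sampleRate; // a 440 Hz tone
for (int n = 0; n < 100; n++) {                          // stream 100 chunks, then stop
    for (int i = 0; i < chunk.length; i++) {
        chunk[i] = (short) (Math.sin(phase) * Short.MAX_VALUE);
        phase += step;
    }
    player.write(chunk, 0, chunk.length); // ends up in writeToTrack() -> track->write()
}
player.stop();
player.release();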
To be continued; taking a break for a few days before the next part.
Reprinted from: http://www.cnblogs.com/innost/archive/2011/01/09/1931457.html