AudioTrack manages and plays a single audio resource for Java applications.

 * The AudioTrack class manages and plays a single audio resource for Java applications.
 * It allows streaming of PCM audio buffers to the audio sink for playback. This is
 * achieved by "pushing" the data to the AudioTrack object using one of the
 * {@link #write(byte[], int, int)} and {@link #write(short[], int, int)} methods.

An AudioTrack instance can operate in one of two modes: static or streaming.

In streaming mode, the application calls write() to push a continuous stream of data into the AudioTrack.

These calls block until the data has been transferred from the Java layer to the native layer and queued for playback, and only then return.

Streaming mode is most useful for blocks of audio data that are:

(1) too big to fit in memory because of the duration of the sound to play,

(2) too big to fit in memory because of the characteristics of the audio data (high sampling rate, bits per sample, ...), or

(3) received or generated while previously queued audio is still playing.

* <p>An AudioTrack instance can operate under two modes: static or streaming.<br>
 * In Streaming mode, the application writes a continuous stream of data to the AudioTrack, using
 * one of the {@code write()} methods. These are blocking and return when the data has been
 * transferred from the Java layer to the native layer and queued for playback. The streaming
 * mode is most useful when playing blocks of audio data that for instance are:
 *
 * <ul>
 *   <li>too big to fit in memory because of the duration of the sound to play,</li>
 *   <li>too big to fit in memory because of the characteristics of the audio data
 *         (high sampling rate, bits per sample ...)</li>
 *   <li>received or generated while previously queued audio is playing.</li>
 * </ul>
 *

Static mode should be chosen for short sounds that fit in memory and need to be played with the smallest possible latency.

It is therefore preferred for UI and game sounds that are played often, with very little overhead per play.

* The static mode should be chosen when dealing with short sounds that fit in memory and
 * that need to be played with the smallest latency possible. The static mode will
 * therefore be preferred for UI and game sounds that are played often, and with the
 * smallest overhead possible.
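To make the two modes concrete, here is a minimal sketch that creates one track in each mode; the sample rate, buffer sizes and the pcmClip array are illustrative assumptions, not values taken from this source.

// Sketch only: sampleRate, chunk sizes and pcmClip are illustrative assumptions.
int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

// Streaming mode: start playback, then keep pushing PCM chunks with write().
AudioTrack streamTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf * 2, AudioTrack.MODE_STREAM);
streamTrack.play();
byte[] chunk = new byte[minBuf];            // filled elsewhere, e.g. by a decoder or AudioRecord
streamTrack.write(chunk, 0, chunk.length);  // blocks until the data is queued in the native layer

// Static mode: write the whole clip once, then play it with minimal latency.
byte[] pcmClip = new byte[16384];           // hypothetical short clip that fits in memory
AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        pcmClip.length, AudioTrack.MODE_STATIC);
staticTrack.write(pcmClip, 0, pcmClip.length);  // must happen before play() in static mode
staticTrack.play();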

Upon creation, an AudioTrack object initializes its associated audio buffer.

The size of this buffer, specified in the constructor, determines how long an AudioTrack can play before running out of data.

For an AudioTrack in static mode, this size is the maximum size of the sound it can play.

For streaming mode, data is written to the audio sink in chunks whose sizes are less than or equal to the total buffer size.

AudioTrack is not final, so it can be subclassed, but doing so is not recommended.

* <p>Upon creation, an AudioTrack object initializes its associated audio buffer.
 * The size of this buffer, specified during the construction, determines how long an AudioTrack
 * can play before running out of data.<br>
 * For an AudioTrack using the static mode, this size is the maximum size of the sound that can
 * be played from it.<br>
 * For the streaming mode, data will be written to the audio sink in chunks of
 * sizes less than or equal to the total buffer size.
 *
 * AudioTrack is not final and thus permits subclasses, but such use is not recommended.
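Returning to the buffer size: here is a quick worked example of how long a buffer lasts, assuming 44.1 kHz stereo 16-bit PCM (these numbers are assumptions chosen purely for illustration).

// Worked example with assumed parameters: 44.1 kHz, stereo, 16-bit PCM.
int sampleRate = 44100;                        // frames per second
int channelCount = 2;                          // stereo
int bytesPerSample = 2;                        // ENCODING_PCM_16BIT
int frameSize = channelCount * bytesPerSample;           // 4 bytes per frame
int bytesPerSecond = sampleRate * frameSize;             // 176,400 bytes of PCM per second
int bufferSizeInBytes = 32 * 1024;                       // hypothetical streaming buffer
double seconds = (double) bufferSizeInBytes / bytesPerSecond; // roughly 0.19 s of audio before running dry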

/**
 * State of an AudioTrack that was not successfully initialized upon creation.
 */
public static final int STATE_UNINITIALIZED = 0;   // not successfully initialized

/**
 * State of an AudioTrack that is ready to be used.
 */
public static final int STATE_INITIALIZED = 1;     // successfully initialized

/**
 * State of a successfully initialized AudioTrack that uses static data,
 * but that hasn't received that data yet.
 */
public static final int STATE_NO_STATIC_DATA = 2;  // static mode, successfully initialized, but no audio data received yet

/**
 * Indicates the state of the AudioTrack instance.
 */
private int mState = STATE_UNINITIALIZED;          // current state of this AudioTrack instance

/**
 * Indicates the play state of the AudioTrack instance.
 */
private int mPlayState = PLAYSTATE_STOPPED;        // current play state, initialized to stopped

/**
 * Lock to make sure mPlayState updates are reflecting the actual state of the object.
 */
private final Object mPlayStateLock = new Object(); // guards mPlayState so it reflects the real playback state

/**
 * Looper associated with the thread that creates the AudioTrack instance.
 */
private final Looper mInitializationLooper;        // Looper of the thread that created this AudioTrack

/**
 * The audio data source sampling rate in Hz.
 */
private int mSampleRate; // initialized by all constructors

/**
 * The audio channel mask.
 */
private int mChannels = AudioFormat.CHANNEL_OUT_MONO;  // defaults to mono

/**
 * The type of the audio stream to play. See
 * {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
 * {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
 * {@link AudioManager#STREAM_ALARM}, {@link AudioManager#STREAM_NOTIFICATION}, and
 * {@link AudioManager#STREAM_DTMF}.
 */
private int mStreamType = AudioManager.STREAM_MUSIC;   // defaults to the music stream

/**
 * The way audio is consumed by the audio sink, streaming or static.
 */
private int mDataLoadMode = MODE_STREAM;               // how data is loaded into the audio sink

/**
 * The current audio channel configuration.
 */
private int mChannelConfiguration = AudioFormat.CHANNEL_OUT_MONO;  // current channel configuration

/**
 * The encoding of the audio samples.
 * @see AudioFormat#ENCODING_PCM_8BIT
 * @see AudioFormat#ENCODING_PCM_16BIT
 */
private int mAudioFormat = AudioFormat.ENCODING_PCM_16BIT;  // 16 bits per sample, guaranteed to be supported by devices

/**
 * Audio session ID
 */
private int mSessionId = 0;

public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode)
        throws IllegalArgumentException {
    this(streamType, sampleRateInHz, channelConfig, audioFormat,
            bufferSizeInBytes, mode, 0 /*session*/);
}
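A small sketch of how application code typically consults these states through the public getState() accessor; the constructor arguments are placeholders.

// Sketch only: constructor arguments are placeholders; getState() is the public accessor for mState.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        8192, AudioTrack.MODE_STATIC);

switch (track.getState()) {
case AudioTrack.STATE_UNINITIALIZED:
    // native initialization failed; do not call play()
    break;
case AudioTrack.STATE_NO_STATIC_DATA:
    // static mode: initialized, but write() must supply the clip before play() is useful
    break;
case AudioTrack.STATE_INITIALIZED:
    track.play();
    break;
}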
/**
 * Class constructor with audio session. Use this constructor when the AudioTrack must be
* attached to a particular audio session. The primary use of the audio session ID is to
* associate audio effects to a particular instance of AudioTrack: if an audio session ID
* is provided when creating an AudioEffect, this effect will be applied only to audio tracks
* and media players in the same session and not to the output mix.
* When an AudioTrack is created without specifying a session, it will create its own session
* which can be retrieved by calling the {@link #getAudioSessionId()} method.
* If a non-zero session ID is provided, this AudioTrack will share effects attached to this
* session
* with all other media players or audio tracks in the same session, otherwise a new session
* will be created for this track if none is supplied.
* @param streamType the type of the audio stream. See
* {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
* {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC},
* {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
* @param sampleRateInHz the initial source sample rate expressed in Hz.
* @param channelConfig describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_OUT_MONO} and
* {@link AudioFormat#CHANNEL_OUT_STEREO}
* @param audioFormat the format in which the audio data is represented.
* See {@link AudioFormat#ENCODING_PCM_16BIT} and
 * {@link AudioFormat#ENCODING_PCM_8BIT}
 * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
 * from for playback. If the track's creation mode is {@link #MODE_STREAM}, you can write data
 * into this buffer in chunks less than or equal to this size, and it is typical to use
 * chunks of 1/2 of the total size to permit double-buffering.
 * If the track's creation mode is {@link #MODE_STATIC},
 * this is the maximum length sample, or audio clip, that can be played by this instance.
 * See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
 * for the successful creation of an AudioTrack instance in streaming mode. Using values
 * smaller than getMinBufferSize() will result in an initialization failure.
 * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
* @param sessionId Id of audio session the AudioTrack must be attached to
* @throws java.lang.IllegalArgumentException
*/
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode, int sessionId) throws IllegalArgumentException {
    // mState already == STATE_UNINITIALIZED

    // remember which looper is associated with the AudioTrack instantiation
    Looper looper;
    if ((looper = Looper.myLooper()) == null) {
        looper = Looper.getMainLooper();
    }
    mInitializationLooper = looper;   // record the Looper of the thread creating this instance

    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);  // validate the parameters
    audioBuffSizeCheck(bufferSizeInBytes);                                          // validate the buffer size

    if (sessionId < 0) {
        throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
    }

    int[] session = new int[1];
    session[0] = sessionId;
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this),
            mStreamType, mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    if (initResult != SUCCESS) {
        loge("Error code "+initResult+" when initializing AudioTrack.");
        return; // with mState == STATE_UNINITIALIZED
    }

    mSessionId = session[0];

    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}
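To illustrate this session-ID constructor, the sketch below lets one track create its own session and then puts a second track on the same session, so that attached effects apply to both; all parameters are placeholder assumptions.

// Sketch only: parameters are placeholder assumptions.
AudioTrack first = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        65536, AudioTrack.MODE_STREAM);          // session 0: the track creates its own session
int sharedSession = first.getAudioSessionId();   // retrieve the session that was created

AudioTrack second = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        65536, AudioTrack.MODE_STREAM, sharedSession); // joins the same session, so effects
                                                       // attached to that session apply to both tracks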
// mask of all the channels supported by this implementation
// (the bitwise OR of the individual channel bits, i.e. the full set of supported channel positions)
private static final int SUPPORTED_OUT_CHANNELS =
        AudioFormat.CHANNEL_OUT_FRONT_LEFT |
        AudioFormat.CHANNEL_OUT_FRONT_RIGHT |
        AudioFormat.CHANNEL_OUT_FRONT_CENTER |
        AudioFormat.CHANNEL_OUT_LOW_FREQUENCY |
        AudioFormat.CHANNEL_OUT_BACK_LEFT |
        AudioFormat.CHANNEL_OUT_BACK_RIGHT |
        AudioFormat.CHANNEL_OUT_BACK_CENTER;
// Convenience method for the constructor's parameter checks.
// This is where constructor IllegalArgumentException-s are thrown
// postconditions:
// mStreamType is valid
// mChannelCount is valid
// mChannels is valid
// mAudioFormat is valid
// mSampleRate is valid
// mDataLoadMode is valid
private void audioParamCheck(int streamType, int sampleRateInHz,
        int channelConfig, int audioFormat, int mode) {

    //--------------
    // stream type
    if( (streamType != AudioManager.STREAM_ALARM) && (streamType != AudioManager.STREAM_MUSIC)
            && (streamType != AudioManager.STREAM_RING) && (streamType != AudioManager.STREAM_SYSTEM)
            && (streamType != AudioManager.STREAM_VOICE_CALL)
            && (streamType != AudioManager.STREAM_NOTIFICATION)
            && (streamType != AudioManager.STREAM_BLUETOOTH_SCO)
            && (streamType != AudioManager.STREAM_DTMF)) {
        throw new IllegalArgumentException("Invalid stream type.");
    }
    mStreamType = streamType;

    //--------------
    // sample rate, note these values are subject to change
    if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {   // only 4000 Hz <= sample rate <= 48000 Hz is accepted
        throw new IllegalArgumentException(sampleRateInHz
                + "Hz is not a supported sample rate.");
    }
    mSampleRate = sampleRateInHz;

    //--------------
    // channel config
    mChannelConfiguration = channelConfig;

    switch (channelConfig) {
    case AudioFormat.CHANNEL_OUT_DEFAULT: //AudioFormat.CHANNEL_CONFIGURATION_DEFAULT
    case AudioFormat.CHANNEL_OUT_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        mChannelCount = 1;                              // one channel
        mChannels = AudioFormat.CHANNEL_OUT_MONO;       // mono
        break;
    case AudioFormat.CHANNEL_OUT_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        mChannelCount = 2;                              // two channels
        mChannels = AudioFormat.CHANNEL_OUT_STEREO;     // stereo
        break;
    default:
        if (!isMultichannelConfigSupported(channelConfig)) {    // unsupported multichannel mask: throw
            // input channel configuration features unsupported channels
            throw new IllegalArgumentException("Unsupported channel configuration.");
        }
        mChannels = channelConfig;
        mChannelCount = Integer.bitCount(channelConfig);        // the number of set bits is the channel count
    }

    //--------------
    // audio format
    switch (audioFormat) {
    case AudioFormat.ENCODING_DEFAULT:                  // default is 16-bit PCM
        mAudioFormat = AudioFormat.ENCODING_PCM_16BIT;
        break;
    case AudioFormat.ENCODING_PCM_16BIT:
    case AudioFormat.ENCODING_PCM_8BIT:
        mAudioFormat = audioFormat;
        break;
    default:
        throw new IllegalArgumentException("Unsupported sample encoding."
                + " Should be ENCODING_PCM_8BIT or ENCODING_PCM_16BIT.");
    }

    //--------------
    // audio load mode
    if ( (mode != MODE_STREAM) && (mode != MODE_STATIC) ) {     // must be MODE_STREAM or MODE_STATIC, otherwise throw
        throw new IllegalArgumentException("Invalid mode.");
    }
    mDataLoadMode = mode;
}
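Since audioParamCheck() converts every bad argument into an IllegalArgumentException thrown from the constructor, calling code can either keep to the documented ranges or simply catch the exception; the values below are illustrative only.

// Sketch only: illustrative values; this mirrors the checks above rather than replacing them.
int sampleRate = 22050;                               // must lie within the accepted 4000..48000 Hz range
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;   // mono, stereo, or a supported multichannel mask
int encoding = AudioFormat.ENCODING_PCM_16BIT;        // only PCM 8-bit or 16-bit are accepted

AudioTrack track = null;
try {
    track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            channelConfig, encoding, 16384, AudioTrack.MODE_STREAM);
} catch (IllegalArgumentException iae) {
    // thrown by audioParamCheck()/audioBuffSizeCheck() for a bad stream type, sample rate,
    // channel mask, encoding, mode, or buffer size
}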
// Convenience method for the constructor's audio buffer size check.
// preconditions:
// mChannelCount is valid
// mAudioFormat is valid
// postcondition:
// mNativeBufferSizeInBytes is valid (multiple of frame size, positive)
private void audioBuffSizeCheck(int audioBufferSize) {
    // NB: this section is only valid with PCM data.
    // To update when supporting compressed formats
    int frameSizeInBytes = mChannelCount
            * (mAudioFormat == AudioFormat.ENCODING_PCM_8BIT ? 1 : 2);  // frame size = channel count * bytes per sample (1 for 8-bit, 2 for 16-bit)
    if ((audioBufferSize % frameSizeInBytes != 0) || (audioBufferSize < 1)) {  // must be a positive multiple of the frame size, otherwise throw
        throw new IllegalArgumentException("Invalid audio buffer size.");
    }

    mNativeBufferSizeInBytes = audioBufferSize;
    mNativeBufferSizeInFrames = audioBufferSize / frameSizeInBytes;     // buffer size expressed in frames
}

/**
* Convenience method to check that the channel configuration (a.k.a channel mask) is supported
* @param channelConfig the mask to validate
* @return false if the AudioTrack can't be used with such a mask
*/
private static boolean isMultichannelConfigSupported(int channelConfig) {
    // check for unsupported channels
    if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {  // a mismatch means the mask contains a bit outside the supported set
        loge("Channel configuration features unsupported channels");
        return false;
    }
    // check for unsupported multichannel combinations:
    // - FL/FR must be present
    // - L/R channels must be paired (e.g. no single L channel)
    final int frontPair =
            AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;  // front pair
    if ((channelConfig & frontPair) != frontPair) {
        loge("Front channels must be present in multichannel configurations");
        return false;
    }
    final int backPair =
            AudioFormat.CHANNEL_OUT_BACK_LEFT | AudioFormat.CHANNEL_OUT_BACK_RIGHT;    // back pair
    if ((channelConfig & backPair) != 0) {
        if ((channelConfig & backPair) != backPair) {   // only one of the two back channels is set
            loge("Rear channels can't be used independently");
            return false;
        }
    }
    // reaching this point (including the case where no back channel is requested) means the mask is supported
    return true;
}
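To see these mask rules in action, here are two illustrative masks, one that passes the checks and one that fails them.

// Illustrative masks only.
int quad = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
        | AudioFormat.CHANNEL_OUT_BACK_LEFT | AudioFormat.CHANNEL_OUT_BACK_RIGHT;
// quad passes: every bit is in SUPPORTED_OUT_CHANNELS, the front pair is present,
// and the back channels come as a pair. Integer.bitCount(quad) == 4 channels.

int broken = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
        | AudioFormat.CHANNEL_OUT_BACK_LEFT;
// broken fails: only one of the two back channels is set, so
// (broken & backPair) != 0 but (broken & backPair) != backPair.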
/**
 * Releases the native AudioTrack resources.
 */
public void release() {
    // even though native_release() stops the native AudioTrack, we need to stop
    // AudioTrack subclasses too.
    try {
        stop();
    } catch(IllegalStateException ise) {
        // don't raise an exception, we're releasing the resources.
    }
    native_release();
    mState = STATE_UNINITIALIZED;   // back to the uninitialized state
}

@Override
protected void finalize() {
    native_finalize();
}

//--------------------------------------------------------------------------
// Getters
//--------------------
/**
 * Returns the minimum valid volume value. Volume values set under this one will
 * be clamped at this value.
 * @return the minimum volume expressed as a linear attenuation.
 */
static public float getMinVolume() {
    return VOLUME_MIN;
}

/**
 * Returns the maximum valid volume value. Volume values set above this one will
 * be clamped at this value.
 * @return the maximum volume expressed as a linear attenuation.
 */
static public float getMaxVolume() {
    return VOLUME_MAX;
}

/**
 * Returns the playback state of the AudioTrack instance.
 * @see #PLAYSTATE_STOPPED
 * @see #PLAYSTATE_PAUSED
 * @see #PLAYSTATE_PLAYING
 */
public int getPlayState() {
    synchronized (mPlayStateLock) {   // read under mPlayStateLock so the value reflects the real playback state
        return mPlayState;
    }
}

getMinBufferSize() below returns the buffer size required to create an AudioTrack successfully in streaming mode. Note that this size does not guarantee smooth playback under load; choose a larger value according to how often the buffer is expected to be refilled with additional data. For example, if you intend to dynamically raise the track's source sample rate above its initial value, be sure to compute the buffer size from the highest sample rate you plan to use.

/**
* Returns the minimum buffer size required for the successful creation of an AudioTrack
* object to be created in the {@link #MODE_STREAM} mode. Note that this size doesn't
* guarantee a smooth playback under load, and higher values should be chosen according to
* the expected frequency at which the buffer will be refilled with additional data to play.
* For example, if you intend to dynamically set the source sample rate of an AudioTrack
* to a higher value than the initial source sample rate, be sure to configure the buffer size
* based on the highest planned sample rate.
* @param sampleRateInHz the source sample rate expressed in Hz.
* @param channelConfig describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_OUT_MONO} and
* {@link AudioFormat#CHANNEL_OUT_STEREO}
* @param audioFormat the format in which the audio data is represented.
* See {@link AudioFormat#ENCODING_PCM_16BIT} and
* {@link AudioFormat#ENCODING_PCM_8BIT}
* @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
* or {@link #ERROR} if unable to query for output properties,
* or the minimum buffer size expressed in bytes.
*/
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
    int channelCount = 0;
    switch(channelConfig) {                 // derive the channel count from the channel mask
    case AudioFormat.CHANNEL_OUT_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        channelCount = 1;
        break;
    case AudioFormat.CHANNEL_OUT_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        channelCount = 2;
        break;
    default:
        if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {
            // input channel configuration features unsupported channels
            loge("getMinBufferSize(): Invalid channel configuration.");
            return ERROR_BAD_VALUE;
        } else {
            channelCount = Integer.bitCount(channelConfig);
        }
    }

    if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)      // invalid audio format
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
        loge("getMinBufferSize(): Invalid audio format.");
        return ERROR_BAD_VALUE;
    }

    // sample rate, note these values are subject to change
    if ( (sampleRateInHz < SAMPLE_RATE_HZ_MIN) || (sampleRateInHz > SAMPLE_RATE_HZ_MAX) ) {  // invalid sample rate
        loge("getMinBufferSize(): " + sampleRateInHz + " Hz is not a supported sample rate.");
        return ERROR_BAD_VALUE;
    }

    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);  // query the native layer
    if (size <= 0) {
        loge("getMinBufferSize(): error querying hardware");
        return ERROR;
    }
    else {
        return size;
    }
}
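A sketch of the usual calling pattern, checking the error returns before using the result; the parameters are assumptions, and doubling the minimum size is only a common rule of thumb, not something this code requires.

// Sketch only: parameters are assumptions; the factor of 2 is a rule of thumb for head-room.
int minSize = AudioTrack.getMinBufferSize(48000,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
if (minSize == AudioTrack.ERROR_BAD_VALUE || minSize == AudioTrack.ERROR) {
    // invalid parameters, or the output properties could not be queried
} else {
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 48000,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            minSize * 2, AudioTrack.MODE_STREAM);  // extra head-room for smoother playback under load
}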
setPlaybackHeadPosition() sets the playback head position, in frames. The track must be stopped or paused for the position to change, and must have been created in static mode. The position must satisfy 0 <= position <= the number of frames the buffer can hold.

/**
 * Sets the playback head position.
* The track must be stopped or paused for the position to be changed,
* and must use the {@link #MODE_STATIC} mode.
* @param positionInFrames playback head position expressed in frames
* Zero corresponds to start of buffer.
* The position must not be greater than the buffer size in frames, or negative.
* @return error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE},
* {@link #ERROR_INVALID_OPERATION}
*/
public int setPlaybackHeadPosition(int positionInFrames) {
    if (mDataLoadMode == MODE_STREAM || mState != STATE_INITIALIZED ||
            getPlayState() == PLAYSTATE_PLAYING) {
        return ERROR_INVALID_OPERATION;
    }
    if (!(0 <= positionInFrames && positionInFrames <= mNativeBufferSizeInFrames)) {
        return ERROR_BAD_VALUE;
    }
    return native_set_position(positionInFrames);
}
setLoopPoints() sets the loop start marker, loop end marker, and loop count; the loop can be infinite. As with the previous method, the track must be stopped or paused and must use static mode. The start frame (0 means the beginning of the buffer) must be less than the total number of frames in the buffer, the end frame must not exceed it, and for looping the start must be less than the end. To disable looping, the start, end, and loop count may all be 0.

/**
 * Sets the loop points and the loop count. The loop can be infinite.
* Similarly to setPlaybackHeadPosition,
* the track must be stopped or paused for the loop points to be changed,
* and must use the {@link #MODE_STATIC} mode.
* @param startInFrames loop start marker expressed in frames
* Zero corresponds to start of buffer.
* The start marker must not be greater than or equal to the buffer size in frames, or negative.
* @param endInFrames loop end marker expressed in frames
* The total buffer size in frames corresponds to end of buffer.
* The end marker must not be greater than the buffer size in frames.
* For looping, the end marker must not be less than or equal to the start marker,
* but to disable looping
* it is permitted for start marker, end marker, and loop count to all be 0.
* @param loopCount the number of times the loop is looped.
* A value of -1 means infinite looping, and 0 disables looping.
* @return error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE},
* {@link #ERROR_INVALID_OPERATION}
*/
public int setLoopPoints(int startInFrames, int endInFrames, int loopCount) {
    if (mDataLoadMode == MODE_STREAM || mState != STATE_INITIALIZED ||
            getPlayState() == PLAYSTATE_PLAYING) {
        return ERROR_INVALID_OPERATION;
    }
    if (loopCount == 0) {
        ; // explicitly allowed as an exception to the loop region range check
    } else if (!(0 <= startInFrames && startInFrames < mNativeBufferSizeInFrames &&
            startInFrames < endInFrames && endInFrames <= mNativeBufferSizeInFrames)) {
        return ERROR_BAD_VALUE;
    }
    return native_set_loop(startInFrames, endInFrames, loopCount);
}
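A sketch of looping a static clip with the two methods above, assuming a hypothetical mono 16-bit clip in pcmClip; the frame count is derived the same way audioBuffSizeCheck() does it.

// Sketch only: pcmClip is a hypothetical mono 16-bit PCM clip.
byte[] pcmClip = new byte[16384];
int frameSize = 1 /* mono */ * 2 /* bytes per 16-bit sample */;
int clipFrames = pcmClip.length / frameSize;

AudioTrack clip = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        pcmClip.length, AudioTrack.MODE_STATIC);
clip.write(pcmClip, 0, pcmClip.length);   // supply the static data first
clip.setPlaybackHeadPosition(0);          // rewind (the track is stopped, so this is allowed)
clip.setLoopPoints(0, clipFrames, -1);    // loop the whole clip forever (-1 = infinite)
clip.play();
// later: clip.setLoopPoints(0, 0, 0) disables looping.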
setState() is only accessible to subclasses, and since subclassing AudioTrack is discouraged, the method is effectively obsolete.

/**
 * Sets the initialization state of the instance. This method was originally intended to be used
 * in an AudioTrack subclass constructor to set a subclass-specific post-initialization state.
 * However, subclasses of AudioTrack are no longer recommended, so this method is obsolete.
 * @param state the state of the AudioTrack instance
 * @deprecated Only accessible by subclasses, which are not recommended for AudioTrack.
 */
@Deprecated
protected void setState(int state) {
    mState = state;
}

//---------------------------------------------------------
// Transport control methods
//--------------------

The three transport methods below (play, stop, pause) all follow the same pattern.

/**
 * Starts playing an AudioTrack.
 * If track's creation mode is {@link #MODE_STATIC}, you must have called write() prior.
 *
 * @throws IllegalStateException
 */
public void play()
        throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("play() called on uninitialized AudioTrack.");
    }

    synchronized(mPlayStateLock) {
        native_start();
        mPlayState = PLAYSTATE_PLAYING;
    }
}

For an instance created in streaming mode, audio keeps playing until everything already written to the buffer has been played; to stop immediately, call pause() and then flush() to discard the data that has not been played yet.

/**
 * Stops playing the audio data.
 * When used on an instance created in {@link #MODE_STREAM} mode, audio will stop playing
 * after the last buffer that was written has been played. For an immediate stop, use
 * {@link #pause()}, followed by {@link #flush()} to discard audio data that hasn't been played
 * back yet.
 * @throws IllegalStateException
 */
public void stop()
        throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("stop() called on uninitialized AudioTrack.");
    }

    // stop playing
    synchronized(mPlayStateLock) {
        native_stop();
        mPlayState = PLAYSTATE_STOPPED;
    }
}

pause() does not discard data that has not been played yet; a subsequent play() resumes from where playback left off. Use flush() to discard the buffered data.

/**
 * Pauses the playback of the audio data. Data that has not been played
 * back will not be discarded. Subsequent calls to {@link #play} will play
 * this data back. See {@link #flush()} to discard this data.
 *
 * @throws IllegalStateException
 */
public void pause()
        throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("pause() called on uninitialized AudioTrack.");
    }
    //logd("pause()");

    // pause playback
    synchronized(mPlayStateLock) {
        native_pause();
        mPlayState = PLAYSTATE_PAUSED;
    }
}

//---------------------------------------------------------
// Audio data supply
//--------------------

flush() discards all audio data currently queued for playback; it has no effect unless the track is stopped or paused and was created in streaming mode.

/**
 * Flushes the audio data currently queued for playback. Any data that has
 * not been played back will be discarded. No-op if not stopped or paused,
 * or if the track's creation mode is not {@link #MODE_STREAM}.
 */
public void flush() {
    if (mState == STATE_INITIALIZED) {
        // flush the data in native layer
        native_flush();
    }
}
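To make the stop-versus-pause distinction concrete, here is a short sketch for a streaming track; streamTrack stands for any initialized MODE_STREAM instance that is currently playing.

// streamTrack: any initialized MODE_STREAM AudioTrack that is currently playing.

// Graceful stop: everything already written keeps playing to the end.
streamTrack.stop();

// Immediate stop: pause playback, then discard whatever is still queued.
streamTrack.pause();
streamTrack.flush();   // flush() only has an effect while stopped or paused, in streaming mode

// pause() without flush() keeps the data, so a later play() resumes where it left off;
// after flush(), new data must be written before play() produces sound again.

// When completely done with the track, release the native resources.
streamTrack.release();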
//--------------------------------------------------------------------------
// Audio effects management
//--------------------

attachAuxEffect() attaches an auxiliary effect, typically a reverb, to the AudioTrack; the amount of the track's signal sent to the effect is set with setAuxEffectSendLevel(). After creating the auxiliary effect, retrieve its ID with AudioEffect.getId() and pass that ID to this method. To detach the effect, call this method again with a null effect ID.

/**
 * Attaches an auxiliary effect to the audio track. A typical auxiliary
* effect is a reverberation effect which can be applied on any sound source
* that directs a certain amount of its energy to this effect. This amount
* is defined by setAuxEffectSendLevel().
* {@see #setAuxEffectSendLevel(float)}.
* <p>After creating an auxiliary effect (e.g.
* {@link android.media.audiofx.EnvironmentalReverb}), retrieve its ID with
* {@link android.media.audiofx.AudioEffect#getId()} and use it when calling
* this method to attach the audio track to the effect.
* <p>To detach the effect from the audio track, call this method with a
* null effect id.
*
 * @param effectId system wide unique id of the effect to attach
* @return error code or success, see {@link #SUCCESS},
* {@link #ERROR_INVALID_OPERATION}, {@link #ERROR_BAD_VALUE}
*/
public int attachAuxEffect(int effectId) {
if (mState == STATE_UNINITIALIZED) {
return ERROR_INVALID_OPERATION;
}
return native_attachAuxEffect(effectId);
}

setAuxEffectSendLevel() sets the send level to the attached auxiliary effect, in the range 0.0f to 1.0f (values outside this range are clamped). The default is 0.0f, so even after an effect has been attached, this method must be called for the effect to be applied. Note that the level is a raw scalar; UI controls should be scaled logarithmically. The gain applied by the audio framework ranges from -72dB to 0dB, so a suitable conversion from a linear UI input x is: level = 0 when x == 0, and level = 10^(72*(x-R)/20/R) when 0 < x <= R.

/**
 * Sets the send level of the audio track to the attached auxiliary effect
* {@link #attachAuxEffect(int)}. The level value range is 0.0f to 1.0f.
* Values are clamped to the (0.0f, 1.0f) interval if outside this range.
* <p>By default the send level is 0.0f, so even if an effect is attached to the player
* this method must be called for the effect to be applied.
 * <p>Note that the passed level value is a raw scalar. UI controls should be scaled
 * logarithmically: the gain applied by the audio framework ranges from -72dB to 0dB,
 * so an appropriate conversion from linear UI input x to level is:
 * x == 0 -> level = 0
 * 0 < x <= R -> level = 10^(72*(x-R)/20/R)
*
* @param level send level scalar
* @return error code or success, see {@link #SUCCESS},
* {@link #ERROR_INVALID_OPERATION}
*/
public int setAuxEffectSendLevel(float level) {
if (mState == STATE_UNINITIALIZED) {
return ERROR_INVALID_OPERATION;
}
// clamp the level
if (level < getMinVolume()) {
level = getMinVolume();
}
if (level > getMaxVolume()) {
level = getMaxVolume();
}
native_setAuxEffectSendLevel(level);
return SUCCESS;
}
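Putting the two effect methods together: the sketch below attaches an android.media.audiofx.EnvironmentalReverb created on audio session 0 (the output mix) and converts a linear UI slider value to a send level with the logarithmic mapping quoted above; track, R and x are assumptions.

// Sketch only: "track" is an existing, initialized AudioTrack; R and x are assumed UI values.
EnvironmentalReverb reverb = new EnvironmentalReverb(0 /* priority */, 0 /* session 0 = output mix */);
reverb.setEnabled(true);

track.attachAuxEffect(reverb.getId());   // attach the track to the effect by its ID

// Convert a linear UI value x in [0, R] to a send level using the mapping from the Javadoc:
// x == 0 -> 0, otherwise level = 10^(72*(x-R)/20/R)
final float R = 100f;                    // assumed slider maximum
float x = 75f;                           // assumed slider position
float level = (x == 0f) ? 0f : (float) Math.pow(10.0, 72.0 * (x - R) / 20.0 / R);
track.setAuxEffectSendLevel(level);      // default is 0.0f, so the effect stays silent until this call

// To detach the effect later:
track.attachAuxEffect(0);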
