Android 12 (S) MultiMedia Learning (9): MediaCodec
In this section we look at how MediaCodec works. Relevant source path:
http://aospxref.com/android-12.0.0_r3/xref/frameworks/av/media/libstagefright/MediaCodec.cpp
1. Creating a MediaCodec object
MediaCodec exposes two static factory methods for creating a MediaCodec object, CreateByType and CreateByComponentName. Let's look at each in turn.
CreateByType takes a mimetype plus a flag indicating whether an encoder is wanted, queries MediaCodecList for suitable codecs, creates a MediaCodec object, and finally initializes it with the component name that was found:
// static
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid) {
    sp<AMessage> format;
    return CreateByType(looper, mime, encoder, err, pid, uid, format);
}

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid, sp<AMessage> format) {
    Vector<AString> matchingCodecs;

    MediaCodecList::findMatchingCodecs(
            mime.c_str(),
            encoder,
            0,
            format,
            &matchingCodecs);

    if (err != NULL) {
        *err = NAME_NOT_FOUND;
    }
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);
        AString componentName = matchingCodecs[i];
        status_t ret = codec->init(componentName);
        if (err != NULL) {
            *err = ret;
        }
        if (ret == OK) {
            return codec;
        }
        ALOGD("Allocating component '%s' failed (%d), try next one.",
                componentName.c_str(), ret);
    }
    return NULL;
}
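The selection-and-fallback loop above can be modeled as a small standalone sketch. The `pickFirstWorkingCodec` name and the `tryInit` hook are invented for illustration; in the real loop the candidates come from findMatchingCodecs and each one is tried with MediaCodec::init.

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy model of the CreateByType loop: try each matching component in
// priority order and return the first one whose init succeeds.
// tryInit stands in for MediaCodec::init(); an empty string stands in
// for the NULL return when every candidate fails.
std::string pickFirstWorkingCodec(
        const std::vector<std::string> &matchingCodecs,
        const std::function<bool(const std::string &)> &tryInit) {
    for (const std::string &name : matchingCodecs) {
        if (tryInit(name)) {
            return name;   // like CreateByType returning the codec
        }
        // init failed -> fall through and try the next candidate
    }
    return "";             // like CreateByType returning NULL
}
```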
CreateByComponentName is similar to the method above; since the componentName is given as a parameter, the MediaCodecList lookup step is not needed.
// static
sp<MediaCodec> MediaCodec::CreateByComponentName(
        const sp<ALooper> &looper, const AString &name, status_t *err, pid_t pid, uid_t uid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);

    const status_t ret = codec->init(name);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}
init
status_t MediaCodec::init(const AString &name) {
    // Save the component name
    mInitName = name;
    mCodecInfo.clear();

    bool secureCodec = false;
    const char *owner = "";
    // Fetch the codecInfo for this component name from MediaCodecList
    if (!name.startsWith("android.filter.")) {
        status_t err = mGetCodecInfo(name, &mCodecInfo);
        if (err != OK) {
            mCodec = NULL; // remove the codec.
            return err;
        }
        if (mCodecInfo == nullptr) {
            ALOGE("Getting codec info with name '%s' failed", name.c_str());
            return NAME_NOT_FOUND;
        }
        secureCodec = name.endsWith(".secure");
        Vector<AString> mediaTypes;
        mCodecInfo->getSupportedMediaTypes(&mediaTypes);
        for (size_t i = 0; i < mediaTypes.size(); ++i) {
            if (mediaTypes[i].startsWith("video/")) {
                mIsVideo = true;
                break;
            }
        }
        // Get the owner name
        owner = mCodecInfo->getOwnerName();
    }

    // Create a CodecBase object according to the owner
    mCodec = mGetCodecBase(name, owner);
    if (mCodec == NULL) {
        ALOGE("Getting codec base with name '%s' (owner='%s') failed", name.c_str(), owner);
        return NAME_NOT_FOUND;
    }

    // If the codecInfo says this is video, create a dedicated looper for it
    if (mIsVideo) {
        // video codec needs dedicated looper
        if (mCodecLooper == NULL) {
            mCodecLooper = new ALooper;
            mCodecLooper->setName("CodecLooper");
            mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        }
        mCodecLooper->registerHandler(mCodec);
    } else {
        mLooper->registerHandler(mCodec);
    }

    mLooper->registerHandler(this);

    // Register a callback on the CodecBase
    mCodec->setCallback(
            std::unique_ptr<CodecBase::CodecCallback>(
                    new CodecCallback(new AMessage(kWhatCodecNotify, this))));
    // Get the CodecBase's BufferChannel
    mBufferChannel = mCodec->getBufferChannel();
    // Register a callback on the BufferChannel
    mBufferChannel->setCallback(
            std::unique_ptr<CodecBase::BufferCallback>(
                    new BufferCallback(new AMessage(kWhatCodecNotify, this))));

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    if (mCodecInfo) {
        msg->setObject("codecInfo", mCodecInfo);
        // name may be different from mCodecInfo->getCodecName() if we stripped
        // ".secure"
    }
    msg->setString("name", name);

    // ......
    err = PostAndAwaitResponse(msg, &response);
    return err;
}
The init method does the following:
1. Fetch the codecInfo for the componentName from MediaCodecList (mGetCodecInfo is a function pointer assigned in the constructor, as is mGetCodecBase), inspect the codecInfo's media types to decide whether this MediaCodec instance is for video or audio, and get the component's owner. Why the owner? Android currently has two codec frameworks, OMX and Codec 2.0, and the component owner marks which of the two the component belongs to.
//static
sp<CodecBase> MediaCodec::GetCodecBase(const AString &name, const char *owner) {
    if (owner) {
        if (strcmp(owner, "default") == 0) {
            return new ACodec;
        } else if (strncmp(owner, "codec2", 6) == 0) {
            return CreateCCodec();
        }
    }

    if (name.startsWithIgnoreCase("c2.")) {
        return CreateCCodec();
    } else if (name.startsWithIgnoreCase("omx.")) {
        // at this time only ACodec specifies a mime type.
        return new ACodec;
    } else if (name.startsWithIgnoreCase("android.filter.")) {
        return new MediaFilter;
    } else {
        return NULL;
    }
}
The code that creates the CodecBase is short, so it is quoted in full above. There are two decision mechanisms: the choice can be made from the owner string, or from the prefix of the component name.
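The two-tier dispatch can be sketched as a standalone function. The name `pickCodecBase` and the string return value are invented for illustration (the real method returns a CodecBase object and matches name prefixes case-insensitively, which is omitted here for brevity):

```cpp
#include <string>

// Returns true if s begins with prefix (case-sensitive simplification
// of AString::startsWithIgnoreCase).
static bool startsWith(const std::string &s, const std::string &prefix) {
    return s.rfind(prefix, 0) == 0;
}

// Sketch of the two-tier dispatch in MediaCodec::GetCodecBase: the
// owner string wins if present; otherwise the component-name prefix
// decides. A label is returned instead of a real CodecBase object.
std::string pickCodecBase(const std::string &owner, const std::string &name) {
    if (!owner.empty()) {
        if (owner == "default") return "ACodec";           // OMX component
        if (startsWith(owner, "codec2")) return "CCodec";  // Codec 2.0 component
    }
    if (startsWith(name, "c2.")) return "CCodec";
    if (startsWith(name, "omx.")) return "ACodec";
    if (startsWith(name, "android.filter.")) return "MediaFilter";
    return "";  // corresponds to returning NULL
}
```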
2. Set a looper for the CodecBase. For video a new looper is created for it; for audio the looper passed down from the upper layer is reused. MediaCodec itself always uses the looper passed down from the upper layer.
3. Register a callback on the CodecBase, a CodecCallback object. The AMessage target stored in the CodecCallback is the MediaCodec object, so callback messages emitted by the CodecBase are relayed through the CodecCallback to the MediaCodec for handling.
4. Get the CodecBase's BufferChannel.
5. Register a callback on the BufferChannel, a BufferCallback object; the mechanism is the same as for the CodecBase callback.
6. Post a kWhatInit message, handled in onMessageReceived, which repacks the codecInfo and componentName to initialize the CodecBase object:
setState(INITIALIZING);

sp<RefBase> codecInfo;
(void)msg->findObject("codecInfo", &codecInfo);
AString name;
CHECK(msg->findString("name", &name));

sp<AMessage> format = new AMessage;
if (codecInfo) {
    format->setObject("codecInfo", codecInfo);
}
format->setString("componentName", name);

mCodec->initiateAllocateComponent(format);
At this point the creation of the MediaCodec is complete. How the CodecBase itself is created and initialized will be studied separately.
2. configure
The configure code is long but straightforward! Only a small part is quoted here:
sp<AMessage> msg = new AMessage(kWhatConfigure, this);
msg->setMessage("format", format);
msg->setInt32("flags", flags);
msg->setObject("surface", surface);

if (crypto != NULL || descrambler != NULL) {
    if (crypto != NULL) {
        msg->setPointer("crypto", crypto.get());
    } else {
        msg->setPointer("descrambler", descrambler.get());
    }
    if (mMetricsHandle != 0) {
        mediametrics_setInt32(mMetricsHandle, kCodecCrypto, 1);
    }
} else if (mFlags & kFlagIsSecure) {
    ALOGW("Crypto or descrambler should be given for secure codec");
}

err = PostAndAwaitResponse(msg, &response);
This method does two things:
1. Parse the parameters out of the incoming format and store them in the MediaCodec.
2. Repack the format, surface, crypto and other information into a message, handled in onMessageReceived:
case kWhatConfigure:
{
    sp<RefBase> obj;
    CHECK(msg->findObject("surface", &obj));

    sp<AMessage> format;
    CHECK(msg->findMessage("format", &format));

    // setSurface
    if (obj != NULL) {
        if (!format->findInt32(KEY_ALLOW_FRAME_DROP, &mAllowFrameDroppingBySurface)) {
            // allow frame dropping by surface by default
            mAllowFrameDroppingBySurface = true;
        }
        format->setObject("native-window", obj);
        status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
        if (err != OK) {
            PostReplyWithError(replyID, err);
            break;
        }
    } else {
        // we are not using surface so this variable is not used, but initialize sensibly anyway
        mAllowFrameDroppingBySurface = false;

        handleSetSurface(NULL);
    }

    uint32_t flags;
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    if (flags & CONFIGURE_FLAG_USE_BLOCK_MODEL) {
        if (!(mFlags & kFlagIsAsync)) {
            PostReplyWithError(replyID, INVALID_OPERATION);
            break;
        }
        mFlags |= kFlagUseBlockModel;
    }
    mReplyID = replyID;
    setState(CONFIGURING);

    // Get the crypto object
    void *crypto;
    if (!msg->findPointer("crypto", &crypto)) {
        crypto = NULL;
    }
    // Hand the crypto object to the BufferChannel
    mCrypto = static_cast<ICrypto *>(crypto);
    mBufferChannel->setCrypto(mCrypto);

    // Get the descrambler
    void *descrambler;
    if (!msg->findPointer("descrambler", &descrambler)) {
        descrambler = NULL;
    }
    // Hand the descrambler to the BufferChannel
    mDescrambler = static_cast<IDescrambler *>(descrambler);
    mBufferChannel->setDescrambler(mDescrambler);

    // Determine from the flags whether this is an encoder
    format->setInt32("flags", flags);
    if (flags & CONFIGURE_FLAG_ENCODE) {
        format->setInt32("encoder", true);
        mFlags |= kFlagIsEncoder;
    }

    // Extract the csd buffers
    extractCSD(format);

    // Check whether tunnel mode is requested
    int32_t tunneled;
    if (format->findInt32("feature-tunneled-playback", &tunneled) && tunneled != 0) {
        ALOGI("Configuring TUNNELED video playback.");
        mTunneled = true;
    } else {
        mTunneled = false;
    }

    int32_t background = 0;
    if (format->findInt32("android._background-mode", &background) && background) {
        androidSetThreadPriority(gettid(), ANDROID_PRIORITY_BACKGROUND);
    }

    // Call the CodecBase's configure method
    mCodec->initiateConfigureComponent(format);
    break;
}
configure is crucial: player features such as whether a surface is used, tunnel mode, encrypted playback, and whether the codec acts as an encoder are all set up here.
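A minimal sketch of the flag handling shown above. The two flag values mirror CONFIGURE_FLAG_ENCODE and CONFIGURE_FLAG_USE_BLOCK_MODEL in the Android 12 headers, but the function and its return encoding are invented for illustration:

```cpp
#include <cstdint>

// Flag values as defined in MediaCodec.h (verify against your tree).
constexpr uint32_t CONFIGURE_FLAG_ENCODE          = 1;
constexpr uint32_t CONFIGURE_FLAG_USE_BLOCK_MODEL = 2;

// Sketch of the checks in kWhatConfigure. Returns -1 for the
// INVALID_OPERATION case (block model requested without async mode),
// otherwise a bitmask of the resulting state:
//   bit0 = encoder (kFlagIsEncoder), bit1 = block model (kFlagUseBlockModel).
int applyConfigureFlags(bool isAsync, uint32_t flags) {
    int state = 0;
    if (flags & CONFIGURE_FLAG_USE_BLOCK_MODEL) {
        if (!isAsync) {
            return -1;   // PostReplyWithError(replyID, INVALID_OPERATION)
        }
        state |= 2;      // mFlags |= kFlagUseBlockModel
    }
    if (flags & CONFIGURE_FLAG_ENCODE) {
        state |= 1;      // mFlags |= kFlagIsEncoder
    }
    return state;
}
```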
3. start
Once configure is done, the MediaCodec state is set to CONFIGURED and playback can begin.
setState(STARTING);
mCodec->initiateStart();
The start method is simple: it sets the state to STARTING and calls the CodecBase's start method. Presumably, once the CodecBase start succeeds, a callback moves the state to STARTED.
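That handshake can be sketched as a tiny state machine. State names follow MediaCodec, but the two transition functions are illustrative stand-ins: the real STARTED transition happens when the CodecBase reports start completion back through the CodecCallback.

```cpp
// Minimal sketch of the state dance around start(): the API call only
// moves the codec to STARTING; the move to STARTED happens later, when
// the CodecBase's completion callback arrives.
enum class State { Configured, Starting, Started };

// start() is only legal from CONFIGURED (simplified).
State onStartCalled(State s) {
    return (s == State::Configured) ? State::Starting : s;
}

// Callback from the CodecBase flips STARTING -> STARTED.
State onCodecStartCompleted(State s) {
    return (s == State::Starting) ? State::Started : s;
}
```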
4. setCallback
setCallback should really come before start, because only after a callback is set can the upper layer use MediaCodec properly. The callback forwards the events that the lower layers deliver to MediaCodec up to the next layer, which handles events such as CB_INPUT_AVAILABLE.
The method is very simple:
sp<AMessage> callback;
CHECK(msg->findMessage("callback", &callback));
mCallback = callback;
5. The upper layer gets buffers
This involves two pairs of methods, four in total: getInputBuffers / getOutputBuffers and getInputBuffer / getOutputBuffer.
getInputBuffers / getOutputBuffers fetch the decoder's input or output buffer array in one go. The buffers created in the CodecBase are all managed by the BufferChannel, so these end up calling the BufferChannel's getInputBufferArray method:
status_t MediaCodec::getInputBuffers(Vector<sp<MediaCodecBuffer> > *buffers) const {
    sp<AMessage> msg = new AMessage(kWhatGetBuffers, this);
    msg->setInt32("portIndex", kPortIndexInput);
    msg->setPointer("buffers", buffers);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

case kWhatGetBuffers:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));
    if (!isExecuting() || (mFlags & kFlagIsAsync)) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    int32_t portIndex;
    CHECK(msg->findInt32("portIndex", &portIndex));

    Vector<sp<MediaCodecBuffer> > *dstBuffers;
    CHECK(msg->findPointer("buffers", (void **)&dstBuffers));

    dstBuffers->clear();
    if (portIndex != kPortIndexInput || !mHaveInputSurface) {
        if (portIndex == kPortIndexInput) {
            mBufferChannel->getInputBufferArray(dstBuffers);
        } else {
            mBufferChannel->getOutputBufferArray(dstBuffers);
        }
    }

    (new AMessage)->postReply(replyID);
    break;
}
getInputBuffer / getOutputBuffer look up a buffer by index in MediaCodec's buffer queues; the elements of those queues are added by the CodecBase through its callbacks:
status_t MediaCodec::getOutputBuffer(size_t index, sp<MediaCodecBuffer> *buffer) {
    sp<AMessage> format;
    return getBufferAndFormat(kPortIndexOutput, index, buffer, &format);
}

status_t MediaCodec::getBufferAndFormat(
        size_t portIndex, size_t index,
        sp<MediaCodecBuffer> *buffer, sp<AMessage> *format) {
    if (buffer == NULL) {
        ALOGE("getBufferAndFormat - null MediaCodecBuffer");
        return INVALID_OPERATION;
    }
    if (format == NULL) {
        ALOGE("getBufferAndFormat - null AMessage");
        return INVALID_OPERATION;
    }
    buffer->clear();
    format->clear();

    if (!isExecuting()) {
        ALOGE("getBufferAndFormat - not executing");
        return INVALID_OPERATION;
    }

    Mutex::Autolock al(mBufferLock);

    std::vector<BufferInfo> &buffers = mPortBuffers[portIndex];
    if (index >= buffers.size()) {
        return INVALID_OPERATION;
    }

    const BufferInfo &info = buffers[index];
    if (!info.mOwnedByClient) {
        return INVALID_OPERATION;
    }

    *buffer = info.mData;
    *format = info.mData->format();

    return OK;
}
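The gatekeeping in getBufferAndFormat boils down to two checks, sketched here with simplified stand-in types (the struct and function names are invented; the real code works on BufferInfo entries in mPortBuffers):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for the per-slot BufferInfo; only the ownership flag matters
// for this sketch.
struct SlotInfo { bool ownedByClient; };

// Sketch of the two checks getBufferAndFormat applies before handing a
// buffer out: the index must be in range, and the slot must currently
// be owned by the client (i.e. the codec announced it and the client
// has not yet queued or released it).
bool canClientTouchBuffer(const std::vector<SlotInfo> &slots, size_t index) {
    if (index >= slots.size()) return false;   // -> INVALID_OPERATION
    return slots[index].ownedByClient;         // false -> INVALID_OPERATION
}
```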
6. Buffer handling
Next, let's look at how input / output buffers are processed.
kPortIndexInput
The BufferChannel calls the BufferCallback's onInputBufferAvailable method to add an input buffer to the queue:
void BufferCallback::onInputBufferAvailable(
size_t index, const sp<MediaCodecBuffer> &buffer) {
sp<AMessage> notify(mNotify->dup());
notify->setInt32("what", kWhatFillThisBuffer);
notify->setSize("index", index);
notify->setObject("buffer", buffer);
notify->post();
}
The handling in onMessageReceived is not too long; it does five things:
case kWhatFillThisBuffer:
{
    // Add the buffer to mPortBuffers and its index to mAvailPortBuffers
    /* size_t index = */updateBuffers(kPortIndexInput, msg);

    // In the flushing/stopping/releasing states, clear the available
    // indices and discard the buffer contents
    if (mState == FLUSHING
            || mState == STOPPING
            || mState == RELEASING) {
        returnBuffersToCodecOnPort(kPortIndexInput);
        break;
    }

    // If there are csd buffers, write them to the decoder first, then
    // clear them; a seek/flush may set csd buffers again later
    if (!mCSD.empty()) {
        ssize_t index = dequeuePortBuffer(kPortIndexInput);
        CHECK_GE(index, 0);

        status_t err = queueCSDInputBuffer(index);

        if (err != OK) {
            ALOGE("queueCSDInputBuffer failed w/ error %d",
                    err);

            setStickyError(err);
            postActivityNotificationIfPossible();

            cancelPendingDequeueOperations();
        }
        break;
    }

    // Handle the buffers in mLeftover first; not encountered yet
    if (!mLeftover.empty()) {
        ssize_t index = dequeuePortBuffer(kPortIndexInput);
        CHECK_GE(index, 0);

        status_t err = handleLeftover(index);
        if (err != OK) {
            setStickyError(err);
            postActivityNotificationIfPossible();
            cancelPendingDequeueOperations();
        }
        break;
    }

    // If buffers are handled asynchronously (a callback was set), call
    // onInputBufferAvailable to notify the upper layer; otherwise wait
    // for a synchronous dequeue call
    if (mFlags & kFlagIsAsync) {
        if (!mHaveInputSurface) {
            if (mState == FLUSHED) {
                mHavePendingInputBuffers = true;
            } else {
                onInputBufferAvailable();
            }
        }
    } else if (mFlags & kFlagDequeueInputPending) {
        CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));

        ++mDequeueInputTimeoutGeneration;
        mFlags &= ~kFlagDequeueInputPending;
        mDequeueInputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }
    break;
}
1. Call updateBuffers to save the delivered input buffer into mPortBuffers[kPortIndexInput] and its index into mAvailPortBuffers.
2. Check whether the current state requires discarding all buffers.
3. If there are csd buffers, write them to the decoder first.
4. Finish processing the buffers in mLeftover first; not encountered yet.
5. If a callback was set (asynchronous mode), call onInputBufferAvailable to notify the upper layer; otherwise wait for a synchronous call.
void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    // Loop until there are no more indices in mAvailPortBuffers
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        // Notify the upper layer
        msg->post();
    }
}

ssize_t MediaCodec::dequeuePortBuffer(int32_t portIndex) {
    CHECK(portIndex == kPortIndexInput || portIndex == kPortIndexOutput);

    // Peek the first available index in mAvailPortBuffers, then take the
    // buffer at that position in mPortBuffers
    BufferInfo *info = peekNextPortBuffer(portIndex);
    if (!info) {
        return -EAGAIN;
    }

    List<size_t> *availBuffers = &mAvailPortBuffers[portIndex];
    size_t index = *availBuffers->begin();
    CHECK_EQ(info, &mPortBuffers[portIndex][index]);
    // Erase the first index
    availBuffers->erase(availBuffers->begin());

    // mOwnedByClient is to be studied together with the CodecBase
    CHECK(!info->mOwnedByClient);
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = true;

        // set image-data
        if (info->mData->format() != NULL) {
            sp<ABuffer> imageData;
            if (info->mData->format()->findBuffer("image-data", &imageData)) {
                info->mData->meta()->setBuffer("image-data", imageData);
            }
            int32_t left, top, right, bottom;
            if (info->mData->format()->findRect("crop", &left, &top, &right, &bottom)) {
                info->mData->meta()->setRect("crop-rect", left, top, right, bottom);
            }
        }
    }

    // Return the index
    return index;
}
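The index free-list that dequeuePortBuffer drains can be modeled with a small sketch. `Port`, `drainAvailable` and `demoDrain` are invented stand-ins for mAvailPortBuffers / mPortBuffers; the point is the FIFO discipline and the ownership flip.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Toy model of one port: the codec pushes free indices to the back,
// dequeuePortBuffer pops from the front and marks the slot client-owned.
struct Port {
    std::vector<bool> ownedByClient;  // one flag per buffer slot
    std::deque<size_t> avail;         // indices the client may dequeue
};

// Returns the dequeued index, or -1 when nothing is available
// (the real code returns -EAGAIN).
long dequeuePortBuffer(Port &port) {
    if (port.avail.empty()) return -1;
    size_t index = port.avail.front();
    port.avail.pop_front();
    port.ownedByClient[index] = true;  // client now holds this slot
    return static_cast<long>(index);
}

// Drain every available index in order, as onInputBufferAvailable does
// in its while loop.
std::vector<long> drainAvailable(Port &port) {
    std::vector<long> out;
    long i;
    while ((i = dequeuePortBuffer(port)) >= 0) {
        out.push_back(i);
    }
    return out;
}

// Illustrative driver: three slots, announced in the order 2, 0, 1.
std::vector<long> demoDrain() {
    Port p;
    p.ownedByClient.assign(3, false);
    p.avail = {2, 0, 1};
    return drainAvailable(p);
}
```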
onInputBufferAvailable notifies the upper layer of all the input buffer indices in the queue in one go. With an index in hand, the upper layer can call getInputBuffer to get the buffer, fill it, and finally call queueInputBuffer to hand it to the decoder. Let's see how the write happens.
status_t MediaCodec::queueInputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t presentationTimeUs,
        uint32_t flags,
        AString *errorDetailMsg) {
    if (errorDetailMsg != NULL) {
        errorDetailMsg->clear();
    }

    sp<AMessage> msg = new AMessage(kWhatQueueInputBuffer, this);
    msg->setSize("index", index);
    msg->setSize("offset", offset);
    msg->setSize("size", size);
    msg->setInt64("timeUs", presentationTimeUs);
    msg->setInt32("flags", flags);
    msg->setPointer("errorDetailMsg", errorDetailMsg);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
queueInputBuffer packs the index, pts, flags, size and so on into a message; the actual work is done in onMessageReceived:
case kWhatQueueInputBuffer:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = UNKNOWN_ERROR;
    // If mLeftover is not empty, append the message to it first
    if (!mLeftover.empty()) {
        mLeftover.push_back(msg);
        size_t index;
        msg->findSize("index", &index);
        err = handleLeftover(index);
    } else {
        // Otherwise handle it directly in onQueueInputBuffer
        err = onQueueInputBuffer(msg);
    }

    PostReplyWithError(replyID, err);
    break;
}
There are two paths: one appends the message to the mLeftover queue and calls handleLeftover; the other calls onQueueInputBuffer directly. Since we haven't encountered mLeftover yet, let's look at how onQueueInputBuffer works.
status_t MediaCodec::onQueueInputBuffer(const sp<AMessage> &msg) {
    size_t index;
    size_t offset;
    size_t size;
    int64_t timeUs;
    uint32_t flags;
    CHECK(msg->findSize("index", &index));
    CHECK(msg->findInt64("timeUs", &timeUs));
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    std::shared_ptr<C2Buffer> c2Buffer;
    sp<hardware::HidlMemory> memory;
    sp<RefBase> obj;
    // c2buffer / memory are used when queueing csd or encrypted buffers
    if (msg->findObject("c2buffer", &obj)) {
        CHECK(obj);
        c2Buffer = static_cast<WrapperObject<std::shared_ptr<C2Buffer>> *>(obj.get())->value;
    } else if (msg->findObject("memory", &obj)) {
        CHECK(obj);
        memory = static_cast<WrapperObject<sp<hardware::HidlMemory>> *>(obj.get())->value;
        CHECK(msg->findSize("offset", &offset));
    } else {
        CHECK(msg->findSize("offset", &offset));
    }
    const CryptoPlugin::SubSample *subSamples;
    size_t numSubSamples;
    const uint8_t *key;
    const uint8_t *iv;
    CryptoPlugin::Mode mode = CryptoPlugin::kMode_Unencrypted;

    CryptoPlugin::SubSample ss;
    CryptoPlugin::Pattern pattern;

    if (msg->findSize("size", &size)) {
        if (hasCryptoOrDescrambler()) {
            ss.mNumBytesOfClearData = size;
            ss.mNumBytesOfEncryptedData = 0;

            subSamples = &ss;
            numSubSamples = 1;
            key = NULL;
            iv = NULL;
            pattern.mEncryptBlocks = 0;
            pattern.mSkipBlocks = 0;
        }
    } else if (!c2Buffer) {
        if (!hasCryptoOrDescrambler()) {
            return -EINVAL;
        }

        CHECK(msg->findPointer("subSamples", (void **)&subSamples));
        CHECK(msg->findSize("numSubSamples", &numSubSamples));
        CHECK(msg->findPointer("key", (void **)&key));
        CHECK(msg->findPointer("iv", (void **)&iv));
        CHECK(msg->findInt32("encryptBlocks", (int32_t *)&pattern.mEncryptBlocks));
        CHECK(msg->findInt32("skipBlocks", (int32_t *)&pattern.mSkipBlocks));

        int32_t tmp;
        CHECK(msg->findInt32("mode", &tmp));

        mode = (CryptoPlugin::Mode)tmp;

        size = 0;
        for (size_t i = 0; i < numSubSamples; ++i) {
            size += subSamples[i].mNumBytesOfClearData;
            size += subSamples[i].mNumBytesOfEncryptedData;
        }
    }

    if (index >= mPortBuffers[kPortIndexInput].size()) {
        return -ERANGE;
    }
    // Fetch the buffer at that index from mPortBuffers[kPortIndexInput]
    BufferInfo *info = &mPortBuffers[kPortIndexInput][index];
    sp<MediaCodecBuffer> buffer = info->mData;

    if (c2Buffer || memory) {
        sp<AMessage> tunings;
        CHECK(msg->findMessage("tunings", &tunings));
        onSetParameters(tunings);

        status_t err = OK;
        if (c2Buffer) {
            err = mBufferChannel->attachBuffer(c2Buffer, buffer);
        } else if (memory) {
            err = mBufferChannel->attachEncryptedBuffer(
                    memory, (mFlags & kFlagIsSecure), key, iv, mode, pattern,
                    offset, subSamples, numSubSamples, buffer);
        } else {
            err = UNKNOWN_ERROR;
        }

        if (err == OK && !buffer->asC2Buffer()
                && c2Buffer && c2Buffer->data().type() == C2BufferData::LINEAR) {
            C2ConstLinearBlock block{c2Buffer->data().linearBlocks().front()};
            if (block.size() > buffer->size()) {
                C2ConstLinearBlock leftover = block.subBlock(
                        block.offset() + buffer->size(), block.size() - buffer->size());
                sp<WrapperObject<std::shared_ptr<C2Buffer>>> obj{
                    new WrapperObject<std::shared_ptr<C2Buffer>>{
                        C2Buffer::CreateLinearBuffer(leftover)}};
                msg->setObject("c2buffer", obj);
                mLeftover.push_front(msg);
                // Not sending EOS if we have leftovers
                flags &= ~BUFFER_FLAG_EOS;
            }
        }

        offset = buffer->offset();
        size = buffer->size();
        if (err != OK) {
            return err;
        }
    }

    if (buffer == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    if (offset + size > buffer->capacity()) {
        return -EINVAL;
    }
    // Pack the passed-in offset and pts into the buffer
    buffer->setRange(offset, size);
    buffer->meta()->setInt64("timeUs", timeUs);
    if (flags & BUFFER_FLAG_EOS) {
        // If this is eos, set the flag in the buffer as well
        buffer->meta()->setInt32("eos", true);
    }

    // If a csd buffer is being written, raise the flag to tell the codec
    if (flags & BUFFER_FLAG_CODECCONFIG) {
        buffer->meta()->setInt32("csd", true);
    }

    // Not entirely sure what this flag is used for
    if (mTunneled) {
        TunnelPeekState previousState = mTunnelPeekState;
        switch (mTunnelPeekState) {
            case TunnelPeekState::kEnabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kEnabledQueued;
                break;
            case TunnelPeekState::kDisabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kDisabledQueued;
                break;
            default:
                break;
        }
    }

    status_t err = OK;
    if (hasCryptoOrDescrambler() && !c2Buffer && !memory) {
        AString *errorDetailMsg;
        CHECK(msg->findPointer("errorDetailMsg", (void **)&errorDetailMsg));
        // Notify mCrypto of video resolution changes
        if (mTunneled && mCrypto != NULL) {
            int32_t width, height;
            if (mInputFormat->findInt32("width", &width) &&
                mInputFormat->findInt32("height", &height) && width > 0 && height > 0) {
                if (width != mTunneledInputWidth || height != mTunneledInputHeight) {
                    mTunneledInputWidth = width;
                    mTunneledInputHeight = height;
                    mCrypto->notifyResolution(width, height);
                }
            }
        }
        // Queue an encrypted buffer
        err = mBufferChannel->queueSecureInputBuffer(
                buffer,
                (mFlags & kFlagIsSecure),
                key,
                iv,
                mode,
                pattern,
                subSamples,
                numSubSamples,
                errorDetailMsg);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueSecureInputBufferError, err);
            ALOGW("Log queueSecureInputBuffer error: %d", err);
        }
    } else {
        // Queue a plain buffer
        err = mBufferChannel->queueInputBuffer(buffer);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueInputBufferError, err);
            ALOGW("Log queueInputBuffer error: %d", err);
        }
    }

    if (err == OK) {
        // synchronization boundary for getBufferAndFormat
        Mutex::Autolock al(mBufferLock);
        // Flip the BufferInfo's owner
        info->mOwnedByClient = false;
        info->mData.clear();
        // Record the pts of the queued buffer and the time it was queued
        statsBufferSent(timeUs, buffer);
    }

    return err;
}
onQueueInputBuffer is long mainly because many different callers go through it, e.g. queueInputBuffer here, queueCSDInputBuffer, and queueSecureInputBuffer. It performs a lot of checks and ultimately calls the BufferChannel's queueInputBuffer or queueSecureInputBuffer.
This completes the processing of one input buffer.
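One detail worth isolating from the secure path above: when subsamples are passed instead of a plain size, the total payload size is recomputed as the sum of clear and encrypted bytes across all subsamples. A sketch (the struct mirrors CryptoPlugin::SubSample in shape only; the function name is invented):

```cpp
#include <cstddef>
#include <vector>

// Shape-only stand-in for CryptoPlugin::SubSample.
struct SubSample {
    size_t numBytesOfClearData;
    size_t numBytesOfEncryptedData;
};

// Recompute the payload size the way onQueueInputBuffer does for the
// secure path: clear bytes + encrypted bytes, summed over subsamples.
size_t totalSubSampleSize(const std::vector<SubSample> &subSamples) {
    size_t size = 0;
    for (const SubSample &ss : subSamples) {
        size += ss.numBytesOfClearData;
        size += ss.numBytesOfEncryptedData;
    }
    return size;
}
```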
kPortIndexOutput
The BufferChannel calls the onOutputBufferAvailable callback to enqueue an output buffer:
void BufferCallback::onOutputBufferAvailable(
size_t index, const sp<MediaCodecBuffer> &buffer) {
sp<AMessage> notify(mNotify->dup());
notify->setInt32("what", kWhatDrainThisBuffer);
notify->setSize("index", index);
notify->setObject("buffer", buffer);
notify->post();
}
Again, the handling happens in onMessageReceived:
case kWhatDrainThisBuffer:
{
    // Add the output buffer to the queue
    /* size_t index = */updateBuffers(kPortIndexOutput, msg);

    if (mState == FLUSHING
            || mState == STOPPING
            || mState == RELEASING) {
        returnBuffersToCodecOnPort(kPortIndexOutput);
        break;
    }

    if (mFlags & kFlagIsAsync) {
        sp<RefBase> obj;
        CHECK(msg->findObject("buffer", &obj));
        sp<MediaCodecBuffer> buffer = static_cast<MediaCodecBuffer *>(obj.get());

        // In asynchronous mode, output format change is processed immediately.
        // If the output format has changed, update it
        handleOutputFormatChangeIfNeeded(buffer);
        // Asynchronously notify the upper layer to handle the output buffer
        onOutputBufferAvailable();
    } else if (mFlags & kFlagDequeueOutputPending) {
        CHECK(handleDequeueOutputBuffer(mDequeueOutputReplyID));

        ++mDequeueOutputTimeoutGeneration;
        mFlags &= ~kFlagDequeueOutputPending;
        mDequeueOutputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }

    break;
}
The familiar pattern again:
1. Add the output buffer and its index to the queues.
2. If the output format has changed, update it.
3. Call onOutputBufferAvailable to notify the upper layer asynchronously:
void MediaCodec::onOutputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexOutput)) >= 0) {
        const sp<MediaCodecBuffer> &buffer =
            mPortBuffers[kPortIndexOutput][index].mData;
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_OUTPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->setSize("offset", buffer->offset());
        msg->setSize("size", buffer->size());

        int64_t timeUs;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        msg->setInt64("timeUs", timeUs);

        int32_t flags;
        CHECK(buffer->meta()->findInt32("flags", &flags));

        msg->setInt32("flags", flags);

        // Record the time the output buffer is handed to the upper layer
        // and the corresponding pts
        statsBufferReceived(timeUs, buffer);

        msg->post();
    }
}
After the upper layer gets the output buffer and finishes A/V sync, it decides whether to render or drop it, calling renderOutputBufferAndRelease or releaseOutputBuffer:
status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
    msg->setSize("index", index);
    msg->setInt32("render", true);
    msg->setInt64("timestampNs", timestampNs);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
case kWhatReleaseOutputBuffer:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = onReleaseOutputBuffer(msg);

    PostReplyWithError(replyID, err);
    break;
}
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
    size_t index;
    CHECK(msg->findSize("index", &index));

    int32_t render;
    if (!msg->findInt32("render", &render)) {
        render = 0;
    }

    if (!isExecuting()) {
        return -EINVAL;
    }

    if (index >= mPortBuffers[kPortIndexOutput].size()) {
        return -ERANGE;
    }

    BufferInfo *info = &mPortBuffers[kPortIndexOutput][index];
    if (info->mData == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    // synchronization boundary for getBufferAndFormat
    sp<MediaCodecBuffer> buffer;
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = false;
        buffer = info->mData;
        info->mData.clear();
    }

    if (render && buffer->size() != 0) {
        int64_t mediaTimeUs = -1;
        buffer->meta()->findInt64("timeUs", &mediaTimeUs);

        int64_t renderTimeNs = 0;
        if (!msg->findInt64("timestampNs", &renderTimeNs)) {
            // use media timestamp if client did not request a specific render timestamp
            ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
            renderTimeNs = mediaTimeUs * 1000;
        }

        if (mSoftRenderer != NULL) {
            std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
                    buffer->data(), buffer->size(), mediaTimeUs, renderTimeNs,
                    mPortBuffers[kPortIndexOutput].size(), buffer->format());

            // if we are running, notify rendered frames
            if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
                sp<AMessage> notify = mOnFrameRenderedNotification->dup();
                sp<AMessage> data = new AMessage;
                if (CreateFramesRenderedMessage(doneFrames, data)) {
                    notify->setMessage("data", data);
                    notify->post();
                }
            }
        }
        status_t err = mBufferChannel->renderOutputBuffer(buffer, renderTimeNs);

        if (err == NO_INIT) {
            ALOGE("rendering to non-initilized(obsolete) surface");
            return err;
        }
        if (err != OK) {
            ALOGI("rendring output error %d", err);
        }
    } else {
        mBufferChannel->discardBuffer(buffer);
    }

    return OK;
}
As you can see, the BufferChannel's renderOutputBuffer is ultimately called to render.
This completes the handling of one output buffer.
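The render-timestamp fallback seen in onReleaseOutputBuffer can be isolated into a one-line sketch. The function is invented for illustration; in the real code the fallback kicks in when msg->findInt64("timestampNs", ...) fails.

```cpp
#include <cstdint>

// If the client did not pass an explicit render time, the buffer's
// media PTS (microseconds) is promoted to nanoseconds and used instead.
int64_t chooseRenderTimeNs(bool clientProvidedTimestamp,
                           int64_t clientTimestampNs,
                           int64_t mediaTimeUs) {
    if (clientProvidedTimestamp) {
        return clientTimestampNs;   // explicit "timestampNs" from the caller
    }
    return mediaTimeUs * 1000;      // fall back to the buffer PTS
}
```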
7. flush
case kWhatFlush:
{
    if (!isExecuting()) {
        PostReplyWithError(msg, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(msg, getStickyError());
        break;
    }

    if (mReplyID) {
        mDeferredMessages.push_back(msg);
        break;
    }
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    mReplyID = replyID;
    // TODO: skip flushing if already FLUSHED
    setState(FLUSHING);

    // Call the CodecBase's signalFlush
    mCodec->signalFlush();
    // Discard all buffers
    returnBuffersToCodec();
    TunnelPeekState previousState = mTunnelPeekState;
    mTunnelPeekState = TunnelPeekState::kEnabledNoBuffer;
    ALOGV("TunnelPeekState: %s -> %s",
            asString(previousState),
            asString(TunnelPeekState::kEnabledNoBuffer));
    break;
}
The flush method first sets the state to FLUSHING and then calls the CodecBase's signalFlush method (presumably a callback sets the state to FLUSHED once the call completes), then discards all buffers. Discarding has two parts: first, the BufferChannel's discardBuffer method is called to return the buffers to the decoder; second, the available indices held by the MediaCodec are cleared.
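The two-part discard described above can be modeled with a small sketch. `FlushPort`, `flushPort` and `demoFlush` are invented stand-ins; the real code also calls BufferChannel::discardBuffer on each buffer it hands back, which is only hinted at here by the reclaim counter.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Toy model of one port during flush.
struct FlushPort {
    std::vector<bool> ownedByClient;  // slots the client currently holds
    std::deque<size_t> avail;         // indices the client could still dequeue
};

// Sketch of the per-port cleanup during flush: drop every pending index
// and hand client-held slots back to the codec. Returns how many buffers
// were reclaimed in total (an illustrative measure, not a real API).
size_t flushPort(FlushPort &port) {
    size_t reclaimed = port.avail.size();
    port.avail.clear();                        // part 2: clear available indices
    for (size_t i = 0; i < port.ownedByClient.size(); ++i) {
        if (port.ownedByClient[i]) {
            port.ownedByClient[i] = false;     // part 1: return slot to the codec
            ++reclaimed;
        }
    }
    return reclaimed;
}

// Illustrative driver: two client-held slots plus one pending index.
size_t demoFlush() {
    FlushPort p;
    p.ownedByClient = {true, false, true};
    p.avail = {1};
    return flushPort(p);
}
```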
Note that MediaCodec has no pause or resume methods! pause and resume have to be implemented by the player. With that, we have a rough picture of the basic operation; the remaining methods are left for later.