Android: Camera HAL3 Parameter Passing (CameraMetadata)
1. Introduction to camera_metadata
The Camera API2/HAL3 architecture replaces the old SetParameter/Parameters operations with a new CameraMetadata structure, which carries parameters from Java through native code down to HAL3. It also introduces a pipeline model connecting the Android device and the camera: the system sends capture requests to the camera, and the camera returns CameraMetadata results, all inside a session called a CameraCaptureSession.
2. From the Framework to the HAL layer
Camera2Client, which serves API1, still keeps a setParameters interface at the Java layer; the difference from CameraClient is that each parameter, according to its tag, is bound directly into the CameraMetadata objects mPreviewRequest/mRecordRequest/mCaptureRequest. That data then travels as a Capture_Request and becomes the camera_metadata_t settings of a camera3_capture_request, completing the parameter path from Java through native to HAL3.
Under Camera API2 no such elaborate conversion is needed: parameters are set directly in the Java layer and packaged into a CaptureRequest, i.e. parameter control happens in Java. This is also why Request and Result objects are so pervasive in API2 apps. To stay consistent with the tag data in the framework's native layer, the parameter keys used throughout Java are resolved by their section/tag names, which native code converts into the tag values used by the Java layer.
(1) Java-layer code: frameworks\base\core\java\android\hardware\camera2\impl\CameraMetadataNative.java
private <T> T getBase(Key<T> key) {
    int tag = nativeGetTagFromKeyLocal(key.getName());
    byte[] values = readValues(tag);
    if (values == null) {
        // If the key returns null, use the fallback key if it exists.
        // This is to support old key names for the newly published keys.
        if (key.mFallbackName == null) {
            return null;
        }
        tag = nativeGetTagFromKeyLocal(key.mFallbackName);
        values = readValues(tag);
        if (values == null) {
            return null;
        }
    }

    int nativeType = nativeGetTypeFromTagLocal(tag);
    Marshaler<T> marshaler = getMarshalerForKey(key, nativeType);
    ByteBuffer buffer = ByteBuffer.wrap(values).order(ByteOrder.nativeOrder());
    return marshaler.unmarshal(buffer);
}
(2) Native-layer code: frameworks/base/core/jni/android_hardware_camera2_CameraMetadata.cpp
static const JNINativeMethod gCameraMetadataMethods[] = {
// static methods
  { "nativeGetTagFromKey",
    "(Ljava/lang/String;J)I",
    (void *)CameraMetadata_getTagFromKey },
  { "nativeGetTypeFromTag",
    "(IJ)I",
    (void *)CameraMetadata_getTypeFromTag },
  { "nativeSetupGlobalVendorTagDescriptor",
    "()I",
    (void*)CameraMetadata_setupGlobalVendorTagDescriptor },
// instance methods
......
CameraMetadata_getTagFromKey converts a Java-layer string, e.g. android.control.mode, into a tag value. Comparing the different section names shows that the leading x.y part of the string is the section name, while the trailing mode is the tag inside that section; parsing the string therefore pins down both the section and the tag value, and that tag value is what gets returned to the Java layer. Following the trail into \system\media\camera\src\camera_metadata.c:
// Declared in system/media/private/camera/include/camera_metadata_hidden.h
const char *get_local_camera_metadata_tag_name_vendor_id(uint32_t tag,
        metadata_vendor_id_t id) {
    uint32_t tag_section = tag >> 16;
    if (tag_section >= VENDOR_SECTION && vendor_cache_ops != NULL &&
            id != CAMERA_METADATA_INVALID_VENDOR_ID) {
        return vendor_cache_ops->get_tag_name(tag, id);
    } else if (tag_section >= VENDOR_SECTION && vendor_tag_ops != NULL) {
        return vendor_tag_ops->get_tag_name(
            vendor_tag_ops,
            tag);
    }
    if (tag_section >= ANDROID_SECTION_COUNT ||
        tag >= camera_metadata_section_bounds[tag_section][1]) {
        // camera_metadata_section_bounds is the key here: it records the
        // [start, end) tag range of every section
        return NULL;
    }
    uint32_t tag_index = tag & 0xFFFF;
    return tag_info[tag_section][tag_index].tag_name;
}
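Going the other direction, CameraMetadata_getTagFromKey has to turn a full key string back into a tag value. Below is a minimal sketch of that resolution logic — resolve_tag_from_key is a hypothetical helper, not the literal AOSP code, and it assumes the generated tables from camera_metadata_tag_info.c (shown later) are in scope:
#include <cstdint>
#include <cstring>

// Hypothetical helper mirroring what CameraMetadata_getTagFromKey does for
// non-vendor tags: split "android.control.afMode" into a section name plus a
// tag name, then look both up in the generated tables.
static int resolve_tag_from_key(const char *key, uint32_t *tag_out) {
    for (uint32_t section = 0; section < ANDROID_SECTION_COUNT; section++) {
        const char *sec = camera_metadata_section_names[section];
        size_t sec_len = strlen(sec);
        // The section name must be a prefix followed by '.', e.g.
        // "android.control" in "android.control.afMode".
        if (strncmp(key, sec, sec_len) != 0 || key[sec_len] != '.')
            continue;
        const char *name = key + sec_len + 1;
        uint32_t start = camera_metadata_section_bounds[section][0];
        uint32_t end = camera_metadata_section_bounds[section][1];
        for (uint32_t tag = start; tag < end; tag++) {
            if (strcmp(tag_info[section][tag & 0xFFFF].tag_name, name) == 0) {
                *tag_out = tag;  // equals (section << 16) | index by construction
                return 0;
            }
        }
        // No match in this section; keep scanning — this also copes with
        // nested section names like "android.flash" vs "android.flash.info".
    }
    return -1;  // unknown key
}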
[Figure: call relationships among the other related files.]
camera_metadata_tags.h holds all the basic definitions; each section spans 64K tag values (each section enum value is shifted left by 16 bits):
/**
 * !! Do not include this file directly !!
 *
 * Include camera_metadata.h instead.
 */

/**
 * ! Do not edit this file directly !
 *
 * Generated automatically from camera_metadata_tags.mako
 */

/** TODO: Nearly every enum in this file needs a description */

/**
 * Top level hierarchy definitions for camera metadata. *_INFO sections are for
 * the static metadata that can be retrieved without opening the camera device.
 * New sections must be added right before ANDROID_SECTION_COUNT to maintain
 * existing enumerations.
 */
typedef enum camera_metadata_section {
ANDROID_COLOR_CORRECTION,
ANDROID_CONTROL,
ANDROID_DEMOSAIC,
ANDROID_EDGE,
ANDROID_FLASH,
ANDROID_FLASH_INFO,
ANDROID_HOT_PIXEL,
ANDROID_JPEG,
ANDROID_LENS,
ANDROID_LENS_INFO,
ANDROID_NOISE_REDUCTION,
ANDROID_QUIRKS,
ANDROID_REQUEST,
ANDROID_SCALER,
ANDROID_SENSOR,
ANDROID_SENSOR_INFO,
ANDROID_SHADING,
ANDROID_STATISTICS,
ANDROID_STATISTICS_INFO,
ANDROID_TONEMAP,
ANDROID_LED,
ANDROID_INFO,
ANDROID_BLACK_LEVEL,
ANDROID_SYNC,
ANDROID_REPROCESS,
ANDROID_DEPTH,
ANDROID_LOGICAL_MULTI_CAMERA,
ANDROID_DISTORTION_CORRECTION,
    ANDROID_SECTION_COUNT,

    VENDOR_SECTION = 0x8000
} camera_metadata_section_t;

/**
 * Hierarchy positions in enum space. All vendor extension tags must be
 * defined with tag >= VENDOR_SECTION_START
 */
typedef enum camera_metadata_section_start {
    ANDROID_COLOR_CORRECTION_START = ANDROID_COLOR_CORRECTION << 16,
    ANDROID_CONTROL_START = ANDROID_CONTROL << 16,
    ANDROID_DEMOSAIC_START = ANDROID_DEMOSAIC << 16,
    ANDROID_EDGE_START = ANDROID_EDGE << 16,
    ANDROID_FLASH_START = ANDROID_FLASH << 16,
    ANDROID_FLASH_INFO_START = ANDROID_FLASH_INFO << 16,
    ANDROID_HOT_PIXEL_START = ANDROID_HOT_PIXEL << 16,
    ANDROID_JPEG_START = ANDROID_JPEG << 16,
    ANDROID_LENS_START = ANDROID_LENS << 16,
    ANDROID_LENS_INFO_START = ANDROID_LENS_INFO << 16,
    ANDROID_NOISE_REDUCTION_START = ANDROID_NOISE_REDUCTION << 16,
    ANDROID_QUIRKS_START = ANDROID_QUIRKS << 16,
    ANDROID_REQUEST_START = ANDROID_REQUEST << 16,
    ANDROID_SCALER_START = ANDROID_SCALER << 16,
    ANDROID_SENSOR_START = ANDROID_SENSOR << 16,
    ANDROID_SENSOR_INFO_START = ANDROID_SENSOR_INFO << 16,
    ANDROID_SHADING_START = ANDROID_SHADING << 16,
    ANDROID_STATISTICS_START = ANDROID_STATISTICS << 16,
    ANDROID_STATISTICS_INFO_START = ANDROID_STATISTICS_INFO << 16,
    ANDROID_TONEMAP_START = ANDROID_TONEMAP << 16,
    ANDROID_LED_START = ANDROID_LED << 16,
    ANDROID_INFO_START = ANDROID_INFO << 16,
    ANDROID_BLACK_LEVEL_START = ANDROID_BLACK_LEVEL << 16,
    ANDROID_SYNC_START = ANDROID_SYNC << 16,
    ANDROID_REPROCESS_START = ANDROID_REPROCESS << 16,
    ANDROID_DEPTH_START = ANDROID_DEPTH << 16,
    ANDROID_LOGICAL_MULTI_CAMERA_START = ANDROID_LOGICAL_MULTI_CAMERA << 16,
    ANDROID_DISTORTION_CORRECTION_START = ANDROID_DISTORTION_CORRECTION << 16,
    VENDOR_SECTION_START = VENDOR_SECTION << 16
} camera_metadata_section_start_t;
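Plugging numbers into the shift: the section starts land 0x10000 apart, and the vendor range begins at the top bit. A couple of spot checks (the values follow directly from the enums above):
static_assert((0u << 16) == 0x00000000, "ANDROID_COLOR_CORRECTION_START");
static_assert((1u << 16) == 0x00010000, "ANDROID_CONTROL_START");
static_assert((2u << 16) == 0x00020000, "ANDROID_DEMOSAIC_START");
static_assert((0x8000u << 16) == 0x80000000u, "VENDOR_SECTION_START");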
Each section's _END value is simply the enumerator that follows the tags filled in after its _START, so it reflects how many tags the section holds:
/**
* Main enum for defining camera metadata tags. New entries must always go
* before the section _END tag to preserve existing enumeration values. In
* addition, the name and type of the tag needs to be added to
* system/media/camera/src/camera_metadata_tag_info.c
*/
typedef enum camera_metadata_tag {
ANDROID_COLOR_CORRECTION_MODE = // enum | public | HIDL v3.2
ANDROID_COLOR_CORRECTION_START,
ANDROID_COLOR_CORRECTION_TRANSFORM, // rational[] | public | HIDL v3.2
ANDROID_COLOR_CORRECTION_GAINS, // float[] | public | HIDL v3.2
ANDROID_COLOR_CORRECTION_ABERRATION_MODE, // enum | public | HIDL v3.2
ANDROID_COLOR_CORRECTION_AVAILABLE_ABERRATION_MODES,
// byte[] | public | HIDL v3.2
    ANDROID_COLOR_CORRECTION_END,

    ANDROID_CONTROL_AE_ANTIBANDING_MODE =                   // enum | public | HIDL v3.2
        ANDROID_CONTROL_START,
ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION, // int32 | public | HIDL v3.2
ANDROID_CONTROL_AE_LOCK, // enum | public | HIDL v3.2
ANDROID_CONTROL_AE_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AE_REGIONS, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_AE_TARGET_FPS_RANGE, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_AE_PRECAPTURE_TRIGGER, // enum | public | HIDL v3.2
ANDROID_CONTROL_AF_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AF_REGIONS, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_AF_TRIGGER, // enum | public | HIDL v3.2
ANDROID_CONTROL_AWB_LOCK, // enum | public | HIDL v3.2
ANDROID_CONTROL_AWB_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AWB_REGIONS, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_CAPTURE_INTENT, // enum | public | HIDL v3.2
ANDROID_CONTROL_EFFECT_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_SCENE_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_VIDEO_STABILIZATION_MODE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AE_AVAILABLE_ANTIBANDING_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_AE_AVAILABLE_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_AE_COMPENSATION_RANGE, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_AE_COMPENSATION_STEP, // rational | public | HIDL v3.2
ANDROID_CONTROL_AF_AVAILABLE_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_AVAILABLE_EFFECTS, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_AVAILABLE_SCENE_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES,
// byte[] | public | HIDL v3.2
ANDROID_CONTROL_AWB_AVAILABLE_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_MAX_REGIONS, // int32[] | ndk_public | HIDL v3.2
ANDROID_CONTROL_SCENE_MODE_OVERRIDES, // byte[] | system | HIDL v3.2
ANDROID_CONTROL_AE_PRECAPTURE_ID, // int32 | system | HIDL v3.2
ANDROID_CONTROL_AE_STATE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AF_STATE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AF_TRIGGER_ID, // int32 | system | HIDL v3.2
ANDROID_CONTROL_AWB_STATE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AVAILABLE_HIGH_SPEED_VIDEO_CONFIGURATIONS,
// int32[] | hidden | HIDL v3.2
ANDROID_CONTROL_AE_LOCK_AVAILABLE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AWB_LOCK_AVAILABLE, // enum | public | HIDL v3.2
ANDROID_CONTROL_AVAILABLE_MODES, // byte[] | public | HIDL v3.2
ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE, // int32[] | public | HIDL v3.2
ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST, // int32 | public | HIDL v3.2
ANDROID_CONTROL_ENABLE_ZSL, // enum | public | HIDL v3.2
ANDROID_CONTROL_AF_SCENE_CHANGE, // enum | public | HIDL v3.3
ANDROID_CONTROL_END,
......
[Figure: how the _START offsets and tag enumerations line up.]
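As a concrete check of this layout: ANDROID_CONTROL is section 1, and AF_MODE is the 8th tag after ANDROID_CONTROL_START (offset 7), so the tag value works out to 0x10007. A small sketch of the arithmetic (the constant names here are mine, not AOSP's):
#include <cstdint>

// A tag value is (section index << 16) | intra-section offset.
constexpr uint32_t kControlSection = 1;  // ANDROID_CONTROL
constexpr uint32_t kAfModeOffset = 7;    // 8th tag after ANDROID_CONTROL_START
constexpr uint32_t kAfModeTag = (kControlSection << 16) | kAfModeOffset;
static_assert(kAfModeTag == 0x10007, "android.control.afMode");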
camera_metadata_tag_info.c then provides the mapping and binding; camera_metadata_section_bounds, which the native CameraMetadata_getTagFromKey path above relies on, is implemented here:
/**
 * ! Do not edit this file directly !
 *
 * Generated automatically from camera_metadata_tag_info.mako
 */

const char *camera_metadata_section_names[ANDROID_SECTION_COUNT] = {
[ANDROID_COLOR_CORRECTION] = "android.colorCorrection",
[ANDROID_CONTROL] = "android.control",
[ANDROID_DEMOSAIC] = "android.demosaic",
[ANDROID_EDGE] = "android.edge",
[ANDROID_FLASH] = "android.flash",
[ANDROID_FLASH_INFO] = "android.flash.info",
[ANDROID_HOT_PIXEL] = "android.hotPixel",
[ANDROID_JPEG] = "android.jpeg",
[ANDROID_LENS] = "android.lens",
[ANDROID_LENS_INFO] = "android.lens.info",
[ANDROID_NOISE_REDUCTION] = "android.noiseReduction",
[ANDROID_QUIRKS] = "android.quirks",
[ANDROID_REQUEST] = "android.request",
[ANDROID_SCALER] = "android.scaler",
[ANDROID_SENSOR] = "android.sensor",
[ANDROID_SENSOR_INFO] = "android.sensor.info",
[ANDROID_SHADING] = "android.shading",
[ANDROID_STATISTICS] = "android.statistics",
[ANDROID_STATISTICS_INFO] = "android.statistics.info",
[ANDROID_TONEMAP] = "android.tonemap",
[ANDROID_LED] = "android.led",
[ANDROID_INFO] = "android.info",
[ANDROID_BLACK_LEVEL] = "android.blackLevel",
[ANDROID_SYNC] = "android.sync",
[ANDROID_REPROCESS] = "android.reprocess",
[ANDROID_DEPTH] = "android.depth",
[ANDROID_LOGICAL_MULTI_CAMERA] = "android.logicalMultiCamera",
[ANDROID_DISTORTION_CORRECTION]
= "android.distortionCorrection",
};

unsigned int camera_metadata_section_bounds[ANDROID_SECTION_COUNT][2] = {
[ANDROID_COLOR_CORRECTION] = { ANDROID_COLOR_CORRECTION_START,
ANDROID_COLOR_CORRECTION_END },
[ANDROID_CONTROL] = { ANDROID_CONTROL_START,
ANDROID_CONTROL_END },
[ANDROID_DEMOSAIC] = { ANDROID_DEMOSAIC_START,
ANDROID_DEMOSAIC_END },
[ANDROID_EDGE] = { ANDROID_EDGE_START,
ANDROID_EDGE_END },
[ANDROID_FLASH] = { ANDROID_FLASH_START,
ANDROID_FLASH_END },
[ANDROID_FLASH_INFO] = { ANDROID_FLASH_INFO_START,
ANDROID_FLASH_INFO_END },
[ANDROID_HOT_PIXEL] = { ANDROID_HOT_PIXEL_START,
ANDROID_HOT_PIXEL_END },
[ANDROID_JPEG] = { ANDROID_JPEG_START,
ANDROID_JPEG_END },
[ANDROID_LENS] = { ANDROID_LENS_START,
ANDROID_LENS_END },
[ANDROID_LENS_INFO] = { ANDROID_LENS_INFO_START,
ANDROID_LENS_INFO_END },
[ANDROID_NOISE_REDUCTION] = { ANDROID_NOISE_REDUCTION_START,
ANDROID_NOISE_REDUCTION_END },
[ANDROID_QUIRKS] = { ANDROID_QUIRKS_START,
ANDROID_QUIRKS_END },
[ANDROID_REQUEST] = { ANDROID_REQUEST_START,
ANDROID_REQUEST_END },
[ANDROID_SCALER] = { ANDROID_SCALER_START,
ANDROID_SCALER_END },
[ANDROID_SENSOR] = { ANDROID_SENSOR_START,
ANDROID_SENSOR_END },
[ANDROID_SENSOR_INFO] = { ANDROID_SENSOR_INFO_START,
ANDROID_SENSOR_INFO_END },
[ANDROID_SHADING] = { ANDROID_SHADING_START,
ANDROID_SHADING_END },
[ANDROID_STATISTICS] = { ANDROID_STATISTICS_START,
ANDROID_STATISTICS_END },
[ANDROID_STATISTICS_INFO] = { ANDROID_STATISTICS_INFO_START,
ANDROID_STATISTICS_INFO_END },
[ANDROID_TONEMAP] = { ANDROID_TONEMAP_START,
ANDROID_TONEMAP_END },
[ANDROID_LED] = { ANDROID_LED_START,
ANDROID_LED_END },
[ANDROID_INFO] = { ANDROID_INFO_START,
ANDROID_INFO_END },
[ANDROID_BLACK_LEVEL] = { ANDROID_BLACK_LEVEL_START,
ANDROID_BLACK_LEVEL_END },
[ANDROID_SYNC] = { ANDROID_SYNC_START,
ANDROID_SYNC_END },
[ANDROID_REPROCESS] = { ANDROID_REPROCESS_START,
ANDROID_REPROCESS_END },
[ANDROID_DEPTH] = { ANDROID_DEPTH_START,
ANDROID_DEPTH_END },
[ANDROID_LOGICAL_MULTI_CAMERA] = { ANDROID_LOGICAL_MULTI_CAMERA_START,
ANDROID_LOGICAL_MULTI_CAMERA_END },
[ANDROID_DISTORTION_CORRECTION]
= { ANDROID_DISTORTION_CORRECTION_START,
ANDROID_DISTORTION_CORRECTION_END },
};
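Index 0 of each bounds pair holds the section's first tag and index 1 its _END sentinel; this is exactly what get_local_camera_metadata_tag_name_vendor_id above uses to reject out-of-range tags. A minimal validity check, assuming the tables above are in scope (_END itself is not a valid tag):
static bool is_valid_android_tag(uint32_t tag) {
    uint32_t section = tag >> 16;
    if (section >= ANDROID_SECTION_COUNT)
        return false;  // vendor section or garbage
    return tag >= camera_metadata_section_bounds[section][0] &&
           tag < camera_metadata_section_bounds[section][1];
}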
All of it is then managed through the tag_info table:
static tag_info_t android_color_correction[ANDROID_COLOR_CORRECTION_END -
        ANDROID_COLOR_CORRECTION_START] = {
    [ ANDROID_COLOR_CORRECTION_MODE - ANDROID_COLOR_CORRECTION_START ] =
        { "mode", TYPE_BYTE },
    [ ANDROID_COLOR_CORRECTION_TRANSFORM - ANDROID_COLOR_CORRECTION_START ] =
        { "transform", TYPE_RATIONAL },
    [ ANDROID_COLOR_CORRECTION_GAINS - ANDROID_COLOR_CORRECTION_START ] =
        { "gains", TYPE_FLOAT },
    [ ANDROID_COLOR_CORRECTION_ABERRATION_MODE - ANDROID_COLOR_CORRECTION_START ] =
        { "aberrationMode", TYPE_BYTE },
    [ ANDROID_COLOR_CORRECTION_AVAILABLE_ABERRATION_MODES - ANDROID_COLOR_CORRECTION_START ] =
        { "availableAberrationModes", TYPE_BYTE },
};

tag_info_t *tag_info[ANDROID_SECTION_COUNT] = {
android_color_correction,
android_control,
android_demosaic,
android_edge,
android_flash,
android_flash_info,
android_hot_pixel,
android_jpeg,
android_lens,
android_lens_info,
android_noise_reduction,
android_quirks,
android_request,
android_scaler,
android_sensor,
android_sensor_info,
android_shading,
android_statistics,
android_statistics_info,
android_tonemap,
android_led,
android_info,
android_black_level,
android_sync,
android_reprocess,
android_depth,
android_logical_multi_camera,
android_distortion_correction,
};
[Figure: layout of the Camera Metadata sections and the tags under each section, illustrated with the most common one, the android.control section.]
To write data, the native layer needs a CameraMetadata object of its own. It is created when the Java CameraMetadataNative is constructed, through the native interface nativeAllocate():
// instance methods
  { "nativeAllocate",
    "()J",
    (void*)CameraMetadata_allocate },

static jlong CameraMetadata_allocate(JNIEnv *env, jobject thiz) {
    ALOGV("%s", __FUNCTION__);

    return reinterpret_cast<jlong>(new CameraMetadata());
}
CameraMetadata::CameraMetadata(size_t entryCapacity, size_t dataCapacity) :
mLocked(false)
{
mBuffer = allocate_camera_metadata(entryCapacity, dataCapacity);
}
allocate_camera_metadata() computes the required buffer size from the entry count and data size and allocates it; place_camera_metadata() then initializes the bookkeeping fields of the fresh buffer, ready for the tag updates and insertions that follow.
camera_metadata_t *allocate_camera_metadata(size_t entry_capacity,
                                            size_t data_capacity) {
    // in the example discussed here the arguments are (2, 0)
    if (entry_capacity == 0) return NULL;

    size_t memory_needed = calculate_camera_metadata_size(entry_capacity,
                                                          data_capacity);
    // returns header + 2 * sizeof(entry)
    void *buffer = malloc(memory_needed);  // one contiguous block
    return place_camera_metadata(buffer, memory_needed,  // then initialize it
                                 entry_capacity,
                                 data_capacity);
}

camera_metadata_t *place_camera_metadata(void *dst,
                                         size_t dst_size,
                                         size_t entry_capacity,
                                         size_t data_capacity) {
    if (dst == NULL) return NULL;
    if (entry_capacity == 0) return NULL;

    size_t memory_needed = calculate_camera_metadata_size(entry_capacity,
                                                          data_capacity);
    // recomputed here so place_camera_metadata can verify that the
    // caller's buffer is actually large enough
    if (memory_needed > dst_size) return NULL;

    camera_metadata_t *metadata = (camera_metadata_t*)dst;
    metadata->version = CURRENT_METADATA_VERSION;
    metadata->flags = 0;                       // not sorted yet
    metadata->entry_count = 0;
    metadata->entry_capacity = entry_capacity; // max entries; 2 for the ANDROID_FLASH_MODE example
    metadata->entries_start =
            ALIGN_TO(sizeof(camera_metadata_t), ENTRY_ALIGNMENT);
    // the entry region starts right after the camera_metadata_t header
    metadata->data_count = 0;
    metadata->data_capacity = data_capacity;   // 0 here: no data region was requested
    metadata->size = memory_needed;            // total buffer size
    size_t data_unaligned = (uint8_t*)(get_entries(metadata) +
            metadata->entry_capacity) - (uint8_t*)metadata;
    metadata->data_start = ALIGN_TO(data_unaligned, DATA_ALIGNMENT);
    // the data region starts (aligned) right after the entry region

    return metadata;
}

// Computes the memory block a camera_metadata needs from the entry and data
// counts: header + sizeof(entries) + sizeof(data).
size_t calculate_camera_metadata_size(size_t entry_count,
                                      size_t data_count) {
    // for the example above the arguments are (2, 0)
    size_t memory_needed = sizeof(camera_metadata_t);  // the header
    // Start entry list at aligned boundary
    memory_needed = ALIGN_TO(memory_needed, ENTRY_ALIGNMENT);
    memory_needed += sizeof(camera_metadata_buffer_entry_t[entry_count]);
    // room for the 2 requested entries
    // Start buffer list at aligned boundary
    memory_needed = ALIGN_TO(memory_needed, DATA_ALIGNMENT);
    memory_needed += sizeof(uint8_t[data_count]);      // data_count == 0 here
    return memory_needed;
}
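One quick way to see the resulting numbers is to allocate a buffer and query it back through the public C API — a minimal sketch; the exact printed size depends on the struct layout of the build:
#include <cstdio>
#include <system/camera_metadata.h>

int main() {
    // Same shape as the example above: room for 2 entries, no data region.
    camera_metadata_t *meta = allocate_camera_metadata(/*entry_capacity*/ 2,
                                                       /*data_capacity*/ 0);
    if (meta == nullptr) return 1;
    printf("size=%zu entries=%zu/%zu data=%zu/%zu\n",
           get_camera_metadata_size(meta),
           get_camera_metadata_entry_count(meta),
           get_camera_metadata_entry_capacity(meta),
           get_camera_metadata_data_count(meta),
           get_camera_metadata_data_capacity(meta));
    free_camera_metadata(meta);
    return 0;
}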
The smallest building block in the CameraMetadata memory block is struct camera_metadata_buffer_entry; the total entry count and the other bookkeeping live in struct camera_metadata_t.
[Figure: layout of camera_metadata_t with its entry and data regions.]
HAL code fetches and updates an entry like this:
{
    UINT32 SensorTimestampTag = 0x000E0010;
    camera_metadata_entry_t entry = { };
    camera_metadata_t* pMetadata =
        const_cast<camera_metadata_t*>(static_cast<const camera_metadata_t*>(pResult->pResultMetadata));
    UINT64 timestamp = m_shutterTimestamp[applicationFrameNum % MaxOutstandingRequests];

    INT32 status = find_camera_metadata_entry(pMetadata, SensorTimestampTag, &entry);
    if (-ENOENT == status)
    {
        // Tag not present yet: treat it as new and add it to the big buffer
        status = add_camera_metadata_entry(pMetadata, SensorTimestampTag, &timestamp, 1);
    }
    else if (0 == status)
    {
        status = update_camera_metadata_entry(pMetadata, entry.index, &timestamp, 1, NULL);
    }
}
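Once find_camera_metadata_entry succeeds, the value sits behind the data union of camera_metadata_entry_t, selected by type. For instance, reading the timestamp back — a sketch reusing pMetadata and SensorTimestampTag from the snippet above; ANDROID_SENSOR_TIMESTAMP is TYPE_INT64, so data.i64 is the live member:
camera_metadata_entry_t readBack = { };
if (0 == find_camera_metadata_entry(pMetadata, SensorTimestampTag, &readBack) &&
    readBack.type == TYPE_INT64 && readBack.count >= 1)
{
    int64_t shutterNs = readBack.data.i64[0];  // the value written above
    (void)shutterNs;
}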
find_camera_metadata_entry is easy to follow: it looks up the entry for the given tag and returns the data through the entry out-parameter.
Note: struct camera_metadata_buffer_entry_t; // internal record of a tag's data
struct camera_metadata_entry_t; // the externally visible view
int find_camera_metadata_entry(camera_metadata_t *src,
        uint32_t tag,
        camera_metadata_entry_t *entry) {
    if (src == NULL) return ERROR;

    uint32_t index;
    if (src->flags & FLAG_SORTED) {
        // flags was initialized to 0, so this is false and we fall to else
        // Sorted entries, do a binary search
        camera_metadata_buffer_entry_t *search_entry = NULL;
        camera_metadata_buffer_entry_t key;
        key.tag = tag;
        search_entry = bsearch(&key,
                get_entries(src),
                src->entry_count,
                sizeof(camera_metadata_buffer_entry_t),
                compare_entry_tags);
        if (search_entry == NULL) return NOT_FOUND;
        index = search_entry - get_entries(src);
    } else {
        // Not sorted, linear search
        camera_metadata_buffer_entry_t *search_entry = get_entries(src);
        for (index = 0; index < src->entry_count; index++, search_entry++) {
            // right after allocation entry_count is 0: nothing added yet
            if (search_entry->tag == tag) {
                break;
            }
        }
        if (index == src->entry_count) return NOT_FOUND;
    }

    return get_camera_metadata_entry(src, index,  // fill in the entry found at index
            entry);
}

int add_camera_metadata_entry(camera_metadata_t *dst,
        uint32_t tag,
        const void *data,
        size_t data_count) {
    // for the running example the arguments are (mBuffer, ANDROID_FLASH_MODE, 5, 1)
    int type = get_camera_metadata_tag_type(tag);
    if (type == -1) {
        ALOGE("%s: Unknown tag %04x.", __FUNCTION__, tag);
        return ERROR;
    }

    return add_camera_metadata_entry_raw(dst,  // now (mBuffer, ANDROID_FLASH_MODE, TYPE_BYTE, 5, 1)
            tag,
            type,
            data,
            data_count);
}
// The method below does the real work: it files the incoming tag information into its proper place.
static int add_camera_metadata_entry_raw(camera_metadata_t *dst,
        uint32_t tag,
        uint8_t type,
        const void *data,
        size_t data_count) {
    if (dst == NULL) return ERROR;
    if (dst->entry_count == dst->entry_capacity) return ERROR; // no free entry slots left
    if (data == NULL) return ERROR;

    size_t data_bytes =
            calculate_camera_metadata_entry_data_size(type, data_count);
    // bytes needed in the data region; 1*1 here, but it returns 0 because
    // payloads of 4 bytes or less are stored inline in the entry
    if (data_bytes + dst->data_count > dst->data_capacity) return ERROR;
    // used space plus the data cursor must not exceed the data capacity

    size_t data_payload_bytes =
            data_count * camera_metadata_type_size[type];
    // data_count = 1, so data_payload_bytes = 1
    camera_metadata_buffer_entry_t *entry = get_entries(dst) + dst->entry_count;
    // grab the first free entry slot and zero it
    memset(entry, 0, sizeof(camera_metadata_buffer_entry_t));
    entry->tag = tag;          // ANDROID_FLASH_MODE
    entry->type = type;        // TYPE_BYTE
    entry->count = data_count; // number of data elements (1 here)

    if (data_bytes == 0) {
        memcpy(entry->data.value, data,
                data_payload_bytes);  // 4 bytes or less go straight into the entry
    } else {
        entry->data.offset = dst->data_count;
        memcpy(get_data(dst) + entry->data.offset, data,
                data_payload_bytes);
        dst->data_count += data_bytes;
    }
    dst->entry_count++;          // bump the entry cursor
    dst->flags &= ~FLAG_SORTED;  // the buffer may no longer be sorted
    return OK;                   // ANDROID_FLASH_MODE is now stored
}
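The data_bytes == 0 branch is the key detail: payloads of four bytes or less live inline inside the entry, while larger ones spill into the data region. A sketch contrasting the two cases (the tag choices are just illustrative):
#include <cstdint>
#include <system/camera_metadata.h>

int main() {
    // Two entries; give the data region room for the larger payload.
    camera_metadata_t *meta = allocate_camera_metadata(2, 64);
    if (meta == nullptr) return 1;

    // 1 byte: stored inline inside the entry (data_bytes == 0 path).
    uint8_t flashMode = ANDROID_FLASH_MODE_TORCH;
    add_camera_metadata_entry(meta, ANDROID_FLASH_MODE, &flashMode, 1);

    // 5 int32 values (20 bytes): copied into the data region instead.
    int32_t aeRegion[5] = {0, 0, 640, 480, 1};  // xmin, ymin, xmax, ymax, weight
    add_camera_metadata_entry(meta, ANDROID_CONTROL_AE_REGIONS, aeRegion, 5);

    free_camera_metadata(meta);
    return 0;
}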
The update path: CameraMetadata can update or store data of various types into the entry of a given tag inside camera_metadata_t. Taking the int32 overload below as the example, data_count gives the number of elements and tag selects the entry to update.
status_t CameraMetadata::update(uint32_t tag,
const int32_t *data, size_t data_count) {
status_t res;
if (mLocked) {
ALOGE("%s: CameraMetadata is locked", __FUNCTION__);
return INVALID_OPERATION;
}
if ( (res = checkType(tag, TYPE_INT32)) != OK) {
return res;
}
return updateImpl(tag, (const void*)data, data_count);
}
checkType first resolves, via get_camera_metadata_tag_type, the tag_type the tag must carry (each TAG's type is fixed by the tag_info table in camera_metadata_tag_info.c) and compares it with the caller's; only if the two agree — TYPE_INT32 for the overload above — does the operation continue:
const char *get_camera_metadata_tag_name(uint32_t tag) {
    uint32_t tag_section = tag >> 16;
    if (tag_section >= VENDOR_SECTION && vendor_tag_ops != NULL) {
        return vendor_tag_ops->get_tag_name(
            vendor_tag_ops,
            tag);
    }
    if (tag_section >= ANDROID_SECTION_COUNT ||
        tag >= camera_metadata_section_bounds[tag_section][1]) {
        return NULL;
    }
    uint32_t tag_index = tag & 0xFFFF; // the tag's index inside its section: the low 16 bits
    return tag_info[tag_section][tag_index].tag_name; // locate the section, then the tag
}

int get_camera_metadata_tag_type(uint32_t tag) {
    uint32_t tag_section = tag >> 16;
    if (tag_section >= VENDOR_SECTION && vendor_tag_ops != NULL) {
        return vendor_tag_ops->get_tag_type(
            vendor_tag_ops,
            tag);
    }
    if (tag_section >= ANDROID_SECTION_COUNT ||
        tag >= camera_metadata_section_bounds[tag_section][1]) {
        return -1;
    }
    uint32_t tag_index = tag & 0xFFFF;
    return tag_info[tag_section][tag_index].tag_type;
}
In both functions the section id is tag >> 16, which selects the section's tag_info_t[] table; the low 16 bits, tag & 0xFFFF, give the tag's offset within that section and lead to the tag's own tag_info_t.
updateImpl then carries out the actual update for the data being written:
status_t CameraMetadata::updateImpl(uint32_t tag, const void *data,
        size_t data_count) {
    status_t res;
    if (mLocked) {
        ALOGE("%s: CameraMetadata is locked", __FUNCTION__);
        return INVALID_OPERATION;
    }
    int type = get_camera_metadata_tag_type(tag);
    if (type == -1) {
        ALOGE("%s: Tag %d not found", __FUNCTION__, tag);
        return BAD_VALUE;
    }
    size_t data_size = calculate_camera_metadata_entry_data_size(type,
            data_count);

    res = resizeIfNeeded(1, data_size); // grow (or create) the camera_metadata_t

    if (res == OK) {
        camera_metadata_entry_t entry;
        res = find_camera_metadata_entry(mBuffer, tag, &entry);
        if (res == NAME_NOT_FOUND) {
            res = add_camera_metadata_entry(mBuffer,
                    tag, data, data_count); // new tag: append it and its data to camera_metadata_t
        } else if (res == OK) {
            res = update_camera_metadata_entry(mBuffer,
                    entry.index, data, data_count, NULL);
        }
    }

    if (res != OK) {
        ALOGE("%s: Unable to update metadata entry %s.%s (%x): %s (%d)",
                __FUNCTION__, get_camera_metadata_section_name(tag),
                get_camera_metadata_tag_name(tag), tag, strerror(-res), res);
    }

    IF_ALOGV() {
        ALOGE_IF(validate_camera_metadata_structure(mBuffer, /*size*/NULL) !=
                OK, "%s: Failed to validate metadata structure after update %p",
                __FUNCTION__, mBuffer);
    }

    return res;
}
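Putting the class to use from native code — a minimal sketch; ANDROID_CONTROL_AF_MODE is TYPE_BYTE, so the uint8_t overload of update() applies:
#include <camera/CameraMetadata.h>

void setAfModeExample() {
    android::CameraMetadata meta(/*entryCapacity*/ 10, /*dataCapacity*/ 256);

    // update() routes through checkType() + updateImpl() as shown above.
    uint8_t afMode = ANDROID_CONTROL_AF_MODE_CONTINUOUS_PICTURE;
    meta.update(ANDROID_CONTROL_AF_MODE, &afMode, 1);

    // find() wraps find_camera_metadata_entry() for reading back.
    camera_metadata_entry_t entry = meta.find(ANDROID_CONTROL_AF_MODE);
    if (entry.count == 1 && entry.data.u8[0] == afMode) {
        // the entry round-tripped successfully
    }
}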
[Figure: overall flow of the set/update path from Java down to camera_metadata_t.]
What is clear in the end is that CameraMetadata parameters are set and read from the Java layer, but the actual implementation lives in native code; once the control parameters are packed into a CaptureRequest and passed down, it is still the native CameraMetadata that gets operated on.
3. Example: setting the AF mode
Taking the API2 Java code that sets the AF mode as an example, here is the parameter-setting path end to end:
// Java-side code
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
session.setRepeatingRequest(mPreviewBuilder.build(), mSessionCaptureCallback, mHandler);
CONTROL_AF_MODE is defined in CaptureRequest.java as a Key:
/**
 * @see #CONTROL_AF_MODE_OFF
* @see #CONTROL_AF_MODE_AUTO
* @see #CONTROL_AF_MODE_MACRO
* @see #CONTROL_AF_MODE_CONTINUOUS_VIDEO
* @see #CONTROL_AF_MODE_CONTINUOUS_PICTURE
* @see #CONTROL_AF_MODE_EDOF
*/
public static final Key<Integer> CONTROL_AF_MODE =
new Key<Integer>("android.control.afMode", int.class);
public Key(String name, Class<T> type) {
mKey = new CameraMetadataNative.Key<T>(name, type);
}
The Key constructor in CameraMetadataNative.java:
public Key(String name, Class<T> type) {
if (name == null) {
throw new NullPointerException("Key needs a valid name");
} else if (type == null) {
throw new NullPointerException("Type needs to be non-null");
}
mName = name;
mType = type;
mTypeReference = TypeReference.createSpecializedTypeReference(type);
mHash = mName.hashCode() ^ mTypeReference.hashCode();
}
CONTROL_AF_MODE_CONTINUOUS_PICTURE is defined in CameraMetadata.java:
public static final int CONTROL_AF_MODE_CONTINUOUS_PICTURE = 4;
Tracing the set entry points one by one:
a. mPreviewBuilder is CaptureRequest.java's Builder class, which constructs a CaptureRequest:
public Builder(CameraMetadataNative template) {
mRequest = new CaptureRequest(template);
}
private CaptureRequest() {
mSettings = new CameraMetadataNative();
mSurfaceSet = new HashSet<Surface>();
}
mSettings is a CameraMetadataNative object, used to talk to the native layer; it is constructed as follows:
public CameraMetadataNative() {
    super();
    mMetadataPtr = nativeAllocate();
    if (mMetadataPtr == 0) {
        throw new OutOfMemoryError("Failed to allocate native CameraMetadata");
    }
}
b. CaptureRequest.Builder.set():
public <T> void set(Key<T> key, T value) {
mRequest.mSettings.set(key, value);
}
public <T> void set(CaptureRequest.Key<T> key, T value) {
set(key.getNativeKey(), value);
}
Since CaptureRequest extends CameraMetadata, getNativeKey in CaptureRequest.java is:
public CameraMetadataNative.Key<T> getNativeKey() {
return mKey;
}
mKey is exactly the CameraMetadataNative.Key constructed earlier:
public <T> void set(Key<T> key, T value) {
SetCommand s = sSetCommandMap.get(key);
if (s != null) {
s.setValue(this, value);
return;
}
setBase(key, value);
}
private <T> void setBase(Key<T> key, T value) {
    int tag = key.getTag();

    if (value == null) {
        // Erase the entry
        writeValues(tag, /*src*/null);
        return;
    } // else update the entry to a new value

    Marshaler<T> marshaler = getMarshalerForKey(key);
    int size = marshaler.calculateMarshalSize(value);

    // TODO: Optimization. Cache the byte[] and reuse if the size is big enough.
    byte[] values = new byte[size];

    ByteBuffer buffer = ByteBuffer.wrap(values).order(ByteOrder.nativeOrder());
    marshaler.marshal(value, buffer);

    writeValues(tag, values);
}
First, look at key.getTag(): it hands the key to the native layer, which converts it into the real tag value used in the Java layer:
public final int getTag() {
if (!mHasTag) {
mTag = CameraMetadataNative.getTag(mName);
mHasTag = true;
}
return mTag;
}
public static int getTag(String key) {
return nativeGetTagFromKey(key);
}
So the Java-layer String is handed to native, which turns it into the Java-layer tag value.
Now look at writeValues: it too just calls a native interface, which captures nicely what CameraMetadataNative means:
public void writeValues(int tag, byte[] src) {
nativeWriteValues(tag, src);
}
This ties back to the native-layer code at the start of the article.
-end-
//Pro:NOIP2009 T1 P1071 潜伏者 #include<iostream> #include<cstdio> #include<cstring> ...