The frames sub-structure of the x264 encoder context x264_t contains four temporary frame queues: current, next, unused, and reference. They hold, respectively, the frame currently being encoded, the frames waiting to be encoded, recycled raw-frame buffers, and the reference frames. In addition, the encoder allocates fenc and fdec buffers for the frame being encoded and the reconstructed frame. In H.264, "frame" and "slice" both denote a coded picture; strictly speaking a slice is a subdivision of a frame, but since x264 encodes one slice per frame by default, the two terms are used here with essentially the same meaning unless noted otherwise.
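
As a rough map, the relevant pieces look something like the following. This is a heavily simplified sketch, not the literal definition from x264's common/common.h; field names and layout vary between versions, and in recent versions the next queue actually lives inside the lookahead module rather than in frames.

/* Simplified sketch of the frame queues inside x264_t; not the literal x264 definition. */
struct x264_t
{
    /* ... */
    struct
    {
        x264_frame_t **current;   /* frame type decided, ready to encode */
        x264_frame_t **unused[2]; /* recycled buffers: [0] for fenc, [1] for fdec */
        x264_frame_t **reference; /* reconstructed frames kept for prediction */
        /* ... */
    } frames;
    x264_frame_t *fenc; /* frame currently being encoded */
    x264_frame_t *fdec; /* frame currently being reconstructed */
    /* ... */
};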

The encoder processes video frames in the following order:

First, a picture is read from the YUV video file into a temporary x264_picture_t variable pic_in; at the same time storage is obtained from the unused pool and the fenc pointer is set to point at it.

Next, the picture data in pic_in is copied into the buffer that fenc points to. After the copy, the picture dimensions are checked: if the width or height is not a multiple of 16, the borders are padded with extra pixels. The prepared fenc data is then placed in the next queue. After that, if B frames are in use, the P frame that follows the B frames is taken from the next queue and moved to the current queue; in other words, the I and P frames are encoded first and the B frames between them afterwards. Otherwise, a frame is simply taken from the next queue and moved to current. At this point the current queue holds preprocessed frame data that is ready to be encoded.

Finally, since the fenc buffer is the direct object of encoding, the content of the current queue is copied back into fenc and encoding proper begins. The function x264_encoder_encode() implements this whole process.
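
Before diving into the internals, it helps to see how an application drives this process from the outside. The following is a minimal sketch of the public-API call sequence, assuming 8-bit 4:2:0 input; read_frame() is a hypothetical helper that fills the picture planes from a raw YUV source and returns 0 at end of stream:

#include <stdint.h>
#include "x264.h"

/* read_frame() is a hypothetical helper that fills pic->img.plane[]
 * with one raw frame; returns 0 at end of stream, nonzero otherwise. */
extern int read_frame( x264_picture_t *pic, int frame_num );

int encode_all( int width, int height )
{
    x264_param_t   param;
    x264_picture_t pic, pic_out;
    x264_t        *h;
    x264_nal_t    *nal;
    int i_nal, i_frame = 0;

    x264_param_default_preset( &param, "medium", NULL );
    param.i_width  = width;
    param.i_height = height;
    x264_param_apply_profile( &param, "high" );

    h = x264_encoder_open( &param );
    x264_picture_alloc( &pic, X264_CSP_I420, width, height );

    /* Each call hands one pic_in to x264_encoder_encode(); output frames
     * come back reordered and delayed by the lookahead/B-frame buffers. */
    while( read_frame( &pic, i_frame ) )
    {
        pic.i_pts = i_frame++;
        if( x264_encoder_encode( h, &nal, &i_nal, &pic, &pic_out ) < 0 )
            return -1;
        /* i_nal > 0 once the internal buffers have filled up */
    }
    /* Flush: pic_in == NULL drains the frames still queued inside */
    while( x264_encoder_delayed_frames( h ) )
        if( x264_encoder_encode( h, &nal, &i_nal, NULL, &pic_out ) < 0 )
            return -1;

    x264_picture_clean( &pic );
    x264_encoder_close( h );
    return 0;
}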

x264_encoder_encode(): encodes one YUV frame into H.264.

It calls the following functions:

x264_frame_pop_unused(): obtains an x264_frame_t struct fenc. If the frames.unused[] queue is not empty, it calls x264_frame_pop() to take a ready-made one from it; otherwise it calls x264_frame_new() to allocate a new one.

x264_frame_copy_picture(): copies the input picture data into fenc.

x264_lookahead_put_frame(): puts fenc into the lookahead.next.list[] queue, where it waits for its frame type to be determined.

x264_lookahead_get_frames(): determines frame types through lookahead analysis and moves frames into the frames.current[] queue.

x264_frame_shift(): takes one frame out of the frames.current[] queue for encoding.

x264_reference_update(): updates the reference frame queue.

x264_reference_reset(): called for IDR frames to clear the reference frame list.

x264_reference_hierarchy_reset(): called for non-IDR I, P, and reference B frames.

x264_reference_build_list(): builds the reference lists list0 and list1.

x264_ratecontrol_start(): starts rate control for the frame.

x264_slice_init(): initializes the slice header.

x264_slices_write(): encodes the frame data.

x264_encoder_frame_end(): performs cleanup and bookkeeping after the frame is encoded.

The function's code:

//Encode one frame of data
int x264_encoder_encode( x264_t *h,
                         x264_nal_t **pp_nal, int *pi_nal,
                         x264_picture_t *pic_in,
                         x264_picture_t *pic_out )
{
    x264_t *thread_current, *thread_prev, *thread_oldest;
    int i_nal_type, i_nal_ref_idc, i_global_qp;
    int overhead = NALU_OVERHEAD;

#if HAVE_OPENCL
    if( h->opencl.b_fatal_error )
        return -1;
#endif

    if( h->i_thread_frames > 1 )
    {
        thread_prev    = h->thread[ h->i_thread_phase ];
        h->i_thread_phase = (h->i_thread_phase + 1) % h->i_thread_frames;
        thread_current = h->thread[ h->i_thread_phase ];
        thread_oldest  = h->thread[ (h->i_thread_phase + 1) % h->i_thread_frames ];
        x264_thread_sync_context( thread_current, thread_prev );
        x264_thread_sync_ratecontrol( thread_current, thread_prev, thread_oldest );
        h = thread_current;
    }
    else
    {
        thread_current =
        thread_oldest  = h;
    }
    h->i_cpb_delay_pir_offset = h->i_cpb_delay_pir_offset_next;

    *pi_nal = 0;
    *pp_nal = NULL;

    if( pic_in != NULL )
    {
        //Get a frame buffer fenc to hold the frame to be encoded
        x264_frame_t *fenc = x264_frame_pop_unused( h, 0 );
        if( !fenc )
            return -1;

        //Pass the external pixel data into the internal system:
        //from pic_in (external struct x264_picture_t) to fenc (internal struct x264_frame_t)
        if( x264_frame_copy_picture( h, fenc, pic_in ) < 0 )
            return -1;

        //Make sure width and height are multiples of 16 (the macroblock size)
        if( h->param.i_width != 16 * h->mb.i_mb_width ||
            h->param.i_height != 16 * h->mb.i_mb_height )
            x264_frame_expand_border_mod16( h, fenc ); //pad the borders up to a multiple of 16

        fenc->i_frame = h->frames.i_input++;

        if( fenc->i_frame == 0 )
            h->frames.i_first_pts = fenc->i_pts;
        if( h->frames.i_bframe_delay && fenc->i_frame == h->frames.i_bframe_delay )
            h->frames.i_bframe_delay_time = fenc->i_pts - h->frames.i_first_pts;

        if( h->param.b_vfr_input && fenc->i_pts <= h->frames.i_largest_pts )
            x264_log( h, X264_LOG_WARNING, "non-strictly-monotonic PTS\n" );

        h->frames.i_second_largest_pts = h->frames.i_largest_pts;
        h->frames.i_largest_pts = fenc->i_pts;

        if( (fenc->i_pic_struct < PIC_STRUCT_AUTO) || (fenc->i_pic_struct > PIC_STRUCT_TRIPLE) )
            fenc->i_pic_struct = PIC_STRUCT_AUTO;

        if( fenc->i_pic_struct == PIC_STRUCT_AUTO )
        {
#if HAVE_INTERLACED
            int b_interlaced = fenc->param ? fenc->param->b_interlaced : h->param.b_interlaced;
#else
            int b_interlaced = 0;
#endif
            if( b_interlaced )
            {
                int b_tff = fenc->param ? fenc->param->b_tff : h->param.b_tff;
                fenc->i_pic_struct = b_tff ? PIC_STRUCT_TOP_BOTTOM : PIC_STRUCT_BOTTOM_TOP;
            }
            else
                fenc->i_pic_struct = PIC_STRUCT_PROGRESSIVE;
        }

        if( h->param.rc.b_mb_tree && h->param.rc.b_stat_read )
        {
            if( x264_macroblock_tree_read( h, fenc, pic_in->prop.quant_offsets ) )
                return -1;
        }
        else
            x264_stack_align( x264_adaptive_quant_frame, h, fenc, pic_in->prop.quant_offsets );

        if( pic_in->prop.quant_offsets_free )
            pic_in->prop.quant_offsets_free( pic_in->prop.quant_offsets );

        //Build the half-resolution (lowres) frame by linear interpolation.
        //Note: this is not the 6-tap half-pel interpolation used for motion compensation
        if( h->frames.b_have_lowres )
            x264_frame_init_lowres( h, fenc );

        //Put fenc into the lookahead.next.list[] queue, where its frame type will be decided
        x264_lookahead_put_frame( h, fenc );

        if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
        {
            /* Nothing yet to encode, waiting for filling of buffers */
            pic_out->i_type = X264_TYPE_AUTO;
            return 0;
        }
    }
    else
    {
        //When the input is NULL (flushing), the lookahead thread is no longer needed
        x264_pthread_mutex_lock( &h->lookahead->ifbuf.mutex );
        h->lookahead->b_exit_thread = 1;
        x264_pthread_cond_broadcast( &h->lookahead->ifbuf.cv_fill );
        x264_pthread_mutex_unlock( &h->lookahead->ifbuf.mutex );
    }

    h->i_frame++;

    //Determine frame types through lookahead analysis
    if( !h->frames.current[0] )
        x264_lookahead_get_frames( h );

    if( !h->frames.current[0] && x264_lookahead_is_empty( h ) )
        return x264_encoder_frame_end( thread_oldest, thread_current, pp_nal, pi_nal, pic_out );

    //Take one frame (element [0]) out of the frames.current[] queue for encoding
    h->fenc = x264_frame_shift( h->frames.current );

    /* If applicable, wait for previous frame reconstruction to finish */
    if( h->param.b_sliced_threads )
        if( x264_threadpool_wait_all( h ) < 0 )
            return -1;

    if( h->i_frame == h->i_thread_frames - 1 )
        h->i_reordered_pts_delay = h->fenc->i_reordered_pts;
    if( h->reconfig )
    {
        x264_encoder_reconfig_apply( h, &h->reconfig_h->param );
        h->reconfig = 0;
    }
    if( h->fenc->param )
    {
        x264_encoder_reconfig_apply( h, h->fenc->param );
        if( h->fenc->param->param_free )
        {
            h->fenc->param->param_free( h->fenc->param );
            h->fenc->param = NULL;
        }
    }

    //Update the reference queue frames.reference[] (no update for non-reference B frames):
    //the reconstructed frame fdec is moved into the reference list and a new fdec is allocated
    if( x264_reference_update( h ) )
        return -1;
    h->fdec->i_lines_completed = -1;

    if( !IS_X264_TYPE_I( h->fenc->i_type ) )
    {
        int valid_refs_left = 0;
        for( int i = 0; h->frames.reference[i]; i++ )
            if( !h->frames.reference[i]->b_corrupt )
                valid_refs_left++;

        //No valid references left: force an IDR
        if( !valid_refs_left )
        {
            h->fenc->b_keyframe = 1;
            h->fenc->i_type = X264_TYPE_IDR;
        }
    }

    if( h->fenc->b_keyframe )
    {
        h->frames.i_last_keyframe = h->fenc->i_frame;
        if( h->fenc->i_type == X264_TYPE_IDR )
        {
            h->i_frame_num = 0;
            h->frames.i_last_idr = h->fenc->i_frame;
        }
    }
    h->sh.i_mmco_command_count =
    h->sh.i_mmco_remove_from_end = 0;
    h->b_ref_reorder[0] =
    h->b_ref_reorder[1] = 0;
    h->fdec->i_poc =
    h->fenc->i_poc = 2 * ( h->fenc->i_frame - X264_MAX( h->frames.i_last_idr, 0 ) );

    if( h->fenc->i_type == X264_TYPE_IDR )
    {
        //Difference between I and IDR:
        //an IDR frame clears the reference list, while a plain I frame does not,
        //so pictures after a plain I frame may still use pictures before it as motion references
        i_nal_type = NAL_SLICE_IDR;
        i_nal_ref_idc = NAL_PRIORITY_HIGHEST;
        h->sh.i_type = SLICE_TYPE_I;
        //IDR frame: clear all reference frames
        x264_reference_reset( h );
        h->frames.i_poc_last_open_gop = -1;
    }
    else if( h->fenc->i_type == X264_TYPE_I )
    {
        //Non-IDR I frame: unlike IDR, the reference list is kept
        i_nal_type = NAL_SLICE;
        i_nal_ref_idc = NAL_PRIORITY_HIGH; /* Not completely true but for now it is (as all I/P are kept as ref)*/
        h->sh.i_type = SLICE_TYPE_I;
        x264_reference_hierarchy_reset( h );
        if( h->param.b_open_gop )
            h->frames.i_poc_last_open_gop = h->fenc->b_keyframe ? h->fenc->i_poc : -1;
    }
    else if( h->fenc->i_type == X264_TYPE_P )
    {
        i_nal_type = NAL_SLICE;
        i_nal_ref_idc = NAL_PRIORITY_HIGH; /* Not completely true but for now it is (as all I/P are kept as ref)*/
        h->sh.i_type = SLICE_TYPE_P;
        x264_reference_hierarchy_reset( h );
        h->frames.i_poc_last_open_gop = -1;
    }
    else if( h->fenc->i_type == X264_TYPE_BREF )
    {
        //A B frame that can itself serve as a reference (a B-pyramid feature)
        i_nal_type = NAL_SLICE;
        i_nal_ref_idc = h->param.i_bframe_pyramid == X264_B_PYRAMID_STRICT ? NAL_PRIORITY_LOW : NAL_PRIORITY_HIGH;
        h->sh.i_type = SLICE_TYPE_B;
        x264_reference_hierarchy_reset( h );
    }
    else
    {
        //The ordinary case: a non-reference B frame
        i_nal_type = NAL_SLICE;
        i_nal_ref_idc = NAL_PRIORITY_DISPOSABLE;
        h->sh.i_type = SLICE_TYPE_B;
    }

    //Copy properties from the frame being encoded to the reconstructed frame
    h->fdec->i_type = h->fenc->i_type;
    h->fdec->i_frame = h->fenc->i_frame;
    h->fenc->b_kept_as_ref =
    h->fdec->b_kept_as_ref = i_nal_ref_idc != NAL_PRIORITY_DISPOSABLE && h->param.i_keyint_max > 1;

    h->fdec->mb_info = h->fenc->mb_info;
    h->fdec->mb_info_free = h->fenc->mb_info_free;
    h->fenc->mb_info = NULL;
    h->fenc->mb_info_free = NULL;

    h->fdec->i_pts = h->fenc->i_pts;
    if( h->frames.i_bframe_delay )
    {
        int64_t *prev_reordered_pts = thread_current->frames.i_prev_reordered_pts;
        h->fdec->i_dts = h->i_frame > h->frames.i_bframe_delay
                       ? prev_reordered_pts[ (h->i_frame - h->frames.i_bframe_delay) % h->frames.i_bframe_delay ]
                       : h->fenc->i_reordered_pts - h->frames.i_bframe_delay_time;
        prev_reordered_pts[ h->i_frame % h->frames.i_bframe_delay ] = h->fenc->i_reordered_pts;
    }
    else
        h->fdec->i_dts = h->fenc->i_reordered_pts;
    if( h->fenc->i_type == X264_TYPE_IDR )
        h->i_last_idr_pts = h->fdec->i_pts;

    //Build reference lists list0 and list1
    x264_reference_build_list( h, h->fdec->i_poc );

    //Initialize the output bitstream(s)
    if( h->param.b_sliced_threads )
    {
        for( int i = 0; i < h->param.i_threads; i++ )
        {
            bs_init( &h->thread[i]->out.bs, h->thread[i]->out.p_bitstream, h->thread[i]->out.i_bitstream );
            h->thread[i]->out.i_nal = 0;
        }
    }
    else
    {
        bs_init( &h->out.bs, h->out.p_bitstream, h->out.i_bitstream );
        h->out.i_nal = 0;
    }

    if( h->param.b_aud )
    {
        int pic_type;

        if( h->sh.i_type == SLICE_TYPE_I )
            pic_type = 0;
        else if( h->sh.i_type == SLICE_TYPE_P )
            pic_type = 1;
        else if( h->sh.i_type == SLICE_TYPE_B )
            pic_type = 2;
        else
            pic_type = 7;

        x264_nal_start( h, NAL_AUD, NAL_PRIORITY_DISPOSABLE );
        bs_write( &h->out.bs, 3, pic_type );
        bs_rbsp_trailing( &h->out.bs );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + NALU_OVERHEAD;
    }

    h->i_nal_type = i_nal_type;
    h->i_nal_ref_idc = i_nal_ref_idc;

    if( h->param.b_intra_refresh )
    {
        if( IS_X264_TYPE_I( h->fenc->i_type ) )
        {
            h->fdec->i_frames_since_pir = 0;
            h->b_queued_intra_refresh = 0;
            /* PIR is currently only supported with ref == 1, so any intra frame effectively refreshes
             * the whole frame and counts as an intra refresh. */
            h->fdec->f_pir_position = h->mb.i_mb_width;
        }
        else if( h->fenc->i_type == X264_TYPE_P )
        {
            int pocdiff = (h->fdec->i_poc - h->fref[0][0]->i_poc)/2;
            float increment = X264_MAX( ((float)h->mb.i_mb_width-1) / h->param.i_keyint_max, 1 );
            h->fdec->f_pir_position = h->fref[0][0]->f_pir_position;
            h->fdec->i_frames_since_pir = h->fref[0][0]->i_frames_since_pir + pocdiff;
            if( h->fdec->i_frames_since_pir >= h->param.i_keyint_max ||
                (h->b_queued_intra_refresh && h->fdec->f_pir_position + 0.5 >= h->mb.i_mb_width) )
            {
                h->fdec->f_pir_position = 0;
                h->fdec->i_frames_since_pir = 0;
                h->b_queued_intra_refresh = 0;
                h->fenc->b_keyframe = 1;
            }
            h->fdec->i_pir_start_col = h->fdec->f_pir_position+0.5;
            h->fdec->f_pir_position += increment * pocdiff;
            h->fdec->i_pir_end_col = h->fdec->f_pir_position+0.5;
            /* If our intra refresh has reached the right side of the frame, we're done. */
            if( h->fdec->i_pir_end_col >= h->mb.i_mb_width - 1 )
            {
                h->fdec->f_pir_position = h->mb.i_mb_width;
                h->fdec->i_pir_end_col = h->mb.i_mb_width - 1;
            }
        }
    }

    if( h->fenc->b_keyframe )
    {
        //Repeat SPS and PPS before every keyframe
        if( h->param.b_repeat_headers )
        {
            x264_nal_start( h, NAL_SPS, NAL_PRIORITY_HIGHEST );
            x264_sps_write( &h->out.bs, h->sps );
            if( x264_nal_end( h ) )
                return -1;
            if( h->param.i_avcintra_class )
                h->out.nal[h->out.i_nal-1].i_padding = 256 - bs_pos( &h->out.bs ) / 8 - 2*NALU_OVERHEAD;
            overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + NALU_OVERHEAD;

            x264_nal_start( h, NAL_PPS, NAL_PRIORITY_HIGHEST );
            x264_pps_write( &h->out.bs, h->sps, h->pps );
            if( x264_nal_end( h ) )
                return -1;
            if( h->param.i_avcintra_class )
                h->out.nal[h->out.i_nal-1].i_padding = 256 - h->out.nal[h->out.i_nal-1].i_payload - NALU_OVERHEAD;
            overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + NALU_OVERHEAD;
        }
        if( h->i_thread_frames == 1 && h->sps->vui.b_nal_hrd_parameters_present )
        {
            x264_hrd_fullness( h );
            x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
            x264_sei_buffering_period_write( h, &h->out.bs );
            if( x264_nal_end( h ) )
                return -1;
            overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
        }
    }

    //Write any user-supplied SEI payloads
    for( int i = 0; i < h->fenc->extra_sei.num_payloads; i++ )
    {
        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        x264_sei_write( &h->out.bs, h->fenc->extra_sei.payloads[i].payload, h->fenc->extra_sei.payloads[i].payload_size,
                        h->fenc->extra_sei.payloads[i].payload_type );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
        if( h->fenc->extra_sei.sei_free )
        {
            h->fenc->extra_sei.sei_free( h->fenc->extra_sei.payloads[i].payload );
            h->fenc->extra_sei.payloads[i].payload = NULL;
        }
    }

    if( h->fenc->extra_sei.sei_free )
    {
        h->fenc->extra_sei.sei_free( h->fenc->extra_sei.payloads );
        h->fenc->extra_sei.payloads = NULL;
        h->fenc->extra_sei.sei_free = NULL;
    }

    //Special SEI messages (needed by decoders such as Avid)
    if( h->fenc->b_keyframe )
    {
        if( h->param.b_repeat_headers && h->fenc->i_frame == 0 && !h->param.i_avcintra_class )
        {
            x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
            if( x264_sei_version_write( h, &h->out.bs ) )
                return -1;
            if( x264_nal_end( h ) )
                return -1;
            overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
        }

        if( h->fenc->i_type != X264_TYPE_IDR )
        {
            int time_to_recovery = h->param.b_open_gop ? 0 : X264_MIN( h->mb.i_mb_width - 1, h->param.i_keyint_max ) + h->param.i_bframe - 1;
            x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
            x264_sei_recovery_point_write( h, &h->out.bs, time_to_recovery );
            if( x264_nal_end( h ) )
                return -1;
            overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
        }
    }

    if( h->param.i_frame_packing >= 0 && (h->fenc->b_keyframe || h->param.i_frame_packing == 5) )
    {
        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        x264_sei_frame_packing_write( h, &h->out.bs );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
    }

    if( h->sps->vui.b_pic_struct_present || h->sps->vui.b_nal_hrd_parameters_present )
    {
        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        x264_sei_pic_timing_write( h, &h->out.bs );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
    }

    if( !IS_X264_TYPE_B( h->fenc->i_type ) && h->b_sh_backup )
    {
        h->b_sh_backup = 0;
        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        x264_sei_dec_ref_pic_marking_write( h, &h->out.bs );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
    }

    if( h->fenc->b_keyframe && h->param.b_intra_refresh )
        h->i_cpb_delay_pir_offset_next = h->fenc->i_cpb_delay;

    if( h->param.i_avcintra_class )
    {
        x264_nal_start( h, NAL_FILLER, NAL_PRIORITY_DISPOSABLE );
        x264_filler_write( h, &h->out.bs, 0 );
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + NALU_OVERHEAD;

        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        if( x264_sei_avcintra_umid_write( h, &h->out.bs ) < 0 )
            return -1;
        if( x264_nal_end( h ) )
            return -1;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;

        int unpadded_len;
        int total_len;
        if( h->param.i_height == 1080 )
        {
            unpadded_len = 5780;
            total_len = 17*512;
        }
        else
        {
            unpadded_len = 2900;
            total_len = 9*512;
        }
        x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
        if( x264_sei_avcintra_vanc_write( h, &h->out.bs, unpadded_len ) < 0 )
            return -1;
        if( x264_nal_end( h ) )
            return -1;

        h->out.nal[h->out.i_nal-1].i_padding = total_len - h->out.nal[h->out.i_nal-1].i_payload - SEI_OVERHEAD;
        overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + SEI_OVERHEAD;
    }

    //Initialize the rate control unit
    x264_ratecontrol_start( h, h->fenc->i_qpplus1, overhead*8 );
    i_global_qp = x264_ratecontrol_qp( h );

    pic_out->i_qpplus1 =
    h->fdec->i_qpplus1 = i_global_qp + 1;

    if( h->param.rc.b_stat_read && h->sh.i_type != SLICE_TYPE_I )
    {
        x264_reference_build_list_optimal( h );
        x264_reference_check_reorder( h );
    }

    if( h->i_ref[0] )
        h->fdec->i_poc_l0ref0 = h->fref[0][0]->i_poc;

    //Initialize the slice header
    x264_slice_init( h, i_nal_type, i_global_qp );

    //Weighted prediction setup
    if( h->sh.i_type == SLICE_TYPE_B )
        x264_macroblock_bipred_init( h );

    x264_weighted_pred_init( h );

    if( i_nal_ref_idc != NAL_PRIORITY_DISPOSABLE )
        h->i_frame_num++;

    h->i_threadslice_start = 0;
    h->i_threadslice_end = h->mb.i_mb_height;
    if( h->i_thread_frames > 1 )
    {
        x264_threadpool_run( h->threadpool, (void*)x264_slices_write, h );
        h->b_thread_active = 1;
    }
    else if( h->param.b_sliced_threads )
    {
        if( x264_threaded_slices_write( h ) )
            return -1;
    }
    else
    {
        //The real encoding: encode one whole picture (note the plural "slices")
        if( (intptr_t)x264_slices_write( h ) )
            return -1;
    }

    //Finish up: record statistics, output the NAL units, and output the reconstructed frame
    return x264_encoder_frame_end( thread_oldest, thread_current, pp_nal, pi_nal, pic_out );
}
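
One detail worth pausing on is the frame-threading prologue at the top of the function: h->thread[] acts as a ring of i_thread_frames encoder contexts, and i_thread_phase rotates through it, so thread_prev is the context just handed off, thread_current is the one this call will fill, and thread_oldest is the one whose output is due next. A standalone sketch of just the index arithmetic (hypothetical values, purely illustrative):

#include <stdio.h>

int main( void )
{
    int i_thread_frames = 3; /* e.g. frame threading with 3 contexts */
    int i_thread_phase  = 0;
    for( int call = 0; call < 5; call++ )
    {
        int prev    = i_thread_phase;                        /* context just handed off */
        i_thread_phase = (i_thread_phase + 1) % i_thread_frames;
        int current = i_thread_phase;                        /* context used by this call */
        int oldest  = (i_thread_phase + 1) % i_thread_frames; /* context whose output is due */
        printf( "call %d: prev=%d current=%d oldest=%d\n", call, prev, current, oldest );
    }
    return 0;
}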

In x264_frame_pop_unused() and x264_frame_copy_picture(), we obtain a video frame and store it into fenc, the picture about to be encoded.

x264_frame_pop_unused():

//Get a frame: either an encode frame (fenc) or a reconstruction frame (fdec)
x264_frame_t *x264_frame_pop_unused( x264_t *h, int b_fdec )
{
    x264_frame_t *frame;
    if( h->frames.unused[b_fdec][0] ) //the unused queue is not empty
        frame = x264_frame_pop( h->frames.unused[b_fdec] ); //reuse one from the unused queue
    else
        frame = x264_frame_new( h, b_fdec ); //allocate a new frame
    if( !frame )
        return NULL;
    frame->b_last_minigop_bframe = 0;
    frame->i_reference_count = 1;
    frame->b_intra_calculated = 0;
    frame->b_scenecut = 1;
    frame->b_keyframe = 0;
    frame->b_corrupt = 0;
    frame->i_slice_count = h->param.b_sliced_threads ? h->param.i_threads : 1;

    memset( frame->weight, 0, sizeof(frame->weight) );
    memset( frame->f_weighted_cost_delta, 0, sizeof(frame->f_weighted_cost_delta) );
    return frame;
}
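
The pool is symmetric: frames are not freed after use but returned to frames.unused[] once their reference count drops to zero, so x264_frame_new() only runs when the queue is cold. The counterpart looks roughly like this (paraphrased from x264's common/frame.c; exact details may differ between versions):

void x264_frame_push_unused( x264_t *h, x264_frame_t *frame )
{
    assert( frame->i_reference_count > 0 );
    frame->i_reference_count--;
    if( frame->i_reference_count == 0 )
        x264_frame_push( h->frames.unused[frame->b_fdec], frame ); //back into the pool
}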

x264_frame_copy_picture():

//Copy frame data
//from src (external struct x264_picture_t) to dst (internal struct x264_frame_t)
int x264_frame_copy_picture( x264_t *h, x264_frame_t *dst, x264_picture_t *src )
{
    int i_csp = src->img.i_csp & X264_CSP_MASK;
    //Note: after conversion there are only 3 internal colorspaces:
    //X264_CSP_NV12 (for YUV 4:2:0), X264_CSP_NV16 (for YUV 4:2:2), X264_CSP_I444 (for YUV 4:4:4)
    if( dst->i_csp != x264_frame_internal_csp( i_csp ) )
    {
        x264_log( h, X264_LOG_ERROR, "Invalid input colorspace\n" );
        return -1;
    }

#if HIGH_BIT_DEPTH
    if( !(src->img.i_csp & X264_CSP_HIGH_DEPTH) )
    {
        x264_log( h, X264_LOG_ERROR, "This build of x264 requires high depth input. Rebuild to support 8-bit input.\n" );
        return -1;
    }
#else
    if( src->img.i_csp & X264_CSP_HIGH_DEPTH )
    {
        x264_log( h, X264_LOG_ERROR, "This build of x264 requires 8-bit input. Rebuild to support high depth input.\n" );
        return -1;
    }
#endif

    if( BIT_DEPTH != 10 && i_csp == X264_CSP_V210 )
    {
        x264_log( h, X264_LOG_ERROR, "v210 input is only compatible with bit-depth of 10 bits\n" );
        return -1;
    }

    //Straight field-by-field copy of the per-frame metadata
    dst->i_type = src->i_type;
    dst->i_qpplus1 = src->i_qpplus1;
    dst->i_pts = dst->i_reordered_pts = src->i_pts;
    dst->param = src->param;
    dst->i_pic_struct = src->i_pic_struct;
    dst->extra_sei = src->extra_sei;
    dst->opaque = src->opaque;
    dst->mb_info = h->param.analyse.b_mb_info ? src->prop.mb_info : NULL;
    dst->mb_info_free = h->param.analyse.b_mb_info ? src->prop.mb_info_free : NULL;

    uint8_t *pix[3];
    int stride[3];
    if( i_csp == X264_CSP_V210 )
    {
        stride[0] = src->img.i_stride[0];
        pix[0] = src->img.plane[0];

        h->mc.plane_copy_deinterleave_v210( dst->plane[0], dst->i_stride[0],
                                            dst->plane[1], dst->i_stride[1],
                                            (uint32_t *)pix[0], stride[0]/sizeof(uint32_t), h->param.i_width, h->param.i_height );
    }
    else if( i_csp >= X264_CSP_BGR )
    {
        stride[0] = src->img.i_stride[0];
        pix[0] = src->img.plane[0];
        if( src->img.i_csp & X264_CSP_VFLIP )
        {
            pix[0] += (h->param.i_height-1) * stride[0];
            stride[0] = -stride[0];
        }
        int b = i_csp==X264_CSP_RGB;
        h->mc.plane_copy_deinterleave_rgb( dst->plane[1+b], dst->i_stride[1+b],
                                           dst->plane[0], dst->i_stride[0],
                                           dst->plane[2-b], dst->i_stride[2-b],
                                           (pixel*)pix[0], stride[0]/sizeof(pixel), i_csp==X264_CSP_BGRA ? 4 : 3, h->param.i_width, h->param.i_height );
    }
    else
    {
        int v_shift = CHROMA_V_SHIFT;
        get_plane_ptr( h, src, &pix[0], &stride[0], 0, 0, 0 );
        //Copy the pixels
        h->mc.plane_copy( dst->plane[0], dst->i_stride[0], (pixel*)pix[0],
                          stride[0]/sizeof(pixel), h->param.i_width, h->param.i_height );
        if( i_csp == X264_CSP_NV12 || i_csp == X264_CSP_NV16 )
        {
            get_plane_ptr( h, src, &pix[1], &stride[1], 1, 0, v_shift );
            h->mc.plane_copy( dst->plane[1], dst->i_stride[1], (pixel*)pix[1],
                              stride[1]/sizeof(pixel), h->param.i_width, h->param.i_height>>v_shift );
        }
        else if( i_csp == X264_CSP_I420 || i_csp == X264_CSP_I422 || i_csp == X264_CSP_YV12 || i_csp == X264_CSP_YV16 )
        {
            int uv_swap = i_csp == X264_CSP_YV12 || i_csp == X264_CSP_YV16;
            get_plane_ptr( h, src, &pix[1], &stride[1], uv_swap ? 2 : 1, 1, v_shift );
            get_plane_ptr( h, src, &pix[2], &stride[2], uv_swap ? 1 : 2, 1, v_shift );
            h->mc.plane_copy_interleave( dst->plane[1], dst->i_stride[1],
                                         (pixel*)pix[1], stride[1]/sizeof(pixel),
                                         (pixel*)pix[2], stride[2]/sizeof(pixel),
                                         h->param.i_width>>1, h->param.i_height>>v_shift );
        }
        else //if( i_csp == X264_CSP_I444 || i_csp == X264_CSP_YV24 )
        {
            get_plane_ptr( h, src, &pix[1], &stride[1], i_csp==X264_CSP_I444 ? 1 : 2, 0, 0 );
            get_plane_ptr( h, src, &pix[2], &stride[2], i_csp==X264_CSP_I444 ? 2 : 1, 0, 0 );
            h->mc.plane_copy( dst->plane[1], dst->i_stride[1], (pixel*)pix[1],
                              stride[1]/sizeof(pixel), h->param.i_width, h->param.i_height );
            h->mc.plane_copy( dst->plane[2], dst->i_stride[2], (pixel*)pix[2],
                              stride[2]/sizeof(pixel), h->param.i_width, h->param.i_height );
        }
    }
    return 0;
}
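
The plane_copy_interleave call above is what converts the separate U and V planes of I420/YV12 input into x264's internal NV12 layout, where U and V samples alternate within a single chroma plane. Functionally it amounts to the plain-C reference loop below (a sketch for the 8-bit case; the real h->mc.plane_copy_interleave is a function pointer that usually dispatches to a SIMD implementation):

#include <stdint.h>

/* Reference behaviour of plane_copy_interleave for 8-bit input:
 * dst is the interleaved NV12 chroma plane, srcu/srcv are planar U and V.
 * w is the chroma width in samples (the caller passes i_width>>1 for 4:2:0). */
static void plane_copy_interleave_c( uint8_t *dst,  intptr_t i_dst,
                                     uint8_t *srcu, intptr_t i_srcu,
                                     uint8_t *srcv, intptr_t i_srcv,
                                     int w, int h )
{
    for( int y = 0; y < h; y++, dst += i_dst, srcu += i_srcu, srcv += i_srcv )
        for( int x = 0; x < w; x++ )
        {
            dst[2*x]   = srcu[x]; /* U sample */
            dst[2*x+1] = srcv[x]; /* V sample */
        }
}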

As the code above shows, many fields of x264_frame_t and x264_picture_t are identical; the function simply assigns the values of the x264_picture_t fields to the corresponding x264_frame_t fields, with little additional processing beyond copying the pixel planes via the plane_copy family of functions.
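
For reference, the mirrored fields line up roughly as follows (abridged; x264_picture_t is the public struct from x264.h, x264_frame_t the internal one, and the exact field set varies by version):

/* x264_picture_t (public)           x264_frame_t (internal)
 *   int          i_type;       -->    int          i_type;
 *   int          i_qpplus1;    -->    int          i_qpplus1;
 *   int64_t      i_pts;        -->    int64_t      i_pts, i_reordered_pts;
 *   int          i_pic_struct; -->    int          i_pic_struct;
 *   x264_param_t *param;       -->    x264_param_t *param;
 *   void         *opaque;      -->    void         *opaque;
 *   x264_sei_t   extra_sei;    -->    x264_sei_t   extra_sei;
 *   x264_image_t img;          -->    pixel *plane[3] plus strides (pixels copied, not aliased) */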
