The code in this article is based on FFmpeg 5.0.1.

Contents

FFFormatContext

AVFormatContext

AVIOContext

FFIOContext

URLContext

URLProtocol

AVInputFormat

FFStream

AVStream

AVCodecParameters

AVCodec

AVCodecContext


FFFormatContext

This structure is declared at libavformat/internal.h:73. It did not exist in the earlier 4.x releases. It is a wrapper one level above AVFormatContext that keeps some of the AVFormatContext's internal state; what the remaining members are for will be filled in later as they come up.

typedef struct FFFormatContext {
/**
* The public context.
*/
// The wrapped AVFormatContext object
AVFormatContext pub;

/**
* Number of streams relevant for interleaving.
* Muxing only.
*/
int nb_interleaved_streams;

/**
* The interleavement function in use. Always set for muxers.
*/
int (*interleave_packet)(struct AVFormatContext *s, AVPacket *pkt,
int flush, int has_packet);

/**
* This buffer is only needed when packets were already buffered but
* not decoded, for example to get the codec parameters in MPEG
* streams.
*/
// Buffer used to cache packets, e.g. while the codec parameters are being determined
PacketList packet_buffer;

/* av_seek_frame() support */
// Offset of the first packet
int64_t data_offset; /**< offset of the first packet */

/**
* Raw packets from the demuxer, prior to parsing and decoding.
* This buffer is used for buffering packets until the codec can
* be identified, as parsing cannot be done without knowing the
* codec.
*/
// Holds the demuxer output; this buffer is only used while the codec parameters are being determined
PacketList raw_packet_buffer;
/**
* Packets split by the parser get queued here.
*/
PacketList parse_queue;
/**
* The generic code uses this as a temporary packet
* to parse packets or for muxing, especially flushing.
* For demuxers, it may also be used for other means
* for short periods that are guaranteed not to overlap
* with calls to av_read_frame() (or ff_read_packet())
* or with each other.
* It may be used by demuxers as a replacement for
* stack packets (unless they call one of the aforementioned
* functions with their own AVFormatContext).
* Every user has to ensure that this packet is blank
* after using it.
*/
AVPacket *parse_pkt;

/**
* Used to hold temporary packets for the generic demuxing code.
* When muxing, it may be used by muxers to hold packets (even
* permanent ones).
*/
AVPacket *pkt;

/**
* Sum of the size of packets in raw_packet_buffer, in bytes.
*/
int raw_packet_buffer_size;

/**
* Offset to remap timestamps to be non-negative.
* Expressed in timebase units.
* @see AVStream.mux_ts_offset
*/
int64_t offset;

/**
* Timebase for the timestamp offset.
*/
AVRational offset_timebase;

#if FF_API_COMPUTE_PKT_FIELDS2
int missing_ts_warning;
#endif

int inject_global_side_data;

int avoid_negative_ts_use_pts;

/**
* Timestamp of the end of the shortest stream.
*/
int64_t shortest_end;

/**
* Whether or not avformat_init_output has already been called
*/
int initialized;

/**
* Whether or not avformat_init_output fully initialized streams
*/
int streams_initialized;

/**
* ID3v2 tag useful for MP3 demuxing
*/
AVDictionary *id3v2_meta;

/*
* Prefer the codec framerate for avg_frame_rate computation.
*/
int prefer_codec_framerate;

/**
* Set if chapter ids are strictly monotonic.
*/
int chapter_ids_monotonic;
} FFFormatContext;
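
Because pub is the first member, libavformat can convert between the public AVFormatContext and the internal FFFormatContext with a plain pointer cast (the same trick is used for FFStream/AVStream and FFIOContext/AVIOContext). A minimal sketch of the pattern, using a hypothetical helper name rather than the exact internal one:

#include "libavformat/internal.h"   /* internal header, only visible inside the FFmpeg source tree */

/* Hypothetical illustration of the first-member cast used by libavformat:
 * an AVFormatContext allocated by avformat_alloc_context() is really the
 * 'pub' member of an enclosing FFFormatContext. */
static FFFormatContext *wrapper_of(AVFormatContext *s)
{
    return (FFFormatContext *)s;    /* valid only for contexts allocated by libavformat */
}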

AVFormatContext

This structure is declared at libavformat/avformat.h:1202 and contains all of the format-related context information. It is a very long structure; a few of its more important member types are worth calling out:

AVInputFormat, AVOutputFormat, AVIOContext, AVStream and AVCodec — between them they cover essentially everything related to demuxing, muxing and codecs.

/**
* Format I/O context.
* New fields can be added to the end with minor version bumps.
* Removal, reordering and changes to existing fields require a major
* version bump.
* sizeof(AVFormatContext) must not be used outside libav*, use
* avformat_alloc_context() to create an AVFormatContext.
*
* Fields can be accessed through AVOptions (av_opt*),
* the name string used matches the associated command line parameter name and
* can be found in libavformat/options_table.h.
* The AVOption/command line parameter names differ in some cases from the C
* structure field names for historic reasons or brevity.
*/
typedef struct AVFormatContext {
/**
* A class for logging and @ref avoptions. Set by avformat_alloc_context().
* Exports (de)muxer private options if they exist.
*/
const AVClass *av_class; /**
* The input container format.
*
* Demuxing only, set by avformat_open_input().
*/
// Container format of the input data
const struct AVInputFormat *iformat;

/**
* The output container format.
*
* Muxing only, must be set by the caller before avformat_write_header().
*/
// Container format of the output data
const struct AVOutputFormat *oformat;

/**
* Format private data. This is an AVOptions-enabled struct
* if and only if iformat/oformat.priv_class is not NULL.
*
* - muxing: set by avformat_write_header()
* - demuxing: set by avformat_open_input()
*/
// Demuxing: set in avformat_open_input()
void *priv_data;

/**
* I/O context.
*
* - demuxing: either set by the user before avformat_open_input() (then
* the user must close it manually) or set by avformat_open_input().
* - muxing: set by the user before avformat_write_header(). The caller must
* take care of closing / freeing the IO context.
*
* Do NOT set this field if AVFMT_NOFILE flag is set in
* iformat/oformat.flags. In such a case, the (de)muxer will handle
* I/O in some other way and this field will be NULL.
*/
// I/O context for the data
AVIOContext *pb; /* stream info */
/**
* Flags signalling stream properties. A combination of AVFMTCTX_*.
* Set by libavformat.
*/
// Flags of this context (a combination of AVFMTCTX_*)
int ctx_flags;

/**
* Number of elements in AVFormatContext.streams.
*
* Set by avformat_new_stream(), must not be modified by any other code.
*/
// Number of streams; incremented by one each time avformat_new_stream() is called
unsigned int nb_streams;
/**
* A list of all streams in the file. New streams are created with
* avformat_new_stream().
*
* - demuxing: streams are created by libavformat in avformat_open_input().
* If AVFMTCTX_NOHEADER is set in ctx_flags, then new streams may also
* appear in av_read_frame().
* - muxing: streams are created by the user before avformat_write_header().
*
* Freed by libavformat in avformat_free_context().
*/
// Array of streams, created via avformat_new_stream() or during av_read_frame()
AVStream **streams;

/**
* input or output URL. Unlike the old filename field, this field has no
* length restriction.
*
* - demuxing: set by avformat_open_input(), initialized to an empty
* string if url parameter was NULL in avformat_open_input().
* - muxing: may be set by the caller before calling avformat_write_header()
* (or avformat_init_output() if that is called first) to a string
* which is freeable by av_free(). Set to an empty string if it
* was NULL in avformat_init_output().
*
* Freed by libavformat in avformat_free_context().
*/
char *url; /**
* Position of the first frame of the component, in
* AV_TIME_BASE fractional seconds. NEVER set this value directly:
* It is deduced from the AVStream values.
*
* Demuxing only, set by libavformat.
*/
// Time of the first frame
int64_t start_time;

/**
* Duration of the stream, in AV_TIME_BASE fractional
* seconds. Only set this value if you know none of the individual stream
* durations and also do not set any of them. This is deduced from the
* AVStream values if not set.
*
* Demuxing only, set by libavformat.
*/
// Duration of the stream
int64_t duration;

/**
* Total stream bitrate in bit/s, 0 if not
* available. Never set it directly if the file_size and the
* duration are known as FFmpeg can compute it automatically.
*/
// Bit rate
int64_t bit_rate;

unsigned int packet_size;
int max_delay; /**
* Flags modifying the (de)muxer behaviour. A combination of AVFMT_FLAG_*.
* Set by the user before avformat_open_input() / avformat_write_header().
*/
int flags;
#define AVFMT_FLAG_GENPTS 0x0001 ///< Generate missing pts even if it requires parsing future frames.
#define AVFMT_FLAG_IGNIDX 0x0002 ///< Ignore index.
#define AVFMT_FLAG_NONBLOCK 0x0004 ///< Do not block when reading packets from input.
#define AVFMT_FLAG_IGNDTS 0x0008 ///< Ignore DTS on frames that contain both DTS & PTS
#define AVFMT_FLAG_NOFILLIN 0x0010 ///< Do not infer any values from other values, just return what is stored in the container
#define AVFMT_FLAG_NOPARSE 0x0020 ///< Do not use AVParsers, you also must set AVFMT_FLAG_NOFILLIN as the fillin code works on frames and no parsing -> no frames. Also seeking to frames can not work if parsing to find frame boundaries has been disabled
#define AVFMT_FLAG_NOBUFFER 0x0040 ///< Do not buffer frames when possible
#define AVFMT_FLAG_CUSTOM_IO 0x0080 ///< The caller has supplied a custom AVIOContext, don't avio_close() it.
#define AVFMT_FLAG_DISCARD_CORRUPT 0x0100 ///< Discard frames marked corrupted
#define AVFMT_FLAG_FLUSH_PACKETS 0x0200 ///< Flush the AVIOContext every packet.
/**
* When muxing, try to avoid writing any random/volatile data to the output.
* This includes any random IDs, real-time timestamps/dates, muxer version, etc.
*
* This flag is mainly intended for testing.
*/
#define AVFMT_FLAG_BITEXACT 0x0400
#define AVFMT_FLAG_SORT_DTS 0x10000 ///< try to interleave outputted packets by dts (using this flag can slow demuxing down)
#if FF_API_LAVF_PRIV_OPT
#define AVFMT_FLAG_PRIV_OPT 0x20000 ///< Enable use of private options by delaying codec open (deprecated, does nothing)
#endif
#define AVFMT_FLAG_FAST_SEEK 0x80000 ///< Enable fast, but inaccurate seeks for some formats
#define AVFMT_FLAG_SHORTEST 0x100000 ///< Stop muxing when the shortest stream stops.
#define AVFMT_FLAG_AUTO_BSF 0x200000 ///< Add bitstream filters as requested by the muxer /**
* Maximum number of bytes read from input in order to determine stream
* properties. Used when reading the global header and in
* avformat_find_stream_info().
*
* Demuxing only, set by the caller before avformat_open_input().
*
* @note this is \e not used for determining the \ref AVInputFormat
* "input format"
* @sa format_probesize
*/
// Maximum number of bytes probed to determine stream properties; used by avformat_find_stream_info(), set by the caller before avformat_open_input()
int64_t probesize;

/**
* Maximum duration (in AV_TIME_BASE units) of the data read
* from input in avformat_find_stream_info().
* Demuxing only, set by the caller before avformat_find_stream_info().
* Can be set to 0 to let avformat choose using a heuristic.
*/
int64_t max_analyze_duration; const uint8_t *key;
int keylen; unsigned int nb_programs;
AVProgram **programs; /**
* Forced video codec_id.
* Demuxing: Set by user.
*/
// Forced video codec ID
enum AVCodecID video_codec_id;

/**
* Forced audio codec_id.
* Demuxing: Set by user.
*/
// Forced audio codec ID
enum AVCodecID audio_codec_id;

/**
* Forced subtitle codec_id.
* Demuxing: Set by user.
*/
// Forced subtitle codec ID
enum AVCodecID subtitle_codec_id;

/**
* Maximum amount of memory in bytes to use for the index of each stream.
* If the index exceeds this size, entries will be discarded as
* needed to maintain a smaller size. This can lead to slower or less
* accurate seeking (depends on demuxer).
* Demuxers for which a full in-memory index is mandatory will ignore
* this.
* - muxing: unused
* - demuxing: set by user
*/
unsigned int max_index_size; /**
* Maximum amount of memory in bytes to use for buffering frames
* obtained from realtime capture devices.
*/
unsigned int max_picture_buffer; /**
* Number of chapters in AVChapter array.
* When muxing, chapters are normally written in the file header,
* so nb_chapters should normally be initialized before write_header
* is called. Some muxers (e.g. mov and mkv) can also write chapters
* in the trailer. To write chapters in the trailer, nb_chapters
* must be zero when write_header is called and non-zero when
* write_trailer is called.
* - muxing: set by user
* - demuxing: set by libavformat
*/
unsigned int nb_chapters;
AVChapter **chapters; /**
* Metadata that applies to the whole file.
*
* - demuxing: set by libavformat in avformat_open_input()
* - muxing: may be set by the caller before avformat_write_header()
*
* Freed by libavformat in avformat_free_context().
*/
// Metadata of the whole file
AVDictionary *metadata;

/**
* Start time of the stream in real world time, in microseconds
* since the Unix epoch (00:00 1st January 1970). That is, pts=0 in the
* stream was captured at this real world time.
* - muxing: Set by the caller before avformat_write_header(). If set to
* either 0 or AV_NOPTS_VALUE, then the current wall-time will
* be used.
* - demuxing: Set by libavformat. AV_NOPTS_VALUE if unknown. Note that
* the value may become known after some number of frames
* have been received.
*/
int64_t start_time_realtime; /**
* The number of frames used for determining the framerate in
* avformat_find_stream_info().
* Demuxing only, set by the caller before avformat_find_stream_info().
*/
// Number of frames used to probe the frame rate
int fps_probe_size;

/**
* Error recognition; higher values will detect more errors but may
* misdetect some more or less valid parts as errors.
* Demuxing only, set by the caller before avformat_open_input().
*/
int error_recognition; /**
* Custom interrupt callbacks for the I/O layer.
*
* demuxing: set by the user before avformat_open_input().
* muxing: set by the user before avformat_write_header()
* (mainly useful for AVFMT_NOFILE formats). The callback
* should also be passed to avio_open2() if it's used to
* open the file.
*/
AVIOInterruptCB interrupt_callback; /**
* Flags to enable debugging.
*/
int debug;
#define FF_FDEBUG_TS 0x0001 /**
* Maximum buffering duration for interleaving.
*
* To ensure all the streams are interleaved correctly,
* av_interleaved_write_frame() will wait until it has at least one packet
* for each stream before actually writing any packets to the output file.
* When some streams are "sparse" (i.e. there are large gaps between
* successive packets), this can result in excessive buffering.
*
* This field specifies the maximum difference between the timestamps of the
* first and the last packet in the muxing queue, above which libavformat
* will output a packet regardless of whether it has queued a packet for all
* the streams.
*
* Muxing only, set by the caller before avformat_write_header().
*/
int64_t max_interleave_delta; /**
* Allow non-standard and experimental extension
* @see AVCodecContext.strict_std_compliance
*/
int strict_std_compliance; /**
* Flags indicating events happening on the file, a combination of
* AVFMT_EVENT_FLAG_*.
*
* - demuxing: may be set by the demuxer in avformat_open_input(),
* avformat_find_stream_info() and av_read_frame(). Flags must be cleared
* by the user once the event has been handled.
* - muxing: may be set by the user after avformat_write_header() to
* indicate a user-triggered event. The muxer will clear the flags for
* events it has handled in av_[interleaved]_write_frame().
*/
int event_flags;
/**
* - demuxing: the demuxer read new metadata from the file and updated
* AVFormatContext.metadata accordingly
* - muxing: the user updated AVFormatContext.metadata and wishes the muxer to
* write it into the file
*/
#define AVFMT_EVENT_FLAG_METADATA_UPDATED 0x0001 /**
* Maximum number of packets to read while waiting for the first timestamp.
* Decoding only.
*/
int max_ts_probe; /**
* Avoid negative timestamps during muxing.
* Any value of the AVFMT_AVOID_NEG_TS_* constants.
* Note, this only works when using av_interleaved_write_frame. (interleave_packet_per_dts is in use)
* - muxing: Set by user
* - demuxing: unused
*/
int avoid_negative_ts;
#define AVFMT_AVOID_NEG_TS_AUTO -1 ///< Enabled when required by target format
#define AVFMT_AVOID_NEG_TS_MAKE_NON_NEGATIVE 1 ///< Shift timestamps so they are non negative
#define AVFMT_AVOID_NEG_TS_MAKE_ZERO 2 ///< Shift timestamps so that they start at 0 /**
* Transport stream id.
* This will be moved into demuxer private options. Thus no API/ABI compatibility
*/
int ts_id; /**
* Audio preload in microseconds.
* Note, not all formats support this and unpredictable things may happen if it is used when not supported.
* - encoding: Set by user
* - decoding: unused
*/
int audio_preload; /**
* Max chunk time in microseconds.
* Note, not all formats support this and unpredictable things may happen if it is used when not supported.
* - encoding: Set by user
* - decoding: unused
*/
int max_chunk_duration; /**
* Max chunk size in bytes
* Note, not all formats support this and unpredictable things may happen if it is used when not supported.
* - encoding: Set by user
* - decoding: unused
*/
int max_chunk_size; /**
* forces the use of wallclock timestamps as pts/dts of packets
* This has undefined results in the presence of B frames.
* - encoding: unused
* - decoding: Set by user
*/
int use_wallclock_as_timestamps; /**
* avio flags, used to force AVIO_FLAG_DIRECT.
* - encoding: unused
* - decoding: Set by user
*/
int avio_flags; /**
* The duration field can be estimated through various ways, and this field can be used
* to know how the duration was estimated.
* - encoding: unused
* - decoding: Read by user
*/
enum AVDurationEstimationMethod duration_estimation_method; /**
* Skip initial bytes when opening stream
* - encoding: unused
* - decoding: Set by user
*/
int64_t skip_initial_bytes; /**
* Correct single timestamp overflows
* - encoding: unused
* - decoding: Set by user
*/
unsigned int correct_ts_overflow; /**
* Force seeking to any (also non key) frames.
* - encoding: unused
* - decoding: Set by user
*/
int seek2any; /**
* Flush the I/O context after each packet.
* - encoding: Set by user
* - decoding: unused
*/
int flush_packets; /**
* format probing score.
* The maximal score is AVPROBE_SCORE_MAX, its set when the demuxer probes
* the format.
* - encoding: unused
* - decoding: set by avformat, read by user
*/
int probe_score; /**
* Maximum number of bytes read from input in order to identify the
* \ref AVInputFormat "input format". Only used when the format is not set
* explicitly by the caller.
*
* Demuxing only, set by the caller before avformat_open_input().
*
* @sa probesize
*/
int format_probesize; /**
* ',' separated list of allowed decoders.
* If NULL then all are allowed
* - encoding: unused
* - decoding: set by user
*/
// Decoder whitelist
char *codec_whitelist;

/**
* ',' separated list of allowed demuxers.
* If NULL then all are allowed
* - encoding: unused
* - decoding: set by user
*/
// Demuxer whitelist
char *format_whitelist;

/**
* IO repositioned flag.
* This is set by avformat when the underlaying IO context read pointer
* is repositioned, for example when doing byte based seeking.
* Demuxers can use the flag to detect such changes.
*/
int io_repositioned; /**
* Forced video codec.
* This allows forcing a specific decoder, even when there are multiple with
* the same codec_id.
* Demuxing: Set by user
*/
// Forced video decoder
const AVCodec *video_codec;

/**
* Forced audio codec.
* This allows forcing a specific decoder, even when there are multiple with
* the same codec_id.
* Demuxing: Set by user
*/
// Forced audio decoder
const AVCodec *audio_codec;

/**
* Forced subtitle codec.
* This allows forcing a specific decoder, even when there are multiple with
* the same codec_id.
* Demuxing: Set by user
*/
const AVCodec *subtitle_codec; /**
* Forced data codec.
* This allows forcing a specific decoder, even when there are multiple with
* the same codec_id.
* Demuxing: Set by user
*/
const AVCodec *data_codec; /**
* Number of bytes to be written as padding in a metadata header.
* Demuxing: Unused.
* Muxing: Set by user via av_format_set_metadata_header_padding.
*/
int metadata_header_padding; /**
* User data.
* This is a place for some private data of the user.
*/
void *opaque; /**
* Callback used by devices to communicate with application.
*/
av_format_control_message control_message_cb; /**
* Output timestamp offset, in microseconds.
* Muxing: set by user
*/
int64_t output_ts_offset; /**
* dump format separator.
* can be ", " or "\n " or anything else
* - muxing: Set by user.
* - demuxing: Set by user.
*/
uint8_t *dump_separator; /**
* Forced Data codec_id.
* Demuxing: Set by user.
*/
enum AVCodecID data_codec_id; /**
* ',' separated list of allowed protocols.
* - encoding: unused
* - decoding: set by user
*/
// Protocol whitelist
char *protocol_whitelist;

/**
* A callback for opening new IO streams.
*
* Whenever a muxer or a demuxer needs to open an IO stream (typically from
* avformat_open_input() for demuxers, but for certain formats can happen at
* other times as well), it will call this callback to obtain an IO context.
*
* @param s the format context
* @param pb on success, the newly opened IO context should be returned here
* @param url the url to open
* @param flags a combination of AVIO_FLAG_*
* @param options a dictionary of additional options, with the same
* semantics as in avio_open2()
* @return 0 on success, a negative AVERROR code on failure
*
* @note Certain muxers and demuxers do nesting, i.e. they open one or more
* additional internal format contexts. Thus the AVFormatContext pointer
* passed to this callback may be different from the one facing the caller.
* It will, however, have the same 'opaque' field.
*/
// Callback used to open the appropriate AVIOContext
int (*io_open)(struct AVFormatContext *s, AVIOContext **pb, const char *url,
int flags, AVDictionary **options);

/**
* A callback for closing the streams opened with AVFormatContext.io_open().
*/
void (*io_close)(struct AVFormatContext *s, AVIOContext *pb); /**
* ',' separated list of disallowed protocols.
* - encoding: unused
* - decoding: set by user
*/
char *protocol_blacklist; /**
* The maximum number of streams.
* - encoding: unused
* - decoding: set by user
*/
int max_streams; /**
* Skip duration calcuation in estimate_timings_from_pts.
* - encoding: unused
* - decoding: set by user
*/
int skip_estimate_duration_from_pts; /**
* Maximum number of packets that can be probed
* - encoding: unused
* - decoding: set by user
*/
int max_probe_packets; /**
* A callback for closing the streams opened with AVFormatContext.io_open().
*
* Using this is preferred over io_close, because this can return an error.
* Therefore this callback is used instead of io_close by the generic
* libavformat code if io_close is NULL or the default.
*
* @param s the format context
* @param pb IO context to be closed and freed
* @return 0 on success, a negative AVERROR code on failure
*/
int (*io_close2)(struct AVFormatContext *s, AVIOContext *pb);
} AVFormatContext;
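
To see how these fields are exercised in practice, here is a minimal demuxing sketch of my own (not taken from the FFmpeg sources; error handling trimmed, the URL is whatever the caller passes in): avformat_open_input() fills in iformat, pb and priv_data, avformat_find_stream_info() populates streams[]->codecpar and the duration, and av_read_frame() then returns demuxed packets.

#include <inttypes.h>
#include <libavformat/avformat.h>

int dump_packets(const char *url)
{
    AVFormatContext *fmt = NULL;
    int ret = avformat_open_input(&fmt, url, NULL, NULL);   /* sets iformat, pb, priv_data */
    if (ret < 0)
        return ret;

    ret = avformat_find_stream_info(fmt, NULL);             /* fills streams[i]->codecpar, duration, ... */
    if (ret >= 0) {
        AVPacket *pkt = av_packet_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {               /* one demuxed packet per call */
            av_log(NULL, AV_LOG_INFO, "stream %d, pts %" PRId64 "\n",
                   pkt->stream_index, pkt->pts);
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }
    avformat_close_input(&fmt);                              /* frees the streams, pb and the context */
    return ret;
}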

AVIOContext

This structure is declared at libavformat/avio.h:161 and holds all of the I/O-related context. As its members show, it mainly controls reading and writing data. What does it actually read from and write to? The void *opaque member, which points to the URLContext.

typedef struct AVIOContext {
/**
* A class for private options.
*
* If this AVIOContext is created by avio_open2(), av_class is set and
* passes the options down to protocols.
*
* If this AVIOContext is manually allocated, then av_class may be set by
* the caller.
*
* warning -- this field can be NULL, be sure to not pass this AVIOContext
* to any av_opt_* functions in that case.
*/
const AVClass *av_class; /*
* The following shows the relationship between buffer, buf_ptr,
* buf_ptr_max, buf_end, buf_size, and pos, when reading and when writing
* (since AVIOContext is used for both):
*
**********************************************************************************
* READING
**********************************************************************************
*
* | buffer_size |
* |---------------------------------------|
* | |
*
* buffer buf_ptr buf_end
* +---------------+-----------------------+
* |/ / / / / / / /|/ / / / / / /| |
* read buffer: |/ / consumed / | to be read /| |
* |/ / / / / / / /|/ / / / / / /| |
* +---------------+-----------------------+
*
* pos
* +-------------------------------------------+-----------------+
* input file: | | |
* +-------------------------------------------+-----------------+
*
*
**********************************************************************************
* WRITING
**********************************************************************************
*
* | buffer_size |
* |--------------------------------------|
* | |
*
* buf_ptr_max
* buffer (buf_ptr) buf_end
* +-----------------------+--------------+
* |/ / / / / / / / / / / /| |
* write buffer: | / / to be flushed / / | |
* |/ / / / / / / / / / / /| |
* +-----------------------+--------------+
* buf_ptr can be in this
* due to a backward seek
*
* pos
* +-------------+----------------------------------------------+
* output file: | | |
* +-------------+----------------------------------------------+
*
*/
// The I/O buffer
unsigned char *buffer; /**< Start of the buffer. */
// Maximum size of the buffer
int buffer_size; /**< Maximum buffer size */
// Current position within the buffer
unsigned char *buf_ptr; /**< Current position in the buffer */
// End of the valid data in the buffer
unsigned char *buf_end; /**< End of the data, may be less than
buffer+buffer_size if the read function returned
less data than requested, e.g. for streams where
no more data has been received yet. */
// This void* holds the address of the URLContext object
void *opaque; /**< A private pointer, passed to the read/write/seek/...
functions. */
// Read a packet of data
int (*read_packet)(void *opaque, uint8_t *buf, int buf_size);
// Write a packet of data
int (*write_packet)(void *opaque, uint8_t *buf, int buf_size);
// Seek
int64_t (*seek)(void *opaque, int64_t offset, int whence);
// Current read/write position in the underlying file
int64_t pos; /**< position in the file of the current buffer */
// EOF flag
int eof_reached; /**< true if was unable to read due to error or eof */
int error; /**< contains the error code or 0 if no error happened */
int write_flag; /**< true if open for writing */
int max_packet_size;
int min_packet_size; /**< Try to buffer at least this amount of data
before flushing it. */
unsigned long checksum;
unsigned char *checksum_ptr;
unsigned long (*update_checksum)(unsigned long checksum, const uint8_t *buf, unsigned int size);
/**
* Pause or resume playback for network streaming protocols - e.g. MMS.
*/
int (*read_pause)(void *opaque, int pause);
/**
* Seek to a given timestamp in stream with the specified stream_index.
* Needed for some network streaming protocols which don't support seeking
* to byte position.
*/
// Seek to the position of the given timestamp
int64_t (*read_seek)(void *opaque, int stream_index,
int64_t timestamp, int flags);
/**
* A combination of AVIO_SEEKABLE_ flags or 0 when the stream is not seekable.
*/
// Flag indicating whether the stream is seekable
int seekable;

/**
* avio_read and avio_write should if possible be satisfied directly
* instead of going through a buffer, and avio_seek will always
* call the underlying seek function directly.
*/
int direct; /**
* ',' separated list of allowed protocols.
*/
// Protocol whitelist
const char *protocol_whitelist;

/**
* ',' separated list of disallowed protocols.
*/
const char *protocol_blacklist; /**
* A callback that is used instead of write_packet.
*/
int (*write_data_type)(void *opaque, uint8_t *buf, int buf_size,
enum AVIODataMarkerType type, int64_t time);
/**
* If set, don't call write_data_type separately for AVIO_DATA_MARKER_BOUNDARY_POINT,
* but ignore them and treat them as AVIO_DATA_MARKER_UNKNOWN (to avoid needlessly
* small chunks of data returned from the callback).
*/
int ignore_boundary_point;

#if FF_API_AVIOCONTEXT_WRITTEN
/**
* @deprecated field utilized privately by libavformat. For a public
* statistic of how many bytes were written out, see
* AVIOContext::bytes_written.
*/
attribute_deprecated
int64_t written;
#endif

/**
* Maximum reached position before a backward seek in the write buffer,
* used keeping track of already written data for a later flush.
*/
unsigned char *buf_ptr_max; /**
* Read-only statistic of bytes read for this AVIOContext.
*/
// Number of bytes read so far
int64_t bytes_read;

/**
* Read-only statistic of bytes written for this AVIOContext.
*/
int64_t bytes_written;
} AVIOContext;
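
The read_packet/write_packet/seek callbacks and the opaque pointer are exactly what avio_alloc_context() lets an application supply, which is how custom I/O (for example reading from memory) is plugged underneath an AVFormatContext. A rough sketch under those assumptions (the buffer size, struct and function names are mine; error handling omitted):

#include <string.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

struct mem_reader { const uint8_t *data; size_t size, pos; };

/* read_packet callback: copy up to buf_size bytes out of the in-memory blob */
static int mem_read(void *opaque, uint8_t *buf, int buf_size)
{
    struct mem_reader *r = opaque;
    size_t left = r->size - r->pos;
    if (left == 0)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = (int)left;
    memcpy(buf, r->data + r->pos, buf_size);
    r->pos += buf_size;
    return buf_size;
}

static int open_from_memory(AVFormatContext **fmt, struct mem_reader *r)
{
    unsigned char *buf = av_malloc(4096);                  /* owned by the AVIOContext */
    AVIOContext *pb = avio_alloc_context(buf, 4096, 0,     /* write_flag = 0: read-only */
                                         r, mem_read, NULL, NULL);
    *fmt = avformat_alloc_context();
    (*fmt)->pb = pb;                                       /* 'r' above becomes pb->opaque */
    (*fmt)->flags |= AVFMT_FLAG_CUSTOM_IO;                 /* caller frees pb, not lavf */
    return avformat_open_input(fmt, NULL, NULL, NULL);
}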

FFIOContext

It is declared at libavformat/avio_internal.h:29 and is a wrapper one level above AVIOContext, keeping some state information for the AVIOContext.

typedef struct FFIOContext {
// The wrapped AVIOContext object
AVIOContext pub;

/**
* A callback that is used instead of short_seek_threshold.
*/
int (*short_seek_get)(void *opaque);

/**
* Threshold to favor readahead over seek.
*/
int short_seek_threshold;

enum AVIODataMarkerType current_type;
int64_t last_time;

/**
* max filesize, used to limit allocations
*/
int64_t maxsize;

/**
* Bytes read statistic
*/
int64_t bytes_read;

/**
* Bytes written statistic
*/
int64_t bytes_written;

/**
* seek statistic
*/
int seek_count;

/**
* writeout statistic
*/
int writeout_count;

/**
* Original buffer size
* used after probing to ensure seekback and to reset the buffer size
*/
int orig_buffer_size;

/**
* Written output size
* is updated each time a successful writeout ends up further position-wise
*/
int64_t written_output_size;
} FFIOContext;

URLContext

This structure is declared at libavformat/url.h:38; it is just a thin wrapper around URLProtocol, binding the protocol to a concrete URL.

typedef struct URLContext {
const AVClass *av_class; /**< information for av_log(). Set by url_open(). */
// The protocol implementation that actually performs the I/O
const struct URLProtocol *prot;
void *priv_data;
// The URL string
char *filename; /**< specified URL */
int flags;
// Maximum packet size
int max_packet_size; /**< if non zero, the stream is packetized with this max packet size */
// Whether the input is a (non-seekable) stream
int is_streamed; /**< true if streamed (no seek possible), default = false */
int is_connected;
AVIOInterruptCB interrupt_callback;
int64_t rw_timeout; /**< maximum time to wait for (network) read/write operation completion, in mcs */
const char *protocol_whitelist;
const char *protocol_blacklist;
int min_packet_size; /**< if non zero, the stream is packetized with this min packet size */
} URLContext;
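
The protocol_whitelist / protocol_blacklist fields above are normally driven through the "protocol_whitelist" option on the format context, which libavformat then propagates down to every URLContext it opens. A small example of my own (the particular whitelist value is chosen arbitrarily):

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Restrict which protocols may be used when opening an input;
 * the whitelist is handed down to the URLContext layer by libavformat. */
static int open_restricted(AVFormatContext **fmt, const char *url)
{
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "protocol_whitelist", "file,http,https,tcp,tls", 0);
    int ret = avformat_open_input(fmt, url, NULL, &opts);
    av_dict_free(&opts);    /* options consumed by lavf have been removed from opts */
    return ret;
}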

URLProtocol

This structure is declared at libavformat/url.h:54. It acts as a template: it is mostly a set of function pointers used to get a polymorphic effect. My guess is that calling AVIOContext's read_packet ends up calling url_read on the URLContext's prot; this will be verified later when reading the source.

typedef struct URLProtocol {
// Name of the protocol (used when selecting it)
const char *name;
// Try to open the URL; if it can be opened, this is the right protocol
int (*url_open)( URLContext *h, const char *url, int flags);
/**
* This callback is to be used by protocols which open further nested
* protocols. options are then to be passed to ffurl_open_whitelist()
* or ffurl_connect() for those nested protocols.
*/
int (*url_open2)(URLContext *h, const char *url, int flags, AVDictionary **options);
int (*url_accept)(URLContext *s, URLContext **c);
int (*url_handshake)(URLContext *c);

/**
* Read data from the protocol.
* If data is immediately available (even less than size), EOF is
* reached or an error occurs (including EINTR), return immediately.
* Otherwise:
* In non-blocking mode, return AVERROR(EAGAIN) immediately.
* In blocking mode, wait for data/EOF/error with a short timeout (0.1s),
* and return AVERROR(EAGAIN) on timeout.
* Checking interrupt_callback, looping on EINTR and EAGAIN and until
* enough data has been read is left to the calling function; see
* retry_transfer_wrapper in avio.c.
*/
int (*url_read)( URLContext *h, unsigned char *buf, int size);
int (*url_write)(URLContext *h, const unsigned char *buf, int size);
int64_t (*url_seek)( URLContext *h, int64_t pos, int whence);
int (*url_close)(URLContext *h);
int (*url_read_pause)(URLContext *h, int pause);
int64_t (*url_read_seek)(URLContext *h, int stream_index,
int64_t timestamp, int flags);
int (*url_get_file_handle)(URLContext *h);
int (*url_get_multi_file_handle)(URLContext *h, int **handles,
int *numhandles);
int (*url_get_short_seek)(URLContext *h);
int (*url_shutdown)(URLContext *h, int flags);
const AVClass *priv_data_class;
int priv_data_size;
int flags;
int (*url_check)(URLContext *h, int mask);
int (*url_open_dir)(URLContext *h);
int (*url_read_dir)(URLContext *h, AVIODirEntry **next);
int (*url_close_dir)(URLContext *h);
int (*url_delete)(URLContext *h);
int (*url_move)(URLContext *h_src, URLContext *h_dst);
const char *default_whitelist;
} URLProtocol;
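
To illustrate the guessed call chain above: in the default lavf I/O path, the AVIOContext read_packet callback receives a URLContext through opaque and forwards to prot->url_read. The sketch below is my own simplified illustration of that dispatch (the function name is hypothetical), not the actual libavformat code.

#include "libavformat/url.h"   /* internal header: URLContext / URLProtocol */

/* Hypothetical read_packet callback whose opaque pointer holds a URLContext,
 * showing how avio_read() -> read_packet() -> prot->url_read() fits together. */
static int urlcontext_read_packet(void *opaque, uint8_t *buf, int buf_size)
{
    URLContext *h = opaque;
    /* virtual dispatch: the concrete protocol (file, http, tcp, ...) supplies url_read */
    return h->prot->url_read(h, buf, buf_size);
}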

AVInputFormat

This structure is declared at libavformat/avformat.h:650. It is likewise a demuxer template with many function pointers used to achieve polymorphism. Note that it also has read_packet, read_seek and similar methods, but here what is read is the demuxed data: these functions take an AVFormatContext, so presumably they use its I/O context to fetch the raw bytes and then run the demuxer's own logic on them.

typedef struct AVInputFormat {
/**
* A comma separated list of short names for the format. New names
* may be appended with a minor bump.
*/
// Name of this demuxer
const char *name;

/**
* Descriptive name for the format, meant to be more human-readable
* than name. You should use the NULL_IF_CONFIG_SMALL() macro
* to define it.
*/
const char *long_name; /**
* Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS,
* AVFMT_NOTIMESTAMPS, AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH,
* AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.
*/
int flags; /**
* If extensions are defined, then no probe is done. You should
* usually not use extension format guessing because it is not
* reliable enough
*/
// File name extensions this demuxer applies to
const char *extensions;

const struct AVCodecTag * const *codec_tag;

const AVClass *priv_class; ///< AVClass for the private context

/**
* Comma-separated list of mime types.
* It is used check for matching mime types while probing.
* @see av_probe_input_format2
*/
// MIME types
const char *mime_type;

/*****************************************************************
* No fields below this line are part of the public API. They
* may not be used outside of libavformat and can be changed and
* removed at will.
* New public fields should be added right above.
*****************************************************************
*/
/**
* Raw demuxers store their codec ID here.
*/
int raw_codec_id; /**
* Size of private data so that it can be allocated in the wrapper.
*/
int priv_data_size; /**
* Internal flags. See FF_FMT_FLAG_* in internal.h.
*/
int flags_internal; /**
* Tell if a given file has a chance of being parsed as this format.
* The buffer provided is guaranteed to be AVPROBE_PADDING_SIZE bytes
* big so you do not have to check for that unless you need more.
*/
// Probe whether this demuxer is suitable for the given data
int (*read_probe)(const AVProbeData *);

/**
* Read the format header and initialize the AVFormatContext
* structure. Return 0 if OK. 'avformat_new_stream' should be
* called to create new streams.
*/
// For formats with a header: read the streams and other info from the header
int (*read_header)(struct AVFormatContext *);

/**
* Read one packet and put it in 'pkt'. pts and flags are also
* set. 'avformat_new_stream' can be called only if the flag
* AVFMTCTX_NOHEADER is used and only in the calling thread (not in a
* background thread).
* @return 0 on success, < 0 on error.
* Upon returning an error, pkt must be unreferenced by the caller.
*/
// Read one packet and return it in pkt
int (*read_packet)(struct AVFormatContext *, AVPacket *pkt);

/**
* Close the stream. The AVFormatContext and AVStreams are not
* freed by this function
*/
int (*read_close)(struct AVFormatContext *); /**
* Seek to a given timestamp relative to the frames in
* stream component stream_index.
* @param stream_index Must not be -1.
* @param flags Selects which direction should be preferred if no exact
* match is available.
* @return >= 0 on success (but not necessarily the new offset)
*/
int (*read_seek)(struct AVFormatContext *,
int stream_index, int64_t timestamp, int flags); /**
* Get the next timestamp in stream[stream_index].time_base units.
* @return the timestamp or AV_NOPTS_VALUE if an error occurred
*/
int64_t (*read_timestamp)(struct AVFormatContext *s, int stream_index,
int64_t *pos, int64_t pos_limit); /**
* Start/resume playing - only meaningful if using a network-based format
* (RTSP).
*/
int (*read_play)(struct AVFormatContext *); /**
* Pause playing - only meaningful if using a network-based format
* (RTSP).
*/
int (*read_pause)(struct AVFormatContext *); /**
* Seek to timestamp ts.
* Seeking will be done so that the point from which all active streams
* can be presented successfully will be closest to ts and within min/max_ts.
* Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.
*/
int (*read_seek2)(struct AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags); /**
* Returns device list with it properties.
* @see avdevice_list_devices() for more details.
*/
int (*get_device_list)(struct AVFormatContext *s, struct AVDeviceInfoList *device_list);
} AVInputFormat;
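
The name/extensions/mime_type and read_probe fields are what the probing code matches against when it picks a demuxer. The sketch below (my own example; "input.ts" is just a placeholder path) lists the compiled-in demuxers and shows how a specific AVInputFormat can be forced instead of probed:

#include <stdio.h>
#include <libavformat/avformat.h>

int main(void)
{
    /* List every compiled-in demuxer (each one is a static const AVInputFormat). */
    void *iter = NULL;
    const AVInputFormat *fmt;
    while ((fmt = av_demuxer_iterate(&iter)))
        printf("%-16s %s\n", fmt->name, fmt->long_name ? fmt->long_name : "");

    /* Look one up by short name and force it, skipping format probing. */
    const AVInputFormat *mpegts = av_find_input_format("mpegts");
    AVFormatContext *ic = NULL;
    if (mpegts && avformat_open_input(&ic, "input.ts", mpegts, NULL) == 0) {
        printf("probe_score = %d\n", ic->probe_score);
        avformat_close_input(&ic);
    }
    return 0;
}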

FFStream

This structure is declared in libavformat/internal.h and is the internal wrapper around AVStream, in the same way that FFFormatContext wraps AVFormatContext; it holds per-stream state used by the demuxing code.

typedef struct FFStream {
/**
* The public context.
*/
// The wrapped AVStream object
AVStream pub;

/**
* Set to 1 if the codec allows reordering, so pts can be different
* from dts.
*/
int reorder; /**
* bitstream filter to run on stream
* - encoding: Set by muxer using ff_stream_add_bitstream_filter
* - decoding: unused
*/
AVBSFContext *bsfc; /**
* Whether or not check_bitstream should still be run on each packet
*/
int bitstream_checked; /**
* The codec context used by avformat_find_stream_info, the parser, etc.
*/
// Created with avcodec_alloc_context3()
AVCodecContext *avctx;
/**
* 1 if avctx has been initialized with the values from the codec parameters
*/
int avctx_inited; /* the context for extracting extradata in find_stream_info()
* inited=1/bsf=NULL signals that extracting is not possible (codec not
* supported) */
struct {
AVBSFContext *bsf;
int inited;
} extract_extradata; /**
* Whether the internal avctx needs to be updated from codecpar (after a late change to codecpar)
*/
int need_context_update;

int is_intra_only;

FFFrac *priv_pts;

#define MAX_STD_TIMEBASES (30*12+30+3+6)
/**
* Stream information used internally by avformat_find_stream_info()
*/
struct {
int64_t last_dts;
int64_t duration_gcd;
int duration_count;
int64_t rfps_duration_sum;
double (*duration_error)[2][MAX_STD_TIMEBASES];
int64_t codec_info_duration;
int64_t codec_info_duration_fields;
int frame_delay_evidence; /**
* 0 -> decoder has not been searched for yet.
* >0 -> decoder found
* <0 -> decoder with codec_id == -found_decoder has not been found
*/
int found_decoder; int64_t last_duration; /**
* Those are used for average framerate estimation.
*/
int64_t fps_first_dts;
int fps_first_dts_idx;
int64_t fps_last_dts;
int fps_last_dts_idx;
} *info;

AVIndexEntry *index_entries; /**< Only used if the format does not
support seeking natively. */
int nb_index_entries;
unsigned int index_entries_allocated_size; int64_t interleaver_chunk_size;
int64_t interleaver_chunk_duration; /**
* stream probing state
* -1 -> probing finished
* 0 -> no probing requested
* rest -> perform probing with request_probe being the minimum score to accept.
*/
int request_probe;
/**
* Indicates that everything up to the next keyframe
* should be discarded.
*/
int skip_to_keyframe; /**
* Number of samples to skip at the start of the frame decoded from the next packet.
*/
int skip_samples; /**
* If not 0, the number of samples that should be skipped from the start of
* the stream (the samples are removed from packets with pts==0, which also
* assumes negative timestamps do not happen).
* Intended for use with formats such as mp3 with ad-hoc gapless audio
* support.
*/
int64_t start_skip_samples; /**
* If not 0, the first audio sample that should be discarded from the stream.
* This is broken by design (needs global sample count), but can't be
* avoided for broken by design formats such as mp3 with ad-hoc gapless
* audio support.
*/
int64_t first_discard_sample; /**
* The sample after last sample that is intended to be discarded after
* first_discard_sample. Works on frame boundaries only. Used to prevent
* early EOF if the gapless info is broken (considered concatenated mp3s).
*/
int64_t last_discard_sample; /**
* Number of internally decoded frames, used internally in libavformat, do not access
* its lifetime differs from info which is why it is not in that structure.
*/
int nb_decoded_frames; /**
* Timestamp offset added to timestamps before muxing
*/
int64_t mux_ts_offset; /**
* Internal data to check for wrapping of the time stamp
*/
int64_t pts_wrap_reference; /**
* Options for behavior, when a wrap is detected.
*
* Defined by AV_PTS_WRAP_ values.
*
* If correction is enabled, there are two possibilities:
* If the first time stamp is near the wrap point, the wrap offset
* will be subtracted, which will create negative time stamps.
* Otherwise the offset will be added.
*/
int pts_wrap_behavior; /**
* Internal data to prevent doing update_initial_durations() twice
*/
int update_initial_durations_done;

#define MAX_REORDER_DELAY 16

/**
* Internal data to generate dts from pts
*/
int64_t pts_reorder_error[MAX_REORDER_DELAY+1];
uint8_t pts_reorder_error_count[MAX_REORDER_DELAY+1]; int64_t pts_buffer[MAX_REORDER_DELAY+1]; /**
* Internal data to analyze DTS and detect faulty mpeg streams
*/
int64_t last_dts_for_order_check;
uint8_t dts_ordered;
uint8_t dts_misordered; /**
* Internal data to inject global side data
*/
int inject_global_side_data; /**
* display aspect ratio (0 if unknown)
* - encoding: unused
* - decoding: Set by libavformat to calculate sample_aspect_ratio internally
*/
AVRational display_aspect_ratio; AVProbeData probe_data; /**
* last packet in packet_buffer for this stream when muxing.
*/
PacketListEntry *last_in_packet_buffer; int64_t last_IP_pts;
int last_IP_duration; /**
* Number of packets to buffer for codec probing
*/
int probe_packets; /* av_read_frame() support */
enum AVStreamParseType need_parsing;
struct AVCodecParserContext *parser; /**
* Number of frames that have been demuxed during avformat_find_stream_info()
*/
int codec_info_nb_frames; /**
* Stream Identifier
* This is the MPEG-TS stream identifier +1
* 0 means unknown
*/
int stream_identifier; // Timestamp generation support:
/**
* Timestamp corresponding to the last dts sync point.
*
* Initialized when AVCodecParserContext.dts_sync_point >= 0 and
* a DTS is received from the underlying container. Otherwise set to
* AV_NOPTS_VALUE by default.
*/
int64_t first_dts;
int64_t cur_dts;
} FFStream;

AVStream

This structure is declared at libavformat/avformat.h:937 and stores the information for one stream of a media file.

typedef struct AVStream {
#if FF_API_AVSTREAM_CLASS
/**
* A class for @ref avoptions. Set on stream creation.
*/
const AVClass *av_class;
#endif

// Index of this stream within AVFormatContext.streams
int index; /**< stream index in AVFormatContext */
/**
* Format-specific stream ID.
* decoding: set by libavformat
* encoding: set by the user, replaced by libavformat if left unset
*/
// Format-specific stream ID (exact meaning not yet clear to the author)
int id;
// Points to the private data object specific to each AVInputFormat
void *priv_data;

/**
* This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented.
*
* decoding: set by libavformat
* encoding: May be set by the caller before avformat_write_header() to
* provide a hint to the muxer about the desired timebase. In
* avformat_write_header(), the muxer will overwrite this field
* with the timebase that will actually be used for the timestamps
* written into the file (which may or may not be related to the
* user-provided one, depending on the format).
*/
AVRational time_base; /**
* Decoding: pts of the first frame of the stream in presentation order, in stream time base.
* Only set this if you are absolutely 100% sure that the value you set
* it to really is the pts of the first frame.
* This may be undefined (AV_NOPTS_VALUE).
* @note The ASF header does NOT contain a correct start_time the ASF
* demuxer must NOT set this.
*/
// PTS of the first frame
int64_t start_time;

/**
* Decoding: duration of the stream, in stream time base.
* If a source file does not specify a duration, but does specify
* a bitrate, this value will be estimated from bitrate and file size.
*
* Encoding: May be set by the caller before avformat_write_header() to
* provide a hint to the muxer about the estimated duration.
*/
// Duration of the stream
int64_t duration;
// Number of frames in the stream
int64_t nb_frames; ///< number of frames in this stream if known or 0

/**
* Stream disposition - a combination of AV_DISPOSITION_* flags.
* - demuxing: set by libavformat when creating the stream or in
* avformat_find_stream_info().
* - muxing: may be set by the caller before avformat_write_header().
*/
int disposition; enum AVDiscard discard; ///< Selects which packets can be discarded at will and do not need to be demuxed. /**
* sample aspect ratio (0 if unknown)
* - encoding: Set by user.
* - decoding: Set by libavformat.
*/
AVRational sample_aspect_ratio;
// Metadata of the stream
AVDictionary *metadata;

/**
* Average framerate
*
* - demuxing: May be set by libavformat when creating the stream or in
* avformat_find_stream_info().
* - muxing: May be set by the caller before avformat_write_header().
*/
// Average frame rate
AVRational avg_frame_rate;

/**
* For streams with AV_DISPOSITION_ATTACHED_PIC disposition, this packet
* will contain the attached picture.
*
* decoding: set by libavformat, must not be modified by the caller.
* encoding: unused
*/
AVPacket attached_pic; /**
* An array of side data that applies to the whole stream (i.e. the
* container does not allow it to change between packets).
*
* There may be no overlap between the side data in this array and side data
* in the packets. I.e. a given side data is either exported by the muxer
* (demuxing) / set by the caller (muxing) in this array, then it never
* appears in the packets, or the side data is exported / sent through
* the packets (always in the first packet where the value becomes known or
* changes), then it does not appear in this array.
*
* - demuxing: Set by libavformat when the stream is created.
* - muxing: May be set by the caller before avformat_write_header().
*
* Freed by libavformat in avformat_free_context().
*
* @see av_format_inject_global_side_data()
*/
AVPacketSideData *side_data;
/**
* The number of elements in the AVStream.side_data array.
*/
int nb_side_data; /**
* Flags indicating events happening on the stream, a combination of
* AVSTREAM_EVENT_FLAG_*.
*
* - demuxing: may be set by the demuxer in avformat_open_input(),
* avformat_find_stream_info() and av_read_frame(). Flags must be cleared
* by the user once the event has been handled.
* - muxing: may be set by the user after avformat_write_header(). to
* indicate a user-triggered event. The muxer will clear the flags for
* events it has handled in av_[interleaved]_write_frame().
*/
int event_flags;
/**
* - demuxing: the demuxer read new metadata from the file and updated
* AVStream.metadata accordingly
* - muxing: the user updated AVStream.metadata and wishes the muxer to write
* it into the file
*/
#define AVSTREAM_EVENT_FLAG_METADATA_UPDATED 0x0001
/**
* - demuxing: new packets for this stream were read from the file. This
* event is informational only and does not guarantee that new packets
* for this stream will necessarily be returned from av_read_frame().
*/
#define AVSTREAM_EVENT_FLAG_NEW_PACKETS (1 << 1) /**
* Real base framerate of the stream.
* This is the lowest framerate with which all timestamps can be
* represented accurately (it is the least common multiple of all
* framerates in the stream). Note, this value is just a guess!
* For example, if the time base is 1/90000 and all frames have either
* approximately 3600 or 1800 timer ticks, then r_frame_rate will be 50/1.
*/
AVRational r_frame_rate; /**
* Codec parameters associated with this stream. Allocated and freed by
* libavformat in avformat_new_stream() and avformat_free_context()
* respectively.
*
* - demuxing: filled by libavformat on stream creation or in
* avformat_find_stream_info()
* - muxing: filled by the caller before avformat_write_header()
*/
// Codec parameters; filled in avformat_find_stream_info() (or on stream creation)
AVCodecParameters *codecpar;

/**
* Number of bits in timestamps. Used for wrapping control.
*
* - demuxing: set by libavformat
* - muxing: set by libavformat
*
*/
int pts_wrap_bits;
} AVStream;
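
A typical way to consume these fields from application code is av_find_best_stream(), which also hands back the matching decoder. A small sketch of my own, assuming fmt has already gone through avformat_open_input() and avformat_find_stream_info():

#include <inttypes.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Pick the "best" video stream and report a few AVStream fields. */
static int show_best_video_stream(AVFormatContext *fmt)
{
    const AVCodec *dec = NULL;
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (idx < 0)
        return idx;                         /* e.g. AVERROR_STREAM_NOT_FOUND */

    AVStream *st = fmt->streams[idx];
    av_log(NULL, AV_LOG_INFO,
           "stream #%d: codec %s, time_base %d/%d, avg_frame_rate %d/%d, duration %" PRId64 "\n",
           st->index, dec ? dec->name : "unknown",
           st->time_base.num, st->time_base.den,
           st->avg_frame_rate.num, st->avg_frame_rate.den,
           st->duration);
    return idx;
}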

AVCodecParameters

This structure is declared at libavcodec/codec_par.h:52. It stores the codec-related parameters and is kept inside the AVStream.

typedef struct AVCodecParameters {
/**
* General type of the encoded data.
*/
enum AVMediaType codec_type;
/**
* Specific type of the encoded data (the codec used).
*/
enum AVCodecID codec_id;
/**
* Additional information about the codec (corresponds to the AVI FOURCC).
*/
uint32_t codec_tag; /**
* Extra binary data needed for initializing the decoder, codec-dependent.
*
* Must be allocated with av_malloc() and will be freed by
* avcodec_parameters_free(). The allocated size of extradata must be at
* least extradata_size + AV_INPUT_BUFFER_PADDING_SIZE, with the padding
* bytes zeroed.
*/
uint8_t *extradata;
/**
* Size of the extradata content in bytes.
*/
int extradata_size; /**
* - video: the pixel format, the value corresponds to enum AVPixelFormat.
* - audio: the sample format, the value corresponds to enum AVSampleFormat.
*/
int format; /**
* The average bitrate of the encoded data (in bits per second).
*/
int64_t bit_rate; /**
* The number of bits per sample in the codedwords.
*
* This is basically the bitrate per sample. It is mandatory for a bunch of
* formats to actually decode them. It's the number of bits for one sample in
* the actual coded bitstream.
*
* This could be for example 4 for ADPCM
* For PCM formats this matches bits_per_raw_sample
* Can be 0
*/
int bits_per_coded_sample; /**
* This is the number of valid bits in each output sample. If the
* sample format has more bits, the least significant bits are additional
* padding bits, which are always 0. Use right shifts to reduce the sample
* to its actual size. For example, audio formats with 24 bit samples will
* have bits_per_raw_sample set to 24, and format set to AV_SAMPLE_FMT_S32.
* To get the original sample use "(int32_t)sample >> 8"."
*
* For ADPCM this might be 12 or 16 or similar
* Can be 0
*/
int bits_per_raw_sample; /**
* Codec-specific bitstream restrictions that the stream conforms to.
*/
int profile;
int level; /**
* Video only. The dimensions of the video frame in pixels.
*/
int width;
int height; /**
* Video only. The aspect ratio (width / height) which a single pixel
* should have when displayed.
*
* When the aspect ratio is unknown / undefined, the numerator should be
* set to 0 (the denominator may have any value).
*/
AVRational sample_aspect_ratio; /**
* Video only. The order of the fields in interlaced video.
*/
enum AVFieldOrder field_order; /**
* Video only. Additional colorspace characteristics.
*/
enum AVColorRange color_range;
enum AVColorPrimaries color_primaries;
enum AVColorTransferCharacteristic color_trc;
enum AVColorSpace color_space;
enum AVChromaLocation chroma_location; /**
* Video only. Number of delayed frames.
*/
int video_delay; /**
* Audio only. The channel layout bitmask. May be 0 if the channel layout is
* unknown or unspecified, otherwise the number of bits set must be equal to
* the channels field.
*/
uint64_t channel_layout;
/**
* Audio only. The number of audio channels.
*/
int channels;
/**
* Audio only. The number of audio samples per second.
*/
int sample_rate;
/**
* Audio only. The number of bytes per coded audio frame, required by some
* formats.
*
* Corresponds to nBlockAlign in WAVEFORMATEX.
*/
int block_align;
/**
* Audio only. Audio frame size, if known. Required by some formats to be static.
*/
int frame_size; /**
* Audio only. The amount of padding (in samples) inserted by the encoder at
* the beginning of the audio. I.e. this number of leading decoded samples
* must be discarded by the caller to get the original audio without leading
* padding.
*/
int initial_padding;
/**
* Audio only. The amount of padding (in samples) appended by the encoder to
* the end of the audio. I.e. this number of decoded samples must be
* discarded by the caller from the end of the stream to get the original
* audio without any trailing padding.
*/
int trailing_padding;
/**
* Audio only. Number of samples to skip after a discontinuity.
*/
int seek_preroll;
} AVCodecParameters;
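
Which AVCodecParameters fields are meaningful depends on codec_type: for video the format field is an AVPixelFormat and width/height apply, for audio it is an AVSampleFormat along with sample_rate/channels. A small illustrative helper (my own sketch):

#include <inttypes.h>
#include <libavcodec/avcodec.h>

/* Print the basic properties carried by an AVCodecParameters, branching on codec_type. */
static void dump_codecpar(const AVCodecParameters *par)
{
    if (par->codec_type == AVMEDIA_TYPE_VIDEO) {
        av_log(NULL, AV_LOG_INFO, "video: %s %dx%d, pix_fmt %d, bit_rate %" PRId64 "\n",
               avcodec_get_name(par->codec_id), par->width, par->height,
               par->format /* enum AVPixelFormat */, par->bit_rate);
    } else if (par->codec_type == AVMEDIA_TYPE_AUDIO) {
        av_log(NULL, AV_LOG_INFO, "audio: %s %d Hz, %d ch, sample_fmt %d\n",
               avcodec_get_name(par->codec_id), par->sample_rate,
               par->channels, par->format /* enum AVSampleFormat */);
    }
}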

AVCodec

This structure is declared at libavcodec/codec.h:202. This structure, too, appears to be a template.

typedef struct AVCodec {
/**
* Name of the codec implementation.
* The name is globally unique among encoders and among decoders (but an
* encoder and a decoder can share the same name).
* This is the primary way to find a codec from the user perspective.
*/
const char *name;
/**
* Descriptive name for the codec, meant to be more human readable than name.
* You should use the NULL_IF_CONFIG_SMALL() macro to define it.
*/
const char *long_name;
enum AVMediaType type;
enum AVCodecID id;
/**
* Codec capabilities.
* see AV_CODEC_CAP_*
*/
int capabilities;
uint8_t max_lowres; ///< maximum value for lowres supported by the decoder
const AVRational *supported_framerates; ///< array of supported framerates, or NULL if any, array is terminated by {0,0}
const enum AVPixelFormat *pix_fmts; ///< array of supported pixel formats, or NULL if unknown, array is terminated by -1
const int *supported_samplerates; ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0
const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1
const uint64_t *channel_layouts; ///< array of support channel layouts, or NULL if unknown. array is terminated by 0
const AVClass *priv_class; ///< AVClass for the private context
const AVProfile *profiles; ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN} /**
* Group name of the codec implementation.
* This is a short symbolic name of the wrapper backing this codec. A
* wrapper uses some kind of external implementation for the codec, such
* as an external library, or a codec implementation provided by the OS or
* the hardware.
* If this field is NULL, this is a builtin, libavcodec native codec.
* If non-NULL, this will be the suffix in AVCodec.name in most cases
* (usually AVCodec.name will be of the form "<codec_name>_<wrapper_name>").
*/
const char *wrapper_name; /*****************************************************************
* No fields below this line are part of the public API. They
* may not be used outside of libavcodec and can be changed and
* removed at will.
* New public fields should be added right above.
*****************************************************************
*/
/**
* Internal codec capabilities.
* See FF_CODEC_CAP_* in internal.h
*/
int caps_internal;

int priv_data_size;
/**
* @name Frame-level threading support functions
* @{
*/
/**
* Copy necessary context variables from a previous thread context to the current one.
* If not defined, the next thread will start automatically; otherwise, the codec
* must call ff_thread_finish_setup().
*
* dst and src will (rarely) point to the same context, in which case memcpy should be skipped.
*/
int (*update_thread_context)(struct AVCodecContext *dst, const struct AVCodecContext *src); /**
* Copy variables back to the user-facing context
*/
int (*update_thread_context_for_user)(struct AVCodecContext *dst, const struct AVCodecContext *src);
/** @} */ /**
* Private codec-specific defaults.
*/
const AVCodecDefault *defaults; /**
* Initialize codec static data, called from av_codec_iterate().
*
* This is not intended for time consuming operations as it is
* run for every codec regardless of that codec being used.
*/
void (*init_static_data)(struct AVCodec *codec);

int (*init)(struct AVCodecContext *);
int (*encode_sub)(struct AVCodecContext *, uint8_t *buf, int buf_size,
const struct AVSubtitle *sub);
/**
* Encode data to an AVPacket.
*
* @param avctx codec context
* @param avpkt output AVPacket
* @param[in] frame AVFrame containing the raw data to be encoded
* @param[out] got_packet_ptr encoder sets to 0 or 1 to indicate that a
* non-empty packet was returned in avpkt.
* @return 0 on success, negative error code on failure
*/
int (*encode2)(struct AVCodecContext *avctx, struct AVPacket *avpkt,
const struct AVFrame *frame, int *got_packet_ptr);
/**
* Decode picture or subtitle data.
*
* @param avctx codec context
* @param outdata codec type dependent output struct
* @param[out] got_frame_ptr decoder sets to 0 or 1 to indicate that a
* non-empty frame or subtitle was returned in
* outdata.
* @param[in] avpkt AVPacket containing the data to be decoded
* @return amount of bytes read from the packet on success, negative error
* code on failure
*/
int (*decode)(struct AVCodecContext *avctx, void *outdata,
int *got_frame_ptr, struct AVPacket *avpkt);
int (*close)(struct AVCodecContext *);
/**
* Encode API with decoupled frame/packet dataflow. This function is called
* to get one output packet. It should call ff_encode_get_frame() to obtain
* input data.
*/
int (*receive_packet)(struct AVCodecContext *avctx, struct AVPacket *avpkt); /**
* Decode API with decoupled packet/frame dataflow. This function is called
* to get one output frame. It should call ff_decode_get_packet() to obtain
* input data.
*/
int (*receive_frame)(struct AVCodecContext *avctx, struct AVFrame *frame);
/**
* Flush buffers.
* Will be called when seeking
*/
void (*flush)(struct AVCodecContext *); /**
* Decoding only, a comma-separated list of bitstream filters to apply to
* packets before decoding.
*/
const char *bsfs; /**
* Array of pointers to hardware configurations supported by the codec,
* or NULL if no hardware supported. The array is terminated by a NULL
* pointer.
*
* The user can only access this field via avcodec_get_hw_config().
*/
const struct AVCodecHWConfigInternal *const *hw_configs; /**
* List of supported codec_tags, terminated by FF_CODEC_TAGS_END.
*/
const uint32_t *codec_tags;
} AVCodec;
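
Applications normally get at these static AVCodec "templates" through lookup and iteration helpers rather than by touching the struct directly. A short sketch:

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* Look up a decoder by ID: returns a pointer to the static AVCodec "template". */
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (dec)
        printf("decoder: %s (%s), type %d\n",
               dec->name, dec->long_name ? dec->long_name : "", dec->type);

    /* Or walk every registered codec. */
    void *iter = NULL;
    const AVCodec *c;
    while ((c = av_codec_iterate(&iter))) {
        if (av_codec_is_encoder(c) && c->type == AVMEDIA_TYPE_AUDIO)
            printf("audio encoder: %s\n", c->name);
    }
    return 0;
}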

AVCodecContext

This structure is declared at libavcodec/avcodec.h:383. As the name suggests, it is the context for an AVCodec. It is too long to paste here, so it is omitted for now.
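
Even without listing the struct, its role is easy to show: an AVCodecContext is allocated from an AVCodec, initialised from the stream's AVCodecParameters, opened, and then fed packets through the send/receive API. A minimal sketch under those assumptions (error handling trimmed; fmt and stream_index come from the demuxing code shown earlier):

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Allocate and open a decoder context for one stream. */
static AVCodecContext *open_decoder(AVFormatContext *fmt, int stream_index)
{
    AVCodecParameters *par = fmt->streams[stream_index]->codecpar;
    const AVCodec *dec = avcodec_find_decoder(par->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, par);   /* copy extradata, dimensions, sample rate, ... */
    avcodec_open2(ctx, dec, NULL);             /* runs the codec's init() */
    return ctx;
}

/* Feed one demuxed packet and drain any frames it produces. */
static void decode_one(AVCodecContext *ctx, const AVPacket *pkt, AVFrame *frame)
{
    if (avcodec_send_packet(ctx, pkt) < 0)
        return;
    while (avcodec_receive_frame(ctx, frame) == 0)
        av_frame_unref(frame);                 /* ...process the decoded frame here... */
}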
