OpenH264
Reposted from: http://blog.csdn.net/chinabinlang/article/details/41209053
The most widely used H.264 project today is x264.
Recently another open-source project, OpenH264, has appeared; WebRTC uses it, but I have not studied the project in detail.
After building the project, I used the h264dec.exe command-line tool to decode a file encoded with x264. It decoded and displayed correctly, so the bitstreams produced by the two projects should be mutually compatible.
Project site: http://www.openh264.org/faq.html
It provides both a C interface and a C++ interface.
Relatively little documentation can be found online for now; after a quick look through the project, it seems quite good.
Building the project on Win7:
I did a simple build on Win7; nothing special needs to be configured.
1. Download the source code and unpack it.
2. In MSYS, change into the unpacked directory and run make.
3. A few minutes later the build finishes; the unpacked directory contains the generated exe files, several .a files, and a libopenh264.dll.
4. The directory openh264-1.0.0\codec\build\win32\enc contains a Visual Studio project; open it to build the console program and the related DLLs. The generated files are placed in \openh264-1.0.0\bin\win32\Debug.
Other platforms also have corresponding IDE project files.
5. Another option: go to \openh264-1.0.0\testbin and run AutoBuild_Windows_VS2008.bat with administrator privileges; note that the VS path in the batch file must be edited to match the installation path on your machine.
Then just build; the output is in openh264-1.0.0\bin\win32, in debug and release subfolders.
The documentation shipped with the project follows:
OpenH264_API_v1.0.docx:
Contents
Step#1: create and destroy the encoder
Step#2: initialize the encoder
Step#3: invoke the encoding
Step#4: control the encoding
Encoder Return Value: CM_RETURN
Encoder Return Value: EVideoFrameType
Step#1: create and destroy the decoder
Step#2: initialize the decoder
Step#3: invoke the decoding
Step#4: control the decoding
Revisions history
| Date | Version | Author | Description |
|------|---------|--------|-------------|
| 12/23/2013 | 0.1 | Sijia Chen, Wayne Liu | Initial version |
| 12/25/2013 | 0.2 | Sijia Chen | Add encoder return value explanation |
| 12/25/2013 | 0.3 | Sijia Chen | Add explanation on range of some parameters in encoder |
| 03/04/2014 | 0.4 | Karina Li | Interface and parameters update |
| 05/23/2014 | 1.0 | Wayne Liu, Karina Li | Update for v1.0 release |
Encoder Interface Usage:
Step#1: create and destroy the encoder
int WelsCreateSVCEncoder(ISVCEncoder** ppEncoder);
void WelsDestroySVCEncoder(ISVCEncoder* pEncoder);
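The create/destroy pair can be wired up as in the following minimal sketch; the include path for codec_api.h and the error handling are assumptions, not part of the API document.

```cpp
// Minimal sketch: create an encoder with WelsCreateSVCEncoder() and destroy it
// with WelsDestroySVCEncoder(). Assumes codec_api.h from codec/api/svc/ is on
// the include path.
#include "codec_api.h"
#include <cstdio>

int main() {
  ISVCEncoder* pEncoder = NULL;
  if (WelsCreateSVCEncoder(&pEncoder) != 0 || pEncoder == NULL) {
    printf("failed to create the encoder\n");
    return -1;
  }
  // ... Initialize(), EncodeFrame() calls, Uninitialize() go here ...
  WelsDestroySVCEncoder(pEncoder);
  return 0;
}
```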
Step#2: initialize the encoder
/*
 * Initialize the encoder with base parameters.
 */
virtual int Initialize (const SEncParamBase* pParam) = 0;
/*
 * Initialize the encoder with extension parameters. Use this interface when more detailed settings are needed.
 */
virtual int InitializeExt (const SEncParamExt* pParam) = 0;
/*
 * Get the default extension parameters. If the user only cares about a few parameters, this interface can be used to obtain the defaults and then only the parameters of interest need to be changed.
 */
virtual int GetDefaultParams (SEncParamExt* pParam) = 0;
virtual int Uninitialize() = 0;
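A rough sketch of the base-parameter path follows; the resolution, bitrate, and frame rate are illustrative values, and fields not set explicitly are simply left at zero after memset.

```cpp
// Sketch: initialize an already-created encoder with SEncParamBase only.
// memset() leaves iUsageType at 0 (camera video) and iRCMode at its 0 default.
#include <cstring>

bool InitEncoderBase(ISVCEncoder* pEncoder) {
  SEncParamBase param;
  memset(&param, 0, sizeof(SEncParamBase));
  param.iPicWidth      = 640;         // width in luminance samples
  param.iPicHeight     = 480;         // height in luminance samples
  param.iTargetBitrate = 512 * 1000;  // target bitrate, in bits per second
  param.fMaxFrameRate  = 30.0f;       // maximal input frame rate
  return pEncoder->Initialize(&param) == 0;  // 0 means success
}
```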
Step#3: invoke the encoding
/*
* return: 0 - success; otherwise - failed;
*/
virtual int EncodeFrame(const SSourcePicture* kpSrcPic, SFrameBSInfo* pBsInfo) = 0;
SSourcePicture:
| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iColorFormat | input image color space format; currently only videoFormatI420 is supported |
| int | iStride[4] | stride of each picture plane |
| unsigned char* | pData[4] | pointers to the source data planes |
| int | iPicWidth | width of picture in luminance samples |
| int | iPicHeight | height of picture in luminance samples |
| long long | uiTimeStamp | time stamp of the frame |
// kpSrc = the pointer to the source luminance plane
// chrominance data:
//   CbData = kpSrc + m_iMaxPicWidth * m_iMaxPicHeight;
//   CrData = CbData + (m_iMaxPicWidth * m_iMaxPicHeight) / 4;
// The application calling this interface must ensure that the data in [kpSrc, kpSrc + framesize - 1] is valid.
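Putting the SSourcePicture table and the buffer-layout comments together, one EncodeFrame() call could look like the sketch below; how to walk the returned SFrameBSInfo for NAL data is not described in this document, so it is left as a comment.

```cpp
// Sketch: encode one I420 frame stored in a single contiguous buffer laid out
// Y, then Cb, then Cr, exactly as in the comments above.
#include <cstring>

int EncodeOneFrame(ISVCEncoder* pEncoder, unsigned char* pI420,
                   int iWidth, int iHeight, long long uiTimeStamp) {
  SSourcePicture pic;
  memset(&pic, 0, sizeof(SSourcePicture));
  pic.iColorFormat = videoFormatI420;              // only I420 is supported
  pic.iPicWidth    = iWidth;
  pic.iPicHeight   = iHeight;
  pic.uiTimeStamp  = uiTimeStamp;
  pic.iStride[0]   = iWidth;                       // luma stride
  pic.iStride[1]   = pic.iStride[2] = iWidth / 2;  // chroma strides
  pic.pData[0] = pI420;                                    // Y plane
  pic.pData[1] = pic.pData[0] + iWidth * iHeight;          // Cb plane
  pic.pData[2] = pic.pData[1] + (iWidth * iHeight) / 4;    // Cr plane

  SFrameBSInfo info;
  memset(&info, 0, sizeof(SFrameBSInfo));
  int rv = pEncoder->EncodeFrame(&pic, &info);     // 0 means success
  // The encoded NAL units are returned through 'info'; its layer layout is not
  // described in this document, so consult codec_app_def.h to extract them.
  return rv;
}
```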
Step#4: control the encoding
The upper-layer application should ensure thread safety between calls to these control interfaces and calls to the encoding interface.
/*
* return: 0 - success; otherwise - failed;
*/
virtual int PauseFrame(const SSourcePicture* kpSrcPic, SFrameBSInfo* pBsInfo) = 0;
/*
* return: 0 - success; otherwise - failed;
*/
virtual int ForceIntraFrame(bool bIDR) = 0;
/***********************************************************************
 * InDataFormat, IDRInterval, SVC Encode Param, Frame Rate, Bitrate, ...
 ***********************************************************************/
/*
* return: CM_RETURN: 0 - success; otherwise - failed;
*/
virtual int SetOption(ENCODER_OPTION eOptionId, void* pOption) = 0;
virtual int GetOption(ENCODER_OPTION eOptionId, void* pOption) = 0;
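As a small illustration of Step#4, the sketch below changes the frame rate and bitrate through SetOption() (using the int/float formats given in the option table that follows) and requests an IDR via ForceIntraFrame(); remember that these calls must be serialized against EncodeFrame() by the caller.

```cpp
// Sketch: runtime control of an initialized encoder. These calls must be
// serialized against EncodeFrame() by the caller.
void TuneEncoder(ISVCEncoder* pEncoder) {
  float fNewFrameRate = 15.0f;                  // ENCODER_OPTION_FRAME_RATE takes a float
  pEncoder->SetOption(ENCODER_OPTION_FRAME_RATE, &fNewFrameRate);

  int iNewBitrate = 300 * 1000;                 // ENCODER_OPTION_BITRATE takes an int (per the table below)
  pEncoder->SetOption(ENCODER_OPTION_BITRATE, &iNewBitrate);

  pEncoder->ForceIntraFrame(true);              // force the next frame to be an IDR
}
```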
Encoder Option List:
int CWelsH264SVCEncoder::SetOption(ENCODER_OPTION eOptionId, void* pOption)
| Option ID | Input Format | Meaning/Constraint |
|-----------|--------------|--------------------|
| ENCODER_OPTION_SVC_ENCODE_PARAM_BASE | Structure of Base Param | |
| ENCODER_OPTION_SVC_ENCODE_PARAM_EXT | Structure of Extension Param | |
| ENCODER_OPTION_IDR_INTERVAL | int | IDR period; 0/-1 means no intra period (only the first frame is IDR); >0 means the desired IDR period, which must be a multiple of (2^temporal_layer) |
| ENCODER_OPTION_FRAME_RATE | float | maximal input frame rate; currently supported range: MIN_FRAME_RATE = 1 to MAX_FRAME_RATE = 30 |
| ENCODER_OPTION_BITRATE | int | |
| ENCODER_OPTION_MAX_BITRATE | int | |
| ENCODER_PADDING_PADDING | int | 0: disable padding; 1: enable padding |
| ENCODER_LTR_RECOVERY_REQUEST | Structure of SLTRRecoverRequest | |
| ENCODER_LTR_MARKING_FEEDBACK | Structure of SLTRMarkingFeedback | |
| ENCODER_LTR_MARKING_PERIOD | unsigned int | |
| ENCODER_OPTION_LTR | unsigned int | 0: disable LTR; >0: enable LTR; the LTR number is fixed at 2 in the current encoder |
| ENCODER_OPTION_ENABLE_PREFIX_NAL_ADDING | bool | false: do not use prefix NAL; true: use prefix NAL |
| ENCODER_OPTION_ENABLE_SPS_PPS_ID_ADDITION | bool | false: do not adjust ID in SPS/PPS; true: adjust ID in SPS/PPS |
| ENCODER_OPTION_CURRENT_PATH | string | |
| ENCODER_OPTION_DUMP_FILE | Structure of SDumpLayer | dump the reconstructed frames of a layer to a specified file |
| ENCODER_OPTION_TRACE_LEVEL | int | output information according to the trace level |
Encoder Parameter List:
(Note: some parameters in the structures mentioned below are not explained because they are to be removed.)
SEncParamBase:
| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iUsageType | application type; currently two types are supported: 0: camera video; 1: screen content |
| int | iInputCsp | color space of the input sequence; currently only videoFormatI420 is supported |
| int | iPicWidth | width of picture in luminance samples (the maximum of all layers if multiple spatial layers are present) |
| int | iPicHeight | height of picture in luminance samples (the maximum of all layers if multiple spatial layers are present) |
| int | iTargetBitrate | target bitrate |
| int | iRCMode | rate control mode |
| float | fMaxFrameRate | maximal input frame rate |
SEncParamExt:
| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iUsageType | application type; currently two types are supported: 0: camera video; 1: screen content |
| int | iInputCsp | color space of the input sequence; currently only videoFormatI420 is supported |
| int | iPicWidth | width of picture in luminance samples (the maximum of all layers if multiple spatial layers are present) |
| int | iPicHeight | height of picture in luminance samples (the maximum of all layers if multiple spatial layers are present) |
| int | iTargetBitrate | target bitrate |
| int | iRCMode | rate control mode |
| float | fMaxFrameRate | maximal input frame rate |
| int | iTemporalLayerNum | temporal layer number; max temporal layer = 4 |
| int | iSpatialLayerNum | spatial layer number; 1 <= iSpatialLayerNum <= MAX_SPATIAL_LAYER_NUM (MAX_SPATIAL_LAYER_NUM = 4) |
| SSpatialLayerConfig | sSpatialLayers[MAX_SPATIAL_LAYER_NUM] | per-layer configuration, see the SSpatialLayerConfig table below |
| unsigned int | uiIntraPeriod | period of IDR frames |
| int | iNumRefFrame | number of reference frames |
| unsigned int | uiFrameToBeCoded | number of frames to be encoded; if the user does not know or does not care about the number, set it to 0xFFFFFFFF |
| bool | bEnableSpsPpsIdAddition | false: do not adjust ID in SPS/PPS; true: adjust ID in SPS/PPS |
| bool | bPrefixNalAddingCtrl | false: do not use prefix NAL; true: use prefix NAL |
| bool | bEnableSSEI | false: do not use SSEI; true: use SSEI |
| int | iPaddingFlag | 0: disable padding; 1: enable padding |
| int | iEntropyCodingModeFlag | 0: CAVLC; 1: CABAC; currently only CAVLC is supported |
| bool | bEnableRc | false: do not use rate control; true: use rate control |
| bool | bEnableFrameSkip | false: do not skip frames even if the VBV buffer overflows; true: allow skipping frames to keep the bitrate within limits |
| int | iMaxBitrate | maximum bitrate |
| int | iMaxQp | maximum QP the encoder supports |
| int | iMinQp | minimum QP the encoder supports |
| unsigned int | uiMaxNalSize | maximum NAL size; must be non-zero for dynamic slice mode |
| bool | bEnableLongTermReference | 0: on, 1: off |
| int | iLTRRefNum | number of long term reference (LTR) frames |
| int | iLtrMarkPeriod | LTR marking period used in feedback |
| short | iMultipleThreadIdc | 0: auto (dynamic implementation inside the encoder); 1: multi-threading disabled; >1: the number of threads |
| int | iLoopFilterDisableIdc | 0: on; 1: off; 2: on except for slice boundaries |
| int | iLoopFilterAlphaC0Offset | AlphaOffset: valid range [-6, 6], default 0 |
| int | iLoopFilterBetaOffset | BetaOffset: valid range [-6, 6], default 0 |
| bool | bEnableDenoise | denoise control |
| bool | bEnableBackgroundDetection | background detection control |
| bool | bEnableAdaptiveQuant | adaptive quantization control |
| bool | bEnableFrameCroppingFlag | enable cropping of the source picture |
| int | bEnableSceneChangeDetect | enable scene change detection |
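The usual pattern for the extension path is the one hinted at in Step#2: fetch the defaults with GetDefaultParams(), override only the fields of interest, then call InitializeExt(). A hedged single-layer sketch with illustrative values:

```cpp
// Sketch: initialize via the extension parameters with a single spatial layer.
// The resolution/bitrate/frame-rate values are illustrative only.
#include <cstring>

bool InitEncoderExt(ISVCEncoder* pEncoder, int iWidth, int iHeight) {
  SEncParamExt param;
  memset(&param, 0, sizeof(SEncParamExt));
  if (pEncoder->GetDefaultParams(&param) != 0)   // start from the library defaults
    return false;
  param.iPicWidth         = iWidth;
  param.iPicHeight        = iHeight;
  param.iTargetBitrate    = 512 * 1000;
  param.fMaxFrameRate     = 30.0f;
  param.iTemporalLayerNum = 1;
  param.iSpatialLayerNum  = 1;
  param.sSpatialLayers[0].iVideoWidth     = iWidth;
  param.sSpatialLayers[0].iVideoHeight    = iHeight;
  param.sSpatialLayers[0].fFrameRate      = 30.0f;
  param.sSpatialLayers[0].iSpatialBitrate = 512 * 1000;
  return pEncoder->InitializeExt(&param) == 0;
}
```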
SSpatialLayerConfig:

| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iVideoWidth | width of picture in luminance samples of a layer |
| int | iVideoHeight | height of picture in luminance samples of a layer |
| float | fFrameRate | frame rate for a layer |
| int | iSpatialBitrate | target bitrate for a spatial layer |
| int | iMaxSpatialBitrate | maximum bitrate for a spatial layer |
| EProfileIdc | uiProfileIdc | value of profile IDC (0 for auto-detection) |
| ELevelIdc | uiLevelIdc | value of level IDC (0 for auto-detection) |
| int | iDLayerQp | QP of each layer for the fixed-quant case |
| SSliceConfig | sSliceCfg | slicing configuration |
| | sSliceCfg.uiSliceMode | 0: SM_SINGLE_SLICE, SliceNum == 1; 1: SM_FIXEDSLCNUM_SLICE, sliced according to SliceNum, dynamic slicing enabled for multi-threading; 2: SM_RASTER_SLICE, sliced according to SlicesAssign, requires the MB number of each slice as input (any other constraints in SSliceArgument must also be followed; typically, if both MB number and slice size are constrained, re-encoding may be involved); 3: SM_ROWMB_SLICE, sliced according to PictureMBHeight, one row of MBs per slice; 4: SM_DYN_SLICE, sliced according to SliceSize, dynamic slicing (the number of slices is unknown until the current frame has been encoded) |
| | sSliceCfg.sSliceArgument.uiSliceMbNum | used when uiSliceMode = 2 |
| | sSliceCfg.sSliceArgument.uiSliceNum | used when uiSliceMode = 1 |
| | sSliceCfg.sSliceArgument.uiSliceSizeConstraint | used when uiSliceMode = 4 |
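For example, the fixed-slice-number mode from the uiSliceMode row could be requested roughly as below; the SM_FIXEDSLCNUM_SLICE enumerator and the member path come from the table above, so verify the exact spelling against codec_app_def.h.

```cpp
// Sketch: request 4 slices per frame on spatial layer 0
// (uiSliceMode == 1, i.e. SM_FIXEDSLCNUM_SLICE), which also enables
// multi-threaded slicing according to the table above.
void ConfigureFixedSliceNum(SEncParamExt* pParam) {
  pParam->sSpatialLayers[0].sSliceCfg.uiSliceMode = SM_FIXEDSLCNUM_SLICE;  // enumerator name as in the table
  pParam->sSpatialLayers[0].sSliceCfg.sSliceArgument.uiSliceNum = 4;       // used when uiSliceMode == 1
}
```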
Encoder Return Value: CM_RETURN
Return value of parameter initialization (Initialize()) and option setting (SetOption()):

| Value | Parameter Name | Meaning/Constraint |
|-------|----------------|--------------------|
| 0 | cmResultSuccess | successfully initialized |
| 1 | cmInitParaError | an error was found in the input parameters |
| 2 | cmMachPerfIsBad | not supported yet |
| 3 | cmUnkonwReason | not supported yet |
| 4 | cmMallocMemeError | the input source picture is NULL |
| 5 | cmInitExpected | the encoder was not created correctly |
Encoder Return Value: EVideoFrameType
Return value of encoding one frame with EncodeFrame():

| Value | Parameter Name | Meaning/Constraint |
|-------|----------------|--------------------|
| 0 | videoFrameTypeInvalid | encoder not ready or parameters are invalid |
| 1 | videoFrameTypeIDR | IDR frame in H.264 |
| 2 | videoFrameTypeI | I frame type |
| 3 | videoFrameTypeP | P frame type |
| 4 | videoFrameTypeSkip | the encoder decided to skip the frame; no bitstream will be output |
| 5 | videoFrameTypeIPMixed | a frame in which I and P slices are mixed; not supported yet |
Decoder Usage:
Step#1: create and destroy the decoder
int WelsCreateDecoder(ISVCDecoder** ppDecoder);
void WelsDestroyDecoder(ISVCDecoder* pDecoder);
Step#2: initialize the decoder
virtual long Initialize(const SDecodingParam* pParam) = 0;
virtual long Uninitialize() = 0;
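Steps #1 and #2 on the decoder side can be combined into a small sketch like the following; the SDecodingParam fields are described in the tables later in this document, and everything not set explicitly is zeroed.

```cpp
// Sketch: create a decoder and initialize it for a plain AVC byte stream.
#include "codec_api.h"
#include <cstring>

ISVCDecoder* CreateAndInitDecoder() {
  ISVCDecoder* pDecoder = NULL;
  if (WelsCreateDecoder(&pDecoder) != 0 || pDecoder == NULL)
    return NULL;

  SDecodingParam sDecParam;
  memset(&sDecParam, 0, sizeof(SDecodingParam));
  sDecParam.sVideoProperty.eVideoBsType = VIDEO_BITSTREAM_AVC;  // AVC input (see the usage example below)
  if (pDecoder->Initialize(&sDecParam) != 0) {
    WelsDestroyDecoder(pDecoder);
    return NULL;
  }
  return pDecoder;
}
```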
Step#3: invoke the decoding
/***************************************************************************
 * Description:
 *   Decompress one frame or slice, and output I420 decoded data.
 *
 * Input parameters:
 *   Parameter   TYPE                   Description
 *   kpSrc       const unsigned char*   the H.264 stream to be decoded
 *   kiSrcLen    const int              the length of the H.264 stream
 *   ppDst       unsigned char**        buffer pointer of decoded data (YUV)
 *   pDstInfo    SBufferInfo*           information provided to the API, including width, height, etc.
 *   pStride     int*                   output stride
 *   iWidth      int&                   output width
 *   iHeight     int&                   output height
 *
 * return: 0 if the frame is decoded successfully, otherwise a corresponding error code.
 ***************************************************************************/
//the following API is for slice level decoding
virtual DECODING_STATE DecodeFrame2(
const unsigned char* kpSrc,
const int kiSrcLen,
unsigned char** ppDst,
SBufferInfo* pDstInfo);
//the following API is for frame level decoding
virtual DECODING_STATE DecodeFrame(
const unsigned char* kpSrc,
const int kiSrcLen,
unsigned char** ppDst,
int* pStride,
int& iWidth,
int& iHeight );
//Note: DecodeFrameEx() is not used.
//Note: for slice-level decoding with DecodeFrame2() (4 input parameters), regardless of the function's return value, the output I420 data is only available when pDstInfo->iBufferStatus == 1 (e.g., in multi-slice cases, this variable is set to 1 only once the whole picture has been completely reconstructed).
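A minimal sketch of one slice-level DecodeFrame2() call, following the note above that the I420 output is usable only when iBufferStatus == 1:

```cpp
// Sketch: decode one chunk of Annex B data (start codes included) and report
// whether a complete I420 picture is available in pData.
#include <cstring>

bool DecodeChunk(ISVCDecoder* pDecoder, const unsigned char* pBuf, int iSize,
                 unsigned char* pData[3]) {
  pData[0] = pData[1] = pData[2] = NULL;        // Y/U/V plane pointers filled by the decoder
  SBufferInfo sDstBufInfo;
  memset(&sDstBufInfo, 0, sizeof(SBufferInfo));

  DECODING_STATE eState = pDecoder->DecodeFrame2(pBuf, iSize, pData, &sDstBufInfo);
  (void)eState;                                 // non-zero values signal decoding errors
  return sDstBufInfo.iBufferStatus == 1;        // 1: a full picture is ready for rendering
}
```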
Step#4: control the decoding
virtual int SetOption(DECODER_OPTION eOptionId, void* pOption) = 0;
virtual int GetOption(DECODER_OPTION eOptionId, void* pOption) = 0;
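A short sketch of the decoder control step, using two options from the table below (end-of-stream flagging and frame_num query):

```cpp
// Sketch: decoder Step#4 - flag end of stream and query the current frame_num.
void ControlDecoder(ISVCDecoder* pDecoder) {
  bool bEndOfStream = true;
  pDecoder->SetOption(DECODER_OPTION_END_OF_STREAM, &bEndOfStream);

  int iFrameNum = -1;
  pDecoder->GetOption(DECODER_OPTION_FRAME_NUM, &iFrameNum);
  // iFrameNum now holds the frame_num reported by the decoder
  // (it stays -1 if the call did not fill it in).
}
```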
Decoder Option List:
| Option ID | Input Format | Meaning/Constraint |
|-----------|--------------|--------------------|
| DECODER_OPTION_DATAFORMAT | int | color format; currently only 23 (I420) is supported |
| DECODER_OPTION_END_OF_STREAM | bool | end of stream flag |
| DECODER_OPTION_VCL_NAL | bool | feedback to the application layer whether the current AU contains a VCL NAL |
| DECODER_OPTION_TEMPORAL_ID | int | if the current AU contains a VCL NAL, feedback its temporal ID |
| DECODER_OPTION_FRAME_NUM | int | indicates frame_num |
| DECODER_OPTION_IDR_PIC_ID | int | indicates the current IDR_ID |
| DECODER_OPTION_LTR_MARKING_FLAG | bool | read only; indicates whether the LTR_marking flag is used in the current AU |
| DECODER_OPTION_LTR_MARKED_FRAME_NUM | int | read only; indicates the frame_num of the current AU marked as LTR |
| DECODER_OPTION_ERROR_CON_IDC | int | indicates the error concealment method: 0: disable; 1: frame_copy; 2: slice_copy (default) |
Decoder Parameter List:
(Note: some parameters in the structures mentioned below are not explained because they are to be removed.)
SDecodingParam: (Note: some of the members may not be used for now.)

| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| char* | pFileNameRestructed | file name of the reconstructed frame, used for PSNR-calculation-based debugging |
| int | iOutputColorFormat | color space format of the output |
| unsigned int | uiCpuLoad | CPU load |
| unsigned char | uiTargetDqLayer | target DQ layer number |
| unsigned char | uiEcActiveFlag | whether to activate the error concealment feature in the decoder |
| SVideoProperty | sVideoProperty | video stream property |
SVideoProperty:

| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| unsigned int | size | size of the structure |
| VIDEO_BITSTREAM_TYPE | eVideoBsType | video stream type (AVC/SVC) |
SBufferInfo:
| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iBufferStatus | 0: data not ready; 1: data ready |
| union | UsrData | output buffer info, see the following tables |
union UsrData:
| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| SSysMEMBuffer | sSysMEMBuffer | output via system memory, see the following table |
SSysMEMBuffer:

| Format | Parameter Name | Meaning/Constraint |
|--------|----------------|--------------------|
| int | iWidth | width of the decoded picture |
| int | iHeight | height of the decoded picture |
| int | iFormat | type is "EVideoFormatType", see codec_def.h |
| int | iStride[2] | stride |
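Reading the three tables above together, the geometry of a decoded picture can be pulled out of the SBufferInfo filled in by DecodeFrame2() roughly as follows; the union member name is taken from these tables, so double-check it against codec_def.h in your version.

```cpp
// Sketch: read back the decoded picture geometry after a successful
// DecodeFrame2() call. The UsrData.sSysMEMBuffer member path follows the
// tables above; confirm the spelling against codec_def.h.
void DescribeOutput(const SBufferInfo& sDstBufInfo) {
  if (sDstBufInfo.iBufferStatus != 1)
    return;                                                    // no complete picture yet
  int iWidth    = sDstBufInfo.UsrData.sSysMEMBuffer.iWidth;     // luma width
  int iHeight   = sDstBufInfo.UsrData.sSysMEMBuffer.iHeight;    // luma height
  int iStrideY  = sDstBufInfo.UsrData.sSysMEMBuffer.iStride[0]; // stride, index 0 (assumed: luma plane)
  int iStrideUV = sDstBufInfo.UsrData.sSysMEMBuffer.iStride[1]; // stride, index 1 (assumed: chroma planes)
  (void)iWidth; (void)iHeight; (void)iStrideY; (void)iStrideUV;
}
```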
Decoder Usage Example:
A dummy process for using the decoder could be: (for static library)
1. ISVCDecoder *pSvcDecoder; //declare a decoder
unsigned char *pBuf = …; //input: encoded bitstream start position; should include start code prefix
int iSize = …; //input: encoded bitstream length; should include the size of the start code prefix
unsigned char *pData[3] = …; //output: [0~2] for Y,U,V buffer
SBufferInfo sDstBufInfo; //in-out: declare the output buffer info
memset(&sDstBufInfo, 0, sizeof(SBufferInfo));
2. WelsCreateDecoder(&pSvcDecoder); //create a decoder
3. SDecodingParam sDecParam = {0}; //declare required param
sDecParam.sVideoProperty.eVideoBsType = VIDEO_BITSTREAM_AVC;
4. pSvcDecoder->Initialize(&sDecParam); //initialize the param and decoder context, allocate memory
5. pSvcDecoder->DecodeFrame2(pBuf, iSize, pData, &sDstBufInfo); //do actual decoding for slice/frame level; this can be done in a loop until the data ends
//if (sDstBufInfo.iBufferStatus == 1), pData can be used for rendering.
6. pSvcDecoder->Uninitialize(); //uninitialize the decoder and free memory
7. WelsDestroyDecoder(pSvcDecoder); //destroy the decoder
README.md:
OpenH264
========
OpenH264 is a codec library which supports H.264 encoding and decoding. It is suitable for use in real time applications such as WebRTC. See http://www.openh264.org/ for more details.
Encoder Features
----------------
- Constrained Baseline Profile up to Level 5.2 (4096x2304)
- Arbitrary resolution, support cropping
- Rate control with adaptive quantization, or constant quantization
- Slice options: 1 slice per frame, N slices per frame, N macroblocks per slice, or N bytes per slice
- Multiple threads automatically used for multiple slices
- Temporal scalability up to 4 layers in a dyadic hierarchy
- Spatial simulcast up to 4 resolutions from a single input
- Long Term Reference (LTR) frames
- Memory Management Control Operation (MMCO)
- Reference picture list modification
- Single reference frame for inter prediction
- Multiple reference frames when using LTR and/or 3-4 temporal layers
- Periodic and on-demand Instantaneous Decoder Refresh (IDR) frame insertion
- Dynamic changes to bit rate, frame rate, and resolution
- Annex B byte stream output
- YUV 4:2:0 planar input
Decoder Features
----------------
- Constrained Baseline Profile up to Level 5.2 (4096x2304)
- Arbitrary resolution, not constrained to multiples of 16x16
- Single thread for all slices
- Long Term Reference (LTR) frames
- Memory Management Control Operation (MMCO)
- Reference picture list modification
- Multiple reference frames when specified in Sequence Parameter Set (SPS)
- Annex B byte stream input
- YUV 4:2:0 planar output
- Decoder output timing conformance
- Error concealment support with slice copy as default method
OS Support
----------
- Windows 64-bit and 32-bit
- Mac OS X 64-bit and 32-bit
- Linux 64-bit and 32-bit
- Android 32-bit
- iOS 64-bit and 32-bit (not fully tested)
Processor Support
-----------------
- Intel x86 optionally with MMX/SSE (no AVX yet, help is welcome)
- ARMv7 optionally with NEON
- Any architecture using C/C++ fallback functions
Building the Library
--------------------
NASM is required to build the assembly code: version 2.07 or above works; NASM can be downloaded from http://www.nasm.us/
To build the arm assembly for Windows Phone, gas-preprocessor is required. It can be downloaded from git://git.libav.org/gas-preprocessor.git
For Android Builds
------------------
To build for the Android platform, you need to install the Android SDK and NDK. You also need to export **ANDROID_SDK**/tools to PATH. On Linux, this can be done by
'export PATH=**ANDROID_SDK**/tools:$PATH'
The codec and demo can be built by
'make OS=android NDKROOT=**ANDROID_NDK** TARGET= **ANDROID_TARGET**'
Valid **ANDROID_TARGET** can be found in **ANDROID_SDK**/platforms, such as android-12.
You can also set ARCH, NDKLEVEL, GCCVERSION according to your device and NDK version.
ARCH specifies the architecture of the Android device. Currently only arm and x86 are supported; the default is arm.
NDKLEVEL specifies the Android API level; it can be 12-19, and the default is 12.
GCCVERSION specifies which gcc in the NDK is used; the default is 4.8.
By default these commands build for the armeabi-v7a ABI. To build for the other android
ABIs, add "ARCH=mips" or "ARCH=x86". To build for the older armeabi ABI (which has
armv5te as baseline), add "APP_ABI=armeabi" (ARCH=arm is implicit).
For iOS Builds
--------------
You can build the libraries and demo applications using xcode project files
located in codec/build/iOS/dec and codec/build/iOS/enc.
You can also build the libraries (but not the demo applications) using the
make based build system from the command line. Build with
'make OS=ios ARCH=**ARCH**'
Valid values for **ARCH** are the normal iOS architecture names such as
armv7, armv7s, arm64, and i386 and x86_64 for the simulator. Additionally,
one might need to add 'SDK=X.Y' to the make command line in case the default
SDK version isn't available. Another settable iOS specific parameter
is SDK_MIN, specifying the minimum deployment target for the built library.
For other details on building using make on the command line, see
'For All Platforms' below.
For Windows Builds
------------------
Our Windows builds use MinGW which can be found here - http://www.mingw.org/
To build with gcc, add the MinGW bin directory (e.g. /c/MinGW/bin) to your path and follow the 'For All Platforms' instructions below.
To build with Visual Studio you will need to set up your path to run cl.exe. The easiest way is to start MSYS from a developer command line session - http://msdn.microsoft.com/en-us/library/ms229859(v=vs.110).aspx If you need to do it by hand here is an example from a Windows 64bit install of VS2012:
export PATH="$PATH:/c/Program Files (x86)/Microsoft Visual Studio 11.0/VC/bin:/c/Program Files (x86)/Microsoft Visual Studio 11.0/Common7/IDE"
You will also need to set your INCLUDE and LIB paths to point to your VS and SDK installs. Something like this, again from Win64 with VS2012 (note the use of Windows-style paths here).
export INCLUDE="C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include;C:\Program Files (x86)\Windows Kits\8.0\Include\um;C:\Program Files (x86)\Windows Kits\8.0\Include\shared"
export LIB="C:\Program Files (x86)\Windows Kits\8.0\Lib\Win8\um\x86;C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\lib"
Then add 'OS=msvc' to the make line of the 'For All Platforms' instructions.
For All Platforms
-------------------
From the main project directory:
'make' for automatically detecting 32/64bit and building accordingly
'make ENABLE64BIT=No' for 32bit builds
'make ENABLE64BIT=Yes' for 64bit builds
'make V=No' for a silent build (not showing the actual compiler commands)
The command line programs h264enc and h264dec will appear in the main project directory.
A shell script to run the command-line apps is in testbin/CmdLineExample.sh
Usage information can be found in testbin/CmdLineReadMe
Using the Source
----------------
codec - encoder, decoder, console (test app), build (makefile, vcproj)
build - scripts for Makefile build system.
test - GTest unittest files.
testbin - autobuild scripts, test app config files
res - yuv and bitstream test files
Known Issues
------------
See the issue tracker on https://github.com/cisco/openh264/issues
- Encoder errors when resolution exceeds 3840x2160
- Encoder errors when compressed frame size exceeds half uncompressed size
- Encoder does not support QP < 10 encoding
- Encoder does not support slice number > 35 encoding
- The result of floating-point calculations in rate control is affected by the precision of double-typed variables on different platforms
- Decoder errors when compressed frame size exceeds 1MB
- Encoder RC requires frame skipping to be enabled to hit the target bitrate,
if frame skipping is disabled the target bitrate may be exceeded
License
-------
BSD, see LICENSE file for details.
项目我采用了纯静态html+动态搜索的模式,就是说详情页.主页等纯静态页面,仅搜索页面采用数据库访问搜索,搜索结果分为静态和动态,如果输入的关键字是已存在的标签就静态展示,否则就动态展示,这么做的好处 ...