To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.

To test decoding, use the following command:

ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark

Note: after enabling DXVA 2.0 hardware decoding in VLC, CPU usage drops noticeably.

Below is an example of FFmpeg-based DXVA H.264 hardware decoding (essentially extracted from the VLC source code), although it does not seem to have much effect:

http://download.csdn.net/download/xin_hua_3/7324839

Checking with the GPU-Z tool confirms there is indeed no load on the GPU.

===============================

FFmpeg provides a subsystem for hardware acceleration.

Hardware acceleration makes it possible to use specific devices (usually a graphics card or other dedicated hardware) to perform multimedia processing, freeing the CPU from such demanding computations. Typically, hardware acceleration enables specific hardware devices (usually the GPU) to perform operations related to decoding and encoding video streams, or to filtering video.

When using the ffmpeg command-line tool, HW-assisted decoding is enabled through the -hwaccel option, which selects a specific decoder. Each decoder may have specific limitations (for example, an H.264 decoder may only support the baseline profile). HW-assisted encoding is enabled through the use of a specific encoder (for example nvenc_h264). HW-assisted filtering is only supported in a few filters, in which case you enable the OpenCL code through a filter option.
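A quick way to see which of these methods your own ffmpeg binary was built with is the -hwaccels option. A small sketch (it assumes ffmpeg is on PATH and prints a note otherwise; the exact list depends on how the binary was configured):

```shell
# Query the hwaccel methods compiled into the local ffmpeg build.
# ffmpeg itself may be absent; fall back to a note in that case.
if command -v ffmpeg >/dev/null 2>&1; then
  HWACCELS=$(ffmpeg -hide_banner -hwaccels 2>/dev/null || true)
fi
[ -n "${HWACCELS:-}" ] || HWACCELS="ffmpeg not found on PATH"
echo "$HWACCELS"
```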

There are several hardware acceleration APIs, some of which are supported to some extent by FFmpeg.

Platforms overview

API availability

              Linux  Linux   Windows  Windows  OS X  Android  iOS  Raspberry Pi
              Intel  NVIDIA  Intel    NVIDIA
CUDA          N      Y       N        Y        Y     N        N    N
Direct3D 11   N      N       Y        Y        N     N        N    N
DXVA2         N      N       Y        Y        N     N        N    N
MediaCodec    N      N       N        N        N     Y        N    N
MMAL          N      N       N        N        N     N        N    Y
NVENC         N      Y       N        Y        N     N        N    N
OpenCL        Y      Y       Y        Y        Y     N        N    N
Quick Sync    Y      N       Y        N        N     N        N    N
VA-API        Y      Y*      N        N        N     N        N    N
VDA†          N      N       N        N        Y     N        N    N
VDPAU         N      Y       N        N        N     N        N    N
VideoToolbox  N      N       N        N        Y     N        N    N
XvMC          Y      Y       N        N        N     N        N    N

* Semi-maintained.

† Deprecated by upstream.

FFmpeg implementations

              AVHWAccel  Decoder  Encoder  CLI  Filtering  AVHWFramesContext
CUDA          N          N        N        N/A  Y*         Y
Direct3D 11   Y          N        N/A      N    N          N
DXVA2         Y          N        N/A      Y    N          N
MediaCodec    N          Y        N        N/A  N/A        N
MMAL          Y          Y        N/A      N    N/A        N
NVENC         N/A        N/A      Y        N/A  N/A        N
OpenCL        N/A        N/A      N/A      N/A  Y          N
Quick Sync    Y          Y        Y        Y    N          N
VA-API        Y          N        Y        Y    Y          Y
VDA           Y          Y        N/A      Y    N/A        N
VDPAU         Y          N†       N/A      Y    N          Y
VideoToolbox  Y          N        Y        Y    N          N
XvMC          Y          N†       N/A      N    N/A        N

N/A This feature is not directly supported by the API, or is not currently implementable.

* Work in progress. If "Y" is indicated, infrastructure is in place but no filters have been implemented yet.

† Actually yes, but is deprecated and should not be used.

VDPAU

Video Decode and Presentation API for Unix. Developed by NVIDIA for UNIX/Linux systems. To enable this you typically need the libvdpau development package in your distribution, and a compatible graphics card.

Note that VDPAU cannot be used to decode frames in memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed using the VDPAU API. This is not done automatically by FFmpeg, but must be done at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c). Also note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to re-encode the decoded frame (e.g. when doing transcoding on a server).

Several decoders are currently supported through VDPAU in libavcodec, in particular MPEG video, VC-1, H.264 and MPEG-4.
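Mirroring the DXVA2 benchmark command used elsewhere in this page, a VDPAU decode can be tested from the command line. A sketch (INPUT is a placeholder for a real stream one of the decoders above handles; the run is skipped when ffmpeg or the file is missing):

```shell
INPUT=input.h264   # placeholder: substitute a real H.264/MPEG bitstream
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$INPUT" ]; then
  # Decode through VDPAU, discard the output, and print timing statistics.
  ffmpeg -hide_banner -hwaccel vdpau -threads 1 -i "$INPUT" -f null - -benchmark
  RESULT="ran VDPAU decode benchmark"
else
  RESULT="skipped: ffmpeg or $INPUT not available"
fi
echo "$RESULT"
```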

XvMC

XVideo Motion Compensation. This is an extension of the X video extension (Xv) for the X Window System, and is thus again only available on UNIX/Linux.

The official specification is available here: http://www.xfree86.org/~mvojkovi/XvMC_API.txt

VA-API

Video Acceleration API (VA-API) is a non-proprietary, royalty-free open-source software library ("libVA") and API specification, initially developed by Intel but usable in combination with devices from other vendors. Linux only: https://en.wikipedia.org/wiki/Video_Acceleration_API

DXVA2

DirectX Video Acceleration API, developed by Microsoft (supports Windows and the Xbox 360).

Link to MSDN documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/cc307941%28v=vs.85%29.aspx

Several decoders are currently supported, in particular H.264, MPEG-2, VC-1 and WMV3.

DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header. For MinGW this can be done by downloading the header maintained by VLC:

http://download.videolan.org/pub/contrib/dxva2api.h

and installing it in the include path (for example in /usr/include/).

For MinGW-w64, dxva2api.h is provided by default. One way to install MinGW-w64 is through a pacman repository, using one of the two following commands, depending on the architecture:

pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc

To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.

To test decoding, use the following command:

ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark

VDA

Video Decoding API, only supported on OS X. H.264 decoding is available in FFmpeg/libavcodec.

Developer documentation: https://developer.apple.com/library/mac/technotes/tn2267/_index.html

NVENC

NVENC is an API developed by NVIDIA which enables the use of NVIDIA GPU cards to perform H.264 and HEVC encoding. FFmpeg supports NVENC through the nvenc_h264 and nvenc_hevc encoders.
In order to enable it in FFmpeg you need:

  • A supported GPU
  • Supported drivers
  • Locally installed nvEncodeAPI.h header files from the NVENC SDK
  • ffmpeg configured with --enable-nvenc

Visit the NVIDIA Video Codec SDK page to download the SDK and to read more about the supported GPUs and supported drivers.

Usage example:

ffmpeg -i input -c:v nvenc_h264 -profile high444p -pixel_format yuv444p -preset default output.mp4

You can see available presets, other options, and encoder info with ffmpeg -h encoder=nvenc_h264 or ffmpeg -h encoder=nvenc_hevc.

Note: if you get the "No NVENC capable devices found" error, make sure you are encoding to a supported pixel format. See the encoder info as shown above.

Intel QSV

Intel QSV (Quick Sync Video) is a technology which allows decoding and encoding using recent Intel CPUs and their integrated GPUs. Note that the GPU needs to be compatible with both QSV and OpenCL; some older QSV-enabled GPUs are not compatible with OpenCL. See: http://www.intel.com/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html https://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-2013-release-notes

To enable QSV support, you need the Intel Media SDK, integrated in the Intel Media Server Studio: https://software.intel.com/en-us/intel-media-server-studio

The Intel Media Server Studio is available for both Linux and Windows, and contains the libva and libdrm libraries, the libmfx dispatcher library, and the Intel drivers. libmfx is the library which selects the codec depending on the system capabilities, falling back to a software implementation if the hardware-accelerated codec is not available.

FFmpeg QSV support relies on libmfx. However, the library provided by Intel does not come with pkg-config files or a proper installer, so the easiest way to install the library is to use the libmfx version packaged by lu_zero here: https://github.com/lu-zero/mfx_dispatch

Requirements on Windows: install the Intel Media SDK packaged in the Intel Media Server Studio (which comes with a graphical installer) and a MinGW compilation environment (for example, the one provided by MSYS2 with the corresponding MinGW-w64 package). Then you need to build libmfx and install it in a path recognized by pkg-config. For example, if you install to /usr/local, you need to update the $PKG_CONFIG_PATH environment variable to make it point to /usr/local/lib/pkgconfig.
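The $PKG_CONFIG_PATH step can be sketched as follows (assuming the /usr/local prefix from the text; whether pkg-config actually finds libmfx.pc depends on your install):

```shell
# Prepend the install prefix's pkgconfig directory so pkg-config can see libmfx.pc.
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
# Report what pkg-config resolves; print a note when libmfx.pc is not installed.
MFX_VERSION=$(pkg-config --modversion libmfx 2>/dev/null || true)
echo "${MFX_VERSION:-libmfx.pc not found under $PKG_CONFIG_PATH}"
```

Putting the export in your shell profile makes the setting persistent across sessions.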

Requirements on Linux: you need to rely either on the Intel Media Server Studio for Linux, or on a recent enough supported system with the libva and libdrm libraries, the libva Intel drivers, and the libmfx library packaged by lu_zero. Note: the Intel Media Server Studio generic installation script may overwrite your system libraries and break the system.

Check the following website for updated information about the Intel graphics stack on the various Linux platforms: https://01.org/linuxgraphics

To enable QSV support in the FFmpeg build, configure with --enable-libmfx.

Support for decoding and encoding is integrated in FFmpeg through several codecs identified by the _qsv suffix. In particular, it currently supports MPEG-2 video, VC-1 (decoding only), H.264 and H.265.

For example to encode to H.264 using h264_qsv, you can use the command:

ffmpeg -i INPUT -c:v h264_qsv -preset:v faster out.qsv.mp4
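Decoding works analogously by selecting a _qsv decoder explicitly. A hedged sketch (INPUT is a placeholder; the command is skipped when ffmpeg or the file is absent):

```shell
INPUT=input.mp4   # placeholder: substitute a real H.264 file
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$INPUT" ]; then
  # Force the h264_qsv decoder and discard the decoded frames.
  ffmpeg -hide_banner -c:v h264_qsv -i "$INPUT" -f null -
  RESULT="ran h264_qsv decode"
else
  RESULT="skipped: ffmpeg or $INPUT not available"
fi
echo "$RESULT"
```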

OpenCL

Official website: ​https://www.khronos.org/opencl/

Currently OpenCL is only used in filtering (the deshake and unsharp filters). In order to use OpenCL code you need to configure the build with --enable-opencl. An API to use OpenCL from FFmpeg is provided in libavutil/opencl.h. No decoding/encoding is supported yet.
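As a sketch of the filter-option mechanism described above (it assumes a build configured with --enable-opencl and uses the unsharp filter's opencl option; INPUT is a placeholder, and the run is skipped when ffmpeg or the file is missing):

```shell
INPUT=input.mp4   # placeholder source clip
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$INPUT" ]; then
  # Enable the OpenCL code path of the unsharp filter; discard the output frames.
  ffmpeg -hide_banner -i "$INPUT" -vf "unsharp=opencl=1" -f null -
  RESULT="ran OpenCL unsharp"
else
  RESULT="skipped: ffmpeg or $INPUT not available"
fi
echo "$RESULT"
```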

===============================

configure --list-decoders | grep h264

h264_crystalhd
h264_mmal
h264_qsv    ==> does this one correspond to VA-API?
h264_vda
h264_vdpau  ==> VDPAU
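The same check can be run against an installed binary rather than the configure script; this sketch lists whatever h264 decoders the local ffmpeg build actually contains (the list varies by build):

```shell
# List the h264 decoders compiled into the installed ffmpeg, if any.
if command -v ffmpeg >/dev/null 2>&1; then
  DECODERS=$(ffmpeg -hide_banner -decoders 2>/dev/null | grep h264 || true)
fi
REPORT="${DECODERS:-ffmpeg not found, or no h264 decoders in this build}"
echo "$REPORT"
```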

June 27th, 2016, FFmpeg 3.1 "Laplace"

FFmpeg 3.1 "Laplace", a new major release, is now available! Some of the highlights:

  • DXVA2-accelerated HEVC Main10 decoding
