A filters property has now been added, so it can be combined nicely with OpenCV. Reposting an article (http://blog.qt.io/blog/2015/03/20/introducing-video-filters-in-qt-multimedia/):

Introducing video filters in Qt Multimedia

Published Friday March 20th, 2015
Posted in Graphics, Multimedia, OpenGL, Qt Quick

Qt Multimedia makes it very easy to get a video stream from the camera or a video file rendered as part of your application’s Qt Quick scene. What is more, its modularized backend and video node plugin system make it possible to provide hardware accelerated, zero-copy solutions on platforms where such an option is available. All this is hidden from the application when using Qt Quick elements like Camera, MediaPlayer and VideoOutput, which is great. But what if you want to do some additional filtering or computation on the video frames before they are presented? For example, because you want to transform the frame, or compute something from it, on the GPU using OpenCL or CUDA. Or because you want to run some algorithms provided by OpenCV. Before Qt 5.5 there was no easy way to do this in combination with the familiar Qt Quick elements. With Qt 5.5 this is going to change: say hello to QAbstractVideoFilter.

QAbstractVideoFilter serves as a base class for classes that are exposed to the QML world and instantiated from there. They are then associated with a VideoOutput element. From that point on, every video frame the VideoOutput receives is run through the filter first. The filter can provide a new video frame to be used in place of the original, calculate some results, or both. The results of the computation are exposed to QML as arbitrary data structures and can be utilized from JavaScript. For example, an OpenCV-based object detection algorithm can generate a list of rectangles that is exposed to QML. The corresponding JavaScript code can then position some Rectangle elements at the indicated locations.

Let’s see some code

import QtQuick 2.3
import QtMultimedia 5.5
import my.cool.stuff 1.0

Item {
    Camera {
        id: camera
    }
    VideoOutput {
        source: camera
        anchors.fill: parent
        filters: [ faceRecognitionFilter ]
    }
    FaceRecognizer {
        id: faceRecognitionFilter
        property real scaleFactor: 1.1 // Define properties either in QML or in C++. Can be animated too.
        onFinished: {
            console.log("Found " + result.rects.length + " faces");
            ... // do something with the rectangle list
        }
    }
}
The new filters property of VideoOutput makes it possible to associate one or more QAbstractVideoFilter instances with it. These are then invoked in order for every incoming video frame.

The outline of the C++ implementation is like this:

QVideoFilterRunnable *FaceRecogFilter::createFilterRunnable()
{
    return new FaceRecogFilterRunnable(this);
}
...
QVideoFrame FaceRecogFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags)
{
    // Convert the input into a suitable OpenCV image format, then run e.g. cv::CascadeClassifier,
    // and finally store the list of rectangles into a QObject exposing a 'rects' property.
    ...
    emit m_filter->finished(result);
    return *input;
}
...
int main(..)
{
    ...
    qmlRegisterType<FaceRecogFilter>("my.cool.stuff", 1, 0, "FaceRecognizer");
    ...
}
Here our filter implementation simply passes the input video frame through, while generating a list of rectangles. This can then be examined from QML, in the finished signal handler. Simple and flexible.
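As an illustration, the elided body of run() could look roughly like the sketch below. This is only one possible shape, assuming a frame that is mappable in CPU memory with BGR32 pixels; the names m_classifier, scaleFactor() and DetectResultObject are hypothetical and not from the original post.

    // Sketch only: assumes a CPU-mappable BGR32 frame; m_classifier,
    // scaleFactor() and DetectResultObject are illustrative names.
    QVideoFrame FaceRecogFilterRunnable::run(QVideoFrame *input,
                                             const QVideoSurfaceFormat &surfaceFormat,
                                             RunFlags flags)
    {
        Q_UNUSED(surfaceFormat);
        Q_UNUSED(flags);

        if (input->map(QAbstractVideoBuffer::ReadOnly)) {
            // Wrap the mapped bytes in a cv::Mat without copying (4 channels for BGR32).
            cv::Mat frame(input->height(), input->width(), CV_8UC4,
                          input->bits(), input->bytesPerLine());
            cv::Mat gray;
            cv::cvtColor(frame, gray, cv::COLOR_BGRA2GRAY);

            std::vector<cv::Rect> faces;
            m_classifier.detectMultiScale(gray, faces, m_filter->scaleFactor());

            input->unmap();

            // Expose the rectangles to QML via a QObject with a 'rects' property.
            emit m_filter->finished(new DetectResultObject(faces));
        }
        return *input; // pass the original frame through unchanged
    }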

While the registration of our custom filter happens from the main() function in the example, the filter can also be provided from QML extension plugins, independently from the application.
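A plugin-based registration could be sketched as follows; the plugin class name is illustrative, only the uri and type name come from the example above.

    // Sketch: registering the filter from a QML extension plugin instead of main().
    #include <QQmlExtensionPlugin>
    #include <qqml.h>
    #include "facerecogfilter.h"

    class MyCoolStuffPlugin : public QQmlExtensionPlugin
    {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QQmlExtensionInterface")
    public:
        void registerTypes(const char *uri) override
        {
            // uri is "my.cool.stuff" when loaded via the matching qmldir entry
            qmlRegisterType<FaceRecogFilter>(uri, 1, 0, "FaceRecognizer");
        }
    };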

The QAbstractVideoFilter – QVideoFilterRunnable split mirrors the approach with QQuickItem – QSGNode. This is essential in order to support threaded rendering: when the Qt Quick scenegraph is using its threaded render loop, all rendering (the OpenGL operations) happen on a dedicated thread. This includes the filtering operations too. Therefore we have to ensure that the graphics and compute resources live and are only accessed on the render thread. A QVideoFilterRunnable always lives on the render thread and all its functions are guaranteed to be invoked on that thread, with the Qt Quick scenegraph’s OpenGL context bound. This makes creating filters relying on GPU compute APIs easy and painless, even when OpenGL interop is involved.

GPU compute and OpenGL interop

All this is very powerful when it comes to avoiding copies of the pixel data and utilizing the GPU as much as possible. The output video frame can be in any supported format and can differ from the input frame, for instance a GPU-accelerated filter can upload the image data received from the camera into an OpenGL texture, perform operations on that (using OpenCL – OpenGL interop for example) and provide the resulting OpenGL texture as its output. This means that after the initial texture upload, which is naturally in place even when not using any QAbstractVideoFilter at all, everything happens on the GPU. When doing video playback, the situation is even better on some platforms: in case the input is already an OpenGL texture, D3D texture, EGLImage or similar, we can potentially perform everything on the GPU without any readbacks or copies.
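To make the texture-as-output idea concrete, a filter can hand a GL texture back to VideoOutput by wrapping it in a custom QAbstractVideoBuffer with a GLTextureHandle. A minimal sketch, with illustrative class and variable names:

    // Sketch: exposing an OpenGL texture as a QVideoFrame via a custom buffer.
    #include <QAbstractVideoBuffer>
    #include <QVideoFrame>

    class TextureBuffer : public QAbstractVideoBuffer
    {
    public:
        TextureBuffer(uint id) : QAbstractVideoBuffer(GLTextureHandle), m_id(id) { }
        MapMode mapMode() const override { return NotMapped; }
        uchar *map(MapMode, int *, int *) override { return nullptr; }
        void unmap() override { }
        QVariant handle() const override { return QVariant::fromValue<uint>(m_id); }
    private:
        uint m_id;
    };

    // In the filter's run(), after rendering into textureId on the render thread:
    // return QVideoFrame(new TextureBuffer(textureId), size, QVideoFrame::Format_BGR32);

Since run() is guaranteed to execute on the render thread with the scenegraph’s GL context bound, the texture can be created and filled there safely.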

The OpenCL-based example that comes with Qt Multimedia demonstrates this well. Shown below running on OS X, the input frames from the video already contain OpenGL textures. All we need to do is to use OpenCL’s GL interop to get a CL image object. The output image object is also based on a GL texture, allowing us to pass it to VideoOutput and the Qt Quick scenegraph as-is.
