In order to improve my English writing skills, I am going to write my blog posts in English from now on!

-------------------------------------------------------------------------------------------Luxuriant line-----------------------------------------------------------------------

Today we will learn how to initialize the Kinect and get RGB data from it, then convert that data to a texture and draw it to the window.

We have two real pieces of Kinect-specific code. I will go over these in some detail and give a fairly high-level overview of the display code.

Include the header files:

#include <Windows.h>
#include <Ole2.h>

#include <gl/GL.h>
#include <gl/GLU.h>
#include <gl/glut.h>

#include <NuiApi.h>
#include <NuiImageCamera.h>
#include <NuiSensor.h>

Constants and global variables:

#define width 640
#define height 480

// OpenGL variables
GLuint textureId;              // ID of the texture to contain Kinect RGB data
GLubyte data[width*height*4];  // BGRA array containing the texture data

// Kinect variables
HANDLE rgbStream;              // The identifier of the Kinect's RGB camera stream
INuiSensor* sensor;            // The Kinect sensor

Kinect Initialization:

bool initKinect() {
    // Get a working Kinect sensor
    int numSensors;
    if (NuiGetSensorCount(&numSensors) < 0 || numSensors < 1) return false;
    if (NuiCreateSensorByIndex(0, &sensor) < 0) return false;

    // Initialize sensor
    sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
    sensor->NuiImageStreamOpen(
        NUI_IMAGE_TYPE_COLOR,            // Depth camera or RGB camera?
        NUI_IMAGE_RESOLUTION_640x480,    // Image resolution
        0,                               // Image stream flags, e.g. near mode
        2,                               // Number of frames to buffer
        NULL,                            // Event handle
        &rgbStream);
    return sensor;
}
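
The calls above ignore the HRESULTs returned by the SDK. A slightly stricter variant (my own sketch, not part of the original tutorial) checks them with the standard FAILED/SUCCEEDED macros, so a failed initialization is reported instead of silently producing a black window:

// Sketch only: the same initialization, with the HRESULTs checked.
HRESULT hr = sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
if (FAILED(hr)) return false;
hr = sensor->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
    0, 2, NULL, &rgbStream);
return SUCCEEDED(hr);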

Get an RGB frame from the Kinect:

void getKinectData(GLubyte* dest) {
    NUI_IMAGE_FRAME imageFrame;
    NUI_LOCKED_RECT LockedRect;
    if (sensor->NuiImageStreamGetNextFrame(rgbStream, 0, &imageFrame) < 0) return;
    INuiFrameTexture* texture = imageFrame.pFrameTexture;
    texture->LockRect(0, &LockedRect, NULL, 0);
    if (LockedRect.Pitch != 0)
    {
        const BYTE* curr = (const BYTE*) LockedRect.pBits;
        const BYTE* dataEnd = curr + (width*height)*4;
        while (curr < dataEnd) {
            *dest++ = *curr++;
        }
    }
    texture->UnlockRect(0);
    sensor->NuiImageStreamReleaseFrame(rgbStream, &imageFrame);
}
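
The byte-by-byte loop assumes the rows in the locked buffer are tightly packed, which holds for 640x480 BGRA. As an alternative sketch (assuming <cstring> is included for memcpy), a row-by-row copy that respects LockedRect.Pitch would look like this:

const BYTE* src = (const BYTE*) LockedRect.pBits;
for (int y = 0; y < height; ++y) {
    memcpy(dest + y*width*4, src + y*LockedRect.Pitch, width*4);  // copy one row of BGRA pixels
}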

Windowing code (GLUT setup and the main loop):

void draw() {
    drawKinectData();
    glutSwapBuffers();
}

void execute() {
    glutMainLoop();
}

bool init(int argc, char* argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(width, height);
    glutCreateWindow("Kinect SDK Tutorial");
    glutDisplayFunc(draw);
    glutIdleFunc(draw);
    return true;
}

Display via OpenGL (this texture and camera setup belongs inside main(), at the placeholder comment shown further below):

    // Initialize textures
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
                 0, GL_BGRA, GL_UNSIGNED_BYTE, (GLvoid*) data);
    glBindTexture(GL_TEXTURE_2D, 0);

    // OpenGL setup
    glClearColor(0, 0, 0, 0);
    glClearDepth(1.0f);
    glEnable(GL_TEXTURE_2D);

    // Camera setup
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, 1, -1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
int main(int argc, char* argv[]) {
    if (!init(argc, argv)) return 1;
    if (!initKinect()) return 1;

    /* ...OpenGL texture and camera initialization... */

    // Main loop
    execute();
    return 0;
}

Draw a frame to the screen:

void drawKinectData() {
    glBindTexture(GL_TEXTURE_2D, textureId);
    getKinectData(data);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_BYTE, (GLvoid*) data);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(0, 0, 0);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(width, 0, 0);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(width, height, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(0, height, 0.0f);
    glEnd();
}
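
Note that the texture storage was allocated once with glTexImage2D during initialization; each frame we only upload new pixels with glTexSubImage2D, which avoids reallocating the texture every time we draw.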

The End! Build and run, making sure that your Kinect is plugged in. You should see a window containing a video stream of what your Kinect sees.
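
If the build fails at the link stage, check that the project links against Kinect10.lib along with opengl32.lib, glu32.lib, and your GLUT library, and that the Kinect SDK include and lib directories are on the project's search paths (this assumes the Kinect SDK v1.x headers used above; adjust for your own setup).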
