Reposted from (with thanks to): http://www.cnblogs.com/yangecnu/archive/2012/04/05/KinectSDK_Depth_Image_Processing_Part2.html


1. Simple Depth Image Processing

Kinect depth values go up to 4096mm, and a value of 0 normally means the depth could not be determined, so 0 values should generally be filtered out. Microsoft recommends working with values in the 1220mm (4') to 3810mm (12.5') range. Before any other depth-image processing, the depth data should therefore be thresholded down to the 1220mm-3810mm range.
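
As a minimal sketch of such a threshold filter (the helper name is mine, not from the SDK), assuming the raw depth is already expressed in millimeters:

```csharp
// Hedged sketch: keep only depth readings inside the recommended range.
// 0 means "depth unknown" and is dropped along with out-of-range values.
const int LoDepthThreshold = 1220;   // 4 feet
const int HiDepthThreshold = 3810;   // 12.5 feet

static bool IsUsableDepth(int depthMm)
{
    return depthMm >= LoDepthThreshold && depthMm <= HiDepthThreshold;
}
```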

Processing depth data statistically is a very common approach. Thresholds can be based on the mean or the median of the depth data. Statistics help decide whether a given point is noise, shadow, or something meaningful, such as part of a user's hand. Sometimes the raw depth values can be mined without regard to a pixel's visual meaning. The goal of depth processing is shape and object recognition; with that information a program can determine the user's position and movement relative to the Kinect.
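
As an illustrative sketch of the statistical idea (this helper is hypothetical, not part of the article's code), a cutoff can be derived from the mean of the valid depth values, for instance to separate a near object from the background:

```csharp
// Hypothetical: average all valid (non-zero) depths and use the mean as a
// starting point for a threshold. shiftBits strips the player-index bits.
static int MeanDepth(short[] rawPixels, int shiftBits)
{
    long sum = 0;
    int count = 0;
    foreach (short px in rawPixels)
    {
        int depth = px >> shiftBits;
        if (depth > 0) { sum += depth; count++; }
    }
    return count == 0 ? 0 : (int)(sum / count);
}
```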

A histogram is an effective tool for examining how data is distributed; here we care about the distribution of depth values in a depth image. A histogram shows at a glance how the values in a data set are spread out: from it we can read off how often each depth occurs and where values cluster. With that information we can choose thresholds and other filtering criteria that best expose the depth information in the image.

<Window x:Class="TestDepthHist.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="800" Width="1200" WindowStartupLocation="CenterScreen">
    <Grid>
        <StackPanel>
            <StackPanel Orientation="Horizontal">
                <Image x:Name="DepthImage" Width="640" Height="480" />
                <Image x:Name="FilteredDepthImage" Width="640" Height="480" />
            </StackPanel>
            <ScrollViewer Margin="0,15" HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto">
                <StackPanel x:Name="DepthHistogram" Orientation="Horizontal" Height="300" />
            </ScrollViewer>
        </StackPanel>
    </Grid>
</Window>

Some new markup appears above:

The StackPanel element stacks its children horizontally or vertically, depending on the value of its Orientation property. If more controls are added to a StackPanel than its width can display, the extra controls are clipped and not shown.

Clearly it's worth spending some time learning WPF's controls.

namespace TestDepthHist
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private KinectSensor kinect;
        private WriteableBitmap depthImageBitMap;
        private Int32Rect depthImageBitmapRect;
        private Int32 depthImageStride;
        private DepthImageFrame lastDepthFrame;
        private short[] depthPixelDate;
        private Int32 LoDepthThreshold = 1220;
        private Int32 HiDepthThreshold = 3810;

        public KinectSensor Kinnect    // public accessor
        {
            get { return kinect; }
            set
            {
                if (kinect != null)
                {
                    UninitializeKinectSensor(this.kinect);    // release the old sensor first
                    kinect = null;
                }
                if (value != null && value.Status == KinectStatus.Connected)
                {
                    kinect = value;
                    InitializeKinectSensor(this.kinect);    // store and initialize the newly connected sensor
                }
            }
        }

        public MainWindow()
        {
            InitializeComponent();
            this.Loaded += (s, e) => DiscoverKinectSensor();
            this.Unloaded += (s, e) => this.kinect = null;
        }

        private void DiscoverKinectSensor()
        {
            KinectSensor.KinectSensors.StatusChanged += new EventHandler<StatusChangedEventArgs>(KinectSensors_StatusChanged);
            this.Kinnect = KinectSensor.KinectSensors.FirstOrDefault(sensor => sensor.Status == KinectStatus.Connected);
        }

        void KinectSensors_StatusChanged(object sender, StatusChangedEventArgs e)
        {
            // Handle status changes caused by plugging/unplugging the Kinect
            switch (e.Status)
            {
                case KinectStatus.Connected:
                    if (this.kinect == null)
                        this.kinect = e.Sensor;
                    break;
                case KinectStatus.Disconnected:
                    if (this.kinect == e.Sensor)
                    {
                        this.kinect = null;
                        this.kinect = KinectSensor.KinectSensors.FirstOrDefault(x => x.Status == KinectStatus.Connected);
                        if (this.kinect == null)
                        {
                            //TODO: Notify the user that the Kinect has been unplugged
                        }
                    }
                    break;
                //TODO: Handle the remaining statuses
            }
        }

        private void InitializeKinectSensor(KinectSensor kinectSensor)
        {
            if (kinectSensor != null)
            {
                DepthImageStream depthStream = kinectSensor.DepthStream;
                depthStream.Enable();

                depthImageBitMap = new WriteableBitmap(depthStream.FrameWidth, depthStream.FrameHeight, 96, 96,
                                                       PixelFormats.Gray16, null);
                depthImageBitmapRect = new Int32Rect(0, 0, depthStream.FrameWidth, depthStream.FrameHeight);
                depthImageStride = depthStream.FrameWidth * depthStream.FrameBytesPerPixel;
                DepthImage.Source = depthImageBitMap;

                kinectSensor.DepthFrameReady += new EventHandler<DepthImageFrameReadyEventArgs>(KineceDevice_DepthFrameReady);
                kinectSensor.Start();
            }
        }

        private void UninitializeKinectSensor(KinectSensor kinect)
        {
            if (kinect != null)
            {
                kinect.Stop();    // stop this sensor
                kinect.DepthFrameReady -= new EventHandler<DepthImageFrameReadyEventArgs>(KineceDevice_DepthFrameReady);
            }
        }

        private void CreateDepthHistogram(DepthImageFrame depthFrame, short[] pixelData)
        {
            int depth;
            int[] depths = new int[4096];
            double chartBarWidth = Math.Max(3, DepthHistogram.ActualWidth / depths.Length);
            int maxValue = 0;

            DepthHistogram.Children.Clear();

            // Extract each depth value and count how often it occurs
            for (int i = 0; i < pixelData.Length; i++)
            {
                depth = pixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                if (depth >= LoDepthThreshold && depth <= HiDepthThreshold)
                {
                    depths[depth]++;
                }
            }

            // Find the largest count
            for (int i = 0; i < depths.Length; i++)
            {
                maxValue = Math.Max(maxValue, depths[i]);
            }

            // Draw the histogram
            for (int i = 0; i < depths.Length; i++)
            {
                if (depths[i] > 0)
                {
                    Rectangle r = new Rectangle();
                    r.Fill = Brushes.Red;
                    r.Width = chartBarWidth;
                    r.Height = DepthHistogram.ActualHeight * (depths[i] / (double)maxValue);
                    r.Margin = new Thickness(1, 0, 1, 0);
                    r.VerticalAlignment = System.Windows.VerticalAlignment.Bottom;    // align bars to the bottom
                    DepthHistogram.Children.Add(r);
                }
            }
        }

        private void CreateBetterShadesOfGray(DepthImageFrame depthFrame, short[] pixelData)
        {
            Int32 depth;
            Int32 gray;
            Int32 loThreashold = 1220;
            Int32 bytePerPixel = 4;    // 4 channels; only the first 3 (BGR) are used
            Int32 hiThreshold = 3810;
            byte[] enhPixelData = new byte[depthFrame.Width * depthFrame.Height * bytePerPixel];

            for (int i = 0, j = 0; i < pixelData.Length; i++, j += bytePerPixel)
            {
                depth = pixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                if (depth < loThreashold || depth > hiThreshold)
                {
                    gray = 0xFF;
                }
                else
                {
                    gray = (255 * depth / 0xFFF);
                }
                enhPixelData[j] = (byte)gray;
                enhPixelData[j + 1] = (byte)gray;
                enhPixelData[j + 2] = (byte)gray;
            }
            DepthImage.Source = BitmapSource.Create(depthFrame.Width, depthFrame.Height, 96, 96, PixelFormats.Bgr32, null, enhPixelData, depthFrame.Width * bytePerPixel);
        }

        private void KineceDevice_DepthFrameReady(Object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame != null)
                {
                    depthPixelDate = new short[frame.PixelDataLength];
                    frame.CopyPixelDataTo(this.depthPixelDate);
                    CreateBetterShadesOfGray(frame, this.depthPixelDate);
                    CreateDepthHistogram(frame, this.depthPixelDate);
                }
            }
        }
    }
}

In many cases a Kinect application will not do much processing of the depth data itself. When processing is needed, it is better done with a library such as OpenCV. Depth-image processing is often computationally expensive, and a high-level language like C# is not the right tool for heavy image processing.

The Kinect SDK can analyze depth data and detect human (player) silhouettes, recognizing up to 6 players at a time. The SDK assigns each tracked player an index, stored in the low 3 bits of the depth data. Each depth pixel is 16 bits: bits 0-2 hold the player index and bits 3-15 hold the depth value. The bitmask 7 (0000 0111) extracts the player index from a depth value. Conveniently, the SDK defines constants for these: DepthImageFrame.PlayerIndexBitmaskWidth (which is 3) and DepthImageFrame.PlayerIndexBitmask (which is 7). Developers should use the SDK constants rather than hard-coding 3 or 7.
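
Unpacking one raw 16-bit depth pixel then looks like this (a sketch with an example value of my choosing; the comments show the literal values the SDK constants stand for):

```csharp
// bits 0-2  : player index (0 = no player)
// bits 3-15 : depth in millimeters
short rawPixel = 0x2FA2;    // example raw value

int playerIndex = rawPixel & DepthImageFrame.PlayerIndexBitmask;        // & 7  -> 2
int depth       = rawPixel >> DepthImageFrame.PlayerIndexBitmaskWidth;  // >> 3 -> 1524 (mm)
```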

The player index ranges from 0 to 6, where 0 means the pixel does not belong to a player. Initializing the depth stream alone does not enable player tracking, because player tracking depends on skeleton tracking: when initializing the KinectSensor and its DepthImageStream, the SkeletonStream must be initialized as well. Only then does the depth data carry player-index information. Registering for the SkeletonFrameReady event is not required to obtain player indexes.

Do not code against specific player index values, because they change. The actual index does not necessarily match the order of players in front of the Kinect. For example, with a single player in view the returned index might be 3 or 4; the first player is not guaranteed index 1, and a player who walks out of view and back in may come back with a different index. Kinect applications must account for this.

namespace TestDepthPlayer
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        #region Member Variables
        private KinectSensor _KinectDevice;
        private WriteableBitmap _RawDepthImage;
        private Int32Rect _RawDepthImageRect;
        private short[] _RawDepthPixelData;
        private int _RawDepthImageStride;
        private WriteableBitmap _EnhDepthImage;
        private Int32Rect _EnhDepthImageRect;
        private short[] _EnhDepthPixelData;
        private int _EnhDepthImageStride;
        private int _TotalFrames;
        private DateTime _StartFrameTime;
        #endregion Member Variables

        #region Constructor
        public MainWindow()
        {
            InitializeComponent();
            KinectSensor.KinectSensors.StatusChanged += KinectSensors_StatusChanged;
            this.KinectDevice = KinectSensor.KinectSensors.FirstOrDefault(x => x.Status == KinectStatus.Connected);
        }
        #endregion Constructor

        #region Methods
        private void KinectSensors_StatusChanged(object sender, StatusChangedEventArgs e)
        {
            switch (e.Status)
            {
                case KinectStatus.Initializing:
                case KinectStatus.Connected:
                case KinectStatus.NotPowered:
                case KinectStatus.NotReady:
                case KinectStatus.DeviceNotGenuine:
                    this.KinectDevice = e.Sensor;
                    break;
                case KinectStatus.Disconnected:
                    //TODO: Give the user feedback to plug-in a Kinect device.
                    this.KinectDevice = null;
                    break;
                default:
                    //TODO: Show an error state
                    break;
            }
        }

        private void KinectDevice_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame != null)
                {
                    frame.CopyPixelDataTo(this._RawDepthPixelData);
                    this._RawDepthImage.WritePixels(this._RawDepthImageRect, this._RawDepthPixelData, this._RawDepthImageStride, 0);
                    CreatePlayerDepthImage(frame, this._RawDepthPixelData);
                }
            }

            //FramesPerSecondElement.Text = string.Format("{0:0} fps", (this._TotalFrames++ / DateTime.Now.Subtract(this._StartFrameTime).TotalSeconds));
        }

        private void CreatePlayerDepthImage(DepthImageFrame depthFrame, short[] pixelData)
        {
            int playerIndex;
            int depthBytePerPixel = 4;
            byte[] enhPixelData = new byte[depthFrame.Width * depthFrame.Height * depthBytePerPixel];

            for (int i = 0, j = 0; i < pixelData.Length; i++, j += depthBytePerPixel)
            {
                playerIndex = pixelData[i] & DepthImageFrame.PlayerIndexBitmask;

                if (playerIndex == 0)
                {
                    // not a player: white
                    enhPixelData[j] = 0xFF;
                    enhPixelData[j + 1] = 0xFF;
                    enhPixelData[j + 2] = 0xFF;
                }
                else
                {
                    // player pixels: black
                    enhPixelData[j] = 0x00;
                    enhPixelData[j + 1] = 0x00;
                    enhPixelData[j + 2] = 0x00;
                }
            }

            this._EnhDepthImage.WritePixels(this._EnhDepthImageRect, enhPixelData, this._EnhDepthImageStride, 0);
        }
        #endregion Methods

        #region Properties
        public KinectSensor KinectDevice
        {
            get { return this._KinectDevice; }
            set
            {
                if (this._KinectDevice != value)
                {
                    //Uninitialize
                    if (this._KinectDevice != null)
                    {
                        this._KinectDevice.Stop();
                        this._KinectDevice.DepthFrameReady -= KinectDevice_DepthFrameReady;
                        this._KinectDevice.DepthStream.Disable();
                        this._KinectDevice.SkeletonStream.Disable();
                        this.RawDepthImage.Source = null;
                        this.EnhDepthImage.Source = null;
                    }

                    this._KinectDevice = value;

                    //Initialize
                    if (this._KinectDevice != null)
                    {
                        if (this._KinectDevice.Status == KinectStatus.Connected)
                        {
                            this._KinectDevice.SkeletonStream.Enable();    // required to obtain player data
                            this._KinectDevice.DepthStream.Enable();

                            DepthImageStream depthStream = this._KinectDevice.DepthStream;
                            this._RawDepthImage = new WriteableBitmap(depthStream.FrameWidth, depthStream.FrameHeight, 96, 96, PixelFormats.Gray16, null);
                            this._RawDepthImageRect = new Int32Rect(0, 0, (int)Math.Ceiling(this._RawDepthImage.Width), (int)Math.Ceiling(this._RawDepthImage.Height));
                            this._RawDepthImageStride = depthStream.FrameWidth * depthStream.FrameBytesPerPixel;
                            this._RawDepthPixelData = new short[depthStream.FramePixelDataLength];
                            this.RawDepthImage.Source = this._RawDepthImage;    // bind the WriteableBitmap once; subsequent WritePixels calls update the display (memory-efficient)

                            this._EnhDepthImage = new WriteableBitmap(depthStream.FrameWidth, depthStream.FrameHeight, 96, 96, PixelFormats.Bgr32, null);
                            this._EnhDepthImageRect = new Int32Rect(0, 0, (int)Math.Ceiling(this._EnhDepthImage.Width), (int)Math.Ceiling(this._EnhDepthImage.Height));
                            this._EnhDepthImageStride = depthStream.FrameWidth * 4;
                            this._EnhDepthPixelData = new short[depthStream.FramePixelDataLength];
                            this.EnhDepthImage.Source = this._EnhDepthImage;

                            this._KinectDevice.DepthFrameReady += KinectDevice_DepthFrameReady;    // subscribe to frame events
                            this._KinectDevice.Start();

                            this._StartFrameTime = DateTime.Now;
                        }
                    }
                }
            }
        }
        #endregion Properties
    }
}

Key takeaways:

SkeletonStream.Enable() is required.

The WriteableBitmap usage pattern; the rest is straightforward logic.


Measuring Objects

A pixel's X,Y position does not correspond directly to physical width and height, but with a little geometry we can still measure objects. Every camera has a field of view, whose angle is determined by the focal length and the size of the camera sensor. The Kinect camera's horizontal and vertical fields of view are 57° and 43° respectively. Since we also know the depth value, trigonometry lets us compute an object's real-world width.

The camera's field of view forms an isosceles triangle whose base lies at the player's depth; the player's depth is the height of this triangle. Splitting the triangle at the player's position into two right triangles lets us compute the length of the base. Once the base length is known, pixel widths can be converted to real-world widths. For example, if the computed base is 1500mm wide, the player spans 100 pixels, and the depth image is 320 pixels wide, then the player's real width is (1500/320)*100 = 468.75mm. The formula needs the player's depth and the number of pixels the player spans; for the depth we take the mean over the player's pixels.
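
The similar-triangles arithmetic above can be sketched as follows (the method name is mine; the 57° horizontal field of view comes from the text):

```csharp
// Real-world width (mm) of a player from the depth image.
// depthMm: the player's average depth; playerPixelWidth: pixels the
// player spans; framePixelWidth: width of the depth frame in pixels.
static double RealWidthMm(double depthMm, int playerPixelWidth, int framePixelWidth)
{
    const double HorizontalFovDeg = 57.0;
    // Half the base of the isosceles viewing triangle at this depth:
    double halfBase = depthMm * Math.Tan(HorizontalFovDeg / 2.0 * Math.PI / 180.0);
    double frameWidthMm = 2.0 * halfBase;    // real width covered by the whole frame
    return playerPixelWidth * frameWidthMm / framePixelWidth;
}
```

With the text's example numbers, a 1500mm base at the player's depth, 100 player pixels, and a 320-pixel-wide frame, the last line reduces to (1500/320)*100 = 468.75mm.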

Computing a person's height works the same way, just with the vertical field of view and the height of the depth image.

Clever use of similar triangles.

A new control is used:

<Grid>
    <StackPanel Orientation="Horizontal">
        <Image x:Name="DepthImage"/>
        <ItemsControl x:Name="PlayerDepthData" Width="300" TextElement.FontSize="20">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <StackPanel Margin="0,15">
                        <StackPanel Orientation="Horizontal">
                            <TextBlock Text="PlayerId:" />
                            <TextBlock Text="{Binding Path=PlayerId}" />
                        </StackPanel>
                        <StackPanel Orientation="Horizontal">
                            <TextBlock Text="Width:" />
                            <TextBlock Text="{Binding Path=RealWidth}" />
                        </StackPanel>
                        <StackPanel Orientation="Horizontal">
                            <TextBlock Text="Height:" />
                            <TextBlock Text="{Binding Path=RealHeight}" />
                        </StackPanel>
                    </StackPanel>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>
    </StackPanel>
</Grid>
namespace TestDepthMeasure
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private KinectSensor _KinectDevice;
        private WriteableBitmap _DepthImage;
        private Int32Rect _DepthImageRect;
        private short[] _DepthPixelData;
        private int _DepthImageStride;
        private int _TotalFrames;
        private DateTime _StartFrameTime;

        public KinectSensor kinectDevice
        {
            get { return this._KinectDevice; }
            set
            {
                if (this._KinectDevice != value)
                {
                    if (this._KinectDevice != null)
                    {
                        this._KinectDevice.Stop();
                        this._KinectDevice.DepthFrameReady -= KinectDevice_DepthFrameReady;
                        this._KinectDevice.DepthStream.Disable();
                        this._KinectDevice.SkeletonStream.Disable();
                    }

                    this._KinectDevice = value;

                    if (this._KinectDevice != null && this._KinectDevice.Status == KinectStatus.Connected)
                    {
                        this._KinectDevice.SkeletonStream.Enable();
                        this._KinectDevice.DepthStream.Enable();

                        DepthImageStream depthStream = this._KinectDevice.DepthStream;
                        this._DepthImage = new WriteableBitmap(depthStream.FrameWidth, depthStream.FrameHeight, 96, 96, PixelFormats.Bgr32, null);
                        this._DepthImageRect = new Int32Rect(0, 0, (int)Math.Ceiling(this._DepthImage.Width), (int)Math.Ceiling(this._DepthImage.Height));
                        this._DepthImageStride = depthStream.FrameWidth * 4;
                        this._DepthPixelData = new short[depthStream.FramePixelDataLength];
                        this.DepthImage.Source = this._DepthImage;    // bind the bitmap to the Image element

                        this._KinectDevice.DepthFrameReady += KinectDevice_DepthFrameReady;
                        this._KinectDevice.Start();

                        this._StartFrameTime = DateTime.Now;
                    }
                }
            }
        }

        private void CreateBetterShadesOfGray(DepthImageFrame depthFrame, short[] pixelData)
        {
            int depth;
            int gray;
            int bytesPerPixel = 4;
            byte[] enPixelData = new byte[depthFrame.Width * depthFrame.Height * bytesPerPixel];
            int loThreshold = 1220;
            int hiThreshold = 3810;

            for (int i = 0, j = 0; i < pixelData.Length; i++, j += bytesPerPixel)
            {
                depth = pixelData[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                if (depth < loThreshold || depth > hiThreshold)
                {
                    gray = 0xFF;
                }
                else
                {
                    gray = 255 - (255 * depth / 0xFFF);    // nearer pixels come out brighter
                }

                enPixelData[j] = (byte)gray;
                enPixelData[j + 1] = (byte)gray;
                enPixelData[j + 2] = (byte)gray;
            }
            this._DepthImage.WritePixels(this._DepthImageRect, enPixelData, this._DepthImageStride, 0);
        }

        private void KinectDevice_DepthFrameReady(Object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame != null)
                {
                    frame.CopyPixelDataTo(this._DepthPixelData);
                    CreateBetterShadesOfGray(frame, this._DepthPixelData);
                    CalculatePlayerSize(frame, this._DepthPixelData);
                }
            }
        }

        private void KinectSensors_statusChanged(Object sender, StatusChangedEventArgs e)
        {
            switch (e.Status)
            {
                case KinectStatus.Initializing:
                case KinectStatus.Connected:
                case KinectStatus.NotPowered:
                case KinectStatus.NotReady:
                case KinectStatus.DeviceNotGenuine:
                    this._KinectDevice = e.Sensor;
                    break;
                case KinectStatus.Disconnected:
                    this._KinectDevice = null;
                    break;
                default:
                    break;
            }
        }

        private void CalculatePlayerSize(DepthImageFrame depthFrame, short[] pixelData)
        {
            int depth;
            int playerIndex;
            int pixelIndex;
            int bytesPerPixel = depthFrame.BytesPerPixel;
            PlayerDepthData[] players = new PlayerDepthData[6];

            for (int row = 0; row < depthFrame.Height; row++)
            {
                for (int col = 0; col < depthFrame.Width; col++)
                {
                    pixelIndex = col + (row * depthFrame.Width);
                    depth = pixelData[pixelIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                    if (depth != 0)
                    {
                        playerIndex = (pixelData[pixelIndex] & DepthImageFrame.PlayerIndexBitmask) - 1;

                        if (playerIndex > -1)
                        {
                            if (players[playerIndex] == null)
                            {
                                players[playerIndex] = new PlayerDepthData(playerIndex + 1, depthFrame.Width, depthFrame.Height);
                            }

                            players[playerIndex].UpdateData(col, row, depth);
                        }
                    }
                }
            }

            PlayerDepthData.ItemsSource = players;
        }

        public MainWindow()
        {
            InitializeComponent();
            KinectSensor.KinectSensors.StatusChanged += KinectSensors_statusChanged;
            this.kinectDevice = KinectSensor.KinectSensors.FirstOrDefault(x => x.Status == KinectStatus.Connected);
        }
    }
}
namespace TestDepthMeasure
{
    class PlayerDepthData
    {
        #region Member Variables
        private const double MillimetersPerInch = 0.0393700787;    // inches per millimeter
        private static readonly double HorizontalTanA = Math.Tan(57.0 / 2.0 * Math.PI / 180.0);
        private static readonly double VerticalTanA = Math.Abs(Math.Tan(43.0 / 2.0 * Math.PI / 180.0));

        private int _DepthSum;
        private int _DepthCount;
        private int _LoWidth;
        private int _HiWidth;
        private int _LoHeight;
        private int _HiHeight;
        #endregion Member Variables

        #region Constructor
        public PlayerDepthData(int playerId, double frameWidth, double frameHeight)
        {
            this.PlayerId = playerId;
            this.FrameWidth = frameWidth;
            this.FrameHeight = frameHeight;

            this._LoWidth = int.MaxValue;
            this._HiWidth = int.MinValue;

            this._LoHeight = int.MaxValue;
            this._HiHeight = int.MinValue;
        }
        #endregion Constructor

        #region Methods
        public void UpdateData(int x, int y, int depth)
        {
            this._DepthCount++;
            this._DepthSum += depth;
            this._LoWidth = Math.Min(this._LoWidth, x);
            this._HiWidth = Math.Max(this._HiWidth, x);
            this._LoHeight = Math.Min(this._LoHeight, y);
            this._HiHeight = Math.Max(this._HiHeight, y);
        }
        #endregion Methods

        #region Properties
        public int PlayerId { get; private set; }
        public double FrameWidth { get; private set; }
        public double FrameHeight { get; private set; }

        public double Depth
        {
            get { return this._DepthSum / (double)this._DepthCount; }
        }

        public int PixelWidth
        {
            get { return this._HiWidth - this._LoWidth; }
        }

        public int PixelHeight
        {
            get { return this._HiHeight - this._LoHeight; }
        }

        public string RealWidth
        {
            get
            {
                double inches = this.RealWidthInches;
                //int feet = (int)(inches / 12);
                //inches %= 12;
                return string.Format("{0:0.0}mm", inches * 25.4);
                //return string.Format("{0:0.0}mm~{1}'{2:0.0}\"", inches * 25.4, feet, inches);
            }
        }

        public string RealHeight
        {
            get
            {
                double inches = this.RealHeightInches;
                //int feet = (int)(inches / 12);
                //inches %= 12;
                return string.Format("{0:0.0}mm", inches * 25.4);
                //return string.Format("{0:0.0}mm~{1}'{2:0.0}\"", inches * 25.4, feet, inches);
            }
        }

        public double RealWidthInches
        {
            get
            {
                double opposite = this.Depth * HorizontalTanA;    // half the base at this depth
                return this.PixelWidth * 2 * opposite / this.FrameWidth * MillimetersPerInch;
            }
        }

        public double RealHeightInches
        {
            get
            {
                double opposite = this.Depth * VerticalTanA;
                return this.PixelHeight * 2 * opposite / this.FrameHeight * MillimetersPerInch;
            }
        }
        #endregion Properties
    }
}

Overlaying the Depth Image on the Color Video Image

This requires the correspondence between the depth image and the RGB image.

We use the depth pixels that belong to a player to pick out the corresponding color pixels and overlay them on another image. This is common in TV and film production, where the technique is called green-screen keying: an actor or presenter stands in front of a green backdrop, and after recording the green background is keyed out and replaced with another scene, a standard trick in science-fiction films when actors cannot perform on location. ID photos use blue or red backgrounds for the same reason: a uniform background color makes the matting easy.

Depth-image pixels cannot simply be transferred to the color image, even at the same resolution, because the two cameras sit at different positions on the Kinect, so their images do not line up. It is just like human eyes: the scene you see with only the left eye open differs from what you see with only the right eye open, and the brain fuses the two views into a single composite image.

The KinectSensor object offers MapDepthToColorImagePoint, MapDepthToSkeletonPoint, MapSkeletonPointToColor, and MapSkeletonPointToDepth. The DepthImageFrame object has similarly-purposed methods with slightly different names (MapFromSkeletonPoint, MapToColorImagePoint, and MapToSkeletonPoint). In the example below we use MapDepthToColorImagePoint to map a player's depth pixels to the corresponding pixels of the color image. Attentive readers will notice that there is no method mapping a color pixel back to a depth pixel.

Create a new project and add two Image elements: the first holds the background picture, the second the foreground (player) image. To keep the depth and color frames as close together in time as possible, this example polls for frames. Every frame carries a Timestamp, and comparing timestamps tells us whether two frames are close enough. Registering the KinectSensor's AllFramesReady event does not guarantee that frames from different streams are synchronized: they are never produced at exactly the same instant, but polling lets us pair frames from the different streams as closely as possible. In short, the pixels do not correspond and the streams are not in lockstep, which is why AllFramesReady is not used here.

namespace DepthGreenScreen
{
    public partial class MainWindow : Window
    {
        private KinectSensor _KinectDevice;
        private WriteableBitmap _GreenScreenImage;
        private Int32Rect _GreenScreenImageRect;
        private int _GreenScreenImageStride;
        private short[] _DepthPixelData;
        private byte[] _ColorPixelData;

        // Frames are pulled (polled) rather than pushed: the depth and color
        // pixels do not line up and the streams are not synchronized.
        public KinectSensor KinectDevice
        {
            get { return this._KinectDevice; }
            set
            {
                if (this._KinectDevice != value)
                {
                    //Uninitialize
                    if (this._KinectDevice != null)
                    {
                        UninitializeKinectSensor(this._KinectDevice);
                        this._KinectDevice = null;
                    }

                    this._KinectDevice = value;

                    //Initialize
                    if (this._KinectDevice != null)
                    {
                        if (this._KinectDevice.Status == KinectStatus.Connected)
                        {
                            InitializeKinectSensor(this._KinectDevice);
                        }
                    }
                }
            }
        }

        public MainWindow()
        {
            InitializeComponent();
            CompositionTarget.Rendering += CompositionTarget_Rendering;
        }

        private void UninitializeKinectSensor(KinectSensor sensor)
        {
            if (sensor != null)
            {
                sensor.Stop();
                sensor.ColorStream.Disable();
                sensor.DepthStream.Disable();
                sensor.SkeletonStream.Disable();
            }
        }

        private void InitializeKinectSensor(KinectSensor sensor)
        {
            if (sensor != null)
            {
                sensor.DepthStream.Range = DepthRange.Default;
                sensor.SkeletonStream.Enable();    // required to segment out players
                sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);    // explicit stream formats
                sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);

                DepthImageStream depthStream = sensor.DepthStream;
                this._GreenScreenImage = new WriteableBitmap(depthStream.FrameWidth, depthStream.FrameHeight, 96, 96, PixelFormats.Bgra32, null);
                this._GreenScreenImageRect = new Int32Rect(0, 0, (int)Math.Ceiling(this._GreenScreenImage.Width), (int)Math.Ceiling(this._GreenScreenImage.Height));
                this._GreenScreenImageStride = depthStream.FrameWidth * 4;
                this.GreenScreenImage.Source = this._GreenScreenImage;

                this._DepthPixelData = new short[this._KinectDevice.DepthStream.FramePixelDataLength];
                this._ColorPixelData = new byte[this._KinectDevice.ColorStream.FramePixelDataLength];

                sensor.Start();
            }
        }

        private void DiscoverKinect()
        {
            if (this._KinectDevice == null)
            {
                // The KinectDevice setter takes care of initialization
                this.KinectDevice = KinectSensor.KinectSensors.FirstOrDefault(x => x.Status == KinectStatus.Connected);
            }
        }

        private void RenderGreenScreen(KinectSensor kinectDevice, ColorImageFrame colorFrame, DepthImageFrame depthFrame)
        {
            if (kinectDevice != null && depthFrame != null && colorFrame != null)
            {
                int depthPixelIndex;
                int playerIndex;
                int colorPixelIndex;
                ColorImagePoint colorPoint;
                int colorStride = colorFrame.BytesPerPixel * colorFrame.Width;
                int bytesPerPixel = 4;
                byte[] playerImage = new byte[depthFrame.Height * this._GreenScreenImageStride];
                int playerImageIndex = 0;

                depthFrame.CopyPixelDataTo(this._DepthPixelData);
                colorFrame.CopyPixelDataTo(this._ColorPixelData);

                for (int depthY = 0; depthY < depthFrame.Height; depthY++)
                {
                    for (int depthX = 0; depthX < depthFrame.Width; depthX++, playerImageIndex += bytesPerPixel)
                    {
                        depthPixelIndex = depthX + (depthY * depthFrame.Width);
                        playerIndex = this._DepthPixelData[depthPixelIndex] & DepthImageFrame.PlayerIndexBitmask;

                        if (playerIndex != 0)
                        {
                            colorPoint = kinectDevice.MapDepthToColorImagePoint(depthFrame.Format, depthX, depthY, this._DepthPixelData[depthPixelIndex], colorFrame.Format);
                            colorPixelIndex = (colorPoint.X * colorFrame.BytesPerPixel) + (colorPoint.Y * colorStride);

                            playerImage[playerImageIndex] = this._ColorPixelData[colorPixelIndex];          //Blue
                            playerImage[playerImageIndex + 1] = this._ColorPixelData[colorPixelIndex + 1];  //Green
                            playerImage[playerImageIndex + 2] = this._ColorPixelData[colorPixelIndex + 2];  //Red
                            playerImage[playerImageIndex + 3] = 0xFF;                                       //Alpha
                        }
                    }
                }

                this._GreenScreenImage.WritePixels(this._GreenScreenImageRect, playerImage, this._GreenScreenImageStride, 0);
            }
        }

        private void CompositionTarget_Rendering(Object sender, EventArgs e)
        {
            DiscoverKinect();

            if (this._KinectDevice != null)
            {
                try
                {
                    using (ColorImageFrame colorFrame = this._KinectDevice.ColorStream.OpenNextFrame(100))
                    {
                        using (DepthImageFrame depthFrame = this._KinectDevice.DepthStream.OpenNextFrame(100))
                        {
                            RenderGreenScreen(this.KinectDevice, colorFrame, depthFrame);
                        }
                    }
                }
                catch (System.Exception)
                {
                    // do nothing
                }
            }
        }
    }
}
