Yesterday on the Kinect for Windows Blog, we learned that some new features will be released in an update to the SDK. In addition to increased language support and “seated mode,” we’ll be getting record, playback, and debug capabilities. This brings several questions to mind:
- Will this functionality be better or worse than the KinectRecorder?
- Will the SDK have discrete saved parts (image, depth, skeleton) for inspection?
- Will the SDK have independent component playback capability, given that the sample code so far renders skeletal joints in a way that relies on depth information? Right now, in KinectSkeletonViewer, the GetPosition2DLocation method used by DrawSkeleton returns a Point by invoking the DepthImageFrame.MapFromSkeletonPoint method:
[code language="csharp" wraplines="true"]
private Point GetPosition2DLocation(DepthImageFrame depthFrame, SkeletonPoint skeletonPoint)
{
    DepthImagePoint depthPoint = depthFrame.MapFromSkeletonPoint(skeletonPoint);

    switch (ImageType)
    {
        case ImageType.Color:
            ColorImagePoint colorPoint = depthFrame.MapToColorImagePoint(
                depthPoint.X, depthPoint.Y, this.Kinect.ColorStream.Format);

            // Map back to skeleton.Width and skeleton.Height.
            return new Point(
                (int)(this.RenderSize.Width * colorPoint.X / this.Kinect.ColorStream.FrameWidth),
                (int)(this.RenderSize.Height * colorPoint.Y / this.Kinect.ColorStream.FrameHeight));
        case ImageType.Depth:
            return new Point(
                (int)(this.RenderSize.Width * depthPoint.X / depthFrame.Width),
                (int)(this.RenderSize.Height * depthPoint.Y / depthFrame.Height));
        default:
            throw new ArgumentOutOfRangeException(
                "ImageType was an unexpected value: " + ImageType.ToString());
    }
}
[/code]
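Note that the Depth branch above is just a linear rescale from depth-frame resolution into render space. A standalone sketch of that arithmetic follows; the 320x240 depth frame and 640x480 render size are illustrative assumptions, not values taken from the SDK:

[code language="csharp" wraplines="true"]
using System;

class RenderScalingSketch
{
    // Mirrors the arithmetic in the ImageType.Depth branch:
    // a depth-space coordinate is scaled linearly into render space,
    // with integer truncation happening only at the end.
    public static int ScaleX(int depthX, int depthFrameWidth, double renderWidth)
    {
        return (int)(renderWidth * depthX / depthFrameWidth);
    }

    public static int ScaleY(int depthY, int depthFrameHeight, double renderHeight)
    {
        return (int)(renderHeight * depthY / depthFrameHeight);
    }

    static void Main()
    {
        // Illustrative values only: a 320x240 depth frame rendered at 640x480.
        int x = ScaleX(160, 320, 640.0);
        int y = ScaleY(120, 240, 480.0);
        Console.WriteLine(x + "," + y); // prints "320,240"
    }
}
[/code]

The point is that nothing in this scaling step itself needs a live sensor; the dependency on the depth stream comes entirely from the MapFromSkeletonPoint call, which is why playback of a skeleton stream alone is an open question.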
In the end, I assume that each piece of the overall saved information (image, depth, skeleton) will still rely on at least one of the other pieces.