SOFTWARE HOUSE Microsoft has announced that it is integrating its Kinect Fusion project into its software developer kit (SDK) for Kinect for Windows.
First developed as a research project at the Microsoft Research lab in Cambridge, UK, Kinect Fusion allows users to create 3D models of real-world objects or environments from a continuous stream of data captured by the Kinect for Windows sensor. As the data is streamed from the cameras, it is fused into a single 3D representation of the object or environment.
"As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK," Microsoft said in a developer network blog post. "Now, I'm happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release."
Microsoft said that Kinect Fusion can capture data either by moving the sensor around an object or environment, or by moving the object being scanned in front of the sensor. The longer the object remains in front of the camera, the more accurate the model becomes.
"Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3D map of objects or environments," the blog post explains.
"The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading."
Microsoft added that this allows Kinect Fusion to gather and incorporate data not viewable from any single viewpoint, saying, "Among other things, it enables 3D object model reconstruction, 3D augmented reality, and 3D measurements."
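The noise-reduction idea behind averaging readings over many frames can be illustrated with a minimal sketch. This is not Microsoft's implementation; it is a simple NumPy simulation, with an assumed ground-truth depth patch and noise level, showing how an incremental mean over hundreds of frames yields a far more accurate estimate than any single reading:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth depth map (metres) for a small patch of the scene.
true_depth = np.full((4, 4), 2.0)

# Simulate a noisy per-frame depth reading, standing in for sensor output.
def noisy_frame():
    return true_depth + rng.normal(0.0, 0.05, size=true_depth.shape)

# Incrementally average readings over many frames, in the spirit of fusing
# a sequence of depth frames into one model.
running = np.zeros_like(true_depth)
n_frames = 500
for i in range(1, n_frames + 1):
    running += (noisy_frame() - running) / i  # running mean update

single_error = np.abs(noisy_frame() - true_depth).mean()
fused_error = np.abs(running - true_depth).mean()
print(f"single-frame error: {single_error:.4f} m")
print(f"fused error:        {fused_error:.4f} m")
```

With these illustrative parameters, the fused estimate's error shrinks roughly with the square root of the number of frames averaged, which is why long scans produce noticeably cleaner models.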
Microsoft is expecting developer communities and business partners to make good use of the tool once it becomes available, and has high hopes that it will be used in areas such as 3D printing, industrial design, body scanning, augmented reality and gaming. µ