
In February this year, at Mobile World Congress (MWC), Microsoft announced the $399 Azure Kinect Developer Kit, an all-in-one perception system for computer vision and speech solutions. Recently, Microsoft announced that the kit is generally available and has begun shipping to customers in the U.S. and China who preordered it.

The Azure Kinect Developer Kit aims to offer developers a platform to experiment with AI tools as well as help them plug into Azure’s ecosystem of machine learning services. 

The Azure Kinect DK camera system features a 1MP (1,024 x 1,024 pixel) depth camera, a 360-degree microphone array, a 12MP RGB camera that provides an additional color stream aligned to the depth stream, and an orientation sensor. It uses the same time-of-flight sensor that the company developed for the second generation of its HoloLens AR visor. The orientation sensor combines an accelerometer and gyroscope (IMU) to help with sensor orientation and spatial tracking.
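The article doesn't include any code, but for context: Microsoft publishes an Azure Kinect Sensor SDK that exposes these sensors through a C API (k4a.h). The snippet below is a minimal, unofficial sketch (not sample code from the announcement) of how one might open the device, start the depth and color cameras, and read a single IMU (accelerometer/gyroscope) sample; most error handling is omitted for brevity.

```c
#include <stdio.h>
#include <k4a/k4a.h>

int main(void)
{
    // Open the first (default) Azure Kinect DK on the USB bus.
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        printf("No Azure Kinect DK found\n");
        return 1;
    }

    // Start the depth and color cameras with a basic configuration.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode       = K4A_DEPTH_MODE_NFOV_UNBINNED;   // 1MP time-of-flight depth sensor
    config.color_format     = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution = K4A_COLOR_RESOLUTION_1080P;     // stream from the 12MP RGB camera
    config.camera_fps       = K4A_FRAMES_PER_SECOND_30;
    k4a_device_start_cameras(device, &config);

    // Start the IMU and read one accelerometer sample.
    k4a_device_start_imu(device);
    k4a_imu_sample_t imu;
    if (k4a_device_get_imu_sample(device, &imu, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
    {
        printf("accel (m/s^2): %f %f %f\n",
               imu.acc_sample.xyz.x, imu.acc_sample.xyz.y, imu.acc_sample.xyz.z);
    }

    // Shut everything down.
    k4a_device_stop_imu(device);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```

Note that the SDK requires the cameras to be running before the IMU can be started, which is why k4a_device_start_imu() is called after k4a_device_start_cameras().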

Developers can also experiment with the depth camera's field of view, which offers narrow and wide modes alongside a global shutter and automatic pixel gain selection. The kit works with a range of compute types, and multiple devices can be used together to provide a "panoramic" understanding of the environment.
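In the same (assumed) Sensor SDK API as above, that field-of-view experimentation comes down to the depth mode chosen at configuration time, for example K4A_DEPTH_MODE_NFOV_UNBINNED for the narrow field of view versus K4A_DEPTH_MODE_WFOV_2X2BINNED for the wide one. A rough sketch that captures one depth frame in a given mode and prints its resolution:

```c
#include <stdio.h>
#include <k4a/k4a.h>

// Capture a single depth frame in the requested depth mode and print its resolution.
// For example: K4A_DEPTH_MODE_NFOV_UNBINNED (narrow field of view) or
// K4A_DEPTH_MODE_WFOV_2X2BINNED (wide field of view).
static void capture_one_depth_frame(k4a_device_t device, k4a_depth_mode_t mode)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = mode;
    config.camera_fps = K4A_FRAMES_PER_SECOND_15; // a frame rate supported by all depth modes

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
        return;

    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 2000) == K4A_WAIT_RESULT_SUCCEEDED)
    {
        k4a_image_t depth = k4a_capture_get_depth_image(capture);
        if (depth != NULL)
        {
            printf("depth frame: %d x %d pixels\n",
                   k4a_image_get_width_pixels(depth),
                   k4a_image_get_height_pixels(depth));
            k4a_image_release(depth);
        }
        k4a_capture_release(capture);
    }
    k4a_device_stop_cameras(device);
}

int main(void)
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
        return 1;

    // Compare the narrow and wide field-of-view depth modes on the same device.
    capture_one_depth_frame(device, K4A_DEPTH_MODE_NFOV_UNBINNED);
    capture_one_depth_frame(device, K4A_DEPTH_MODE_WFOV_2X2BINNED);

    k4a_device_close(device);
    return 0;
}
```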

This advancement might help Microsoft customers in health and life sciences experiment with depth sensing and machine learning.

During the keynote, Microsoft Azure corporate vice president Julia White said, “Azure Kinect is an intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects, and their actions.” 

She further added, “It only makes sense for us to create a new device when we have unique capabilities or technology to help move the industry forward.”

A few users have complained about the product and expect changes in the future. They have highlighted issues with the mics, the SDK, the sample code, and more.

A user commented on a HackerNews thread, “Then there’s the problem that buries deep in the SDK is a binary blob that is the depth engine. No source, no docs, just a black box.

Also, these cameras require a BIG gpu. Nothing is seemingly happening onboard. And you’re at best limited to 2 kinects per usb3 controller. All that said, I’m still a very happy early adopter and will continue checking in every month or two to see if they’ve filled in enough critical gaps for me to build on top of.”

A few others seem excited and think the camera is good and will be helpful for projects. Another user commented, “This is really cool!” and further added, “This camera is way better quality, so it’ll be neat to see the sort of projects can be done now.”

To know more about the Azure Kinect Developer Kit, watch Microsoft’s video.

Read Next

Microsoft Defender ATP detects Astaroth Trojan, a fileless, info-stealing backdoor

Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities