
Using two Kinects for a better 3D view


The idea of using a Microsoft Kinect controller for real-time 3D capture is sparking everyone's imagination, it seems. Of course, with only one camera you get a limited view, and you see 'shadows' (occluded regions) in your object's depth data. Oliver Kreylos, the creator of the first proof of concept, has now succeeded in merging the data from two Kinects. It still needs some work, but the demo is awesome!
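To give a feel for what 'merging' involves: once the rigid transform between the two Kinects is known (from some calibration step), each frame reduces to mapping one camera's points into the other camera's coordinate frame and combining the two clouds. The sketch below is purely illustrative and is not Kreylos's actual pipeline - the transform values and the function name are made up for the example.

```python
import numpy as np

# Hypothetical 4x4 rigid transform mapping Kinect #2's coordinate frame
# into Kinect #1's frame (here: a 30-degree rotation about Y plus a
# translation). In practice this would come from a calibration step.
T_2_TO_1 = np.array([
    [0.866, 0.0, 0.5,  -0.8],
    [0.0,   1.0, 0.0,   0.0],
    [-0.5,  0.0, 0.866, 0.3],
    [0.0,   0.0, 0.0,   1.0],
])

def merge_clouds(points1, points2, transform=T_2_TO_1):
    """Merge two Nx3 point clouds into one, expressed in camera 1's frame."""
    # Promote camera 2's points to homogeneous coordinates, apply the
    # transform, then drop the homogeneous component again.
    homo = np.hstack([points2, np.ones((len(points2), 1))])
    points2_in_1 = (homo @ transform.T)[:, :3]
    return np.vstack([points1, points2_in_1])
```

With both clouds in one coordinate frame, each camera fills in the occlusion shadows of the other - which is exactly why the two-Kinect view looks so much more solid than a single-camera one.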

About the Author

Bart Veldhuizen

I have a LONG history with Blender - I wrote some of the earliest Blender tutorials, worked for Not a Number and helped run the crowdfunding campaign that open sourced Blender (the first one on the internet!). I founded BlenderNation in 2006 and have been editing it every single day since then ;-) I also run the Blender Artists forum and I'm Head of Community at Sketchfab.

19 Comments

  1. AllNamesAreRegistered

    The ability to choose your own viewing angle while watching a video is pretty exciting! I've actually had this concept in my head for a while.

  2. Is there a way to derive armature positional data from this device? I don't need all the colors etc., but an armature would be a DREAM for animation.

  3. AllNamesAreRegistered

    @NRK I would assume so. The Kinect's primary function is interpreting bodily movements for gameplay, but someone would probably have to write their own code; the tracking software most likely runs on the Xbox itself rather than in the Kinect.

  4. The fun stuff ^___^
    I remember the first time I experimented with sound on the ZX Spectrum 48K. Back then a 128K with "sound" was too expensive for me, so I bought an AY-3-8910 chip and installed it in the machine manually (that was not difficult). After that I wrote a little program that played a looped noise sound with decay :)
    This experience with two Kinects seems almost the same, with the difference that this is much harder (I guess).

    BTW - future uses for this technology are easy to find! Live shows that can't be watched from every seat. Rally races - much more safety and a lot more fun!!! :) Grab this bull by the horns, man, and you'll be rich and will bring much more into the Blender community than this :)
    Find somebody who will manufacture closed devices and you'll have a real business. I'm not against open source, but this will happen anyway - with your help or without you.

  5. Wow.

    Amazing what the Kinect is able to do. I'm sure the designers never imagined people would be this interested in using their product for this kind of work.

    There are a bunch of ideas here, but I think one good suggestion for open source developers would be to think of each Kinect's light source as the light source in an unbiased renderer. Think of the Kinect as shooting out photons and gathering the returns, removing the bias over time. Then, if an object suddenly moves beyond a certain point, the algorithm switches from point gathering to motion capture and aspect monitoring. When the object stops, it goes back to point gathering, keeping the points already gathered and using the new position either to gather new points (a side that wasn't visible before) or to keep increasing the accuracy of existing points. You'd need two algorithms checked by a watchdog to switch between the two modes (a rough sketch of that switch appears after the comments), and perhaps a unique way of flagging a group of mesh points as a unique, independent object to the algorithm.

    I wish I had time to work on this sort of thing - it would be amazing. Another thing I'd like to know is how broad the IR spectrum is, and whether a filter could shift one Kinect's light enough for a program to differentiate between the two light-source patterns - kind of like having a red light source and a blue light source projected from different angles. If you can do that, you can know how much of Kinect #1's light is reaching Kinect #2's camera and vice versa. Match that with interpolated visual data from the two webcams... :)

    There are so many possibilities. We certainly live in interesting times.

  6. Hmm, the idea and concept are looking good.

    Visually this still seems to need a lot of work, but if he could find a way to stabilize the 3D mesh and video feed, this might turn into a sweet and cheap home 3D digitizer.

  7. I find all this very interesting. My hopes are for motion capture; there is code out there for building skeletons. I think the resolution is the limiting factor. As I understand it, a laser shoots through a filter with a bunch of holes, which creates points that the IR camera can track. I'm not sure how the Willow Garage guys got red and blue data - it could just be the way they represent it. http://www.youtube.com/user/WillowGaragevideo#p/u/0/rYUFu64VXkg
    Look what they did with it, though - this is a more impressive but less-seen video.

  8. Wow, man, that is really awesome. Could you imagine being able to watch a show like Big Brother and pan to wherever you wanted??? This project really has potential, not only for 3D but for professional video: a production company would only have to do one take of a scene and could then choose the best angle afterwards. Stick with this project - trust me.

  9. Well "Minority Report" had this waaaay in the future, and ILM made it for millions. Now Sony, Blender and a Genius does it for change! Great work.

  10. Is anyone working on using the Kinect as a human interface device for Blender? I want to be able to use voice recognition to do things, such as saying "Scale X" and then moving my hands to scale the selection along the X axis (a rough sketch of how that could be wired up follows below). The Minority Report version of Blender will be so great!
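The two-mode idea in comment 5 - accumulate points while the scene is static, fall back to motion tracking when something moves - can be sketched as a small state machine. Everything here is hypothetical: the threshold value, the `frame_points` input and the per-frame accumulation are made up for illustration, and this only shows the watchdog switching between modes, not a real reconstruction pipeline.

```python
import numpy as np

GATHERING, TRACKING = "gathering", "tracking"
MOTION_THRESHOLD = 0.01  # mean per-point movement in metres (made-up value)

class Watchdog:
    """Switch between point gathering and motion tracking based on scene movement."""

    def __init__(self):
        self.mode = GATHERING
        self.accumulated = []  # points kept across mode switches
        self.previous = None

    def step(self, frame_points):
        """frame_points: Nx3 array of points from the current depth frame."""
        if self.previous is not None and len(frame_points) == len(self.previous):
            movement = np.linalg.norm(frame_points - self.previous, axis=1).mean()
            if movement > MOTION_THRESHOLD:
                self.mode = TRACKING   # object moved: stop refining, start tracking it
            else:
                self.mode = GATHERING  # scene settled: resume refining points
        self.previous = frame_points
        if self.mode == GATHERING:
            # Keep gathering: new views refine existing points or add unseen sides.
            self.accumulated.append(frame_points)
        return self.mode
```

The watchdog is just the movement check; the two "algorithms" it arbitrates between would plug in where the mode branches are.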
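On the voice-control idea in comment 10: the speech recognition and hand tracking would have to come from outside Blender, but once a command string like "scale x" arrives, mapping it onto Blender's Python API is straightforward. The dispatch table and `handle` function below are purely illustrative; only the `bpy` calls are real API.

```python
import bpy

def scale_axis(axis_index, factor):
    """Scale the active object along one axis (0=X, 1=Y, 2=Z)."""
    obj = bpy.context.active_object
    if obj is not None:
        obj.scale[axis_index] *= factor

# Hypothetical dispatch table: a speech recognizer (not shown) would
# produce the command string, and hand tracking would supply the factor.
COMMANDS = {
    "scale x": lambda factor: scale_axis(0, factor),
    "scale y": lambda factor: scale_axis(1, factor),
    "scale z": lambda factor: scale_axis(2, factor),
}

def handle(command, factor):
    action = COMMANDS.get(command)
    if action:
        action(factor)

# e.g. the user says "Scale X" and spreads their hands 1.5x apart:
handle("scale x", 1.5)
```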
