
Microsoft Kinect in Blender - Realtime Point Cloud Demonstration


http://www.youtube.com/watch?v=yZSXXFwsyhc

Dylan writes:

I made a python script for Blender using Brandyn White's work on the libfreenect setup. It's fully open source, under the GPLv2 and Apache licenses (as per libfreenect).

It can generate at half quality (seen in the above video) at approximately 10-15 FPS, and at full quality (300k points) at approximately 2 FPS (on higher-spec equipment it can probably run much faster).

It shoots a point cloud of what the Kinect sees into Blender, where you can export to PLY or any of the other formats Blender supports. This also lets you load the result into something like MeshLab (or use Blender scripts that do the same thing) and get a 3D model of whatever the Kinect is seeing.
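The capture step looks roughly like the sketch below - a minimal illustration of the general approach, not Dylan's actual script. It assumes the libfreenect Python wrapper (`freenect`) and NumPy; the focal length, principal point and raw-to-metres constants are commonly quoted approximations, not calibrated values.

```python
# Minimal sketch: grab one Kinect depth frame through the libfreenect
# Python wrapper and turn it into an (N, 3) point cloud with NumPy.
import freenect
import numpy as np

FX = FY = 594.2          # approximate focal length, in pixels
CX, CY = 339.5, 242.7    # approximate principal point

def depth_to_points(step=2):
    """step=2 is roughly 'half quality' (~75k points); step=1 is full (~300k)."""
    depth, _ = freenect.sync_get_depth()     # raw 11-bit values, shape (480, 640)
    depth = depth[::step, ::step].astype(np.float64)

    # Commonly quoted approximation for raw Kinect disparity -> metres.
    z = 1.0 / (depth * -0.0030711016 + 3.3309495161)

    # Back-project each pixel through a pinhole camera model.
    v, u = np.mgrid[0:480:step, 0:640:step]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY

    valid = (depth < 1050).ravel()            # raw values past ~1050 mean 'no reading'
    return np.dstack((x, y, z)).reshape(-1, 3)[valid]
```

In the Blender 2.49-era API the script targets, something like `mesh.verts.extend(points.tolist())` would then push those points into a mesh.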

It's rather slow at this point in time, as I'm trading off some performance for quality (there are a lot of dots to render!).

The upshot of doing this in Blender is that it allows exporting to the many formats Blender supports, including PLY, so we can then use MeshLab to create a mesh of our capture :)
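Blender's own PLY exporter covers this, but the ASCII PLY format is simple enough to write directly too. The `write_ply` helper below is a hypothetical sketch, not part of Dylan's script:

```python
def write_ply(points, path):
    """Dump an (N, 3) point array as ASCII PLY, readable by MeshLab."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

# e.g. write_ply(depth_to_points(), "kinect_capture.ply")
```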

Microsoft definitely brings it all when they provide this awesome hardware for $150.

Link

About the Author

Bart Veldhuizen

I have a LONG history with Blender - I wrote some of the earliest Blender tutorials, worked for Not a Number and helped run the crowdfunding campaign that open sourced Blender (the first one on the internet!). I founded BlenderNation in 2006 and have been editing it every single day since then ;-) I also run the Blender Artists forum and I'm Head of Community at Sketchfab.

40 Comments

  1. Mookie, it can catch the back if you have 2 Kinects! To get the full 360-degree view of what you're scanning, at a 120-degree field of vision each, you'd need 3 Kinects. All up, ~$450 for a full 360-degree scanner.

    If anyone has any ideas/things to implement, please let me know! I'd love to help out the Blender community a bit more :)

  2. Dylan - thanks for the reply! Very cool-looking script, I must say! But will it be possible to create a mesh out of those points? And how can you save the data of each frame? As a shape key or something else?

  3. Maybe it could be a 3D scanner with only one Kinect and the object rotating on itself; with 3 Kinects, you'd have to synchronize the data from the different views in real time.

  4. Hey mookie, it's very possible to make a mesh out of these points using any of the awesome plugins other people have created - my initial goal was to get it into Blender in realtime, but the mesh plugins won't exactly do it in realtime. I am able to add a sys.exit() call after a single loop through the code so that just the points are there, freely editable (and exportable to MeshLab). So yes, it is possible to make a mesh, but you'll have to play with some other plugins to do that :D

    I like your idea on the different frames mookie, and I might have to have a look at that :)

    darkel, I do believe it is possible to do a 3D scanner like this - and that was my next "challenge" so to speak. I'm not great at maths (though I do a hell of a lot of it in my studies), so I'm not sure I'd be able to develop an algorithm to match points in 3D (though I guess I can always try!), but I think working in set increments (say, turning the object 30 degrees at a time) would be possible.

  5. Been waiting for this. The first thought when I heard that they "hacked" the Kinect was "when is it usable within Blender" or "what could this mean for Blender". I haven't even bought it yet. I hope there will soon be a way to get motion capture data of myself and use it in Blender!!!

  6. Dylan, I think the sequential approach is a good one. By making different captures of the object at a fixed angular increment (maybe a user-settable value would be a good thing, because certain objects, like a cup of tea, need a finer scan), you can convert the shots into meshes, rotate each of them to its associated angular position, then join them all into one mesh and remove doubles; a mesh simplification can also be a good idea. All these operations are included in Blender and can be done without any particular skill in math.
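    A minimal sketch of the rotate-and-merge step darkel describes, assuming NumPy and captures taken at a fixed angular increment about the vertical axis (the function name is illustrative, not from Dylan's script):

    ```python
    import numpy as np

    def merge_turntable_scans(scans, step_deg=30.0):
        """scans: list of (N_i, 3) point arrays taken step_deg apart,
        with the object centred on the turntable axis (Y up here)."""
        merged = []
        for i, pts in enumerate(scans):
            a = np.radians(i * step_deg)
            # Rotate each capture back by its known turntable angle.
            rot_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                              [ 0.0,       1.0, 0.0      ],
                              [-np.sin(a), 0.0, np.cos(a)]])
            merged.append(np.dot(pts, rot_y.T))
        return np.vstack(merged)
    ```

    After joining the clouds in Blender, Remove Doubles (W in Edit Mode) collapses the overlap, exactly as suggested above.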

  7. Hey darkel,

    I'll definitely put that on my list of things to do, then. I'm glad I went and put this into Blender so I can access all of Blender's awesome features ;)

    You can probably best keep up to date on my progress on my YouTube channel (though there's only one video on there right now) and on the GitHub link in the video. If you'd like to help out, feel free to fork my code - there isn't much code there, as I've tried to keep it as simple and succinct as possible.

  8. @tr3w and @Dylan

    Mmm, I wouldn't be too sure about that. It would need some tinkering to modulate the laser slightly so that each Kinect would recognize its own 'signature', and also to reduce overlap as much as possible.
    Who knows, it might even be done in software by using pattern-bursts. Something like the way a common IR remote-control works. Although that would slow processing down quite a bit.

    I heard from someone that there is actually a Chinese system with Kinect capabilities at a fraction of the price (with complete SDKs). If anyone is interested, I'll try to track it down.

  9. @tr3w and @Dylan

    If IR interference is a problem, it might be possible to use three Kinects by modulating the laser so that each Kinect recognizes its own 'signature'. That would require some hardware changes. But it might also be possibly through software/firmware by creating pattern-bursts, just like a common IR remote control works.
    Although that might slow down processing a bit (maybe quite a bit...).

    BTW, some time ago someone told me there is a Chinese system which works like the Kinect at a fraction of the price and has a complete set of SDKs. If anyone is interested, I'll try to track it down.

  10. TomsT,

    I was thinking of sending "stop IR projection" commands to the USB device, so we can get a synchronised capture mechanism (sketched below). Each device gets a few captures per second and you'd be fine.

    Another idea was using material that doesn't reflect light (and hoping that extends to the near-infrared) and making that the background material, with the object to be 3D scanned being the only thing that actually reflects light. It'd make this sequential scan technique so much easier; right now, I'm thinking the easiest way is to have a set distance from the object at different places. I could try my hand at a few math-y sort of things to get it to recognize bits of the object and do it for me, but I'm going to guess that'll be difficult...
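    A sketch of the round-robin idea, assuming the `freenect` sync wrapper's device-index argument; note that this alone does not switch off the other units' IR projectors, so TomsT's interference concern still applies:

    ```python
    import freenect

    def round_robin_depth(num_devices=3):
        """Poll each Kinect in turn; each device ends up with a few
        captures per second rather than a full 30 FPS stream."""
        frames = []
        for idx in range(num_devices):
            depth, _ = freenect.sync_get_depth(idx)   # index selects the device
            frames.append(depth)
        return frames
    ```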

  11. Your sequential solution would be easiest, because if the actual IR projection isn't modulated itself, there will still be interference even if the distance is different. At the very least, it would confuse the system.
    Unless you mean different distance in 2 dimensions! Now, why didn't I think of that...haha

    Uhm... making a background material for a 360 scan... Isn't that a contradiction in terms? If it's a static scan of an object, a rotator would be much easier (and cheaper) to build (take a look at http://www.intricad.com for example).

  12. Dylan, be sure I'll follow your progress, because I'm very interested in 3D scanning and animating in Blender. I looked at your code; it seems you made a display loop from the data received. I'm studying the 2.49 API to find something that can help you.

  13. The background material would be for a scan of an object rather than a room - for which I've built a nice little servo platform (I just need to be bothered to write the software to get it to rotate via USB - I was even thinking of using an internet protocol to turn the servo, so I can have it remote from the computer).

    This would allow me to do the sequential room scan - I could probably go as far as 10-degree increments over the course of a minute or two to get a nice detailed map of the environment.

    I might spend a little bit of time playing with Blender's built-in rotate/shift commands and do a little math on my whiteboard to get some of the ideas in my head out (a rough sketch of the loop is below). I'll keep in touch though :)
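    The scan loop itself would be tiny. A sketch, where `rotate_servo()` is a hypothetical stand-in for whatever USB/network servo control gets written, reusing the `depth_to_points()` and `merge_turntable_scans()` sketches from earlier:

    ```python
    import time

    def rotate_servo(degrees):
        """Hypothetical placeholder: tell the servo platform to turn
        by `degrees`, over USB or the network."""
        pass

    def scan_room(step_deg=10.0):
        scans = []
        for _ in range(int(360 / step_deg)):   # 36 shots at 10-degree steps
            scans.append(depth_to_points())    # capture, then advance
            rotate_servo(step_deg)
            time.sleep(2.0)                    # let the platform settle
        return merge_turntable_scans(scans, step_deg)
    ```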

  14. Great work Dylan, looking forward to seeing more of your magic.
    I was thinking, is it possible to control an armature in Blender using Kinect? This would make mocap available for the masses; think about it ;).

  15. Interesting.
    What resolution can it get?

    Fine enough to capture a recognisable likeness of a face for use on a sculpted figure?

  16. Would it be possible, you think, to scan a room by moving the Kinect and then turn it into a textured mesh? I mean: moving the Kinect would allow all angles to be combined, but somehow the algorithm would have to intelligently track the Kinect's movement. I don't understand the math, I'm just asking because I see a potential for quickly making 3D arch models that can be used for digital sets.

  17. Yes, that is totally doable, though I'm not sure it has been done much with the Kinect. Usually it is accomplished with photos, matching points and calculating the camera's position. I think the Kinect would work great to get a quick and rough mesh of the environment. Not sure how good the detail would be, but I guess you could get closer.

  18. Blah. No, thank you. Me personally, you won't catch me dead with a Kinect. I'd rather wait till mocap cameras become more affordable and portable (both small-scale mocap systems and 3D scanners are already on their way). I like my open source far away from Microsoft. Besides, I'd rather hack my Nintendo 3DS when I get it in March 2011 - 2 front cameras as video/picture input with 3D capabilities and augmented reality features. Imagine THAT used with Blender.

  19. Per Lars Obenhaupt:

    What first came to my mind was: when do we get the first Blender games using this tool? When I think about a character fully controllable with the body - no premade animations, so you could really dig in on a shooter... sounds cool to me. Of course, for this it needs to be realtime, but I guess, as always, someone will find a solution.

  20. wow, this is pretty cool. The only thing that stumps me is the price of a Kinect :P. Is there a possibility that all of that can be done with a simple webcam or camera? That would be even cooler :)

  21. Any specific packages needed for this?

    I figured Cython and NumPy, but it's still giving "invalid syntax" throughout the Python front-end.

  22. Hi, I have a problem with your script in Blender for Kinect: when I do Alt-P, the console responds with a Python script error. Maybe the path is not correct. I downloaded openkinect-libfreenect, but there's no directory called /c/python, only wrappers/python and wrappers/c, so maybe I put in the wrong path, or what? Could you help me? Thank you, and compliments on your job with the Kinect.
