I made a Python script for Blender based on Brandyn White's libfreenect setup. It's fully open source, under the GPLv2 and Apache 2.0 licenses (matching libfreenect's dual licensing).
At half quality (seen in the video above) it runs at roughly 10-15 FPS; at full quality (around 300k points) it manages roughly 2 FPS. On higher-spec hardware it should do considerably better.
It shoots a point cloud of whatever the Kinect sees into Blender, where you can export to PLY or any other format Blender supports. That lets you pull the capture into something like Meshlab (or use Blender scripts that do the same thing) and build a 3D model of whatever the Kinect is looking at.
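The core of turning a Kinect depth frame into a point cloud is unprojecting each pixel through the camera intrinsics. Here's a minimal sketch of that step; the depth-to-meters formula and the intrinsic values are commonly cited approximations from the OpenKinect community (calibrate your own device for real accuracy), and the synthetic frame stands in for what `freenect.sync_get_depth()` would return with hardware attached:

```python
import numpy as np

def raw_to_meters(raw):
    # Empirical raw-11-bit-to-meters approximation from the OpenKinect wiki.
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def depth_to_points(depth, fx=594.21, fy=591.04, cx=339.5, cy=242.7):
    """Unproject a (480, 640) raw depth frame into an (N, 3) point cloud.
    Intrinsics here are commonly quoted Kinect defaults, not calibrated."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = raw_to_meters(depth.astype(np.float64))
    valid = (depth > 0) & (depth < 2047)  # raw value 2047 means "no reading"
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x[valid], y[valid], z[valid]))

# With a Kinect plugged in, the frame would come from
# freenect.sync_get_depth()[0]; a flat synthetic frame stands in here.
depth = np.full((480, 640), 700, dtype=np.uint16)
points = depth_to_points(depth)
print(points.shape)  # one 3D point per valid pixel
```

In Blender, each row of `points` would then become a vertex in a mesh object (which is where most of the per-frame time goes at full quality).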
It's rather slow at this point, as I'm trading some performance for quality (there are a lot of dots to render!).
The upshot of doing this in Blender is that you get export to all the many formats Blender supports, including PLY, so we can then use Meshlab to create a mesh of our capture :)
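For context on what that export actually produces: ASCII PLY is simple enough to write by hand. This is a hypothetical standalone sketch of the format (in practice you'd just use Blender's File > Export > PLY as described above), not part of the script itself:

```python
import numpy as np

def write_ply(path, points):
    """Write an (N, 3) array of XYZ points as an ASCII PLY point cloud,
    readable by Meshlab and Blender's PLY importer."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

# Two sample points, e.g. taken from a captured cloud.
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 0.9]])
write_ply("cloud.ply", pts)
```

Meshlab's surface-reconstruction filters (e.g. Poisson reconstruction) can then turn a cloud like this into a solid mesh.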
Microsoft definitely delivered by putting this awesome hardware out there for $150.