3D SLAM (Simultaneous Localization and Mapping) is a technique for scanning a 3D environment in real time and combining the scan data with GPS or other positioning information to build up a 3D map of the area.
It's used by the military for reconnaissance and for letting unmanned robotic systems find their way through urban environments, but the wider possibilities are pretty exciting, and as the video shows, people are applying it in Blender too!
Google Code page for the open source 3DSLAM library: http://code.google.com/p/3dslam/
Slideshow (PDF): Simultaneous Location and Mapping in 3D: http://www.mitre.org/news/events/tech07/3055.pdf
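The core of the mapping half of SLAM is registering each new scan into a common world frame using the sensor's estimated pose. A minimal sketch of that idea (toy data and poses, not the 3DSLAM library's actual API):

```python
import numpy as np

def pose_matrix(yaw, t):
    """Homogeneous 4x4 transform for a yaw rotation plus a translation t."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

def register_scan(world_map, scan, pose):
    """Transform a local scan (N x 3 points) into the world frame
    using the sensor pose, and append it to the accumulated map."""
    pts = np.hstack([scan, np.ones((len(scan), 1))])  # homogeneous coords
    world_pts = (pose @ pts.T).T[:, :3]
    return np.vstack([world_map, world_pts])

# Two scans of the same wall, taken from different (known) poses.
scan = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
world = np.empty((0, 3))
world = register_scan(world, scan, pose_matrix(0.0, [0, 0, 0]))
world = register_scan(world, scan, pose_matrix(np.pi / 2, [2, 0, 0]))
print(world.shape)  # (4, 3)
```

Real SLAM additionally has to *estimate* those poses (e.g. by aligning overlapping scans), which is where the hard part lies; with GPS or odometry supplying pose estimates, map building reduces to this kind of accumulation.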
Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.
Cool Blade Runner reference. And I think it's cool that we can now track replicants, err... map buildings.
In NUKE it's called "point cloud" data, or something like that. It can be rendered as an image and then viewed in the compositor to help the compositor understand where objects are in 3D space. It would be great to have such a thing in Blender too.
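Rendering a point cloud into an image, as described for NUKE, boils down to projecting each 3D point through the camera. A minimal pinhole-projection sketch (the focal length and principal point here are made-up values, not from any specific tool):

```python
import numpy as np

def project_points(points, f=800.0, cx=320.0, cy=240.0):
    """Project camera-space 3D points (N x 3, z > 0) to pixel coordinates
    with a simple pinhole model. f, cx, cy are assumed intrinsics."""
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    u = f * pts[:, 0] / z + cx
    v = f * pts[:, 1] / z + cy
    return np.column_stack([u, v])

cloud = np.array([[0.0, 0.0, 2.0], [0.5, -0.25, 4.0]])
print(project_points(cloud))  # [[320. 240.] [420. 190.]]
```

Splatting those (u, v) positions into an image buffer, with depth used for occlusion, gives the kind of renderable point-cloud view a compositor can work with.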
In the source code repository there is code under slam 2; the source files were modified between March and December 2009, so I would guess that the binary is slam 2 but not built from the latest code. Development appears to have stalled, though.
Here is a similar, related project: http://insight3d.sourceforge.net/
Couldn't Blender's new camera tracking (formerly the Tomato branch) be "tweaked" to do something similar to what this insight3d app does? Say, import a video that's a fly-around of someone's face, then track multiple points to reconstruct it in 3D? I think there's real value in leveraging that code to perform this kind of function; think about architectural reconstruction! Take some videos of a house (inside and/or out), then rebuild it inside Blender!
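Recovering 3D points from tracks in multiple frames, as suggested above, is classic multi-view triangulation. A sketch of the linear (DLT) method for one point seen in two views, using toy camera matrices (these are illustrative, not what Blender's tracker actually uses internally):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: camera 1 at the origin, camera 2 translated 1 unit sideways.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / X_true[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / X_true[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))  # recovers [0.2, 0.1, 5.0]
```

The tracker's job is supplying the camera poses (P1, P2, ...) and the 2D tracks; once those exist, triangulating a dense set of tracked features yields exactly the kind of point cloud insight3d produces.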
Yep, it's like Autodesk Photofly, aka the 123D Catch project: point cloud capture. But how can we actually use it with Blender?
Seems interesting; the result is nearly identical to other systems. But how does it work?
Hi everyone, I didn't know my video was on BlenderNation :D
The system works with a Kinect and is similar to what you can find in this paper: http://ils.intel-research.net/uploads/papers/3d-mapping-iser-10-final.pdf
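The first step in Kinect-based mapping of the kind that paper describes is back-projecting each depth frame into a camera-space point cloud. A rough sketch, using commonly cited Kinect intrinsics (assumed values, not taken from the paper):

```python
import numpy as np

def depth_to_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (meters, H x W) into an N x 3 camera-space
    point cloud. fx, fy, cx, cy are typical Kinect intrinsics (assumed)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.dstack([x, y, z]).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.full((480, 640), 2.0)  # toy frame: a flat wall 2 m away
print(depth_to_cloud(depth).shape)  # (307200, 3)
```

Successive clouds are then aligned (e.g. with ICP) and fused into one map, which is the part the Kinect pipeline in the video would handle before exporting anything usable in Blender.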
Our goal is to create a really simple model of a room for virtual reality using the Blender game engine.