Sebastian writes:
I am currently working on an addon that imports data from different photogrammetry (Structure from Motion) formats into Blender. It can be used to perform automatic camera tracking, and there is an option to represent the reconstructed point cloud with a particle system. Currently, the formats of the following Structure from Motion libraries are supported: Meshroom, Colmap, OpenMVG and VisualSFM.
Automatic camera animation is supported, as shown in the following GIF:
And you'll find a short tutorial video below. I hope the addon will save you a lot of time performing your next camera tracking task :)
11 Comments
Thank you very much!!!
That is really nice!
I'll try it soon.
Thank you.
This is really awesome! What is, or would be, required to have the mesh exported from Meshroom line up with the track the way the point cloud does? This addon in combination with the one shown here https://www.youtube.com/watch?v=hk5ovQ6-IbM would be killer. Is this something that you have considered? Thanks again.
This is already possible, since the Multi-View Stereo step does not change the coordinate system.
Just run the full Meshroom pipeline until it computes the *.obj and the corresponding *.mtl file. When importing the *.obj you need to adjust the transformation options: set "Forward" to "Y Forward" and "Up" to "Z Up".
This is needed because Blender uses an unusual default coordinate convention for OBJ files. If I am not mistaken, other tools like MeshLab or CloudCompare align *.obj files correctly with other formats such as PLY.
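If you prefer to script the import, the same axis options can be passed to Blender's OBJ import operator. A minimal sketch, assuming Blender's built-in (legacy) OBJ importer and a placeholder path to a Meshroom texturing result (run this inside Blender's Python console or text editor):

```python
import bpy

# Import the Meshroom mesh with "Y Forward" / "Z Up" instead of
# Blender's OBJ defaults (-Z Forward, Y Up), so it lines up with the
# reconstructed cameras and point cloud.
bpy.ops.import_scene.obj(
    filepath="/path/to/MeshroomCache/Texturing/.../texturedMesh.obj",  # placeholder
    axis_forward='Y',
    axis_up='Z',
)
```

This only changes how the coordinates are interpreted on import; the mesh data itself is untouched.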
I've added an example image here
https://github.com/SBCV/Blender-Addon-Photogrammetry-Importer
and a short description here:
https://github.com/SBCV/Blender-Addon-Photogrammetry-Importer/blob/master/doc/markdown/usage.md
Thanks so much for taking the time to respond so quickly, greatly appreciated.
I will have a look at that now.
You're welcome
I have run Meshroom on a 452-frame shot and imported the OBJ; all perfect, within reason. However, it looks as though Meshroom undistorts the footage when creating everything, which makes sense. Is there a way to get that distortion value and pass it back so that the render lines up with the plate? What would be the best practice for this lens distortion workflow? Thanks so much.
Can you share the images and corresponding reconstruction results?
Could you open an issue on the GitHub repository? (I don't know if the comment section on BlenderNation is the right place to solve this problem.)
By the way: the "PrepareDenseScene" node in Meshroom creates a set of *.exr files, which could be the corresponding undistorted images. Is the alignment better if you use these images?
The *.exr files that are generated are numbered in a way that does not correspond to the frame numbers of the originals. Is there a way to order them so they match correctly?
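For anyone else hitting this: the EXRs from PrepareDenseScene are named after Meshroom's internal view IDs, not the source frame numbers. The mapping between view IDs and the original image paths is stored in the "views" list of the SfM JSON file. A hedged sketch (the SfM file name and directory layout here are assumptions about your project; check your own MeshroomCache):

```python
import json
import shutil
from pathlib import Path

def rename_exrs_to_source_frames(sfm_file, exr_dir, out_dir):
    """Copy <viewId>.exr files to names derived from the original frames.

    sfm_file: Meshroom SfM JSON (e.g. cameras.sfm) whose "views" entries
              pair each viewId with the source image path.
    Returns the sorted list of new file names.
    """
    data = json.loads(Path(sfm_file).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    renamed = []
    for view in data["views"]:
        src = Path(exr_dir) / f"{view['viewId']}.exr"
        if not src.exists():
            continue  # a view may have been dropped during reconstruction
        # Reuse the original frame's basename, e.g. frame_0001.jpg -> frame_0001.exr
        dst = out / (Path(view["path"]).stem + ".exr")
        shutil.copy2(src, dst)
        renamed.append(dst.name)
    return sorted(renamed)
```

With the EXRs renamed like this, an alphabetical sort matches the original frame order again.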
Ok, worked it out. Thanks.