
Blender Photogrammetry Addon v1.0


Stuart Attenborrow writes:

I've recently written an addon that may be useful to those of you using Blender's camera tracker for VFX, or those interested in photogrammetry.

I've been attempting to reconstruct sparse, low resolution images where a normal automated pipeline fails. The usual fix is to take more photos but I'm unable to do so and must work with what I have. Autodesk had an application in the past called ImageModeler which met this need but has since been discontinued with no avenue for licensing. In essence, ImageModeler was a photogrammetry tool where the user manually placed points and the software solved the camera poses and sparse 3D point cloud. Fortunately, this is exactly what Blender can do!

After solving a camera move in Blender and generating a 3D scene, the addon can be used to leverage either COLMAP or PMVS to reconstruct a dense point cloud from the sparse tracking data.
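Handing a Blender camera solve to a tool like COLMAP or PMVS means expressing the camera in pixel units: Blender stores the focal length and sensor width in millimetres, while dense reconstructors expect a pinhole intrinsic matrix in pixels. The sketch below illustrates that conversion under common assumptions (square pixels, principal point at the image centre); it is not code from the addon, and the example values are made up.

```python
# Sketch: convert Blender-style camera settings (focal length and sensor
# width in mm) into a pixel-unit pinhole intrinsic matrix, the form dense
# reconstructors such as COLMAP and PMVS expect.

def intrinsic_matrix(focal_mm, sensor_width_mm, width_px, height_px):
    """Build a 3x3 pinhole intrinsic matrix K from physical camera data."""
    fx = focal_mm / sensor_width_mm * width_px  # focal length in pixels
    fy = fx                                     # square pixels assumed
    cx, cy = width_px / 2.0, height_px / 2.0    # principal point at centre
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

# Illustrative example: a 35 mm lens on a 36 mm-wide sensor at 1920x1080
K = intrinsic_matrix(35.0, 36.0, 1920, 1080)
```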

After reconstruction, the dense point cloud is imported directly into Blender, allowing you to position objects accurately within your scene using the thousands of new points. You could also use Meshlab to generate a mesh and associated texture from the point cloud and then import this into Blender. This gives you textured geometry that can cast reflections or be used in simulations in your VFX shot with minimal effort!

If you come up with something awesome from it, please share!

Download

About Author

Stuart Attenborrow

I'm a software developer and 3D generalist in Tasmania, Australia. I switched from the industry-standard 3D software a decade ago when Blender 2.5 was released. It had a great UI, was feature-rich, and (most importantly) it was stable.

17 Comments

  1. This is awesome. Thank you! Question: is this add-on self-contained, or would I need the other photogrammetry software you mentioned (COLMAP, PMVS) in order to get the point clouds? Or maybe I'm misunderstanding.

    My original workflow was VisualSFM>MeshLab>Blender. But if this could do it all....then great! :)

  2. Couple questions:
    1. Could you use Blender's rotoscoping capabilities to instruct the software to ignore parts of the image/feed?
    2. Is this adaptable to 360° cameras?

    Thanks in advance.

    • 1. This is one of the tasks on my to-do list. Blender's masks would make this workflow super easy.
      2. No idea sorry! Depends whether Blender's motion tracking supports 360 cameras...

    • You could, but I wouldn't recommend it. 360° videos are distorted in a way that breaks the assumptions of the solver; it might work, but distorted images or unusual viewing angles could result in an inaccurate model.
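The distortion mentioned above can be made concrete: in an equirectangular 360° frame, each pixel maps to a viewing ray through spherical angles, not through the linear pinhole projection a conventional tracker assumes. A minimal sketch of that mapping (the resolution values are only illustrative):

```python
import math

# Sketch: why a pinhole tracker mis-models an equirectangular 360 frame.
# Each pixel maps to a viewing ray via spherical angles, not via the
# linear pinhole projection the solver assumes.

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction vector."""
    lon = (u / width - 0.5) * 2.0 * math.pi    # longitude: -pi .. pi
    lat = (0.5 - v / height) * math.pi         # latitude: -pi/2 .. pi/2
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

# The image centre looks straight along +Z
print(equirect_pixel_to_ray(960, 540, 1920, 1080))  # (0.0, 0.0, 1.0)
```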

  3. This addon looks awesome, to say the least! Is it possible to use an imported camera solved in an external 3D tracker such as Syntheyes to create the reconstruction? Syntheyes exports a Python script which sets up a solved camera and empties, similar to what Blender does, but it does not deliver trackers, just empties, which look to be what is required for your script to work. Is there a way for the addon to use this data, now or in the future? Thanks.

    • This might best be done directly within something like Meshroom. If the camera poses are available but the 2D tracker data is not, I believe you can create a JSON file of the cameras that Meshroom can utilise in its Structure From Motion node. It'll retrospectively calculate features based on these cameras that can then be used for full reconstruction.

      As for extending the addon, if you know Python it should be easy to follow. Each software gets its own Python module, and the data structure can be inferred from looking at the other importers/exporters. The addon originated with the bundler format, so that became the intermediary format between them.
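The Bundler intermediary format mentioned above is a simple text layout ("Bundle file v0.3"): a header, camera/point counts, then per-camera focal length and radial distortion, rotation, and translation, followed by per-point position, colour, and a view list. The writer below is an illustration of that layout, not code from the addon, and the camera/point values are made up.

```python
# Sketch: write a minimal Bundler v0.3 file, the text format the addon
# uses as its intermediary between importers and exporters.

def write_bundle(path, cameras, points):
    """cameras: list of (f, k1, k2, R, t) with R a 3x3 list and t a
    3-vector; points: list of (xyz, rgb, views) where views is a list
    of (camera_index, key_index, x, y) observations."""
    with open(path, 'w') as out:
        out.write('# Bundle file v0.3\n')
        out.write(f'{len(cameras)} {len(points)}\n')
        for f, k1, k2, R, t in cameras:
            out.write(f'{f} {k1} {k2}\n')        # focal, radial distortion
            for row in R:
                out.write('{} {} {}\n'.format(*row))
            out.write('{} {} {}\n'.format(*t))
        for xyz, rgb, views in points:
            out.write('{} {} {}\n'.format(*xyz))
            out.write('{} {} {}\n'.format(*rgb))
            flat = ' '.join(str(v) for view in views for v in view)
            out.write(f'{len(views)} {flat}\n')  # view count, then views

# Illustrative data: one identity-pose camera seeing a single point
cams = [(1866.7, 0.0, 0.0, [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0, 0, 0])]
pts = [((0.0, 0.0, -5.0), (128, 128, 128), [(0, 0, 12.5, -3.0)])]
write_bundle('bundle.out', cams, pts)
```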

  4. Hi There,

    This looks amazing - thanks for putting out into the open domain!

    I'm just trying it using COLMAP and every time I get this error ... (I am using an RTX 2070 with the latest Studio driver)

    CUDA error at C:/Users/joschonb/Development/colmap/src/mvs/gpu_mat_ref_image.cu:110 - invalid texture reference
    Traceback (most recent call last):
      File "C:\Users\Woody\AppData\Roaming\Blender Foundation\Blender\2.80\scripts\addons\blender_photogrammetry\__init__.py", line 79, in execute
        outputs[p.output].func(load_props, data, scene=scene)
      File "C:\Users\Woody\AppData\Roaming\Blender Foundation\Blender\2.80\scripts\addons\blender_photogrammetry\colmap\load.py", line 48, in load
        raise Exception('COLMAP patch_match_stereo failed, see system console for details')
    Exception: COLMAP patch_match_stereo failed, see system console for details

    location: :-1

  5. I have several photos taken with the same digital camera, a Canon EOS 1200D. The EXIF data are known; however, because of zooming, the focal lengths vary from 21 mm to 40 mm. It seems to me that in this situation it is not possible to apply motion tracking and manually match points across the photos, so it is not possible to use this wonderful Blender photogrammetry addon either. Am I right?
