rob on earth writes:
In this video, Python code written inside Blender automatically steps through 4 camera positions and 8 matcap selections and saves all 32 image files.
You can then script Blender to run against your .blend files and use ImageMagick to build a montage from the results.
Script files can be found on GitHub. Free human skull mesh from BlendSwap.
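A minimal sketch of that workflow from the command line; the .blend file, script name, and output folder are illustrative, and the 8x4 tile matches the 32 renders (4 cameras by 8 matcaps):

```shell
# Run Blender headless (-b) against a .blend file, executing the
# render script (-P), then tile the stills into one contact sheet
# with ImageMagick's montage tool.
blender -b skull.blend -P render_matcaps.py
montage renders/*.png -tile 8x4 -geometry +2+2 contact_sheet.png
```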
2 Comments
Thanks so much for this video tutorial -- this may be part of a solution to a problem I've been thinking about. I want to do rendered 3D comics (not animations). I realize that I have to adapt the "live action" mode of "shooting scenes" -- that is, rendering according to the locations they use and not necessarily based on characters. I might have 3 scenes with 10-25 "shots" (i.e. panels, in comic terminology), and most (if not all) of the shots will be of different ratios (unless I'm doing a "strict grid" where each panel is the same size on the page). Each scene will be a different .blend file.
All I need to figure out is how to use the different cameras that exist in the .blend file and have the Python script go through them one by one until there are no more cameras. Ideally, the script would get the render ratio and size from each camera. I feel that this is doable.
I had stumped various peeps with this request, including one who claims to use Blender for comics. Your video really has given me hope. Now I just need to finish my models of the characters and some sample sets, and then I can experiment with Python scripting in Blender, like you showed, along with some actual coding. I know JavaScript and basic C programming, so I think I can learn how Python works. Hopefully.
Very few videos can I say are inspirational. Yours is one of them.
Very glad you found it useful.
Not exactly what you are asking for, but I did render from different cameras a few years ago, setting the active camera by name from a list:
bpy.context.scene.camera = bpy.data.objects[camera_name]
You can see the scripts; they are a bit more involved and I added lots of parameters, but they might help.
https://github.com/robgithub/camera-track-endevour/
The resulting animation was included in this video.
https://youtu.be/Q6FS9sRJcDQ?t=47