Beniamino Della Torre is back with the second part of his epic ‘Alien Invasion’ trailer! Read on to learn more about him and his project.
Can you tell us a little bit about yourself? What is your background?
Ok, I’m 37 years old and I live in Fossano, a small town in Italy. I run a small graphics and advertising studio together with my wife and two friends. Some years ago I started using Blender for a typography experiment (you can see it here), and since then I’ve been using Blender more and more, for various purposes. But two years ago I started a challenge with myself: I wanted to simulate an alien invasion, with starships, aerial fights, and explosions, as realistically as possible, using only Blender.
Now, you should know that I can’t even model a glass… I’m no good at modeling at all, nor at texturing, rigging, or animating. Two years ago I didn’t even know how to begin my experiment, because VFX and CGI were new words for me. But luckily, on YouTube you can find whatever you’re looking for, especially anything related to the open source world. So I began watching tons of reels and following tutorials, learning something new each day. Frustration was always with me, because learning Blender is very hard… but I never gave up! And along the way I learned a number of different techniques: camera tracking, camera mapping, panorama stitching, lighting and, in the end, Blender nodes. (The result of my first efforts is this first trailer.)
What are your plans for ‘Alien Invasion’? Is it an exercise in animation or do you plan to make a longer movie?
Honestly, I have neither the money nor the time to work on this project any further. Here, I mean in my country, there isn’t the movie culture you find in Los Angeles, so the skills I’ve acquired don’t help me earn much money. I’m thinking about a short movie, but not about aliens, and without all that CGI.
Can you tell us about your production pipeline? Which tools do you use?
As you can probably guess, my tools are very low budget. I shoot with a Panasonic HDC-X800 camera at 1920×1080 (unfortunately this camera only records in 50p, so I have to process all the footage in Apple Compressor to bring it down to 1280×720 at 25p). I chose to work in 720p simply to keep rendering times down. My machine is an iMac with a 2.5 GHz i5 and 4 GB of RAM.
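Compressor handles the conversion here, but the frame-rate half of it is conceptually simple: since 50 is an exact multiple of 25, going from 50p to 25p just means keeping every second frame (the downscale to 720p is a separate resize step). A minimal illustrative sketch, not part of the actual pipeline:

```python
def convert_50p_to_25p(frames):
    """Halve the frame rate of 50p footage by keeping every second frame.

    `frames` is any ordered sequence of frames (here just labels).
    Because 50 is an exact multiple of 25, no frame blending or
    retiming is needed -- every other frame is simply dropped.
    """
    return frames[::2]

# one second of hypothetical 50p footage, labelled 0..49
second_of_50p = list(range(50))
second_of_25p = convert_50p_to_25p(second_of_50p)
print(len(second_of_25p))  # 25 frames per second after conversion
```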
As I’m self-taught, some people may consider my way of working wrong. But I’ll try to explain how I work: every scene is built entirely inside Blender, using essentially one of three techniques:
- Real video footage mixed with CGI. In this case I use Blender’s motion tracker (instead of Icarus, which is too old). The one important thing is to place the tracking points manually: it took me about six hours to track the highway scene by hand, because the automatic tracker kept losing every point! Then I light the scene by balancing a single lamp (a Point lamp, because I really don’t like the Sun lamp, so I use a Point with its falloff curve flattened into a horizontal line) against ambient occlusion (at a very low occlusion value, just to lift the shadows on the subjects). After that, the work is all about distributing objects across layers and doing a lot of node work (mixing each original video frame with the CGI).
- A fake world, created from a panorama stitched with the open source tool Hugin and mapped onto a sphere. In this case I don’t have to sync the action to a pre-existing camera movement, so everything is easier: I can compose the scene and then simulate the camera movement however I like. Then I work with layers and nodes again.
- A camera-mapped world: I re-create the 3D world using low-poly geometry and map it with sticky UVs based on the camera position. This is quite similar to the previous technique, with one small but important difference: inside the panorama sphere you can only rotate the camera, which must stay at the centre of the sphere, while in a camera-mapped world you can also make small translations of the camera through space.
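The node mixing in the first technique boils down to the standard “alpha over” operation: each rendered CGI pixel is blended over the corresponding footage pixel according to its alpha. A minimal per-pixel sketch (illustrative only — the real work happens in Blender’s compositing nodes, not in Python):

```python
def alpha_over(fg_rgb, fg_alpha, bg_rgb):
    """Standard 'alpha over': blend a foreground (CGI) pixel over a
    background (footage) pixel. All values are floats in 0..1."""
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha)
                 for f, b in zip(fg_rgb, bg_rgb))

# a fully opaque CGI pixel replaces the footage...
print(alpha_over((1.0, 0.0, 0.0), 1.0, (0.0, 0.0, 1.0)))  # (1.0, 0.0, 0.0)
# ...while a half-transparent one blends 50/50
print(alpha_over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))  # (0.5, 0.0, 0.5)
```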
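The second technique — wrapping a stitched panorama around a sphere with the camera at its centre — amounts to an equirectangular lookup: each view direction corresponds to exactly one point on the panorama image. A small sketch under assumed conventions (the -Z-forward camera axis and the exact formula are illustrative choices, not taken from the interview):

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a view direction to (u, v) on an equirectangular panorama,
    as when a stitched pano is wrapped around a sphere with the camera
    at the centre. u spans longitude (0..1 around the sphere), v spans
    latitude (0..1 from bottom pole to top pole). Assumes -Z is forward."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)            # longitude
    v = 0.5 + math.asin(y / math.hypot(x, y, z)) / math.pi   # latitude
    return u, v

# looking straight ahead (-Z) hits the centre of the panorama
print(direction_to_equirect_uv(0.0, 0.0, -1.0))  # (0.5, 0.5)
```

Because every direction lands somewhere on the image, rotating the camera at the sphere’s centre always shows valid background — which is exactly why translation is not allowed with this setup.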
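The third technique rests on pinhole projection: each vertex of the low-poly stand-in geometry receives the UV coordinate where the camera “sees” it, so the photograph re-projects correctly from viewpoints at or near the original camera position. A sketch under assumed conventions (camera looking down -Z, hypothetical focal value — not the actual sticky-UV internals):

```python
def project_to_camera_uv(point, cam_pos, focal=1.0):
    """Pinhole projection of a world-space point into normalized image
    coordinates -- the core idea of camera mapping: each vertex gets the
    UV where the camera sees it. Camera looks down -Z; `focal` is an
    assumed screen-space focal length for illustration."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    depth = -z                    # distance in front of the camera
    if depth <= 0:
        raise ValueError("point is behind the camera")
    u = 0.5 + focal * x / depth   # perspective divide, recentred to 0..1
    v = 0.5 + focal * y / depth
    return u, v

# a point straight ahead of the camera projects to the image centre
print(project_to_camera_uv((0.0, 0.0, -5.0), (0.0, 0.0, 0.0)))  # (0.5, 0.5)
```

The perspective divide by `depth` is what makes small camera translations hold up: nearby geometry shifts more than distant geometry, giving real parallax that a panorama sphere cannot provide.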
In all cases, after a long tweaking phase, I render frame by frame to PNG (RGB), and then merge the frames into a sequence with QuickTime or another application (even Blender can do it).
For the lens flare effects you can see, I used a plugin called mFlare for Apple Motion. There’s a script I found on BlenderArtists that helps you bring your Blender 3D assets into Apple Motion, so you can generate 3D flares synchronized with your scene.
In the end, I edit all the scenes together in Apple Final Cut, simply because I’m familiar with it. I do some color correction there too, if necessary.
This time I tried a new experiment: editing while composing the music track in Apple Logic (I loved composing music, 15 years ago). Working this way, you can reach a very good result.
Can you share some Blender screenshots or images of the spaceship models?
PS: There are tons more things I should mention… but time is short, so here’s a quick list of what’s going through my head right now:
- I never used the game engine for these scenes
- no Bullet physics, only particle physics
- all the explosions are video textures with an alpha channel, mapped onto planes (they’re from Video Copilot’s Action Essentials)
I’d like to talk about one very important thing. Blender is an open source application, so everybody can use it without spending money. Nobody is getting rich programming and selling Blender, because the open source world offers a vision of a possible better future, one with less profit-seeking and more human equality. I’m grateful to everybody involved in Blender for the opportunity it has given me, because without Blender I would never have come near the 3D VFX and CGI world. And I’d also like to thank people like you, and the people on BlenderNation, BlenderArtists, and other Blender sites, who share their knowledge with everybody for free!