Allan Liddle shares how he did a facial motion capture project in Blender.
I made this video as an experiment for a potential customer: to prove whether I could successfully track the movements of a face in a video - as well as the expressions that go with it - and project those movements onto the modelled face of another character with different facial proportions. It was quite a mission!
Here is the result (don't be too concerned about the accent ;-) :
https://youtu.be/3zg_7HCjUKI
Everything was done in Blender and a bit of Gimp. I downloaded the facial texture from the Internet, but I had to edit it in Gimp.
Below is the composite footage that compares the animation with the original video. Notice the differences in facial features, which had to be catered for.
Summary of how it was done:
I first built the model of the character's face, and then rigged it using Pitchipoy.
Next, I made a video of my face. I then used camera tracking to match the movement of the model of my face and to track the movements of my eyes, mouth, etc. Thereafter, I used the markers in the video to drive empties that slide along the surface of the model of my face (as it moves).
Bones in the rig of my face then track those empties. The movements of those bones (in my face model) are then translated into movements of bones in the character's rig, which I also built. Some of the bones of the character's Pitchipoy rig then copy the location of those driven bones.
(I hope it all makes sense)
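To make the retargeting step more concrete, here is a minimal sketch in plain Python (not the author's actual Blender setup, and the function name is hypothetical): a tracked marker's offset from its rest position is normalized by the source face's proportions and rescaled to the target character's proportions before driving the corresponding bone.

```python
def retarget_offset(marker_pos, marker_rest, src_size, dst_size):
    """Map a marker's displacement on the source face onto the target face.

    marker_pos / marker_rest: (x, y) of the tracked marker now and at rest.
    src_size / dst_size: (width, height) of the source and target faces.
    Returns the (x, y) offset to apply to the corresponding target bone.
    """
    dx = marker_pos[0] - marker_rest[0]
    dy = marker_pos[1] - marker_rest[1]
    # Express the movement as a fraction of the source face, then scale it
    # to the target face, so a full smile stays a full smile even when the
    # facial proportions differ.
    return (dx / src_size[0] * dst_size[0],
            dy / src_size[1] * dst_size[1])

# A mouth-corner marker moves 2 units sideways on a 20-unit-wide face;
# the character's face is 30 units wide, so its bone moves 3 units.
print(retarget_offset((12, 5), (10, 5), (20, 25), (30, 25)))  # (3.0, 0.0)
```

In Blender itself this scaling falls out of the rig rather than explicit code: the empties slide on the surface of the source face model, and the constraint influences on the character's bones do the proportional mapping.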
7 Comments
Great work
A full tutorial would be very useful.
keep up the great work
Mar10
Agreed! Great work. I would love to see a tutorial also!
Emo
Is it possible to do tracking from multiple videos of the same thing? For example, if you set up 3 video cameras around something, filmed on all 3 at once, and combined the tracking to get accurate 3D tracking of feature points?
I suppose it could be done. It would mean repeating the tracking 3 times. I just wonder how accurately one would be able to align the 3 cameras to the same model - and therefore how accurately one could track the common markers?
Having said that: I think there may be 2 ways:
1. The markers end up being empties that follow/slide along the surface of a model. Camera #2 and #3 could have their own empties. In the final tally, the bone that has to follow an empty, can be weighted to follow TWO empties: 50% each.
2. Another way could be to somehow (I'm not sure how yet) let the perpendicular empty from camera #2 (and #3) determine the depth of the empties from the main camera (#1).
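The two ideas above can be sketched in plain Python (function names are hypothetical, and this is not tested in Blender): idea 1 is a weighted blend of two empties, which is what stacking two Copy Location constraints with the second at 0.5 influence produces; idea 2 takes X/Y from the main camera and the missing depth from a perpendicular one.

```python
def blend_targets(a, b, weight_b=0.5):
    """Idea 1: a bone follows two empties, 50% each - equivalent to
    stacking two Copy Location constraints, the second at 0.5 influence."""
    return tuple((1 - weight_b) * ca + weight_b * cb for ca, cb in zip(a, b))

def combine_depth(front_xy, side_depth):
    """Idea 2: the main camera (#1) gives X/Y for a marker; a perpendicular
    camera (#2) supplies the depth (Z) the front view cannot see."""
    return (front_xy[0], front_xy[1], side_depth)

print(blend_targets((0.0, 2.0, 0.0), (1.0, 4.0, 0.0)))  # (0.5, 3.0, 0.0)
print(combine_depth((0.5, 3.0), 1.2))                   # (0.5, 3.0, 1.2)
```

The blend is forgiving of small alignment errors between cameras (they average out), while the depth approach depends on the side camera being genuinely perpendicular and aligned to the same model.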
Don't know if you found this already, but maybe also for others looking for this... I found this Blender addon that does exactly that!
https://github.com/Uberi/MotionTracking/blob/master/README.md
The video is offline - please repost it or give a link so we can see it.