
Facial motion tracking


Allan Liddle shares how he did a facial motion capture project in Blender.

I made this video as an experiment for a potential customer: to prove whether I could successfully track the movements of a face in a video - as well as the expressions that go with it - and project those movements onto the modelled face of another character with different facial proportions. It was quite a mission!

Here is the result (don't be too concerned about the accent ;-) :

https://youtu.be/3zg_7HCjUKI

Everything was done in Blender and a bit of Gimp. I downloaded the facial texture from the Internet, but I had to edit it in Gimp.

Below is the composite footage that compares the animation with the original video. Notice the differences in facial features, which had to be catered for.

Summary of how it was done:

I first built the model of the character's face, and then rigged it using Pitchypoy.

Next, I made a video of my face. I then used camera tracking to move the model of my face and to track the movements of my eyes, mouth, etc. Thereafter, I used the markers in the video to drive empties that run along the surface of the model of my face (as it moves).
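The idea of turning a 2D tracked marker into an empty that slides along a surface can be sketched in plain Python (this is a conceptual illustration, not Blender's API - in Blender itself the Follow Track constraint with a depth object does this): cast a ray from the camera through the marker and intersect it with the face surface. A flat plane stands in for the face mesh here, and all the positions are made-up example values.

```python
def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Return the point where a ray hits a plane, or None if it misses."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal))
    t /= denom
    if t < 0:
        return None  # surface is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking down -Z; the tracked marker gives the ray
# direction; the "face" is approximated by the plane z = -2.
camera = (0.0, 0.0, 0.0)
marker_ray = (0.1, 0.05, -1.0)
hit = ray_plane_intersect(camera, marker_ray, (0.0, 0.0, -2.0), (0.0, 0.0, 1.0))
print(hit)  # the empty's 3D position on the surface
```

As the face model moves, the plane (or, in Blender, the mesh) moves with it, so the empties ride along the surface while still following the 2D tracks.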

There are bones in the rig of my face that then track those empties. The movements of those bones (in my face model) are then translated into movements of bones in the character's rig, which I also developed. Some of the bones of the Pitchypoy rig of the front character then copy the locations of the bones that are moved in this way.
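The retargeting step above - motion on my face driving a character with different facial proportions - boils down to simple arithmetic. Here is a conceptual sketch (not Blender's API; all positions and scale factors are invented for illustration): take the source bone's offset from its rest position, scale it, and re-apply it at the target bone's rest position, which is roughly what a Copy Location constraint with an offset achieves.

```python
def retarget(source_pos, source_rest, target_rest, scale):
    """Map a source bone's position onto a target rig with different proportions."""
    return tuple(tr + s * (sp - sr)
                 for sp, sr, tr, s in zip(source_pos, source_rest, target_rest, scale))

# A mouth-corner bone moves 0.02 units up from rest on my face; the
# character's mouth sits elsewhere and its features are 1.5x larger.
source_rest = (0.04, 0.00, 1.60)   # rest position on my face model
source_pos  = (0.04, 0.00, 1.62)   # tracked position (a smile raises the corner)
target_rest = (0.06, 0.00, 1.10)   # rest position on the character's face
scale       = (1.5, 1.5, 1.5)      # per-axis proportion difference

print(retarget(source_pos, source_rest, target_rest, scale))
```

Working in offsets from rest, rather than absolute positions, is what lets the same tracked motion land correctly on a face with different proportions.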

(I hope it all makes sense)

About the Author

Allan Liddle

I am a qualified electronics engineer, who migrated into software development/management, but I am also an artist. By combining my analytical and creative sides, I do 3D CG and animation (in the broad sense of the word) in my spare time. I do all my 3D work in Blender. I love the open source movement and do other work in the GIMP, Audacity, Inkscape, Open Office, etc. I am a Blender Foundation Certified Trainer (BFCT) and have provided training in various cities and in other countries.

7 Comments

  1. Is it possible to do tracking from multiple videos of the same thing? Like for example, if you setup 3 video cameras around something, and filmed on all 3 at once, and combined the tracking to get accurate 3D tracking of feature points?

    • I suppose it could be done. It would mean repeating the tracking 3 times. I just wonder how accurately one will be able to align the 3 cameras to the same model - and therefore how accurately one would be able to track the common markers?

      • Having said that: I think there may be 2 ways:
        1. The markers end up being empties that follow/slide along the surface of a model. Cameras #2 and #3 could have their own empties. In the final tally, the bone that has to follow an empty can be weighted to follow TWO empties: 50% each.
        2. Another way could be to somehow (I'm not sure how yet) let the perpendicular empty from camera #2 (and #3) determine the depth of the empties from the main camera (#1).
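The 50/50 weighting in option 1 can be sketched in plain Python (a conceptual illustration, not Blender's API; the empty positions are invented values): a bone weighted equally between two empties ends up at the midpoint of their positions. In Blender this falls out of stacking two Copy Location constraints, the second at 0.5 influence.

```python
def blend(pos_a, pos_b, weight_b=0.5):
    """Blend two 3D positions; weight_b is the influence of the second."""
    return tuple(a + weight_b * (b - a) for a, b in zip(pos_a, pos_b))

empty_cam2 = (1.0, 2.0, 3.0)   # marker position recovered from camera #2
empty_cam3 = (1.2, 1.8, 3.4)   # the same marker, recovered from camera #3
print(blend(empty_cam2, empty_cam3))  # bone target, approximately (1.1, 1.9, 3.2)
```

Averaging the two independently tracked positions also damps out tracking noise that is uncorrelated between the cameras.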

