
Siggraph - Face Modeling


This video shows Andy Goralczyk, the Art Director of Elephants Dream, working his magic by creating an alien face in only thirty minutes.

Andy did his work in complete silence, typing in occasional comments in Blender. Still, the audience was transfixed and a small group of spectators outside our booth was forming while he was working.

I'm afraid the recording quality of this video is a bit lower than that of the other ones - I think the automatic brightness setting was playing up here.

Download: mirror 1 | mirror 2 [QuickTime, 67 MB]
Note: right-click and 'Save As' to store the files on your hard disk

About the Author

Bart Veldhuizen

I have a LONG history with Blender - I wrote some of the earliest Blender tutorials, worked for Not a Number and helped run the crowdfunding campaign that open sourced Blender (the first one on the internet!). I founded BlenderNation in 2006 and have been editing it every single day since then ;-) I also run the Blender Artists forum and I'm Head of Community at Sketchfab.


  1. Thank you for posting these even though they are at such a high resolution. I've been very pleased with the first three so far. I can't wait to see what's in store for the last of the four. I hope it's something verse-related! :3

  2. This one especially would have been great with comments. It's a little hard to follow, because the video resolution doesn't clearly show the picked menu items or clicked buttons.

    But generally these videos are great.

  3. Hey Bart!
    Great job man!

    Don't let the idiots that don't know how to configure their browser and computer stop you reporting great stuff!


  4. Great work there guys!

    I always have a hard time making the area around the eyeball. This
    demo was a particular eye opener on that part. (Bad pun, I know :-P)

    Video quality is really good, and so far the audio was very clear, too.

    Just one suggestion: why use a camera to make a video of the demos?
    Wouldn't it have been easier to use screen capturing software to record
    directly off the framebuffer?

    Anyway, keep it up!!

  5. @Axel: we tried, but the screen capturing software kept crashing. I already had a (reliable ;-)) DV cam, so I decided to use that. Screen capturing is on my wishlist for the Blender Conference, though. If anyone has suggestions about the best solution (preferably a hardware-based solution - I don't trust screen capturing enough), I'd love to hear about them.

  6. Thanks Bart for the vids, although I don't know if the 'silent treatment' was the most informative for learning purposes :)

    And I guess no screen capture software because it slows down the PC it captures on ;)

  7. @Axel:
    Maybe Bart should answer that as to his motivation.

    But I can tell you something about taping the Blender conference(s).
    At the last bconf I taped the speakers with a video camera and recorded the screen to VHS (alas, there was no video capture device, so it went analog first).
    People don't only want to see the screen but also the guys doing the job.
    So ideally one would edit the (digital) screen capture together with live footage, especially when the speaker talks a lot and nothing happens in his/her PowerPoint presentation.

    Then, when people in the audience speak, the viewer expects that to be recorded too, so you need microphones in the room. Lots of them, because the speaker is at a conference, not a TV show, so people don't wait until there is a mic in front of them.

    Also, converting a 1280x1024 computer screen to a 720x576 PAL image does not always give a clear view of what's happening. You can't read the button names, for instance. Ideally the editor would tape in HD and then zoom in when needed.

    I found this too much work for a three-day conference. I can even imagine it being too much work for a half-hour demo if you have to do it the night after a day at the booth.

    Then there is the file size issue. Hours of footage at PAL resolution take up many bytes. The taped material is not compressed very well, so if you don't want to bring loads of hard disks to the booth, better not tape too much.
    Another 'wonder' of video is that it's still 'realtime'. Copying an hour of video costs... an hour! Generally speaking, an edit costs 6x the time you see. And the horror of internet video is that it takes the whole night to compress it.

    So, :) I think these Siggraph vids are a nice 'report' of what is happening, and not so much an 'optimal tutorial'. And I appreciate and acknowledge the amount of work that goes into making this possible. I'm sure BlenderNation is not on an Oprah budget, probably not even a Springer one.
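    (For scale, the file-size point above can be put in rough numbers. This is an illustrative sketch only, assuming the standard DV stream rate of about 3.6 MB/s - 25 Mbit/s of video plus audio and subcode overhead.)

    ```python
    # Back-of-the-envelope for the "many bytes" claim, assuming the
    # standard DV stream rate of roughly 3.6 MB/s.

    DV_RATE_MB_PER_S = 3.6  # approximate total DV data rate

    def dv_gigabytes(hours):
        """Approximate disk space (in GB) for `hours` of raw DV footage."""
        return hours * 3600 * DV_RATE_MB_PER_S / 1000

    print(round(dv_gigabytes(1), 1))  # a single hour is about 13 GB
    print(round(dv_gigabytes(8), 1))  # a full day of taping tops 100 GB
    ```

    At roughly 13 GB per hour of tape, a few days of demos quickly outgrows the disks you'd want to carry to a booth.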

  8. Hi! Thank you very much for these very interesting videos!

    For my part, as I have very little experience with spoken English, and as each speaker has his own accent, I often have difficulty understanding what is said.

    With a mute video, I don't have this problem !o)

    I have been impressed by the modelling speed, and I guess there are some keyboard shortcuts that I have not yet discovered after two years of using Blender every day!

    Great job!


  9. @bart: I was hoping to have hardware screen capturing, but good hardware is expensive :( .

    I'm doing some tests on adding a recording functionality to Blender, recording directly from OpenGL, if it is fast enough I'm happy with that. Let's see what my poor C skills are capable of.

    But if you do find a decent solution that isn't expensive and works on Linux, I can help with buying it ...

  10. @bart, @joeri

    Thanks for the answers. I didn't realize screen capturing was that much more work than cam recording. The "analog detour" just seemed a bit odd to me.

  11. Yup. The analog detour is not what you want.
    At the Waag there was equipment that converted the computer output to a video signal.
    That signal was used to create the live internet stream. Ideally one would plug in a digital video recorder or a media center computer to record straight to MPEG.

  12. ooooo, I've been looking forward to this one. Thanks Bart.

    I have in the past used Archos devices for doing hardware screen capture, but that is probably quite an expensive solution.

  13. @Symposius

    If I'm not wrong, Archos devices capture in a quite proprietary MPEG-4... and I've heard that it is not a very compatible format, especially if you want to convert it into something else.

    Also, the devices I have seen captured only from a composite video analog source, not from digital USB or FireWire ports.

    So the Archos solution seems to be one of the worst, in my opinion...


  14. Very good Blender demo but I kinda missed the sound.
    I hope @ndy introduced himself to the audience at least and thanked them afterward - might have seemed a bit rude otherwise.
    Yup, OK, so you're on a roll; waiting for no. 4, Bart...
    Thanks again for your efforts.
    Will you be posting Ton's full speech sometime?

  15. @Roubal

    It's in XviD actually (open source ISO-compliant MPEG-4). And I can easily convert those files to (pretty much) any file type I like using utilities like transcode. I have to say though, I'm not really into using open source software for its own sake. I like supporting OSS and I work on and donate to a lot of open source projects myself, but I don't think people should feel obliged to use it if it doesn't suit their needs (isn't that why Blender can output to any codec you have on your computer?). That's my two cents anyway.

    Out of interest, does anyone know of a hardware based screen capture that codes to something more open than XviD? I wouldn't mind trying one out myself to see how it compares.

  16. Cowdude is right.

    Making the videos available for download is great.

    I've tried to watch streamed video on several websites, and it is often choppy and difficult to play, and often even more difficult to save to disk if you want to review it later.

    So Thanks again !o)

  17. the reason why i didn't do the usual business talk (like on the days before) was simply because i couldn't go on talking about "how great and efficient blender's modeling tools are blah blah blah" with my actual models turning out like a bunch of crap.
    thankfully wybren pointed this out to me the day before, since i didn't really pay much attention to the model when i was talking ;) this time i just wanted to model something (more or less) nice for a change, duh! :)

    doing the comments as text objects in blender was actually kinda funny for the audience. there's absolutely nothing rude about it... and i don't think i came across that way. after the presentation people were asking me lots of questions and i could go into more detail.

    anyways, thanks for all your comments! i had a great time at siggraph, met the most extraordinary bunch of (blender) people you could find in this 'verse :)


  18. I use a Canopus ADVC-100 for capturing video from analog devices. It converts any analog signal (s-video, or composite video) into DV on the fly (it acts as a pass-through) which can then be captured the same way you'd capture footage from a DV camera (through a computer firewire port).

    I've used it to convert some videos to DVD where I was just having too many problems trying to do a direct conversion due to framerate mismatches and codec issues. I just played them back on the laptop, connected the S-video out to the ADVC, and then the ADVC to the firewire port on my desktop. The quality turned out quite nice. I've also used it to capture footage from VHS and Hi-8 cameras. It also works in reverse, so you can do 'print to tape' of your DV footage if you want to make VHS copies (not as important anymore as DVD is a pretty universal distribution format).

    For something like these, you would end up with downsampling (NTSC DV is 720x480) so you probably wouldn't be able to read buttons (unless your presenter system was running a relatively low res like 800x600) but you *would* get a clean transfer. If you ran your microphone to it, you would also not have to worry about sync issues that you sometimes have to worry about with audio captured through a soundcard and video captured through other means - as the audio and video are synchronized together into the DV stream that goes to the computer.

    None of this is a complaint about the existing videos ... just an option for the future since that discussion came up. :)
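    (The downsampling point in that comment can also be put in rough numbers. An illustrative sketch, using only the NTSC DV frame width of 720 pixels mentioned above - the exact survival fractions depend on aspect-ratio handling, which is ignored here.)

    ```python
    # How much horizontal detail survives when a desktop screen is
    # scaled down to an NTSC DV frame (720 pixels wide).

    DV_WIDTH = 720  # NTSC DV frame width in pixels

    def horizontal_survival(desktop_width):
        """Fraction of horizontal desktop pixels left after the resize."""
        return min(1.0, DV_WIDTH / desktop_width)

    for width in (1280, 1024, 800):
        print(width, round(horizontal_survival(width), 2))
    # At 1280 wide only ~56% of the columns survive, so fine button
    # text blurs away; at 800 wide ~90% survive, which is why a
    # low-res presenter system keeps the UI legible.
    ```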

  19. Sovereignncc-e

    About the screen capture that was being discussed earlier, couldn't you run a video out to a VCR, tape the whole thing, and then re-capture it? It would be a lot of work, but it might be worth it.

  20. As for recording the Siggraph demos... Siggraph ended last Thursday, so it's a bit late to capture them differently. Maybe make a note to mention it while plans for next year's Siggraph are being put together.
    Thanks again, Bart, for providing these files! They are great!

  21. Javier Reyes Guzmán - Puerto Rico

    Hi everyone:

    I know that this is not Blender news, but I think it is important.

    "Nova Design is selling Aladdin 4D into Open Source! We're talking source code, trademarks and everything! Nova Design, Inc. has put the entire rights and source code to one of their top software titles, Aladdin 4D, on an exclusive offer to the Open Aladdin 4D group ( who are raising the money for it." - News from their site

    The amount of money needed is $37,579.83; that information is from a Wikipedia article ( So, if you want to turn another piece of 3D animation software open source, make a donation. I know that Blender went through the same process a long time ago. Thanks in advance.


  22. Some miniDV cameras have analogue inputs which you can use to record analogue video to miniDV tape.
    This way you could use the video out of a graphics card and connect it to the analogue input of the camera.
    Once on DV tape, you can easily transfer the digital video to the computer for encoding without quality loss.
    Of course, you'd need an extra camera (one for the live conference and another for the screen capturing).

  23. - use encodedv for capturing analog input and/or ppm streams at PAL size - it can do it in real time to raw DV.
    - otherwise, if it's a DV cam: why not just capture the DV data directly to hard disk over FireWire? (using dvconnect, e.g.)

    Just my 2 cents,
    Peter "advertising his tool chain" Schlaile ;-)

  24. @Ian: Andy presented without talking, and I figured you didn't want to be listening to the Siggraph background noise all the time ;-) So yes, the videos are without sound - that's correct.

  25. Hmm, a modelling question:

    The way @ndy modelled was pretty impressive, but is that really a good way to start? If you model the face first, it means you have to add the head at the end (or extrude a sphere...).

    I made a quick model test the way @ndy did, which worked well for modelling only the face - I can't imagine how I could "add the head to the face" if I don't start with a sphere or a "To Sphere"-ed cube...

    Maybe someone could give me a short tip? "Ich sitz quasi voll auf der Leitung" (roughly: "I'm completely stumped") - just to say it in my native language ;-)

  26. How did he start this? Hard to make out. How does he get the eye socket mesh in the beginning to have curved lines instead of angular ones? Is it parented to the first object he made? Is he using an array? I wish he would have spoken and walked us through this because it is so awesome....

  27. @leif: He used the subsurf modifier along with the mirror modifier and some extrudes and a couple of cuts... that wouldn't be the problem. But I can't imagine how to add the head to the "standalone face" - I think it's the wrong method to extrude the head, isn't it? The head would never be that round IMO!

    Any ideas?

  28. Hey Andy, you are a great modeler. I wish to see you create something that is friendly looking - it is always aliens and monsters. I do see that most of your stuff is dark. Let's see some pictures that my children and I can look at.

  29. Okay, well, I know TV shows like the old Screen Savers used the same method you did, so don't feel bad.

    BUT maybe get a graphics card with a TV out and run that TV-out cable to a MythTV box or something? Some sort of TiVo-type deal may work. That's mostly hardware based, though, and you might even want to run the mic input into the box as well.

    Another route would be a fixed camera shooting straight at a fixed monitor, say an LCD screen; the person could then slowly maneuver the camera around the screen, zooming in and out appropriately (the exact technique the Screen Savers used). But like people before me stated, the thing that makes these so cool is that you see the man doing the work and also see and hear the conference.

    In my opinion, just put one camera up in a fixed position, pointed more straight on, and make sure the talker is off to the side in front of a black background. Also, for a tutorial's sake, you could transcribe the entire thing in tutorial form, so people could follow along while they watched. And since you already seem to use flex for the picture, you could make streaming video with pause, with the current step shown next to it or on the side, following the tutorial as he goes.

    But I know, Bart, you do all this yourself and it's already a lot of work, so these are all mere suggestions. If you'd like, maybe we can discuss some options, as I might be able to do some of the flex stuff - that is, if you use flex.

    Well, good luck :)


  30. Umm, I also want to say I'm having trouble downloading this. I get to a certain point and it drops me. And I cannot load it in-browser, so...

  31. "If anyone has suggestions about the best solution (preferably a hardware based solution - I don’t trust screen capturing enough), I’d love to hear about them."

    My video card has a video output (S-Video, which most modern video cards have), which with a simple adapter connects to my VCR.

  32. Chris: thanks! it's simple though, ugly things are easier to model than beauty. i'm still not good enough to model something nice to look at (oh... and i'm also mostly in a dark mood) :)

    tom: i usually model the body the same way i do the face, starting from the same (head) mesh and sketching out the rough shapes and topology first. it tends to be a bit messy, i admit... and you kinda need a good picture of your concept in your head, since you never have the 'complete' shape in front of you (which is the case for box modeling - which i usually avoid for organic stuff)


  33. @ndy I don't buy that. I think you are a great modeler. If you are mostly in a dark mood then I think that I know where to get help :)

  34. @Tom
    Select the edges in the forehead and the side of the face down to about the ear. Switch to side view. Extrude, move the new edges back and rotate them so they radiate from about where the ear hole will be. Do this 4 or 5 times, until your new edges are horizontal (take the time to round out the first new edges before you do the next ones - it saves time). That's the back of the head. Then extrude straight down once or twice for the back of the neck.

    Take the edges along the jaw line, extrude and scale in a bit. Do this once or twice for the underplane of the jaw.

    Take a single edge near the ear hole and jaw line, and extrude a series of faces toward the notch between the clavicles. Extrude the back edge of the jaw underplane down for the throat.

    Make connecting faces as needed. Shape the back of the neck and the jaw/throat. Takes about ten minutes. Gets faster with practice.

  35. Are you planning to post this in
    Just a personal opinion: it's a great site (YouTube).

    And thanks for the videos :)


  36. Andy is very talented and I know Blender can be made to do some fantastic things so apart from the way I saw @ndy work uv, that demo to me was completely useless.

  37. excellent video!
    while i was watching it i was thinking "how the...?" and "what the...?", i guess there's just some more important blender stuff that i haven't discovered yet...

    and yeah, a written step by step would really be great, especially to uncover some of the more "hidden" functions (and their functions ;)...

    keep up the excellent work! :D

  38. @leif:

    Original question:

    "How did he start this? Hard to make out. How does he get the eye socket mesh in the beginning to have curved lines instead of angular ones? Is it parented to the first object he made? Is he using an array? I wish he would have spoken and walked us through this because it is so awesome...."


    Until about 5 minutes ago, I had the same question, so I played around with subsurf modeling and the modifier a little bit (I assume you know how to add the subsurf modifier in Blender). Well, after you add it, notice the three buttons to the right of the modifier name (in this case, Subsurf). Click on the rightmost one. Next to that rightmost button, a new checkbox appears. Click it, and all of your edges and faces curve to match the subsurfing, and all of your vertices follow the subsurfed shape rather than the original cage.

    For the mirroring, just add the mirror mod and select the axis to mirror around. Very cool.
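    (For anyone wondering why the angular cage turns into curved lines at all: subdivision works by repeatedly splitting and averaging the mesh. A minimal 2D analogue is Chaikin corner cutting - not Blender's actual Catmull-Clark implementation, just the same idea in its simplest form.)

    ```python
    # Chaikin corner cutting: each pass replaces every edge (p, q) of a
    # closed polygon with two points at 1/4 and 3/4 along the edge.
    # Repeated passes converge to a smooth curve, which is why a blocky
    # cage shows up as curved lines once subdivision is applied.

    def chaikin(points, iterations=1):
        """Refine a closed 2D polygon given as a list of (x, y) tuples."""
        for _ in range(iterations):
            refined = []
            n = len(points)
            for i in range(n):
                (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
                refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
                refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
            points = refined
        return points

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    smooth = chaikin(square, iterations=3)
    print(len(smooth))  # each pass doubles the points: 4 -> 8 -> 16 -> 32
    ```

    After three passes the square is already a near-circular outline; the refined points always stay inside the original shape, just as a subsurfed mesh hugs its cage.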

  39. Hello everyone,

    I would like to ask you all for help.
    Could someone send me a document that explains how data is read from a hard disk (how it works inside the PC)?

    Thanks in advance,
    Sophea
