MeshFlow, a system for interactively viewing how artists construct meshes in 3D applications, was presented in a paper at this year's SIGGRAPH conference. The researchers used Blender to capture the modelling process for several kinds of complex models, then used the MeshFlow system to cluster the modelling operations into expandable chunks. This lets the viewer replay the modelling process at an overview level, or dive in deeper to see each individual operation, all whilst being able to view the model from any angle.
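The clustering idea can be sketched roughly as follows — a hypothetical operation log grouped into expandable chunks of consecutive operations of the same kind. The op names and log format here are my own illustrative assumptions, not the paper's actual data model:

```python
# Rough sketch of MeshFlow-style clustering: consecutive modelling
# operations of the same kind collapse into one expandable chunk, so a
# viewer can replay at chunk level or expand a chunk to see each op.
# Operation names and log structure are illustrative assumptions.

from itertools import groupby

def cluster_ops(op_log):
    """Group consecutive operations of the same kind into chunks."""
    chunks = []
    for kind, ops in groupby(op_log, key=lambda op: op["kind"]):
        ops = list(ops)
        chunks.append({"kind": kind, "count": len(ops), "ops": ops})
    return chunks

# A toy operation log, as might be captured from a modelling session.
log = [
    {"kind": "extrude", "face": 1},
    {"kind": "extrude", "face": 2},
    {"kind": "translate", "dx": 0.1},
    {"kind": "extrude", "face": 3},
]

overview = cluster_ops(log)
# Overview level shows three chunks; expanding one reveals its ops.
print([(c["kind"], c["count"]) for c in overview])
# prints [('extrude', 2), ('translate', 1), ('extrude', 1)]
```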
From the paper's abstract:
The construction of polygonal meshes remains a complex task in Computer Graphics, taking tens of thousands of individual operations over several hours of modeling time. The complexity of modeling in terms of number of operations and time makes it difficult for artists to understand all details of how meshes are constructed. We present MeshFlow, an interactive system for visualizing mesh construction sequences.
MeshFlow: Interactive Visualization of Mesh Construction Sequences
7 Comments
It's really interesting stuff. I think it could be really nice for people who like making tutorial/timelapse videos:
this way you don't need to record your modelling "live" every time, but can play it back and screen-record after you have finished modelling.
Found a nice demo video of MeshFlow in action too: http://www.youtube.com/watch?v=oTH7zwS_Wto
Two and a half hours for a hydrant model? I can make it in 10 minutes... in 30 minutes I can make it in high-poly + low-poly with textures :)
You should make a video timelapse of your hydrant modeling so that we can compare it for detail.
I agree with chrome monkey.
Ahhhh man, this would be REALLY helpful if it took off! So many times I have ended up confused and not a little disheartened when trying to work out how a pro created a model in a video or tutorial and I can't make something out.
Not only that, but this would be good if it were adapted to undo stages, similar to SVN in coding. The user could mark a stage as completed and move on to the next stage; then the MeshFlow system could take the modeller back and forth between stages as required.
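The checkpointed-undo idea here could be sketched like this — a stage history that works like SVN revisions. All names are hypothetical; this is the commenter's suggestion, not an actual MeshFlow feature:

```python
# Sketch of stage-based undo, as suggested above: the user commits
# snapshots as completed stages (like SVN revisions) and steps back
# and forth between them. All names here are hypothetical.

class StageHistory:
    def __init__(self):
        self.stages = []   # committed snapshots, oldest first
        self.current = -1  # index of the stage being viewed

    def mark_stage(self, snapshot):
        """Commit the current model state as a completed stage."""
        # Marking a new stage after stepping back discards the
        # stages ahead of the current one.
        self.stages = self.stages[: self.current + 1]
        self.stages.append(snapshot)
        self.current = len(self.stages) - 1

    def back(self):
        """Step to the previous stage (stays at the first one)."""
        if self.current > 0:
            self.current -= 1
        return self.stages[self.current]

    def forward(self):
        """Step to the next stage (stays at the last one)."""
        if self.current < len(self.stages) - 1:
            self.current += 1
        return self.stages[self.current]

# Usage: commit three stages, then navigate back and forth.
h = StageHistory()
h.mark_stage("blockout")
h.mark_stage("detail pass")
h.mark_stage("final")
print(h.back())     # prints detail pass
print(h.back())     # prints blockout
print(h.forward())  # prints detail pass
```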
Interesting read, and nice to see that Blender is useful to the academic world. Because you can change the source, implementations like this are possible.
For the people interested in the code, check out this page: http://www.cs.dartmouth.edu/~jdenning/projects.php
This would be invaluable for the MakeHuman project, because animating a photorealistic human is especially difficult: everyone knows how humans are supposed to look. With this technology, one could track shape keys all the way through the animation, starting with correct curvilinear forms and adding muscle flow and vein location as separate processes.