During the initial development stages, Kai writes:
I had another strange idea: an implementation of micropolygons in Python. It's probably the worst micropolygon implementation ever made, because it's so slow and needs unbelievable amounts of memory, but at least it works.
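The post doesn't include the script itself, but the core dicing step of a micropolygon approach is splitting each face into a grid of subpixel-sized quads before displacing it. A minimal sketch of that step might look like this (the function name and the bilinear-grid approach are my illustration, not the actual script):

```python
def dice_quad(p0, p1, p2, p3, rate):
    """Subdivide one quad into a rate x rate grid of micropolygon
    vertices by bilinear interpolation of its four corner points."""
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

    grid = []
    for j in range(rate + 1):
        v = j / rate
        # interpolate along the two opposite edges, then across
        left = lerp(p0, p3, v)
        right = lerp(p1, p2, v)
        grid.append([lerp(left, right, i / rate) for i in range(rate + 1)])
    return grid
```

Doing this per face in pure Python explains the memory and speed figures quoted below: every vertex of every micropolygon grid is a separate Python tuple.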
I did some optimizations for motion blur.
Check this example mapped with depth images:
I just put Suzanne there, produced a depth map straight from the z-buffer, and then applied the displacement modifier to it. To be exact, two of them, because I used two different UV layers, one for each size. I used this image:
Render time: 6 minutes for 12,000,000 micropolygons
Not bad for a python script.
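The depth-map workflow above boils down to sampling a grayscale image through a UV layer and pushing each vertex along its normal by the sampled height. A rough sketch of that displacement step, under my own simplified data layout (flat lists of vertices, normals, and UVs, with the image as a 2D list of height values):

```python
def displace(verts, normals, uvs, heightmap, strength):
    """Offset each vertex along its normal by the height sampled
    from a 2D grid of values at the vertex's UV coordinate."""
    h = len(heightmap)
    w = len(heightmap[0])
    out = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(verts, normals, uvs):
        # nearest-neighbour sample, clamped to the image border
        px = min(int(u * (w - 1)), w - 1)
        py = min(int(v * (h - 1)), h - 1)
        d = heightmap[py][px] * strength
        out.append((x + nx * d, y + ny * d, z + nz * d))
    return out
```

Running two such passes with two different UV layers, as described above, is just applying this function twice with a different `uvs` list each time.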
First Release (the code has been ported to C and integrated into a test build as a modifier)
This is the first test build (for Windows), and there are still a few known issues left to solve:
- There is no motion-blur correction or optimization yet. The modifier is calculated for each subframe for regular motion blur, and vector blur won't work at all for now.
- When rendering, the modifier is always calculated three times per frame, even when vector blur and speed vectors are disabled. I don't know why Blender does this; it's strange.
- In the viewport, on time changes, the animated camera position and rotation are always one frame behind. I don't think the modifier is the reason for this; G.scene->camera->obmat simply returns old values.
- Some hacks, invisible to the end user, were needed to bypass some strange behavior of a meshtool function. I need to talk to a Blender developer about this.
It's slow but it should work.
Download it from graphicall.org
I've also created a patch in case you want to build it for your own platform. There is a lot of old code in there, but I'll clean it up once the problems are solved.