A SIGGRAPH research paper proposes a new approach for filtering noisy renders. Could this be the end of Cycles fireflies?
Abstract
The most successful approaches for filtering Monte Carlo noise use feature-based filters (e.g., cross-bilateral and cross non-local means filters) that exploit additional scene features such as world positions and shading normals. However, their main challenge is finding the optimal weights for each feature in the filter to reduce noise but preserve scene detail. In this paper, we observe there is a complex relationship between the noisy scene data and the ideal filter parameters, and propose to learn this relationship using a nonlinear regression model. To do this, we use a multilayer perceptron neural network and combine it with a matching filter during both training and testing. To use our framework, we first train it in an offline process on a set of noisy images of scenes with a variety of distributed effects. Then at run-time, the trained network can be used to drive the filter parameters for new scenes to produce filtered images that approximate the ground truth. We demonstrate that our trained network can generate filtered images in only a few seconds that are superior to previous approaches on a wide range of distributed effects such as depth of field, motion blur, area lighting, glossy reflections, and global illumination.
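The core idea in the abstract — a small neural network that looks at noisy scene data and outputs per-pixel parameters for a feature-based filter — can be sketched in a few lines. The snippet below is a toy illustration only, not the authors' implementation: the weights are random stand-ins (the paper learns them offline), the features and bandwidths are simplified, and all names are hypothetical. It pairs a tiny MLP that predicts positive filter bandwidths with a cross-bilateral filter guided by shading normals.

```python
import numpy as np

def mlp_predict(features, W1, b1, W2, b2):
    """Tiny MLP: per-pixel feature vectors -> per-pixel filter bandwidths.
    Weights here are random placeholders; the paper trains them offline."""
    h = np.tanh(features @ W1 + b1)
    return np.exp(h @ W2 + b2)  # exp keeps the bandwidths strictly positive

def cross_bilateral(color, normals, sigmas, radius=2):
    """Cross-bilateral filter whose color/normal bandwidths vary per pixel."""
    H, W, _ = color.shape
    out = np.zeros_like(color)
    for y in range(H):
        for x in range(W):
            s_c, s_n = sigmas[y, x]       # predicted bandwidths at this pixel
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        dc = np.sum((color[yy, xx] - color[y, x]) ** 2)
                        dn = np.sum((normals[yy, xx] - normals[y, x]) ** 2)
                        w = np.exp(-dc / (2 * s_c**2) - dn / (2 * s_n**2))
                        acc += w * color[yy, xx]
                        wsum += w
            out[y, x] = acc / wsum        # center weight is 1, so wsum > 0
    return out

# Demo on random data (stand-in for real render buffers)
rng = np.random.default_rng(0)
H, W, F, HID = 6, 6, 4, 8
feats = rng.normal(size=(H * W, F))
W1, b1 = rng.normal(0, 0.1, (F, HID)), np.zeros(HID)
W2, b2 = rng.normal(0, 0.1, (HID, 2)), np.zeros(2)
sigmas = mlp_predict(feats, W1, b1, W2, b2).reshape(H, W, 2)
color = rng.random((H, W, 3))
normals = rng.normal(size=(H, W, 3))
denoised = cross_bilateral(color, normals, sigmas, radius=1)
```

In the actual paper the network is trained end-to-end with the filter in the loop, so the predicted bandwidths minimize the error against ground-truth renders rather than being arbitrary as here.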
8 Comments
This seems to be a denoising filter working on information that isn't limited to Metropolis renders.
But Metropolis was used for training the neural network, so it would work best on Metropolis renders.
I think it could have been trained just as well on the other rendering methods we currently have,
and then perform denoising optimized for them.
It takes time to train a neural network, but once trained, it executes quickly over its data.
And if we had multiple training data sets, one for each render method, it would just be a matter of applying the right model to the neural network.
I think what makes this different from other neural denoising filters
is that those filters don't take into account the extra data a 3D engine can provide; they were trained for photographic noise, learning from flat 2D photos...
I hope we can get this into Blender too, some day.
Wow, we need it, we want it!!! Thank you
in their source code ( http://cvc.ucsb.edu/graphics/Papers/SIGGRAPH2015_LBF/ ) they've written an exporter for Blender... not sure what it does though.
The source code is pbrt-v2 with their additions.
http://www.pbrt.org/
There is a discussion about the exporters here:
https://groups.google.com/forum/#!topic/pbrt/QNfwTauVeTI
The modified code produces three outputs: the noisy image, the filtered result, and timing info, in case you want to test scenes.
Now I wonder whether you could extend this to find, in general, the relationship between an arbitrary scene and the final image, so that given a scene you could basically produce the final result in a single step. And whether it'd be possible to do this in real time.
My guess is that, to work properly, a huge network would be required, which, in turn, would be too slow to evaluate at a decent frame rate. But I wonder just how fast it could get within acceptable visual error (~1px, ~1/255th RGB error or, given some higher dynamic range, something equivalent).
The math behind rendering is pretty complex; I doubt a neural net can be trained to do the whole thing.
However, as they show, it is able to optimally polish a render, and that means fewer render samples for an almost identical result. Polishing while keeping in mind textures, face orientation, distance, speed, glossiness, etc.
A neural network would readily trust pixels on the same face to be of equal color and average them while maintaining border sharpness. It won't add detail, but it would probably know how much blurring is optimal, and combined with the raw render data it could provide better denoising results than GIMP or Photoshop.
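The intuition above — trust pixels on the same face, distrust pixels across a geometric edge — can be made concrete with a single filter weight driven by shading normals. This is a minimal sketch, not anything from the paper's code; the function name and the bandwidth value `sigma_n` are hypothetical:

```python
import numpy as np

def feature_weight(n_p, n_q, sigma_n=0.2):
    """Filter weight between pixels p and q from their shading normals alone:
    close to 1 on the same flat face, close to 0 across a geometric edge."""
    d2 = np.sum((n_p - n_q) ** 2)
    return np.exp(-d2 / (2 * sigma_n**2))

same_face = feature_weight(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
edge = feature_weight(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
# same_face ≈ 1.0, edge ≈ 0: noise on a flat face averages away,
# while the crease between two faces stays sharp.
```

A plain 2D denoiser working from pixels alone has to guess where such edges are; with the renderer's normal buffer, the weight falls out directly.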
I was going to write a long description of why Cycles' crowded megakernel couldn't possibly handle an unproven/untested feature like this, before I realized what this actually is: a denoising filter grounded more deeply in the way rendering works than others are.
Also interesting to me is what this might mean for realtime raytracing, which is clearly (in my opinion) where realtime graphics are headed. It's already at the point where the noise disappears within a few seconds; perhaps some adaptation of this could bring realtime noise down to more manageable levels?
I imagine that the Blender Institute could compute the method (run the machine learning, adapt it to Cycles) for denoising, then distribute that result, adding the function as a compositing node (on by default?). Best of all, there's no reason to crowd the Cycles code at all — all Cycles might really have to do is provide some information while computing the method, like on motion blur or deformations. The user wouldn't even feel it.
I hope the devs take notice of this; it would be refreshing to see Blender finally taking the leading edge on new tech. It would certainly do wonders for publicity in the industry. That said, real results are often less amazing than papers make you believe. Still!
Wonderful!! But I can't help thinking that they've stacked the deck slightly in their favor. I don't see a single piece of clear glass in the presentation, and that may be a downfall of sorts for this type of noise filter. Remember that bilateral nodes work a real miracle until we try to look through a window in archviz renders... then the filter process falls apart and the glass appears frosted. This is all very exciting, however... can't wait until someone implements it in Blender.