(Watch in HD!!)
Jorge Jimenez writes:
These last months I've learned a very important lesson: efforts toward rendering ultra-realistic skin are futile if they are not coupled with HDR, high-quality bloom, depth of field, film grain, tone mapping, ultra-high-quality models, parametrization maps, high-quality shadow maps (which are lacking in my demo) and a high-quality antialiasing solution. If you fail at any of them, the illusion of looking at a real human will be broken. Especially in close-ups at 1080p; that is where the real skin-rendering challenge lies.
As some subtleties (like the film grain) are lost in the online version, I encourage you to download the original Blu-ray-quality version below to better appreciate the details and effects rendered (but be aware that you will need a powerful computer to play it). Please note that everything is rendered in real time; in fact, you can also download a precompiled version of the demo (see below), which plays the shot sequence of the movie from beginning to end. The whole demo runs between 80 and 160 FPS, with an average of 112.5 FPS on my GeForce GTX 580, but it can be run on weaker configurations by using more modest settings.
Link
54 Comments
beautiful
holy shit that's amazing
If I render a turd with HDR, high quality bloom, depth of field, film grain, tone mapping, ultra high quality models, parametrization maps, high quality shadow maps and a high quality antialiasing solution, it will also look awesome...
There is nothing amazing about this IMO...
I'd recommend adding the smoke sim to that list, for realistic steam.
Go ahead, I'll feature it.
In Tha Face!
Nice one, Bart!
But the video *is* rendered, so 'real-time' seems to be a bit misleading for some people.
Edit:
WOW that research page is really, REALLY nice. If anyone has the time to read it, you should! Incredible how much time has gone into this!
Yes, there is nothing at all amazing about real-time separable sub-surface scattering. Really there isn't. Right. Sure.
I think you failed to notice that the demo renders in real time!
You also fail to read. This result was done with a 2-step approach as opposed to the usual 12-step approach. So, yeah...this is amazing.
Some people just can't appreciate other people's good work.
Hahaha! I said that earlier, and then edited it out, and now I'm glad you said it. :)
Your opinion SUCKS!
regards,
QFox
this was already in CryEngine years ago...
http://freesdk.crydev.net/display/SDKDOC2/HumanSkin+Shader
you seem to be incapable of reading.
seriously read that damn text!
Just a word: impressive. The final result seems very similar to NVIDIA's human head demo, though I think the shadows in that demo are handled better. Anyway, I'm always surprised by the GPU's power; surely this technology is the future of 3D visualization (real-time and not).
Amazing stuff.
I'm sorry......... REAL TIME??????
Amazing!!!
Why music is so dramatic ?
Is combination of instrumentation and dynamics and tempo for why music sound so dramatic.
Thanks. Now I know it.
LOL, you guys crack me up... some people just don't get it :) Great vid by the way, most impressive!
the demo is realtime, see for yourselves guys: download the demo; it even runs on DX10, not just DX11...
it runs at 70 FPS on my GTX 460... the demo is really configurable... anyway, the realistic smoke/steam is coming... I've seen it in NVIDIA tech demos on YouTube...
*speechless*
Was just waiting for him to open his eyes :P
That would have freaked me out. Seriously.
I think it is amazing. I haven't seen a Blender render as realistic as that. Amazing even here on an old mainstream HD 4650 at 21 FPS.
Is that Peter Bishop from Fringe? If so, awesome. If not, um... still awesome.
All this in real-time? And done with a 2-step approach as opposed to the traditional 12-step approach? I don't care what anyone says--that's amazing, my friend. Excellent job, Jorge!
So who's going to bring this to Blender ;-)
cool stuff. It's written in DirectX, boo. I hope an OpenGL counterpart emerges.
OH MY GODDDD!!!!!
ikeahloe: see the comment about OpenGL. Jorge Jimenez (February 7, 2012 at 6:04 pm): "[...] yes, the code is abstracted from the API. You can take a look into it here! https://github.com/iryoku/separable-sss/blob/master/SeparableSSS.h"
Sounds more or less simple (not for me ^^). But the code seems not to be free (would it need to be rewritten?).
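For anyone wondering what "separable" buys you here, below is a toy numpy sketch (my own illustration, not the actual HLSL in SeparableSSS.h) of the core trick: replacing one expensive 2D blur with two cheap 1D passes. For a k-tap kernel that means roughly 2k texture reads per pixel instead of k*k.

```python
import numpy as np

def blur_1d(img, kernel, axis):
    """Convolve every row (axis=1) or column (axis=0) with a 1D kernel."""
    return np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode="same"), axis, img)

def separable_blur(img, kernel):
    """Two 1D passes (horizontal, then vertical) instead of one 2D pass."""
    return blur_1d(blur_1d(img, kernel, axis=1), kernel, axis=0)

# A normalized 1D kernel standing in for a skin diffusion profile.
kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel /= kernel.sum()

# A single lit texel; the blur spreads its light like scattering under skin.
img = np.zeros((32, 32))
img[16, 16] = 1.0

sep = separable_blur(img, kernel)
# This equals a full 2D blur whenever the 2D kernel is an outer
# product of two 1D kernels; total light (sum) is preserved.
```

Real skin diffusion profiles are not exactly separable; the point of the research is that a very good separable approximation of them exists, which is why two passes can stand in for the usual twelve.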
Downloaded the demo, and my jaw dropped through the floor.
HO-LY SH*T! That's amazing. It was so breathtaking that I kept looking at that guy as if he could blink his eyes at any moment or break into a smile!
Why is everyone amazed by this? lol This doesn't look realistic at all.. Looks like a plastic surface...
Get your eyes checked, noob.
The impression of realism is subjective. There should be some scientific method to estimate how realistic an image is. Maybe take a photo of real skin at the same distance/lighting as the rendered image, and then compare pixel values.
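For what it's worth, the photo-vs-render comparison suggested above could be sketched like this. It's a crude per-pixel metric (the patch values here are made up for illustration), and a perceptual metric such as SSIM would track human judgment much better:

```python
import numpy as np

def rmse(photo, render):
    """Root-mean-square error between two same-sized images with
    values in [0, 1]. Lower means a closer pixel-level match."""
    photo = np.asarray(photo, dtype=float)
    render = np.asarray(render, dtype=float)
    assert photo.shape == render.shape, "align and crop the images first"
    return float(np.sqrt(np.mean((photo - render) ** 2)))

# Hypothetical 2x2 grayscale patches standing in for the skin crops:
photo = np.array([[0.80, 0.75], [0.78, 0.76]])
render = np.array([[0.82, 0.74], [0.77, 0.79]])
print(rmse(photo, render))  # about 0.0194
```

In practice you would also have to match camera response, white balance and exposure before the pixel values are comparable at all, which is part of why "how realistic is it" stays stubbornly subjective.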
My objection was to the degree to which the other poster found it laughable. What *is* laughable is the assertion that this demonstration of realtime separable subsurface light scattering was somehow indistinguishable from a simplistic rendering of a hard diffuse surface with an unrefined Phong specular (which is in fact how plastic surfaces are rendered). It's like twenty people seeing a glass of milk, and one bystander asserting that the beverage resembles nothing more than a stick of white chalk. It simply beggars belief.
By calling me a "noob", you sound like a 10-year-old who discovered 3D recently; maybe you should stick to video games, where the term "noob" is more appreciated.
I've been doing 3D since the very early 90's; that's over 20 years now. I've read and collected various magazines (I have a huge collection of CGW from the 90's; it was pretty much the only respected magazine about computer graphics) and books since the very beginning. At the time, in those magazines and books, an image of a bunch of ray-traced grouped primitives that resembled a "bee" was "photo-realism". A total joke if you look at it now.. but at that time, being a novice, like everyone else (since the industry was so new), I thought, yeah, wow, "photo-realism"! Now I know better; I have been doing art for a very, very long time. With age you also learn to see, observe, and know the minute details of the world very well. I know quite well when I see photo-realistic skin; after all, I have been looking at skin for 20 years longer than you have.
You guys should spend a bit less time in 3D; go out and do some photography, drawing, painting, sculpture, if you want to see and understand what real skin looks like. That would make you a better artist overall..
Don't try to pull rank with me on this one, it won't wash. I never mistook a ray-traced polygonal mesh for photorealism, not even 25 years ago. The problem you appear to be having here is one of context. What people are astounded at here is that a real-time procedure can do this good of a job at it, and though "this good of a job" is indeed a subjective and relative statement to make, here it applies, in context. You should be able to clearly see that this is not a simplistic light model being demonstrated here.
One might compellingly argue that the effects were overdone here, and again this would be missing the context of the point, which was to demonstrate one possible model for a real-time procedure for conveying the scattering of light. Perhaps nobody can stop you from being this dismissive of it, but that doesn't make it any less of a technical achievement.
But mostly, I'm just dismayed that someone with such highly developed artistic sensibilities sees absolutely no daylight (no pun intended) between this and a plastic-like Gouraud/Phong rendering, or that someone who is aware of the technical issues can dismiss a real-time implementation of this as meaningless.
(And the "you guys" comment was as telling as anything... you appear to approach things from a standpoint of generalities and absolutes. Again... context matters. Do you really think you have been looking at skin for two more decades than we have? You would have to be in your seventies for that to be the case.)
and maybe you should spend a bit less time in 3D and learn some OpenGL or DirectX programming, if you want to know how hard it is to do that sort of thing in REALTIME at more than 60 fps.
I'd like a link to what you perceive as realistic (in 3D, mind you).
Perhaps through all your majestic observations we can learn from a non-fellow artist.
After all; you've been waiting to state your self-announced status, you might as well keep going with it and actually drop some knowledge.
I'm beginning to think the phrase "paper tiger" might apply there.
I've been in the biz since we made "photorealism" with pencils and airbrush.
Look at those wristwatch images in magazines: they are all made up of a mix of photos and 3D renderings. That's because it's the easiest way, and art directors like it, for now at least. This is the kind of photorealism that earns you money with still renderings.
The kind of photorealism spoken of in this post turns out to be useful, and very much so, in the VFX field and in games, where you have to trick the eye relying on speed, not betting on prolonged perception.
After pondering this for a few hours, my take is that something like this demonstration serves not so much the purpose of being the end-all, but to present a suitable starting point for artists working with this equipment to use as a reference. There will be individual works created with the tools in this demo that will be worse, on par with, or better than, the examples in this demo.
It will be the responsibility of artists with exposure to this platform, artists with highly developed eyes and skills, to show us the ultimate strengths and limitations of this new gadget.
When the very best of the best artists kick the tires on this one, we will be able to see what they can do with a more subtle approach. Having the tools is good, but overusing them can break the illusion just as surely as not having them at all. The fact that this had a "waxy" (NOT "plastic") appearance shows that subsurface scattering was at work, and accomplished in real time, and in this respect, the demo works.
The demo is only a demo after all, not a declaration of perfection in any way. Perfection is something to be pursued and sought after, and our reach should always exceed our grasp. My philosophy on this, anyway.
I'm sorry, but I'm not going to get all excited about plasticky skin (or waxy), "OMG" "holy shit" "speechless" etc.. whether it's done in real time or not.. However, I do appreciate the work the guy did, innovation, experimentation and progress is always good.
CG (3D) is AWESOME if it's done properly, if not, all you have to do is go on youtube and see how many people whine about cheap CG effects in expensive blockbuster movies, and how in the 80's FX were done better without the use of CG..
Much appreciated... and I would have personally led with that, instead of the "LOL" comment. My own "noob" comment was equally undesirable of course (and as you pointed out, it did little to nothing for my appearance here), but even money says I may not have overreacted otherwise. Separately, I tend to take the YouTube opinions with a large shaker of salt.*
-- * I say this regarding the YouTube folk because there are commentators on that site for whom, and of this I am convinced, no CG is good enough, because they expect to see a visibly polystyrene model with tons of greeble filmed through a ridiculously wide-angled lens at close range. It's not that such an approach accurately captures reality either; it's more of a larger-than-life caricature of it, but it's a defining reference point that stays with them.
I've overstayed my welcome in this topic/thread.
I was a bit disappointed that he didn't add a little bit of mo-cap animation to prove the realism thing. Not that we haven't seen excellent motion capture faces in real-time, but it would certainly prove the point that realistic humans are indeed possible in true real-time.
Did anyone else notice the model was missing eyelashes? That's the part about it that gave me those "unnerving" creepy-crawlies the most. I could overlook the waxiness of the subsurface scattering as a little bit of artistic license (or as I suspect might be the case, a little bit of "overstatement" to emphasize how many realtime shaders were being moved around at once), but there's something about a face with stubble and brows but no lashes that's almost subliminally distracting. Mostly though, what this demo does for me is to make me anticipate what the next two iterations of development could have in store for us.
Impressive indeed. From a technical standpoint...
However, personally I'm sick of those games that try to mimic realism. Most of those realistic-looking games lack a satisfying core gameplay. They focus on pushing the graphics instead of remembering what really counts. At least for me. I still prefer the games I used to play back in the 90's, when games actually were clever and fun to play.
Nowadays, the most innovative games come from indie devs.
That's because we forget about the games in the 90s that lacked both core gameplay and visuals. Truly good core gameplay is as rare as a lightning strike, while superior fidelity in graphics is becoming trivial to achieve. Thus for me, the two are unrelated, and it's trivial to give good gameplay a visual facelift. As the little girl asks in the taco commercial, "¿Por qué no los dos?"
can we use this in Blender?
Does Blender have the ability to render incoming light to UV space? If so, this method could be duplicated with a network of blurring and mixing nodes, and applied back to the surface.
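Whether or not Blender's nodes can do it, the blur-and-mix idea itself is simple. Here is a rough numpy sketch (my own illustration in the spirit of texture-space sum-of-Gaussians skin shading, not actual Blender node code, and the widths and weights are made up): blur the UV-space lightmap at several widths and mix the results, with the broader blurs standing in for light that traveled further under the skin.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian with the given standard deviation."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur2d(img, kernel):
    """Separable blur: horizontal pass, then vertical pass."""
    h = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, h)

def diffuse_irradiance(uv_light, sigmas, weights):
    """Weighted mix of blurred copies of the UV-space lightmap."""
    out = np.zeros_like(uv_light)
    for sigma, w in zip(sigmas, weights):
        radius = 3 * int(np.ceil(sigma))
        out += w * blur2d(uv_light, gaussian_kernel(sigma, radius))
    return out

# A single lit texel in a 64x64 UV lightmap.
uv_light = np.zeros((64, 64))
uv_light[32, 32] = 1.0

# Widths/weights are illustrative only; weights sum to 1 so light is conserved.
mixed = diffuse_irradiance(uv_light, sigmas=[1.0, 3.0, 6.0],
                           weights=[0.5, 0.3, 0.2])
```

The mixed result would then be sampled back onto the mesh via the same UV mapping, which is exactly the "apply back to the surface" step the comment asks about.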