
Render Farm in a Box


The research group ASTRA, part of the Vision Lab of the University of Antwerp, has constructed a computer that is "made completely from consumer hardware and contains four NVIDIA 9800GX2 graphics cards, each containing two GPUs."

This has dramatically sped up their tomography computations. While not directly related to Blender, could this herald a new era of affordable high-speed rendering? Perhaps it could be useful for the Blender Foundation's next open movie?

Time will tell.

Reported by Sensi

http://fastra.ua.ac.be/en/index.html

49 Comments

  1. It will be great when the day comes that rendering, as in pushing a button and waiting, is gone, and instead you work in 3D with final-quality render output.

  2. Interestingly, despite the wow factor of putting 4 9800s in a single machine, I think the really impressive thing here is the code.
    I mean, 4 GPUs is awesome, but it's not like installing them is particularly difficult. What's really amazing is that they got their code running efficiently split across eight parallel cores. I'm not much of a programmer, but from what I've heard, parallel processing is extremely difficult to code well.

    ...And yeah, I'm almost 100% sure Blender doesn't support GPU rendering (although I think there's a script to capture Game Engine output as rendered images).
    GPU rendering would actually be a really useful function, too. Your video card isn't usually doing much when you're not gaming (well, unless you're using Vista's Aero theme), so you could use it to render in the background without tying up your CPU.

  3. Well, there's the memory issue. I'm not quite sure, but for Blender to render you need a CPU and RAM, lots of it... so 4 graphics cards = 4 GB; that's not bad, but considering that each GPU only has 1 GB to work with... :/

  4. Ooh... I want that, once it's compatible with Blender. It comes to about $6,000 in U.S. currency though, so maybe I can't afford one...

  5. Th3w-san, the complexity of coding for multiple processors depends entirely on what they are being used for. Often the algorithms are no more complicated for 4 cores than they are for 1, and 3D rendering is generally considered easy to do with multiple cores (see the sketch after this comment). The biggest problem with using graphics cards for scientific computing is trying to use a highly specialized processor as a general-purpose processor. I have no idea what tomography is, but I imagine it's much more difficult to program a tomography app for a graphics processor than a graphics app. (Even doing high-quality graphics on a GPU built for real-time graphics is challenging.)

    Rogper, the RAM issue isn't as great as you may think. When working on the CPU, if the RAM is full, the machine has to read/write to disk, which is far slower than RAM. When GPU RAM is full, the GPU accesses system memory, not the disk, so it is much faster (though still not nearly as fast as reading from the RAM on the video card).
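    To make that concrete, here is a minimal CUDA sketch (a hypothetical example, not code from FASTRA or Blender) of why rendering splits so easily across cores: each GPU thread shades exactly one pixel, and no pixel depends on any other.

    ```cuda
    #include <stdio.h>
    #include <cuda_runtime.h>

    // Toy "renderer": each GPU thread shades exactly one pixel of a
    // w x h grayscale image. No thread reads another thread's pixel,
    // which is what makes this kind of work embarrassingly parallel.
    __global__ void shade(unsigned char *img, int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        // Stand-in for real shading math: a simple diagonal gradient.
        img[y * w + x] = (unsigned char)(((x + y) * 255) / (w + h - 2));
    }

    int main(void) {
        const int w = 512, h = 512;
        unsigned char *dimg;                       // image in GPU memory
        cudaMalloc((void **)&dimg, w * h);

        // One 16x16 block of threads per 16x16 tile of the image.
        dim3 threads(16, 16);
        dim3 blocks((w + threads.x - 1) / threads.x,
                    (h + threads.y - 1) / threads.y);
        shade<<<blocks, threads>>>(dimg, w, h);

        // Spot-check one pixel on the host.
        unsigned char corner;
        cudaMemcpy(&corner, dimg + (size_t)(h - 1) * w + (w - 1), 1,
                   cudaMemcpyDeviceToHost);
        printf("bottom-right pixel = %d (expected 255)\n", corner);
        cudaFree(dimg);
        return 0;
    }
    ```

    Because every pixel is independent, the same kernel scales from one core to eight GPUs; only the scheduling changes, not the algorithm.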

  6. Cheap too! It's the price of one commercial license of 3ds Max!

    Imagine the number-crunching power you have here. If I were a coder I'd throw myself upon this, but with my limited coding skills it would take several years to acquire the knowledge needed to make a render connection to Blender with this. I surely would if I could; you have NO idea how tempted I am :)

  7. It seems like I'm the only one with this dumb question... How the ... did they plug in the cards? I mean, which motherboard has 4 PCI Express 2.0 slots? Any model in particular?

  8. Thanks for publishing my "article". If I messed up my name when I submitted this, my apologies, but could you add the missing "e" to my name? It would be much appreciated, just for posterity's sake ;-)

  9. Not unlike the Cell processor (except there are 8 additional cores per chip), and like this solution, you need specific software to take advantage of all the extra hardware. A typical single-core processor has around 100 million transistors; a typical Cell processor has 250 million. The big limitations for PS3s doing Blender are 1) software to take advantage of the hardware (and I'm not just talking about Blender), and, more serious, 2) severe memory limitations. 250 MB is not enough for an application like Blender, and you can't upgrade the memory. The chips are soldered-in surface-mount parts (8 of them, with about 80 solder connections each), and even if you said "I'm patient and careful with a soldering gun," the bigger question is "What do I replace them with?" No one makes memory of that type in larger configurations. While PS3s and this type of solution are not currently the way, they do point to where the way is going.
    Bob

  10. I recently bought a server with 8 CPUs... well, 2 quad-core Xeons, but it amounts to the same thing by these measures.

    All in, it cost under £1800, which at today's rates (1 EUR = 0.796230 GBP) equates to 2,260.65 EUR.

    By my reckoning, as a Blender render farm, my server kills this story: 8 CPUs as opposed to 4 GPUs, and at half the price. Don't believe me? Check out the http://burp.boinc.dk/top_hosts.php stats page. At this point in time my machine lies in 3rd place, behind 2 Intel labs servers. Mine is at home, and my GPU is a GeForce 7300 on a PCIe x1 bus!!! Forget GPUs for Blender rendering right now. GO MULTICORE!

    Here comes my point... for half of what these guys have paid for their "render farm", I have a machine acting as a render farm that is ACTUALLY RENDERING BLENDER TODAY WITH NO CODE CHANGES!!!!

    If you need an animation rendered now, with the power of my server and a whole bunch of other people's machines... join burp.boinc.dk

    C'mon, Share the love.

    JulesD

  11. "Th3w-san, the complexity of coding for multiple processors depends entirely on what they are being used for. Often the algorithms are no more complicated for 4 cores than they are for 1. 3d rendering is generally considered easy to do with multiple cores. The biggest problem with using graphics cards for scientific computing is in trying to use a highly specialized processor as a general-purpose processor. I have no idea what Tomography is but I imagine it's much more difficult to program a Tomography app for a graphics processor than a graphics app. (But even doing high-quality graphics on a gpu built for real-time graphics is challenging)
    " -TF

    This is no longer true. The reason they chose the 9800GX2 (aside from its raw power) is the CUDA architecture: NVIDIA launched CUDA precisely so that GPGPU would be much easier. With CUDA you are coding in something almost identical to C, with just a few differences, so it is no longer that hard to get your code running on the GPU (see the sketch below).
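    To see how C-like it is, here is a minimal, self-contained SAXPY sketch (a generic CUDA example, not code from the FASTRA project): apart from the __global__ qualifier, the <<<blocks, threads>>> launch syntax, and the explicit host/device memory copies, it is ordinary C.

    ```cuda
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // __global__ marks a kernel: a C function that the GPU runs once
    // per thread. Each thread updates a single element of y.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Ordinary C on the host side.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Allocate GPU memory and copy the inputs over.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

        // Copy the result back: y = 2*1 + 2 = 4 everywhere.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f (expected 4.0)\n", hy[0]);

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }
    ```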

  12. I also recently acquired a dual quad-core Xeon machine with a GeForce 8300. I tell ya, the rendering power is awesome. Too bad my company won't let me get rid of XP and put Fedora on it.

  13. Yeah... the day Blender has GPU rendering is the day Blender will CONTROL THE WORLD...
    but that aside...
    more or less, trying to get a Blender render farm working is a battle!!!
    I have a planned server render hub in mind *drum roll*
    2x AMD Opteron quad-core 2.5 GHz (10 GHz per CPU x 2 = 20 GHz total)
    **add more spiffy stuff**

    I haven't picked a mobo, a PSU, or RAM yet.

    And so I have been testing the render farm in Blender for a while, and I am getting mixed results (setting aside that the render clients are '97 PIIIs with little RAM). The render farm, in my opinion, is too hard to get working without really working at it, so if a Blender head is out there, please simplify the render farm~

  14. Hey everyone!

    Well, this is cool, but Blender doesn't support GPU rendering out of the box (yet ;) and afaik).

    But I have found something similar, affordable (relatively speaking) and totally cool and awesome!

    Here:
    http://helmer.sfe.se/

    "The most amazing thing is that this machine costs just as much as a better standard PC, but has 24 cores that each run at 2.4 GHz, a total of 48 GB of RAM, and needs just 400W of power!! This means that it hardly gets warm, and makes less noise than my desktop PC.

    Render jobs that took all night now get done in 10-12 minutes."

    I am totally saving for this now :)

  15. While we are on the subject... What is the perfect $4000 Blendering computer... I use a Mac right now but she's about to be retired.

  16. With regards to Gelato, didn't they just release the full product to the public for free?

    They mentioned that they are not going to develop it any more, so the full version is now free.

    I'm considering it... I just need an NVIDIA card, hihi.

  17. WOOOOW!

    That Helmer thing is unbelievable!!!!

    It should be the next BlenderNation news, making this news old and lame...

  18. Hi everyone, up till now I hadn't heard of any option for Blender GPU rendering besides Gelato. Does anyone have experience with Gelato and Blender? How does it compare to the standard renderer, both in speed and user effectiveness? I especially mean how to set such a thing up.

  19. NielsBlender:

    I believe the NVIDIA chained systems work modularly, splitting up the one big render request from the host...
    That would also make it 'work' for Blender, considering realtime OpenGL graphics rather than rendering.

    From the picture, that's one noisy blue box!

  20. RH2:
    Wow, it costs less than 4000 euros!
    4000 euros is a lot of money…

    but the return for the money is pretty good….

    A machine as powerful as this one for only 4000 euros is extremely cheap.
    I wonder how much noise it makes, I mean, a 1,500 W power supply......

  21. NielsBlender:

    I watched the video; it seems to have a very low noise profile... (this considering having a 'consumer computer' in a 'consumer environment' (or just using the computer to also make music)).

    ps.
    They could also have used a 'noise gate' and a 'profile denoiser'... :)

    ps2.
    The fact that they mention 'similar usage for game-play' means that Blender does benefit from it. The GUI 3D View can be considered 'OpenGL game-play-like'.

  22. Well, Gelato Pro has just gone free due to NVIDIA wanting to focus on Mental Images (mental ray), so I guess that makes Gelato worth a look (granted, it's not the easiest nor the most suitable for everyone).

  23. I don't really see why this is a huge deal. It's not like they've "invented" anything. I could buy these parts and put them together myself... >.> It's not a revelation that you can build a supercomputer with commercially available parts for under the price of a car, which for most people is still ridiculously pricey. You can build a rig 4 times more powerful than an average market-sold PC for the same price, really, if you buy and install the parts by hand. Still, even that computer will leave you ten thousand miles behind any professional studio setup; sorry, render farms will always be better. But yeah, my point is, this has been possible for almost ten years now. Hooray for them for building a nice comp. >.>

  24. Kintaro, really? Can you read this: [quote]Having eight graphics processors work in parallel allows this system to perform as fast as 350 modern CPU cores for our tomography tasks, reducing the reconstruction times from several weeks (on a normal PC) to hours.[/quote]

  25. ... That's not a revelation. Multiple processors, multiple video cards, etc. are not new technology. Macs have multiple processors working together, and a lot of people have SLI and are running two or more graphics cards. I heard such statistics years ago; the fact that 8 processors run as fast as 350 cores isn't a new concept, and given that render farms are also implementing such technology, this isn't a "render farm in a box", because it doesn't even come close to comparing to a true render farm's power. The fact that they slapped it together is not amazing. Besides, what computers are we comparing them to, single-core rigs? Duh, technology advances; of course it's faster. A Mac Pro has similar stats at max potential, and it's so far behind a true render farm it's laughable. Again, my point is, this has been possible for ten years. Yes, you can always make comps more amazing. No, it won't become ridiculously fast to render unless you're stringing together hundreds of comps that put you 50 or 70 years into the future of computing power, and even then you're going to put new things into your 3D art, and it will take just as long again.

  26. CUDA-enabled GPUs 10 years ago? Show me where, please. And I'm curious: how much does the "true render farm" you're talking about cost?

  27. I'm not talking about CUDA-enabled GPUs specifically. I'm saying that there's always something way ahead of its time for single-unit PCs; that doesn't mean it replaces the power of a farm. A render farm can be any size. If we're talking international studio scale here, not one-man operations, it could cost several million. However, you could probably build a four- or five-unit "render farm" for less than this, with as much power as said "render farm in a box". This computer would put us at last-generation studio standards; then there will be something else, again one generation behind. Render farms will always be more powerful, and less expensive for the amount of power you're trying to achieve, at the expense of bulk and wasted energy.

  28. A CUDA port of the renderer (if not the whole graphics environment) may be worth looking into. The new models allegedly support double precision, and the language is getting better...

  29. lol, it's quite funny looking back at this, as the advancements in GPU rendering have been huge..........
    At this point in time LuxRender is using OpenCL, CUDA may become obsolete in the next year or so, and Blender's internal render engine is eventually gonna be replaced by Cycles......... good times ahead.....
