

AMD Videocard Users - Make Yourselves Heard!


Ton Roosendaal published this call yesterday. While AMD is making progress with its development of OpenCL (the framework these cards use for GPU acceleration), it is advancing slowly and nowhere near the speed of CUDA, the NVIDIA equivalent. You'll find more information on this issue on the Blender Wiki.

About the Author

Bart Veldhuizen

I have a LONG history with Blender - I wrote some of the earliest Blender tutorials, worked for Not a Number and helped run the crowdfunding campaign that open sourced Blender (the first one on the internet!). I founded BlenderNation in 2006 and have been editing it every single day since then ;-) I also run the Blender Artists forum and I'm Head of Community at Sketchfab.

57 Comments

  1. AMD user here. I have a slow laptop with a terrible GPU, which happens to be an NVidia GPU, and a great desktop with a fast CPU but an even faster GPU... which is AMD. The best device I have available for GPU rendering is incompatible with Cycles! It's a nightmare.

  2. Jamal Millner

    I have a rig with 2 AMD cards in CrossFire, but I cannot use GPU rendering. I spent way too much on the cards to run out and grab a couple of nVidia cards, so I will have to suffer with CPU rendering, which is not that bad, but surely not as good as GPU.

  3. Why would anyone buy an AMD GPU when they're interested in Cycles rendering? Cycles uses NVidia; I got NVidia, simple.

    • Because not everybody buys their GPU based on Cycles compatibility. A lot of people prefer the gaming performance of AMD cards, and recently AMD cards have become valuable for mining cryptocurrencies. Others are simply recent Blender converts using their preexisting AMD hardware.

    • If it weren't for Cycles performance, I'd drop NVIDIA for AMD in a heartbeat. NVIDIA has been playing dirty tricks on users, such as intentionally crippling OpenGL performance on "consumer" level cards.

    • Josh Strawbridge

      I went AMD for viewport performance (I actually got a FirePro)... then later ended up getting a cheap on-sale nvidia card to render with, but I've since ended up just using the nvidia card because, well, it's a pain trying to set up drivers for the two cards on Ubuntu and I didn't feel like going through that every six months or so.

    • Gath Gealaich

      "Why would anyone buy AMD gpu when he's interested in Cycles rendering?"

      Prices? Future support for HSA APUs and large scenes (as large as your system memory allows)?

  4. This is a good thing to call attention to here. I think some people are of the opinion that Ton/the Blender Foundation somehow hate the AMD community, whereas in reality AMD hasn't given Cycles (well, OpenCL) a break. I hope that there is progress on this, so that Blender can be improved for people who have this type of card.

  5. I also have an AMD card. I bought it because I got an excellent deal on it and have a small render farm in the works, which would be CPU only. However, I do regret it now, since I now know I would benefit from GPU rendering from a scene/materials development standpoint and not just for finals. Glad to see Ton making some noise about it; hopefully we can see a solution in the near future.

  6. It is not that the AMD card is worse than NVidia; it is purely that OpenCL falls short of NVidia's native CUDA. In many cases (Apple products, for one) you can't just go out and buy an NVidia card and fit it! Also, for many tasks (such as Photoshop filters and general system graphics acceleration) the AMD card is better. BUT, and this is a BIG BUT, for Cycles GPU rendering the AMD card is left standing. IF they can get OpenCL working, it could well be faster than NVidia, because OpenCL can also use the CPU in parallel with the GPU (there's a small sketch of that after this thread), making it not just better and faster for previews but also for full renders, something the GPU on smaller cards (less than a gig of video RAM) is unsuitable for. PLEASE, AMD, push your OpenCL development and work with Apple, Microsoft and Linux to produce stable drivers. Thank you!

    • You have little understanding of how Cycles works on GPUs. Hybrid acceleration (CPU&GPU) will never happen on Cycles because of the way it is built. LuxRender and Indigo support CPU&GPU acceleration through OpenCL and work faster on AMD GPUs than on Nvidia's.

      The Blender team has neither the time nor the interest to completely rework Cycles and make it modular so that it can run on AMD cards.

      Ton is simply passing the blame to AMD and it pisses me off.

      • Yep, I have absolutely no knowledge of how Cycles is built, but it can't just be Cycles' fault. I have read in many, many threads that the AMD implementation of OpenCL is not fully supported. While I agree that there may be some buck-passing happening, they have seen areas where AMD can help in integrating it as a GPU renderer, i.e. making OpenCL work with Cycles, and I believe that is what Ton is asking. AMD is a for-profit organisation, while development of Blender relies on donations and training funds. As you say, I have little understanding of the subject (I am not a coder, I am an artist), but getting pissed off isn't an option for me :) I just use the tool set I have to continue working. Cycles is not for me yet and I will continue to use BI. Reading render times in forums for Cycles (even with Nvidia cards), they are WAY too long for my commercial purposes.

      • Eli Spizzichino

        I don't know how Cycles works internally, but I've seen hybrid acceleration already working for me for many Blender versions now. It seems the smaller the tile size, the more the CPUs get used together with my 3 GPUs, though of course the GPUs want bigger tiles and are faster. (The auto-tile add-on is useful for finding the best-performing settings.)
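
To make the hybrid CPU+GPU point in this thread concrete: the OpenCL host API treats CPUs and GPUs as ordinary compute devices, which is how renderers like LuxRender can hand tiles to both at once. Below is a minimal C sketch of just that enumeration step, assuming a single OpenCL platform and at most 16 devices, with error handling omitted; it is illustrative only and is not Cycles or LuxRender code.

    /* List every OpenCL device on the first platform (CPUs and GPUs alike)
     * and give each one its own command queue. A hybrid renderer would then
     * enqueue tiles on whichever queue is free. Sketch only. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);

        cl_uint count = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, NULL, &count);
        if (count == 0)
            return 1;                /* no OpenCL devices at all */
        if (count > 16)
            count = 16;              /* keep the sketch simple */

        cl_device_id devices[16];
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, count, devices, NULL);

        /* One context shared by every device on this platform. */
        cl_context ctx = clCreateContext(NULL, count, devices, NULL, NULL, NULL);

        for (cl_uint i = 0; i < count; i++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s device: %s\n",
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", name);

            /* A queue per device; CPU and GPU queues can be fed in parallel. */
            cl_command_queue queue = clCreateCommandQueue(ctx, devices[i], 0, NULL);
            clReleaseCommandQueue(queue);
        }

        clReleaseContext(ctx);
        return 0;
    }

Whether Cycles could be restructured to feed work to both kinds of queue is exactly what the replies above dispute; the sketch only shows that OpenCL itself does not rule it out.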

  7. AMD user here; I just bought a new Mac Pro and am keeping my fingers crossed that GPU rendering is soon supported for my machine.

  8. I use AMD and don't even waste the time and effort, not to mention cash, on worrying about any of this stuff. I just render on BI. WORKS GREAT ALL THE TIME. PERIOD. Sorry for the caps. If they ever catch up, great; if not, Blender Internal works fine.

  9. Here's something I did a while ago; not great, but not too shabby. Guess you really can't tell anything from it, it's so small :( Point is, it works great :). Pretty low resolution too. Just so no one gripes about the textures: CG Textures, plus a couple of things on the lawn from Blend Swap and Andrew Price's freebies.

  10. Jonathan Roth

    Agreed.

    I'm an Apple user for the OS and stability. As much as I'd like the CrossFire-capable FirePro cards in the new Mac Pro, it's not worth it to me if they aren't supported in Blender. If Apple switches back to AMD on the iMac, I'm stuck.

  11. I have AMD on my laptop and my desktop. I'm always reminded of how slow it is compared to Cuda when I watch tutorials and the author has Cuda. Pity me.

    • What Ton did was throw AMD to the wolves, while the Blender team is the one mainly responsible for Cycles not running on AMD GPUs.

        • From the start, Cycles wasn't built with AMD in mind. It was written for CPU rendering primarily. Then instead of offloading some of the features to the GPU (like LuxRender and Indigo did), they went for full-on GPU rendering. This way they gained efficiency and rendering performance but lost in terms of compatibility.

          I have no hope that AMD will alter their hardware and compiler just to accommodate Blender users. MAYBE Cycles will run on top of HSA for GPUs with the GCN architecture in the future.

          • CUDA already gets CPU features with a long enough delay. Making them take even longer, along with the possibility of not getting some features at all, or not being able to use all shaders at the same time, etc., would definitely harm production use. In terms of time investment, I doubt the GPU path is viable if it can't run a full-blown "application", unless someone finds and sponsors a few more developers to maintain another version.

  12. AMD user here. I don't really understand what the statements above mean, because a lot of apps can utilize OpenCL (LuxRender or Darktable, to name a few); maybe not at CUDA speed, but it's a huge advancement anyway.

    • Indeed, LuxRender and Indigo use OpenCL to run on AMD cards and they run perfectly fine.
      The problem is Cycles' big, bloated kernel, which can't be handled by AMD GPUs. If they don't completely rework the kernel, or AMD pulls off some serious driver "magic", we'll never have Cycles running on AMD.

      Ton putting the blame on AMD is just PR talk.

          • And how is the inability to do function calls normal? Either it is a processor or a "calculator". If it is a "calculator", it will evolve or die sooner or later. In either case, the work invested will be rendered useless.

          • This is what I've read (there's a small sketch of the stack workaround after the quote):

            "- On the VLIW architecture (before HD7000) there is no GOTO instruction. Only Loop, If/Else and exit.

            - The AMD_IL compiler has an optimizer inside it which likes to work on totally unrolled code. So I think even on the GCN architecture (which has true GOTO) it will unroll all your CALL-s and then do the optimizations on the whole thing.

            - Another thing is variable value exchange between OpenCL and AMD_IL. Unfortunately there's no such thing (as I currently know). You can only insert your amd_il texts inside the stream not knowing which OpenCL variable is in which register.

            Oh, and here comes another problem:

            - On GCN you have to drive 2 processors in one instruction stream: Scalar and Vector. In OpenCL and in AMD_IL you can't reach the S Alu, with which you could jump to a 64bit physical address in gpu memory for example (and it can do much more).

            Recursion is possible with the S alu. Even you can make a small stack for return addresses and passed parameters in the registers because you can access the registers indirectly with s_movrel.

            I'd say on the GCN architecture it's possible to do all those complicated things that an IA-64 processor can do. A GCN chip is like an IA-64 with 2048 bit SSE, 10..32 cores but it comes with a very well designed instruction set."

            http://devgurus.amd.com/thread/167192
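
The quoted post's point about recursion deserves a concrete illustration: when the hardware and compiler offer no true function calls, everything gets flattened into one body, and anything recursive has to be rewritten around an explicit, fixed-size stack, which is also how GPU ray tracers typically walk a BVH. Here is a small C sketch of that transformation; the Node layout and the "hit" convention are invented for illustration and are not taken from Cycles.

    /* A toy tree walk: prim >= 0 on a leaf means "hit", -1 means "miss".
     * Node layout is made up purely for this example. */
    typedef struct Node {
        int left, right;   /* child indices, left < 0 marks a leaf */
        int prim;          /* hit result stored on leaves          */
    } Node;

    /* Recursive form: needs a real call stack, which the hardware
     * described in the quote does not provide. */
    int hit_recursive(const Node *nodes, int idx)
    {
        const Node *n = &nodes[idx];
        if (n->left < 0)
            return n->prim;                       /* leaf */
        int h = hit_recursive(nodes, n->left);
        return h >= 0 ? h : hit_recursive(nodes, n->right);
    }

    /* Iterative form: the "small stack for return addresses and passed
     * parameters" from the quote becomes a plain local array. Real kernels
     * size it to the maximum tree depth. */
    int hit_iterative(const Node *nodes, int root)
    {
        int stack[64];
        int top = 0;
        stack[top++] = root;

        while (top > 0) {
            const Node *n = &nodes[stack[--top]];
            if (n->left < 0) {                    /* leaf */
                if (n->prim >= 0)
                    return n->prim;               /* first hit wins */
                continue;                         /* miss: keep walking */
            }
            stack[top++] = n->right;              /* left is visited first, */
            stack[top++] = n->left;               /* matching the recursion */
        }
        return -1;
    }

Both functions visit leaves in the same left-to-right order and return the first hit; only the second shape survives a compiler that inlines and unrolls everything.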

  13. We also want gl_select to be removed from Blender, but no one seems to care enough (among the ones who decide what goes in and what doesn't).

  14. Why not split the difference and go with C++ AMP or DirectCompute for AMD cards? Sure, it's Windows-only, but if it means you can use AMD cards, nobody will complain.

    • Because the problem is not OpenCL. OpenCL runs superbly on AMD cards, and faster than on Nvidia's.
      The problem lies in how Cycles was originally built, which doesn't translate well to OpenCL.

      • That doesn't actually discount C++ AMP / DirectCompute though, since that shares a lot of basic commands with CUDA but works with AMD. Sure, it's not optimal, but I can bet you that a 7950 or better card will net you much higher performance than the CPU.

          • C++ AMP/DirectCompute don't use OpenCL, so I'm not sure what you are getting at. The method should be closer to CUDA than OpenCL, and since Cycles works on CUDA just fine, it might work.

            Your link only discusses OpenCL; how about trying with one specifically about the C++ AMP / DirectCompute that I'm talking about (rather than OpenCL... you must really love OpenCL to ignore a completely different option)?

          • I repeat, since you obviously didn't get it: the language is not the issue here, be it OpenCL, C++ AMP or DirectCompute.
            The problem is AMD's hardware and compiler not supporting true function calls; the compiler then collapses under the size of the Cycles kernel.

            The reason Cycles can be run on Nvidia through CUDA or OpenCL is that their hardware and compiler supports true function calls.

            What could be done MAYBE is have OpenCL run on top of HSA but that is only supported by GPUs with the GCN architecture.

          • "Language is not the issue here, be it OpenCL, C++ AMP or DirectCompute."

            DirectCompute should be bypassing the AMD compiler altogether, since it's HLSL and the compiled shader should be portable between DX11 class devices. The issue with DirectCompute is more of the fact it's Windows specific, and some people believe open source means linux support must come first.

            "What could be done MAYBE is have OpenCL run on top of HSA but that is only supported by GPUs with the GCN architecture."

            There's no reason to support their older architectures; Cycles already dropped support for sm_1x (GTX 280 and below), and GCN is about as old as sm_2.1. The newest HSA release supports C++ AMP too.

  15. polarlighthouse

    Me too, unfortunately. The biggest bummer is that we got some nice machines with Nvidia cards at the university, but CUDA can't be enabled on them D:

  16. The number of commenters who have no idea what they're talking about is staggering.

    OpenCL performance (in general) on AMD GPUs is better than Nvidia's because AMD's architecture favors GPGPU (a large number of "cores" handling small bits of "code"). LuxRender and Indigo work super fast on AMD GPUs.

    The blame for Cycles not working on AMD GPUs is both on the Blender Foundation team and on AMD's driver team. The kernel of Cycles is simply too big for the GPU to handle. What needs to be done is either completely rework Cycles to make it modular, or make some serious changes in the drivers so that they can "chop up" the kernel into smaller, manageable bits of code for the GPU (a rough sketch of what that would look like follows below).

    Seeing Ton throw the ball to AMD is not a good sign.
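
For readers wondering what "chop up the kernel" would even look like, here is a deliberately tiny C sketch of the two shapes. The PathState struct, the stage names and the empty stage bodies are all invented for illustration; real shading, ray data and OpenCL plumbing are left out. The point is purely structural: the megakernel inlines every stage into one huge compiled body, while the split form compiles each small stage separately and re-launches paths stage by stage.

    /* Megakernel vs. split kernel, shown as plain C control flow. */
    typedef enum { STAGE_GENERATE, STAGE_INTERSECT, STAGE_SHADE, STAGE_DONE } Stage;

    typedef struct {
        int   bounce;   /* current bounce of this path           */
        Stage stage;    /* which step the path needs to run next */
    } PathState;

    /* Megakernel shape: one body carries a path from camera ray to last
     * bounce. Without true function calls, every BSDF, light and texture
     * routine gets inlined here, and the compiled kernel balloons. */
    static void megakernel(PathState *p, int max_bounce)
    {
        for (p->bounce = 0; p->bounce < max_bounce; p->bounce++) {
            /* generate or fetch the ray ... */
            /* intersect the scene ...       */
            /* shade and sample the BSDF ... */
        }
        p->stage = STAGE_DONE;
    }

    /* Split shape: each stage is a small, separately compiled step that
     * only records what the path needs next. */
    static void stage_generate (PathState *p) { p->stage = STAGE_INTERSECT; }
    static void stage_intersect(PathState *p) { p->stage = STAGE_SHADE; }
    static void stage_shade    (PathState *p, int max_bounce)
    {
        p->bounce++;
        p->stage = (p->bounce < max_bounce) ? STAGE_INTERSECT : STAGE_DONE;
    }

    /* Scheduler: keep launching whichever small stage each path asks for.
     * No single compiled body ever has to contain the whole renderer. */
    static void render_path_split(PathState *p, int max_bounce)
    {
        p->stage  = STAGE_GENERATE;
        p->bounce = 0;
        while (p->stage != STAGE_DONE) {
            switch (p->stage) {
            case STAGE_GENERATE:  stage_generate(p);          break;
            case STAGE_INTERSECT: stage_intersect(p);         break;
            case STAGE_SHADE:     stage_shade(p, max_bounce); break;
            case STAGE_DONE:      break;
            }
        }
    }

    int main(void)
    {
        PathState p;
        megakernel(&p, 4);
        render_path_split(&p, 4);
        return 0;
    }

Reworking Cycles into roughly the second shape is what "make it modular" means in practice; whether the Blender team or AMD's compiler should absorb that work is the argument running through these comments.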

  17. If you right-click on the thumbnail, you'll see that it is a screenshot of ArcSoft's TotalMedia Theatre player which supports OpenCL acceleration. Which movie is playing on it is irrelevant.

    Nice try, troll.

  18. I know the GPU in the newest Mac Pro is amazingly powerful, but not supporting GPU rendering just seems strange. Why wouldn't they want people to use their cards to the fullest extent? My iMac has the Nvidia GTX 780m and I wouldn't trade it for anything. I know that some AMDs are as fast or faster, but not having GPU rendering would be a deal breaker.

  19. I have a Richland APU laptop and have been waiting for OpenCL support to arrive.
    On the desktop I have an AMD CPU and a GTX 650 Ti, because AMD sucks with Blender and I hate Nvidia.

    My friend is a long-time Blender user and had a GTX 460 until lightning blew it all up, and then he got a 760 with 4GB of VRAM.

    Andrew Price himself uses 2x 680's with 4GB VRAM.

    NONE of the people I know who use Blender also play games. That is to say, all the multiple-GTX owners are Blender users and not gamers.

    To hell with HSA; we want OpenCL for all Radeons equally. There are hobbyists and pros in the making who can't afford better and are using cheap Radeons and integrated IGPs.

    Otherwise to HELL with you AMD.

  20. Loyal AMD user here. I want OpenCL Cycles and stream-core Cycles, because I will never buy from the line of overpriced, underpowered pieces of crap that is Nvidia. I wish people would use AMD more, because the shader cores are way faster and they are overall more powerful when properly optimized.

    • Yeah... forget AMD! Nvidia is making a push for new tech and better physics calculations while AMD is just sitting around doing nothing to better the gaming experience. I am done buying AMD stuff! I already bought an Nvidia Shield tablet and am saving the last few dollars for my 980.


