
7 Mistakes Blender Users Make When Trying to Render Faster (and How to Avoid Them)


Johnson Martin shares some common pitfalls and their solutions that will help you to turn your 🐌  rendering into a 🚀!

When creating Blender projects, one of the most frustrating parts of the process is rendering. Yes, that grueling, slow process of watching tiles render one by one. Obviously, we try to make this go as quickly as possible, but sometimes the shortcuts we use affect images in subtle but undesirable ways.

So give it a read!

About Author

Johnson Martin

I'm Johnson, a 3D artist and writer. I'm currently a student exploring the world of creative arts, and a part-time 3D artist at Martin Media. I run Topology Guides, a blog that offers tips and tricks for 3D modelers. I also write for BlenderNation and, occasionally, the Sketchfab blog.


  1. For the most part this is some pretty good advice, but I'd disagree with the first suggestion of a minimum of 12 bounces.

    12 bounces seems exceptionally high. Rather than static pictures to compare, it would be better to have a GIF showing the same scene rendered with decreasing bounces, to demonstrate what effect each count really has. Typically I've found 3-4 to be the minimum necessary. Going from 4 to 3 to 2 bounces, you'll notice a significant decrease in the brightness of your shadows and general GI. Going from 4 to 5 to 6 bounces and beyond, you get diminishing returns and don't really see an increase in overall GI, at which point you're just wasting processing time. I've put together a quick and rough GIF recreation of your scene to demonstrate the effect of bounces:

    As you can see, there was no appreciable difference in global illumination between 12 and 4 bounces. 3 bounces was still pretty good and significantly faster to render, only losing some illumination on the edges of the glass; 2 bounces was getting a little dark; and 0 and 1 bounces were unusable because they didn't allow the refraction on the glass to work properly. I'd still recommend just 3 or 4 bounces, because the extra render time for more bounces just isn't worth it and makes no appreciable difference in the final image.
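As a side note, bounce counts like these can also be set from Blender's Python console. The sketch below is a minimal, hypothetical example assuming Blender 2.7x-era Cycles properties; it applies the 3-4 bounce cap suggested above, and must be run inside Blender, where the `bpy` module is available.

```python
# Minimal sketch, assuming Blender 2.7x-era Cycles settings.
# Run from Blender's Python console (bpy is only available there).
import bpy

cycles = bpy.context.scene.cycles
cycles.max_bounces = 4           # total bounce cap; 3-4 is usually enough
cycles.diffuse_bounces = 4
cycles.glossy_bounces = 4
cycles.transmission_bounces = 4  # 0-1 here breaks refraction, as in the GIF above
```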

    I'd also look into Branched Path Tracing. It allows you to focus your samples where they're needed instead of spending extra render time across the board. In a normal sampling workflow you might use several hundred or even a couple of thousand samples if you're trying to get an exceptionally clean image. The issue is that glossy and refractive surfaces are far more prone to light scattering, and therefore to heavy noise. Instead, I've taken to Branched Path Tracing: I set my base sampling to something low (typically 50-128), then let diffuse use 1x that, glossy 2x-3x, and so on, focusing the extra samples only where they're needed instead of across the entire scene, which wastes processing time on material properties that are already clean.
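The multiplier scheme described here is just arithmetic on top of a low base sample count. The sketch below is a hypothetical illustration (the function name and numbers are assumptions, not Blender API): each pass's effective sample count is the base AA sample count times its per-pass multiplier.

```python
# Hypothetical sketch of a Branched Path Tracing sample budget.
# Each pass gets its own multiplier on a low base (AA) sample count,
# so extra samples go only to the noisy passes.

def branched_budget(aa_samples, multipliers):
    """Effective samples per pass = AA samples x per-pass multiplier."""
    return {name: aa_samples * mult for name, mult in multipliers.items()}

# Low base of 64 AA samples; noisier glossy/transmission passes get 3x.
budget = branched_budget(64, {"diffuse": 1, "glossy": 3, "transmission": 3})
```

With these numbers the clean diffuse pass stays at 64 samples while glossy and transmission each get 192, instead of raising every pass to 192 across the board.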

    • Thanks for the feedback man!

      It seems I was wrong in setting 12 as the minimum that should be used, a few other users pointed out the same thing. I reworded the section and put in a request to update the page. :)

      I considered putting in some info on Branched Path Tracing, but decided against it because the scope of the article was what "not to do" for beginners. Perhaps I should have mentioned it as an alternative, though.


  2. Alexander Weide

    I completely agree with BlackhartFilms, except on Branched Path Tracing; it depends on the workflow. If you have a professional denoising tool, you should render with plain Path Tracing, because the resulting noise pattern is similar to real camera noise, which means your denoising app will produce better quality. And when you use a tool like Neat Video for denoising, which works perfectly on Cycles path-traced renders, you can lower the sample count for an animation, because temporal denoising is the way to get fully clean results.

    Branched Path Tracing is also better suited to CPU rendering; it speeds CPU rendering up. When you render on the GPU, you're better off with plain Path Tracing: Branched Path Tracing doesn't make sense on an NVIDIA card and slows your rendering down. You can check the difference by testing it yourself at the pixel level. You have to crank up the sample count a lot in Branched Path Tracing to get exactly the same per-pixel quality (such as anti-aliased edges); in my tests I needed 3x more samples in Branched Path Tracing to match the anti-aliased edges, pixel for pixel, compared to plain Path Tracing.

    The biggest mistake, or perhaps the most misleading part, is the one about materials. I'm not sure I understand what you mean, but in general you need to tweak, fix, or hack materials in any render engine to get the result you want. It's important to understand how materials look and work; for that, I highly recommend Blender Guru's PBR videos. There is a difference, though: when you do VFX, you have to hack whatever you can to get a realistic result, and you can't always render an entire scene. It depends on your skill and judgment to make the image look right; refusing to change materials, or to reverse-engineer them, won't help. Right now the best glass material is the Prisms glass shader from Blender Market; it's a completely rebuilt node structure and gives absolutely realistic results.

    In the image below are my render settings for getting high-end images into post-production. The rendering target is 16-bit EXR.

  3. When it comes to denoising, I rolled my own solution in Blender's compositor a while back.

    My solution involved taking practically every pass that Cycles can output (direct/indirect/color for diffuse/glossy/transmission/subsurface, emission, environment, Z, normal, AO, UV, and so on).

    I created one group node that could replicate the Cycles shading equation from the light passes. Before connecting each light pass to that node, though, I denoised the direct and indirect light passes individually using the Despeckle node. Then I combined the Z/Normal/Glossy Color/AO passes into one input to drive a Bilateral Blur on the indirect passes, since they generally had more noise than the direct light passes.
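The recombination step can be sketched in plain Python. This is a simplified, hypothetical model operating on single float values rather than image buffers (the real version is a compositor node group, and each direct/indirect input would already be denoised), but it follows the standard way Cycles light passes combine into the beauty image.

```python
# Simplified sketch of recombining Cycles light passes into the final
# image: each component contributes color x (direct + indirect), plus
# emission and environment. Plain floats stand in for whole passes.

def recombine(passes):
    out = passes["emission"] + passes["environment"]
    for comp in ("diffuse", "glossy", "transmission", "subsurface"):
        out += passes[comp + "_color"] * (
            passes[comp + "_direct"] + passes[comp + "_indirect"])
    return out
```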

    By rendering the image at twice the width and height (4x the pixels) with 1/4 the samples per pixel, I kept the render time roughly the same but improved the results of my denoising method: it made the Bilateral Blur more accurate by letting it essentially work on subpixels.
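The reason the render time stays roughly constant is that render cost scales approximately with pixels times samples per pixel. A quick sanity check, with hypothetical resolution and sample numbers:

```python
# Render cost ~ pixels x samples per pixel. Doubling width and height
# quadruples the pixels; using 1/4 the samples cancels it out.

width, height, spp = 1920, 1080, 128          # hypothetical base render
base_cost = width * height * spp
hires_cost = (2 * width) * (2 * height) * (spp // 4)
```

Here `base_cost` and `hires_cost` come out equal, so the same total number of samples is simply redistributed over more, smaller pixels.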

    The node setup required a bit of tweaking with some magic values for various nodes, but once I got it working, it cut my render times down to 75% of the original. I was very happy! There were some limitations, however: it didn't work well with normal-mapped surfaces, and semi-transparent surfaces were a problem. And the node setup took about 2 minutes for the compositor to process!
