Skororu shares a Unix tip for distributing rendering work without using render farm software.
As an alternative to using render farm management software, there is a quick and easy way to perform distributed renders from the command line.
GNU Parallel [1] is a general-purpose utility for running jobs in parallel on one or more computers. It runs on most Unix-like operating systems including macOS and Linux, and it should also be possible to use it on Windows via a Linux virtual machine [2]. We can use GNU Parallel to distribute rendering workloads, either animations or single frames, over multiple computers with just a couple of lines of code.
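As a minimal illustration of the syntax (the host name below is a placeholder, not part of the original setup), the following spreads four short jobs across the local machine and one remote host, printing the name of the machine each job ran on:
parallel -S :,user@host1 'hostname; echo job {}' ::: 1 2 3 4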
For example, assuming a .blend file with all resources packed and set to 2000 samples: the first command below distributes a single-frame render across two computers as 20 chunks of 100 samples each, transferring files as required (no shared filesystem is needed) and tidying up after itself as it goes. The second command uses ImageMagick [3] to average the chunks into the final 2000-sample image. Replace user@host1 and user@host2 with the logins for your own machines:
seq 1 20 | parallel -S1/user@host1,1/user@host2 --progress --plus --basefile render.blend --return 1_{#}.exr --cleanup blender -b render.blend -o \"#\"_{#} -F EXR -f 1 -- --cycles-resumable-num-chunks {##} --cycles-resumable-current-chunk {#} > /dev/null
convert 1_*.exr -evaluate-sequence mean final.exr
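For an animation, the same approach can hand whole frames to each machine instead of sample chunks, in which case no ImageMagick step is needed. A rough sketch only, with placeholder logins, a placeholder frame range of 1 to 250, and the .blend file assumed to have its render settings already configured:
seq -f "%04g" 1 250 | parallel -S1/user@host1,1/user@host2 --progress --basefile render.blend --return frame_{}.png --cleanup blender -b render.blend -o frame_#### -F PNG -f {} > /dev/null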
Some basic SSH configuration may be required; this is covered in the documentation at [4].
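In most cases this just means key-based, password-less logins from the controlling machine to each render node. A minimal sketch using the placeholder logins from above: generate a key pair if you do not already have one, then copy the public key to each node:
ssh-keygen -t ed25519
ssh-copy-id user@host1
ssh-copy-id user@host2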
[1] https://www.gnu.org/software/parallel/
[2] https://www.virtualbox.org
[3] https://imagemagick.org/script/convert.php
[4] https://gitlab.com/skororu/scripts
7 Comments
GNU Parallel is regularly tested on Cygwin, so you may not even need the GNU/Linux virtual machine.
That's excellent news, and thanks for your work on Parallel; it's been very useful.
Can you output to 16-bit TIFF instead of EXR?
Does it work with the GPU as the compute device?
I did a quick test run, and it worked well; just set the Blender file to output 16-bit PNGs and run the following, changing the settings to suit your network and number of chunks:
seq 1 20 | parallel -S1/user@host1,1/user@host2 --progress --plus --basefile render.blend --return 1_{#}.png --cleanup blender -b render.blend -o \"#\"_{#} -f 1 -- --cycles-resumable-num-chunks {##} --cycles-resumable-current-chunk {#} "> /dev/null 2>&1"
convert 1_*.png -evaluate-sequence mean final.png
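If a 16-bit TIFF is specifically wanted rather than a PNG, ImageMagick should be able to write one directly when averaging the chunks; an untested variation of the convert line above:
convert 1_*.png -evaluate-sequence mean -depth 16 final.tiff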
The process should be independent of the rendering method. As long as your machines are all using GPU rendering, and that's set in the file config, I would expect it to work.
So far:
- 2x PCs with Ubuntu 14.04 (with password-less OpenSSH)
- each node/PC with an NVIDIA GTX card (560 Ti and 970 OC)
- latest updates (apt-get update, apt-get upgrade)
- latest Parallel on the primary node/PC [ (wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash ]
- I did not install Parallel on the second PC
The GPU is only used on the primary node/PC (with the GTX 970); on the second node the GPU is not active during render time, only the CPU.
OK, it is now working on both GPUs (after reinstalling Blender from the PPA and running nvidia-modprobe).
That's good news. I've used it a lot across CPU clusters, so I was confident it would work with GPU setups too.
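As a quick sanity check before committing to a long render, Parallel's --nonall option can run a command once on every configured node; for example, listing the GPUs each node can see (placeholder logins again):
parallel --nonall -S user@host1,user@host2 nvidia-smi -L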