It's no secret that generative AIs are taking the world by storm, inspiring both wonder at their technical prowess and disdain over ethical concerns. Stable Diffusion marked a milestone in the rise of AI imagery as the first state-of-the-art model released as open source, in a package compact and performant enough to run on consumer-grade GPUs (whereas other famous alternatives such as Midjourney are closed-source and run only on massive cloud arrays).
Stable Diffusion is a new “text-to-image diffusion model” that was released to the public by https://t.co/yo04zzpJXY
— Abtin Setyani (@abtinsetyani) August 31, 2022
And where there is open source, there is Blender! The Blender community has wasted no time in testing Stable Diffusion as a texture generator and implementing it in Blender in various ways. Here are some examples that stood out to me:
CEB Stable Diffusion
Carlos Barreto is an add-on developer known for his great tools, which often incorporate an AI component, including one for motion capture right in Blender. His rapidly developing CEB Stable Diffusion add-on already includes both txt2img (image generation from a text prompt) and img2img (generation guided by an input image) implementations, and it is generally a great way to get a complete build of Stable Diffusion running on your machine.
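To give a feel for how img2img differs from txt2img under the hood: instead of starting from pure noise, img2img pipelines noise the input image partway along the diffusion schedule and then denoise only the remaining steps, with a "strength" parameter controlling how far from the input the result may drift. Below is a minimal sketch of that step-scheduling arithmetic (modeled on the convention used by common Stable Diffusion pipelines; the function name and exact behavior here are illustrative assumptions, not code from the add-on):

```python
def img2img_step_schedule(num_inference_steps: int, strength: float):
    """Illustrative sketch: given a step budget and a strength in [0, 1],
    return (steps that actually run, index of the first denoising step).

    strength=1.0 behaves like txt2img (start from pure noise, run every
    step); strength=0.0 returns the input image essentially untouched.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0 and 1")
    # Noise the input image up to this point in the schedule...
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    # ...then denoise only the steps that remain.
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start, t_start

# With 50 steps and strength 0.75, only 37 denoising steps run,
# starting at step index 13, so the output stays close to the input.
print(img2img_step_schedule(50, 0.75))  # (37, 13)
```

This is why low-strength img2img preserves the composition of a Blender viewport render while restyling its surface detail, which is exactly what makes it useful as a "render enhancer".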
CEB Stable Diffusion 0.2 wip (with img2img) pic.twitter.com/f8Y5Cc1cT8
— Carlos Barreto (@carlosedubarret) September 5, 2022
CEB Stable Diffusion 0.2 WIP
Multiple images generation and selector pic.twitter.com/Y9YRXoNeGo
— Carlos Barreto (@carlosedubarret) September 3, 2022
Stable Diffusion as a Live Renderer Within Blender
Over on the Blender subreddit, Gorm Labenz shared a video of an add-on he wrote that enables the use of Stable Diffusion as a live renderer, essentially reacting to the Blender viewport in real time and generating an image (img2img) based on it and a prompt that defines the style of the result. The whole thing is incredible to witness and opens up entirely new workflows for AI-assisted look development.
Still on the subreddit, user "Renaissance_blender" shared a Stable Diffusion-enhanced Suzanne, once again using AI as a renderer on top of a "basic" image prompt rendered in Blender.
And finally, here is an example of one of the simplest yet most useful applications of Stable Diffusion: texture generation, with endless variations available just a few words away!