
Jump To:

Making-Of
Pre-Vis Blockout
Textures With StableDiffusion and Materialize
Prompt-To-3D Model With Meshy
Prompt-To-Image-To-3D Model With StableDiffusion and Meshy




Making-Of


The aim of this project is to explore how Generative AI can be utilised to enhance 3D environments in a Sci-Fi setting. The software used in this project includes Blender, Meshy, StableDiffusion and Materialize.
Here is a behind-the-scenes making-of video describing the development of the environment, from the pre-vis blockout to the finished render.









Pre-Vis Blockout


Here is a 3D blockout with the primitive shapes, basic layout and movements set in place.
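For illustration, here is a minimal Blender Python (bpy) sketch of how a blockout stage like this can be started with stand-in primitives; the object sizes, positions and camera placement are placeholder values, not the actual scene.

import bpy

# Clear the default scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Floor plane for the environment
bpy.ops.mesh.primitive_plane_add(size=20, location=(0, 0, 0))

# Rough stand-ins for the large set pieces
bpy.ops.mesh.primitive_cube_add(size=4, location=(-5, 3, 2))
bpy.ops.mesh.primitive_cylinder_add(radius=1, depth=6, location=(4, -2, 3))

# Camera placed for the opening shot; its movement is keyframed later
bpy.ops.object.camera_add(location=(0, -15, 5), rotation=(1.2, 0, 0))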








Textures With StableDiffusion and Materialize


Here are textures applied to 3D primitives. The textures were created by generating images via StableDiffusion, utilising the "Uranium Technology" LoRA; these images were then developed into PBR maps via the free software "Materialize".
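Below is a minimal Python sketch of the image-generation step using the diffusers library. The base model ID, LoRA file name and prompt are assumptions for illustration; the project used a local StableDiffusion setup with the "Uranium Technology" LoRA.

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion base model (model ID assumed)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA weights (directory and file name are placeholders)
pipe.load_lora_weights("path/to/loras", weight_name="uranium_technology.safetensors")

# Generate a texture-style image
image = pipe(
    prompt="seamless sci-fi metal panel texture, uranium technology style",
    num_inference_steps=30,
).images[0]

# Save the image; this is the file that is then taken into Materialize
# to derive the PBR maps (normal, height, roughness, etc.)
image.save("scifi_panel_albedo.png")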









Prompt-To-3D Model With Meshy


Here are models developed with the online software "Meshy". This is the most direct approach to creating models with Generative AI: prompts are entered into the interface, and models are created automatically with ready-made textures.
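Meshy also exposes a REST API for the same workflow. Below is a rough Python sketch of what submitting a text-to-3D task might look like; the endpoint path, payload fields and response shape are assumptions and should be checked against Meshy's current API documentation (the project itself used the web interface).

import requests

API_KEY = "YOUR_MESHY_API_KEY"  # placeholder
headers = {"Authorization": f"Bearer {API_KEY}"}

# Submit a text-to-3D task (endpoint and fields assumed)
response = requests.post(
    "https://api.meshy.ai/v2/text-to-3d",
    headers=headers,
    json={
        "mode": "preview",
        "prompt": "rusted sci-fi cargo crate with glowing green panels",
        "art_style": "realistic",
    },
)
response.raise_for_status()
task_id = response.json()["result"]
print(f"Submitted Meshy task {task_id}; poll the task endpoint for the finished model.")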









Prompt-To-Image-To-3D Model With StableDiffusion and Meshy


This method of creating 3D models with Generative AI is, in my opinion, more curated than using Meshy directly. Here, images of the intended model are created using StableDiffusion; these images are then fed into Meshy to be turned into 3D FBX models.
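As a rough Python sketch of the image-to-3D step, a StableDiffusion render of the asset could be submitted to a Meshy image-to-3D endpoint as shown below. As with the previous sketch, the endpoint path, upload format and payload fields are assumptions to be verified against the Meshy API documentation.

import base64
import requests

API_KEY = "YOUR_MESHY_API_KEY"  # placeholder
headers = {"Authorization": f"Bearer {API_KEY}"}

# Encode the StableDiffusion output as a data URI
# (a publicly hosted image URL may also be acceptable)
with open("scifi_prop_concept.png", "rb") as f:
    image_data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Submit an image-to-3D task (endpoint and fields assumed)
response = requests.post(
    "https://api.meshy.ai/v2/image-to-3d",
    headers=headers,
    json={"image_url": image_data_uri},
)
response.raise_for_status()
task_id = response.json()["result"]
print(f"Submitted image-to-3D task {task_id}; the finished model can be exported as FBX.")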