
Notes on drawing to image generation
There are many ways to turn a line drawing into an image with various AI image generation tools.
In this post, we'll explore an approach to AI image generation: using images rendered from 3D models with custom-drawn textures as a way to control material assignments, achieving greater precision and artistic control over the final output.
Using a 3D Model with Custom Drawn Textures
An image of a perspective view from a 3D model in SketchUp with a custom-drawn texture created using a seamless-texture drawing tool
One method involves using a perspective view from a 3D model in SketchUp with a custom-drawn texture. For this demonstration, I created a simple barn house setup with my own seamless textures and exported an image from the model. Now, let's see how we can turn it into an AI-generated image.
Prompt: a barn house, (stone walls), slate roof, renovated, in a forest, mystical fog, atmospheric, dark
Testing with Nightcafe for Image Generation
First, I tried using the prompt above in Nightcafe. Nightcafe is very user-friendly and offers free credits for image generation alongside its paid plans; I highly recommend it for anyone looking to experiment with different models and prompts beyond the offerings of Midjourney, OpenAI, Meta, and Google. However, I found that there was no straightforward way to control how much the initial drawing influences the final image.
Image generated from the drawing using Nightcafe in "Sketch to image" mode
Exploring Advanced Control Methods
There are other online tools that provide more advanced control over how different inputs—such as line drawings and text prompts—affect the generation process. Additionally, setting up a custom workflow offers even greater flexibility.
One simple approach I tried was combining a line drawing input with a depth map input to guide the AI’s generation process. This method provides more structured control over depth and detail, leading to a more refined output. For those interested, searching for "ControlNet" can provide further insights.
Image generated from the drawing with both line and depth control, using a simple setup that takes a line drawing and a depth map
By using both a line drawing and a depth map, I achieved a result that better preserved the original structure while allowing the AI to apply realistic textures and details.
Conclusion
Experimenting with AI image generation from line drawings can be an exciting and creative process. While tools like Nightcafe offer a simple and accessible starting point, leveraging additional inputs such as depth maps can significantly enhance control over the final output. Whether using existing platforms or setting up a custom workflow, understanding these techniques opens up new possibilities for AI-assisted image creation, one where inputs are shaped to align with design intent.