
Generative AI Visuals | ComfyUI Stable Diffusion ControlNet

[Image: Anime-style girl with blue eyes and short black hair against a sky background]

Generative AI art refers to artworks created with the help of generative artificial intelligence models. These models produce new content, such as images, from prompts, ControlNet conditioning, and node-based workflows. Below are some examples of what they can achieve.

For Mind Theory, a Learning Centre immersed in cutting-edge technologies, integrating generative AI into educational content creation promises an exciting future. The fusion of creativity and artificial intelligence opens the door to a new era of personalized and effective learning.

Scribble to Image

[Image: AI image generation node network for creating anime-style characters]

Using the Scribble method in ControlNet, we input a quick doodle of a character and choose a manga-trained model. Through this method, we can produce a nuanced, fully shaded output that follows our specified prompts closely.
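ComfyUI expresses this as a node graph, but the equivalent pipeline can be sketched in Python with the Hugging Face diffusers library. This is a minimal sketch under stated assumptions, not our production workflow: the input file name and prompt are placeholders, and in practice you would swap the stock SD 1.5 base for a manga-style checkpoint.

import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Scribble-conditioned ControlNet on top of a Stable Diffusion 1.5 base.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in a manga-style checkpoint here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical input: a quick doodle, ideally white lines on a black background.
scribble = load_image("character_doodle.png")

image = pipe(
    "anime girl, short black hair, blue eyes, detailed shading",
    image=scribble,
    num_inference_steps=25,
).images[0]
image.save("scribble_result.png")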

DWOpenpose to Image

[Image: Animation software interface with character rig and rendered scene]

DWPose performs whole-body human pose estimation. Using the pose editor, we can articulate the limbs at any desired angle, and the generated character follows that pose.
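In code terms, the same idea looks roughly like the sketch below. DWPose ships as a ComfyUI custom node; here we substitute the simpler OpenPose estimator from the controlnet_aux package, which plays the same role of extracting a pose skeleton. File names and the prompt are placeholder assumptions.

import torch
from controlnet_aux import OpenposeDetector
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Extract a pose skeleton from a reference photo. DWPose is a
# higher-accuracy drop-in replacement for this estimation step.
pose_estimator = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
skeleton = pose_estimator(load_image("reference_pose.jpg"))  # hypothetical photo

# Condition generation on the skeleton via an OpenPose ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("character in a dynamic pose, studio lighting", image=skeleton).images[0]
image.save("posed_character.png")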

Lineart Sketch to Manga

[Image: Digital art workflow showing character creation and editing process]

This method is useful if you wish to input an outline sketch of a drawing. Once we feed the sketch into the workflow and choose our training model (in this case, one with a shaded manga aesthetic), the AI generates an image that closely follows the sketch.
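A minimal diffusers sketch of this lineart step, with a placeholder file name and prompt, and the stock SD 1.5 base standing in for a shaded-manga checkpoint:

import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Lineart-conditioned ControlNet; the sketch may need preprocessing
# (e.g. a lineart detector from controlnet_aux) to match what the model expects.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in a shaded-manga checkpoint here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

lineart = load_image("outline_sketch.png")  # hypothetical outline sketch

image = pipe("manga character, clean screentone shading", image=lineart).images[0]
image.save("manga_result.png")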

Now, let's experiment with an alternative coloring and shading style by importing an IP-Adapter.

[Image: Digital art workflow diagram with character design nodes and outputs]

The generated output closely mirrors the reference input photo in terms of shading style and color palette.
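Continuing the lineart sketch above, the IP-Adapter step amounts to a few extra calls in diffusers. The reference file name and the 0.6 scale are illustrative assumptions; the scale trades off style fidelity against prompt adherence.

# Continuing from the lineart pipeline above: attach an IP-Adapter so the
# output inherits the reference photo's shading style and palette.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # higher = stronger pull toward the reference

style_ref = load_image("style_reference.png")  # hypothetical reference photo

image = pipe(
    "manga character, clean shading",
    image=lineart,
    ip_adapter_image=style_ref,
).images[0]
image.save("styled_manga.png")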

For Architectural Use | Lineart Sketch to Image

[Image: Node setup for rendering an architectural design, with image and text elements]

The lineart-to-image method also suits architects and interior designers. Here, a loose sketch of a bungalow is fed into the workflow; we choose an architectural training model, and the AI generates a shaded version of our drawing.
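This is the same lineart pipeline as the manga example; only the base checkpoint and prompt change. The base model below is a placeholder for whichever architectural fine-tune you use, and the file names remain hypothetical.

import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Identical lineart conditioning; only the base model and prompt differ.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder: use an architectural fine-tune
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

sketch = load_image("bungalow_sketch.png")  # hypothetical loose sketch

image = pipe(
    "single-storey bungalow, photorealistic exterior render, soft daylight",
    image=sketch,
).images[0]
image.save("bungalow_render.png")

The style-influence step described next is the same IP-Adapter call shown in the manga example, with an architectural reference image swapped in.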

However, let's influence it with a different visual style.

[Image: Architecture software interface displaying building design nodes and renderings]

Now the generated bungalow image follows the style of our reference input image. We hope this article helps you understand ControlNet better within the Stable Diffusion workflow.

Mind Theory Singapore provides ComfyUI Stable Diffusion training for corporate, tertiary, and institutional teams. We are the only course provider in Singapore to teach ComfyUI. Corporate group bookings are available; click here to view the program details.
If you would like us to implement AI workflows in your organisation, do contact us for a consultation.

Contact us at info@mindtheory.sg for more information.
