Part 2: Building Your First ComfyUI Text-to-Image Workflow

In the previous article, we introduced the core nodes of a text-to-image workflow. Now, we'll connect these nodes like puzzle pieces to build a complete workflow and generate your first AI image.
Node Connections: Creating the Generation Pipeline
The principle behind connecting nodes is simple: link each node's output port to the input port of the node that needs that data. Ports are color-coded by data type, and ports will only connect when their colors (types) match. Follow these steps:
Connect the Model:
- Link the Model output port of the Checkpoint Loader to the Model input port of the K Sampler.
- Link the CLIP output port of the Checkpoint Loader to the CLIP input ports of both the positive and negative CLIP Text Encoder nodes.
- Link the VAE output port of the Checkpoint Loader to the VAE input port of the VAE Decoder.
Connect the Prompts:
- Link the Conditioning output port of the Positive Prompt CLIP Node to the Positive Conditioning input port of the K Sampler.
- Link the Conditioning output port of the Negative Prompt CLIP Node to the Negative Conditioning input port of the K Sampler.
Set Canvas Dimensions:
- Link the LATENT output port of the Empty Latent node to the Latent Image input port of the K Sampler.
Decode and Display the Image:
- Link the LATENT output port of the K Sampler to the Samples input port of the VAE Decoder.
- Link the IMAGE output port of the VAE Decoder to the Image input port of the Preview Image node.
At this point, a complete text-to-image workflow is in place! Your nodes should form a single logical data-flow chain, from the Checkpoint Loader through the K Sampler and VAE Decoder to the Preview Image node.
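If it helps to see the same wiring as data, here is a minimal sketch of this workflow in ComfyUI's API (JSON) format, written out as a Python dict. The node ids, seed, and placeholder model/prompt strings are illustrative; each link is written as ["source_node_id", output_index], following the source node's port order (for the Checkpoint Loader: MODEL = 0, CLIP = 1, VAE = 2).

```python
# A sketch of the wiring above in ComfyUI's API (JSON) format.
# Node ids are arbitrary strings; links are ["source_node_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_model.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a positive prompt", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "a negative prompt", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",  # the blank canvas
          "inputs": {"width": 512, "height": 768, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",  # latent -> pixels
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "PreviewImage",
          "inputs": {"images": ["6", 0]}},
}
```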
Practical Example: Generating an Anime Girl
Now, let’s fill in the parameters and generate an actual image.
- Select a Model: In the Checkpoint Loader, choose a model suited to anime styles, such as counterfeitV30_v30.safetensors.
- Enter Prompts:
  - Positive Prompt: Describe what you want to see, e.g.: masterpiece, best quality, official art, extremely detailed CG, 8k wallpaper, girl, solo, night, stars, sky
  - Negative Prompt: List the low-quality traits to avoid, e.g.: lowres, bad anatomy, error, extra digit, fewer digits, cropped, worst quality, low quality
- Set Dimensions: In the Empty Latent node, set the width to 512, the height to 768, and the batch size to 1.
- Configure the Sampler: In the K Sampler, set the key parameters:
  - Steps: 20-25
  - CFG: 7
  - Sampler: dpmpp_2m or euler
  - Scheduler: karras
  - Denoise: 1.0
- Generate the Image: Click the "Queue Prompt" button, wait a moment, and the generated anime girl image will appear in the Preview Image node. Right-click the preview to save the image.
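If you'd rather queue this generation from a script than from the UI, the sketch below fills this section's parameters into the `workflow` dict from the earlier sketch and POSTs it to a locally running ComfyUI instance. It assumes the default API endpoint, http://127.0.0.1:8188/prompt.

```python
import json
import urllib.request

# Fill in the example parameters from this section
# (assumes the `workflow` dict from the earlier sketch is in scope).
workflow["1"]["inputs"]["ckpt_name"] = "counterfeitV30_v30.safetensors"
workflow["2"]["inputs"]["text"] = ("masterpiece, best quality, official art, "
                                   "extremely detailed CG, 8k wallpaper, girl, "
                                   "solo, night, stars, sky")
workflow["3"]["inputs"]["text"] = ("lowres, bad anatomy, error, extra digit, "
                                   "fewer digits, cropped, worst quality, low quality")

# POST the workflow to a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response includes a prompt_id you can use to poll /history.
    print(json.loads(resp.read()))
```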
Summary and Suggestions
Congratulations! You've successfully created and run your first ComfyUI workflow. This text-to-image workflow is the foundation of every more complex one: whether you later want high-resolution upscaling, face swapping, pose control, or style transfer, you will achieve it by adding new nodes (such as a LoRA Loader or ControlNet) and connecting them into this existing workflow.
Suggestions for Beginners:
- Practice Connections: Familiarize yourself with this basic workflow until you can complete it without referring to the tutorial.
- Experiment with Parameters: Don't be afraid to adjust the parameters in the K Sampler; it's the best way to understand how each one affects the output (see the sketch after this list for a scripted way to do this).
- Start Simple: Master the basics first, then gradually explore more complex nodes and custom modules.
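To make the parameter-experimentation advice concrete, here is an illustrative sweep that queues the same workflow several times with different CFG values while holding the seed fixed, so only the guidance strength changes between images. It assumes the `workflow` dict and the local ComfyUI instance from the sketches above.

```python
import json
import urllib.request

def queue(wf):
    """Queue one workflow on a local ComfyUI instance and return its prompt id."""
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

# Fix the seed so the CFG value is the only thing that varies
# (assumes the `workflow` dict from the earlier sketches is in scope).
workflow["5"]["inputs"]["seed"] = 42
for cfg in (4.0, 7.0, 10.0):
    workflow["5"]["inputs"]["cfg"] = cfg
    print(f"cfg={cfg} -> prompt_id={queue(workflow)}")
```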
Unlock Full-Powered AI Creation!
Experience ComfyUI online instantly: 👉 https://market.cephalon.ai/aigc
Join our global creator community: 👉 https://discord.gg/KeRrXtDfjt
Collaborate with creators worldwide & get real-time admin support.