<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[ComfyUI]]></title><description><![CDATA[How to use ComfyUI]]></description><link>https://blog.cephalon.ai/</link><image><url>https://blog.cephalon.ai/favicon.png</url><title>ComfyUI</title><link>https://blog.cephalon.ai/</link></image><generator>Ghost 5.0</generator><lastBuildDate>Fri, 27 Feb 2026 17:45:09 GMT</lastBuildDate><atom:link href="https://blog.cephalon.ai/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[IPAdapter Complete Guide: Style Transfer & Precise Face Swapping in ComfyUI (Beginner to Advanced)]]></title><description><![CDATA[<p>If you&apos;re using ComfyUI for AI image generation, you&apos;ve probably run into this problem:</p><p>You carefully craft your prompt.<br>You switch models multiple times.<br>Yet the result still feels&#x2026; slightly off.</p><p>Want the exact color style from another image?<br>Want your generated character to actually</p>]]></description><link>https://blog.cephalon.ai/ipadapter-complete-guide-style-transfer-precise-face-swapping-in-comfyui-beginner-to-advanced/</link><guid isPermaLink="false">69a10fcd729e4a0001f1502a</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Fri, 27 Feb 2026 09:24:49 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2026/02/ComfyUI_00002_.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2026/02/ComfyUI_00002_.png" alt="IPAdapter Complete Guide: Style Transfer &amp; Precise Face Swapping in ComfyUI (Beginner to Advanced)"><p>If you&apos;re using ComfyUI for AI image generation, you&apos;ve probably run into this problem:</p><p>You carefully craft your prompt.<br>You switch models multiple times.<br>Yet the result still feels&#x2026; slightly off.</p><p>Want the exact color style from another image?<br>Want your generated character to actually resemble a reference person?<br>Or maybe you want to achieve accurate face swapping inside ComfyUI?</p><p>That&#x2019;s where <strong>IPAdapter</strong> becomes a game changer.</p><p>In this guide, we&#x2019;ll walk through:</p><ul><li>How to install IPAdapter in ComfyUI</li><li>How different IPAdapter models work</li><li>How to build a working face swap workflow</li><li>How to optimize style transfer results</li><li>Common mistakes and practical tips</li></ul><p>By the end, you&#x2019;ll be able to run IPAdapter confidently on your own.</p><hr><h2 id="what-is-ipadapter-why-is-it-more-precise-than-prompts">What Is IPAdapter? Why Is It More Precise Than Prompts?</h2><p>Let&#x2019;s explain it in simple terms:</p><blockquote>Prompts tell the model what to draw with text.<br>IPAdapter lets an image directly influence how the model draws.</blockquote><p>IPAdapter is an image-guided conditioning model. 
It extracts visual features from a reference image and injects them into the sampling process.</p><p>That means it doesn&apos;t just &#x201C;overlay style&#x201D; &#x2014; it actively influences generation during sampling.</p><h3 id="what-can-ipadapter-do">What Can IPAdapter Do?</h3><ul><li>Style transfer</li><li>Multi-image style blending</li><li>Precise face swapping</li><li>Combine with ControlNet for structure + style control</li></ul><p>If you frequently generate portraits, the face version of IPAdapter is practically essential.</p><hr><h1 id="how-to-install-ipadapter-in-comfyui">How to Install IPAdapter in ComfyUI</h1><p><strong>SEO keyword: IPAdapter ComfyUI installation tutorial</strong></p><p>Installation consists of two parts:</p><ol><li>Node extension</li><li>Model files</li></ol><hr><h2 id="step-1-install-the-ipadapter-node">Step 1: Install the IPAdapter Node</h2><p>Inside ComfyUI:</p><p>Manager &#x2192; Install Custom Nodes &#x2192; Search &#x201C;IPAdapter&#x201D;</p><p>Look for:</p><p><strong>ComfyUI_IPAdapter_plus</strong></p><p>Install it and restart ComfyUI.</p><p>If the node doesn&#x2019;t appear after restart, close and reopen ComfyUI again (cache issue).</p><hr><h2 id="step-2-download-required-model-files">Step 2: Download Required Model Files</h2><p>The node is just the framework &#x2014; the model files do the real work.</p><p>You&#x2019;ll need two types of files:</p><h3 id="1%EF%B8%8F%E2%83%A3-ipadapter-model-files">1&#xFE0F;&#x20E3; IPAdapter Model Files</h3><p>Place them in:</p><pre><code>ComfyUI/models/ipadapter
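  ip-adapter-plus-face_sd15.safetensors   (example placement only; use whichever IPAdapter model files you downloaded — the available model names are listed later in this guide)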
</code></pre><p>(Create the folder if it doesn&#x2019;t exist.)</p><hr><h3 id="2%EF%B8%8F%E2%83%A3-clip-vision-model">2&#xFE0F;&#x20E3; CLIP Vision Model</h3><p>Place it in:</p><pre><code>ComfyUI/models/clip_vision
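  CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors   (example file name only; the SD1.5 IPAdapter models expect a ViT-H/14 image encoder)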
</code></pre><p>Many users miss this step and encounter loading errors.</p><hr><h1 id="how-ipadapter-works-inside-comfyui">How IPAdapter Works Inside ComfyUI</h1><p>The core node is:</p><p><strong>IPAdapter Apply</strong></p><p>It:</p><ul><li>Receives the base model (checkpoint)</li><li>Receives CLIP Vision</li><li>Receives IPAdapter model</li><li>Receives reference image</li><li>Outputs a modified model to KSampler</li></ul><p>Essentially, it injects visual features during sampling.</p><hr><h2 id="how-to-adjust-ipadapter-parameters">How to Adjust IPAdapter Parameters</h2><p>This is what most people care about.</p><h3 id="weight">Weight</h3><p>Controls how strongly the reference image influences the result.</p><ul><li>0.5 &#x2013; 0.8 &#x2192; Natural blending</li><li>Above 1 &#x2192; Very close to reference, possible distortion</li></ul><p>Beginner recommendation: start at <strong>0.6</strong></p><hr><h3 id="noise">Noise</h3><p>Higher values reduce reference influence.</p><p>If you want maximum feature retention:</p><p>Set it close to <strong>0.01</strong></p><hr><h3 id="weight-type">Weight Type</h3><p>Common options:</p><ul><li>original</li><li>linear</li><li>channel penalty (more natural for face swap)</li></ul><p>For face swapping, <strong>channel penalty</strong> is usually the best choice.</p><hr><h1 id="which-ipadapter-model-should-you-choose">Which IPAdapter Model Should You Choose?</h1><p>Common SEO question:</p><p>&#x201C;What&#x2019;s the difference between IPAdapter models?&#x201D;</p><p>For SD1.5:</p><ul><li>ip-adapter_sd15 &#x2192; Basic style transfer</li><li>ip-adapter-plus_sd15 &#x2192; Stronger reference similarity</li><li>ip-adapter-plus-face_sd15 &#x2192; Optimized for faces</li><li>ip-adapter-full-face_sd15 &#x2192; Extracts more detailed head features</li></ul><p>If your goal is accurate face swapping, go with <strong>plus-face</strong>.</p><hr><h1 id="face-swapping-in-comfyui-using-ipadapter-full-workflow">Face Swapping in ComfyUI Using IPAdapter (Full Workflow)</h1><p><strong>SEO keyword: ComfyUI face swap tutorial</strong></p><p>Here&#x2019;s the practical logic.</p><hr><h2 id="1-basic-node-setup">1. Basic Node Setup</h2><p>You&#x2019;ll need:</p><ul><li>IPAdapter Apply</li><li>IPAdapter Model Loader</li><li>CLIP Vision Loader</li><li>Checkpoint Loader</li><li>Load Image Node</li></ul><p>Connection logic:</p><p>Model &#x2192; IPAdapter &#x2192; KSampler</p><hr><h2 id="2-choose-a-good-reference-face">2. Choose a Good Reference Face</h2><p>Best practices:</p><ul><li>Front-facing headshot</li><li>Clear lighting</li><li>No obstruction</li><li>Neutral expression</li></ul><p>Higher-quality reference = higher success rate.</p><hr><h2 id="3-add-face-detection-highly-recommended">3. Add Face Detection (Highly Recommended)</h2><p>Without masking, the entire image will be influenced.</p><p>Proper workflow:</p><ul><li>Use YOLOv8 face detection</li><li>Use SAM for segmentation</li><li>Feed mask into VAE encode</li></ul><p>This ensures only the face area is replaced.</p><hr><h2 id="4-recommended-sampling-settings">4. 
Recommended Sampling Settings</h2><p>Stable combination:</p><ul><li>Steps: 30</li><li>CFG: 6</li><li>Sampler: ddim</li><li>Denoise: 0.85</li></ul><p>If facial results look unnatural, slightly reduce denoise strength.</p><hr><h1 id="style-transfer-with-ipadapter">Style Transfer with IPAdapter</h1><p><strong>SEO keyword: ComfyUI style transfer tutorial</strong></p><p>IPAdapter is also powerful for artistic style blending.</p><p>Suggested approach:</p><ul><li>Generate base structure with a realistic model</li><li>Use an oil painting or illustration reference</li><li>Set weight around 0.7</li><li>Reduce noise below 0.1</li></ul><p>You&#x2019;ll notice:</p><p>Composition remains stable, but style shifts clearly.</p><p>For better control, combine with ControlNet for pose stability.</p><hr><h1 id="frequently-asked-questions-faq">Frequently Asked Questions (FAQ)</h1><h3 id="q1-ipadapter-node-not-showing-after-installation">Q1: IPAdapter node not showing after installation?</h3><p>Restart ComfyUI.</p><hr><h3 id="q2-face-doesn%E2%80%99t-resemble-reference">Q2: Face doesn&#x2019;t resemble reference?</h3><p>Increase weight or switch to plus-face model.</p><hr><h3 id="q3-severe-distortion-in-output">Q3: Severe distortion in output?</h3><p>Lower weight or increase noise.</p><hr><h1 id="practical-tips-from-experience">Practical Tips from Experience</h1><ul><li>Generate multiple times and compare seeds</li><li>Reference image quality matters more than parameters</li><li>Different checkpoints produce very different results</li><li>Always use face models for portraits</li></ul><hr><h1 id="why-ipadapter-is-a-must-learn-tool-in-comfyui">Why IPAdapter Is a Must-Learn Tool in ComfyUI</h1><p>If you only rely on prompts, you&#x2019;re still limited to text-based control.</p><p>IPAdapter upgrades image generation from:</p><p>Text-driven &#x2192; Image-driven</p><p>For creators working on:</p><ul><li>Portrait generation</li><li>Style recreation</li><li>AI face swaps</li><li>Commercial visual production</li></ul><p>IPAdapter is not optional &#x2014; it&#x2019;s essential.<br></p><hr><p><br><strong><strong><strong><strong>Unlock Full-Powered AI Creation!</strong></strong></strong></strong><br>Experience ComfyUI online instantly:&#x1F449; <u><a href="https://market.cephalon.ai/aigc" rel="noopener noreferrer">https://market.cephalon.ai/aigc</a></u><br>Join our global creator community:&#x1F449; <u><a href="https://discord.com/invite/KeRrXtDfjt" rel="noopener noreferrer">https://discord.gg/KeRrXtDfjt</a></u><br>Collaborate with creators worldwide &amp; get real-time admin support.<br></p>]]></content:encoded></item><item><title><![CDATA[Efficency-nodes Extend Guide: Simplify Your AI Image Workflows]]></title><description><![CDATA[<h4></h4><p>ComfyUI is flexible but gets cluttered with repetitive node connections. The Efficiency-nodes extension fixes this by packing multiple functions into 2 core node types (loaders + samplers), cutting workflow complexity and saving time. 
It also adds features like HD restoration and XY parameter comparison charts.</p><hr><h2 id="key-nodes-in-efficiency-nodes">Key Nodes in Efficiency-nodes</h2><h5 id="efficiency-loaders">Efficiency Loaders</h5>]]></description><link>https://blog.cephalon.ai/efficency-nodes-extend-guide-simplify-your-ai-image-workflows/</link><guid isPermaLink="false">6959bf50729e4a0001f14fa3</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Mon, 05 Jan 2026 10:00:08 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2026/01/ChatGPT-Image-2026-1-4--09_11_49.png" medium="image"/><content:encoded><![CDATA[<h4></h4><img src="https://blog.cephalon.ai/content/images/2026/01/ChatGPT-Image-2026-1-4--09_11_49.png" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows"><p>ComfyUI is flexible but gets cluttered with repetitive node connections. The Efficiency-nodes extension fixes this by packing multiple functions into 2 core node types (loaders + samplers), cutting workflow complexity and saving time. It also adds features like HD restoration and XY parameter comparison charts.</p><hr><h2 id="key-nodes-in-efficiency-nodes">Key Nodes in Efficiency-nodes</h2><h5 id="efficiency-loaders">Efficiency Loaders</h5><ul><li>What it does: Combines 6+ standard nodes (Checkpoint loader, CLIP encoder, Latent, VAE, prompts) into 1 node.</li><li>Variants: &quot;Efficiency Loader&quot; (basic) + &quot;Efficiency Loader (SDXL)&quot; (for SDXL models).</li><li>Extras: Supports LoRA/ControlNet stacking (easily add multiple LoRAs/ControlNets without messy wiring).</li></ul><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094227_241.png" class="kg-image" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows" loading="lazy" width="668" height="568" srcset="https://blog.cephalon.ai/content/images/size/w600/2026/01/ScreenShot_2026-01-04_094227_241.png 600w, https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094227_241.png 668w"></a></figure><h5 id="efficiency-samplers">Efficiency Samplers</h5><ul><li>What it does: An upgraded K Sampler with built-in VAE decoding + real-time preview (no extra &quot;Preview Image&quot; node needed).</li><li>Variants: &quot;K Sampler (Efficiency)&quot; (basic), &quot;Advanced Efficiency&quot; (more controls), &quot;SDXL Efficiency&quot; (for SDXL).</li><li>Extras: Works with scripts (e.g., HD upscaling) via a &quot;Script&quot; input port.</li></ul><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_093929_186.png" class="kg-image" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows" loading="lazy" width="918" height="509" srcset="https://blog.cephalon.ai/content/images/size/w600/2026/01/ScreenShot_2026-01-04_093929_186.png 600w, https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_093929_186.png 918w" sizes="(min-width: 720px) 720px"></a></figure><hr><h2 id="quick-guide-use-efficiency%E2%80%99s-xy-chart">Quick Guide: Use Efficiency&#x2019;s XY Chart </h2><p>The XY Chart helps test 2 parameters (e.g., steps + samplers) at once. 
Here&#x2019;s how:</p><p><strong>Set up the base workflow</strong></p><ol><li>Add &quot;Efficiency Loader&quot; (set model/VAE/prompts) + &quot;K Sampler (Efficiency)&quot; (keep default).</li><li>Connect their ports.</li></ol><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094638_558.png" class="kg-image" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows" loading="lazy" width="745" height="471" srcset="https://blog.cephalon.ai/content/images/size/w600/2026/01/ScreenShot_2026-01-04_094638_558.png 600w, https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094638_558.png 745w" sizes="(min-width: 720px) 720px"></a></figure><p><strong>Add the XY Chart node</strong></p><ol><li>Link &quot;Efficiency Loader&quot;&#x2019;s &quot;Dependency&quot; output to XY Chart&#x2019;s &quot;Dependency&quot; input.</li><li>Link XY Chart&#x2019;s &quot;Script&quot; output to the sampler&#x2019;s &quot;Script&quot; input.</li><li>Set &quot;Spacing&quot; (image gap) to 5; set &quot;Image Output&quot; to &quot;Plot&quot;.</li></ol><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094829_926.png" class="kg-image" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows" loading="lazy" width="746" height="469" srcset="https://blog.cephalon.ai/content/images/size/w600/2026/01/ScreenShot_2026-01-04_094829_926.png 600w, https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_094829_926.png 746w" sizes="(min-width: 720px) 720px"></a></figure><p><strong>Add parameter nodes</strong></p><ol><li>Add &quot;Steps&quot; node: Set &quot;Count=4&quot;, &quot;Start Step=5&quot;, &quot;End Step=20&quot; (test 5/10/15/20 steps).</li><li>Add &quot;Sampler Scheduler&quot; node: Set &quot;Input Count=4&quot;, pick 4 samplers (e.g., euler, dpm_sde, dpmpp_2m, lcm).</li><li>Link &quot;Steps&quot; to XY Chart&#x2019;s X input; link &quot;Sampler Scheduler&quot; to XY Chart&#x2019;s Y input.</li></ol><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_100244_897.png" class="kg-image" alt="Efficency-nodes Extend Guide: Simplify Your AI Image Workflows" loading="lazy" width="907" height="903" srcset="https://blog.cephalon.ai/content/images/size/w600/2026/01/ScreenShot_2026-01-04_100244_897.png 600w, https://blog.cephalon.ai/content/images/2026/01/ScreenShot_2026-01-04_100244_897.png 907w" sizes="(min-width: 720px) 720px"></a></figure><p><strong>Generate &amp; compare</strong></p><ol><li>Click &quot;Add Prompt to Queue&quot; &#x2014; the XY chart will show results for all parameter combinations.</li></ol><hr><p><strong><strong><strong><strong>Unlock Full-Powered AI Creation!</strong></strong></strong></strong><br>Experience ComfyUI online instantly:&#x1F449; <u><a href="https://market.cephalon.ai/aigc" rel="noopener noreferrer">https://market.cephalon.ai/aigc</a></u><br>Join our global creator community:&#x1F449; <u><a href="https://discord.com/invite/KeRrXtDfjt" rel="noopener noreferrer">https://discord.gg/KeRrXtDfjt</a></u><br>Collaborate with creators worldwide &amp; get real-time admin support.<br></p>]]></content:encoded></item><item><title><![CDATA[ComfyUI-Impact-Pack Guide: Streamline 
Your AI Image Workflow]]></title><description><![CDATA[<p>For anyone using ComfyUI to create AI images, managing and optimizing complex workflows can be a real challenge. Enter a powerful solution: <strong>ComfyUI-Impact-Pack</strong>. This tool automates many advanced yet tedious image processing tasks, making it perfect for beginners looking to boost both efficiency and output quality.</p><hr><p><strong>Overview &amp; Installation</strong></p><p>The</p>]]></description><link>https://blog.cephalon.ai/untitled-5/</link><guid isPermaLink="false">6943a5ce729e4a0001f14f74</guid><category><![CDATA[Advanced Operations in ComfyUI]]></category><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 18 Dec 2025 07:28:52 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/12/ComfyUI_temp_fvcit_00001_.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/12/ComfyUI_temp_fvcit_00001_.png" alt="ComfyUI-Impact-Pack Guide: Streamline Your AI Image Workflow"><p>For anyone using ComfyUI to create AI images, managing and optimizing complex workflows can be a real challenge. Enter a powerful solution: <strong>ComfyUI-Impact-Pack</strong>. This tool automates many advanced yet tedious image processing tasks, making it perfect for beginners looking to boost both efficiency and output quality.</p><hr><p><strong>Overview &amp; Installation</strong></p><p>The <strong>ComfyUI-Impact-Pack</strong> is a free, custom node package built specifically for ComfyUI. Its main appeal is offering <strong>plug-and-play</strong> intelligent modules. You can achieve sophisticated results without building complex logic from scratch, including:</p><ul><li><strong>Automatic Face Detection &amp; Restoration</strong>: Precisely locates faces in images and enhances their details.</li><li><strong>Advanced Mask Generation</strong>: Intelligently identifies object outlines to create perfect masks for inpainting.</li><li><strong>Detail Refinement</strong>: Performs high-quality, iterative redraws on specific small areas like faces or hands.</li></ul><p><strong>Installation via Manager</strong><br>The process is straightforward and happens entirely within ComfyUI:</p><ol><li>In the main ComfyUI interface, click the <strong>&quot;Manager&quot;</strong> button at the bottom right.</li><li>In the pop-up window, navigate to the <strong>&quot;Install Nodes&quot;</strong> tab.</li><li>Type <strong>&quot;Impact&quot;</strong> into the search box at the top right and press Enter.</li><li>Find <strong>&quot;ComfyUI Impact Pack&quot;</strong> in the results list and click the <strong>&quot;Install&quot;</strong> button on its right.</li><li>Once installed, restart ComfyUI. 
The new nodes will be available under the <strong>&quot;Impact&quot;</strong> category in the node menu.</li></ol><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/12/image1.jpeg" class="kg-image" alt="ComfyUI-Impact-Pack Guide: Streamline Your AI Image Workflow" loading="lazy" width="2000" height="908" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image1.jpeg 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image1.jpeg 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/12/image1.jpeg 1600w, https://blog.cephalon.ai/content/images/size/w2400/2025/12/image1.jpeg 2400w" sizes="(min-width: 720px) 720px"></a></figure><hr><p><strong>Core Features: The Three Detectors</strong></p><p>The intelligence of Impact-Pack comes from its built-in detectors. Understanding them helps you use the toolkit more effectively.</p><p><strong>BBOX Detector (Draws a Box)</strong></p><ul><li><strong>What it does</strong>: Quickly identifies the location of specific targets (like a face or hand) by drawing a rectangular bounding box around them.</li><li><strong>Common use</strong>: Employ the <code>bbox/face_yolov8m.pt</code> model to swiftly find all rectangular face regions in a picture.</li></ul><p><strong>Segm Detector (Traces the Outline)</strong></p><ul><li><strong>What it does</strong>: More precise than BBOX, it traces the target&apos;s complete contour to generate a shape-fitting mask.</li><li><strong>Common use</strong>: Use the <code>segm/person_yolov8n-seg.pt</code> model to get an accurate outline of a person in the scene, not just a box.</li></ul><p><strong>SAM Detector (Adds Fine Detail)</strong></p><ul><li><strong>What it does</strong>: This is a powerful segmentation AI capable of generating masks with extremely rich detail. It&apos;s typically used <strong>in tandem with the BBOX detector</strong>: first, BBOX quickly locates the general target area, then SAM performs refined segmentation within that area to produce a perfect, sharp-edged mask.</li></ul><hr><p><strong>Practical Application: Automatic Face Detection &amp; Refinement</strong></p><p>This is one of Impact-Pack&apos;s most popular features, fully automating the enhancement of facial quality in portraits.</p><p>We&apos;ll use the <strong>&quot;Face Detailer&quot;</strong> node as the core. This powerful node integrates a complete pipeline&#x2014;detection, cropping, redrawing, and compositing&#x2014;within itself. You simply connect it like you would a regular sampler.</p><p><strong>Simple Workflow Setup:</strong></p><p><strong>Place the Core Nodes</strong>:</p><ul><li>Add the <strong>&quot;Face Detailer&quot;</strong> node from the menu.</li><li>Add a <strong>&quot;Load Image&quot;</strong> node and upload a portrait.</li><li>Add a <strong>&quot;Checkpoint Loader&quot;</strong> and choose a suitable portrait model (e.g., <code>majicmixRealistic_v7.safetensors</code>).</li><li>Add two <strong>&quot;CLIP Text Encoder&quot;</strong> nodes. It&apos;s good practice to rename them &quot;Positive Prompt&quot; and &quot;Negative Prompt&quot;. Input simple quality prompts, e.g., Positive: &quot;masterpiece, best quality, portrait&quot;; Negative: &quot;blurry, deformed&quot;.</li></ul><p><strong>Connect the Detection Module</strong>:</p><ul><li>Add a <strong>&quot;Detector Loader&quot;</strong> node. 
In its <strong>&quot;model_name&quot;</strong> dropdown, select <code>bbox/face_yolov8m.pt</code> (the face detection model).</li><li>Add a <strong>&quot;SAM Loader&quot;</strong> node, leaving all parameters at their defaults.</li></ul><p>Now, connect the outputs from all the above nodes to the corresponding inputs on the <strong>&quot;Face Detailer&quot;</strong> node:</p><ul><li>Connect <code>image</code>, <code>model</code>, <code>CLIP</code>, <code>VAE</code>, <code>positive/negative</code> as you normally would.</li><li>Connect the output of the <strong>&quot;Detector Loader&quot;</strong> to the <code>bbox_detector</code> input.</li><li>Connect the output of the <strong>&quot;SAM Loader&quot;</strong> to the <code>sam_model_opt</code> input.</li></ul><p><strong>Key Parameter Settings</strong>:<br>Adjust these few key parameters in the <strong>&quot;Face Detailer&quot;</strong> node for good results:</p><ul><li><code>guide_size</code>: Set to <strong>384</strong>. This means detail enhancement triggers automatically when a detected face region is smaller than 384 pixels.</li><li><code>denoise</code>: Set to <strong>0.5</strong>. This controls the strength of redrawing; 0.5 balances detail restoration and preserving the original look well.</li><li><code>feather</code>: Set to <strong>5</strong>. This creates a more natural transition between the repaired area and the original image, avoiding hard seams.</li><li>Check the boxes for <strong>&quot;Generate Mask Only&quot;</strong> and <strong>&quot;Force Inpaint&quot;</strong> to ensure processing is focused only on the facial area.</li></ul><p><strong>Generate &amp; Preview</strong>:<br>Finally, connect the output image from the <strong>&quot;Face Detailer&quot;</strong> node to a <strong>&quot;Preview Image&quot;</strong> node. Click <strong>&quot;Add Prompt to Queue&quot;</strong>, and after a moment, you&apos;ll get a refined image where facial details (like skin texture and eye clarity) are noticeably enhanced.</p><hr><p><strong>Conclusion</strong></p><p>Think of <strong>ComfyUI-Impact-Pack</strong> as a smart assistant added to your ComfyUI workflow. By packaging complex algorithms, it simplifies professional-level image processing tasks&#x2014;like face refinement and intelligent masking&#x2014;that would otherwise require multiple steps.</p><p>For beginners, starting with the <strong>&quot;Face Detailer&quot;</strong> function is recommended, as it most directly demonstrates the extension&apos;s value. 
As you grow more comfortable, you can explore its other detectors and tools to gradually build more automated and powerful image processing pipelines.</p><hr><p><strong><strong><strong><strong>Unlock Full-Powered AI Creation!</strong></strong></strong></strong><br>Experience ComfyUI online instantly:&#x1F449; <u><a href="https://market.cephalon.ai/aigc" rel="noopener noreferrer">https://market.cephalon.ai/aigc</a></u><br>Join our global creator community:&#x1F449; <u><a href="https://discord.com/invite/KeRrXtDfjt" rel="noopener noreferrer">https://discord.gg/KeRrXtDfjt</a></u><br>Collaborate with creators worldwide &amp; get real-time admin support.<br></p>]]></content:encoded></item><item><title><![CDATA[Creating AI Animations in ComfyUI]]></title><description><![CDATA[<h2 id="a-beginner-friendly-guide-to-animatediff">A Beginner-Friendly Guide to AnimateDiff</h2><p>If you&apos;re already using Stable Diffusion to generate images and suddenly find yourself thinking, <strong>&#x201C;What if I could make these characters move?&#x201D;</strong> &#x2014; then you&#x2019;re ready to explore AnimateDiff.</p><p>Despite its technical-sounding name, AnimateDiff is surprisingly easy to understand:</p>]]></description><link>https://blog.cephalon.ai/creating-ai-animations-in-comfyui/</link><guid isPermaLink="false">693be4ed729e4a0001f14f59</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Fri, 12 Dec 2025 09:53:40 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/12/ComfyUI_00014_.png" medium="image"/><content:encoded><![CDATA[<h2 id="a-beginner-friendly-guide-to-animatediff">A Beginner-Friendly Guide to AnimateDiff</h2><img src="https://blog.cephalon.ai/content/images/2025/12/ComfyUI_00014_.png" alt="Creating AI Animations in ComfyUI"><p>If you&apos;re already using Stable Diffusion to generate images and suddenly find yourself thinking, <strong>&#x201C;What if I could make these characters move?&#x201D;</strong> &#x2014; then you&#x2019;re ready to explore AnimateDiff.</p><p>Despite its technical-sounding name, AnimateDiff is surprisingly easy to understand:<br><strong>It turns static images into animations.</strong><br>No extra training, no model conversion &#x2014; simply plug it into your ComfyUI workflow and it starts working immediately.</p><p>This guide walks you through installation, setup, workflow structure, recommended parameters, and troubleshooting tips.</p><hr><h1 id="1-what-is-animatediff-and-why-is-it-popular">1. What Is AnimateDiff and Why Is It Popular?</h1><p>Stable Diffusion has evolved rapidly over the past few years. 
With tools like LoRA and DreamBooth, generating high-quality static images has become simple.<br>The problem is that <strong>those images don&#x2019;t move</strong>.</p><p>That&#x2019;s where AnimateDiff comes in.</p><h3 id="%F0%9F%94%A5-its-core-function-make-your-model-%E2%80%9Cmove%E2%80%9D">&#x1F525; <strong>Its core function: Make your model &#x201C;move.&#x201D;</strong></h3><h3 id="key-advantages">Key Advantages:</h3><p><strong>Model-agnostic:</strong> Most T2I models can be animated directly</p><p><strong>Natural temporal consistency:</strong> No &#x201C;every frame looks different&#x201D; issue</p><p><strong>Frame interpolation and unlimited sequence length</strong></p><p><strong>Compatible with ControlNet and standard sampling workflows</strong></p><p><strong>Creator-friendly:</strong> Keep using your existing LoRAs, prompts, and checkpoints</p><p>AnimateDiff adds motion to your current models without changing how you work.</p><hr><h1 id="2-installing-animatediff-in-comfyui">2. Installing AnimateDiff in ComfyUI</h1><p>AnimateDiff works as a node extension. Follow this sequence to install it correctly.</p><hr><h2 id="21-install-animatediff-node-extension">2.1 Install AnimateDiff Node Extension</h2><p>Open <strong>ComfyUI</strong></p><p>Open the <strong>Manager</strong> panel (bottom-right corner)</p><p>Click <strong>Install Node</strong></p><p>Search for <strong>AnimateDiff</strong></p><p>Select <strong>AnimateDiff-Evolved</strong> and install it</p><blockquote><strong>AnimateDiff-Evolved</strong> is the most actively maintained and recommended version.</blockquote><hr><h2 id="22-download-at-least-one-motion-model">2.2 Download at Least One Motion Model</h2><p>AnimateDiff requires a motion model <code>.ckpt</code> file to function.</p><p>Popular models include:</p><p><code>mm_sd_v14</code></p><p><code>mm_sd_v15</code></p><p><code>mm_sd_v15_v2</code> &#x2190; <strong>most recommended</strong></p><p><code>v3_sd15_mm</code></p><p>Place the files inside the proper <code>models</code> folder under the AnimateDiff extension directory.</p><hr><h1 id="3-the-core-node-dynamic-diffusion-loader">3. 
The Core Node: Dynamic Diffusion Loader</h1><p>All AnimateDiff workflows rely on:</p><blockquote><strong>Dynamic Diffusion Loader</strong><br>Path: <code>New Node &#x2192; AnimateDiff &#x2192; Gel &#x2192; Dynamic Diffusion Loader</code></blockquote><p>This node injects motion logic into the generation pipeline.</p><hr><h2 id="31-input-ports-simplified-explanation">3.1 Input Ports (Simplified Explanation)</h2><h3 id="model"><strong>Model</strong></h3><p>Must use <strong>SD1.5 models</strong></p><p>SDXL is currently unsupported</p><h3 id="context-settings"><strong>Context Settings</strong></h3><p>Required if generating beyond default frame length</p><p>Without it, V2 motion models enforce a <strong>32-frame limit</strong></p><h3 id="dynamic-lora"><strong>Dynamic LoRA</strong></h3><p>Optional; adds extra style or motion characteristics.</p><h3 id="ad-settings"><strong>AD Settings</strong></h3><p>Advanced parameters; safe to ignore for beginners.</p><h3 id="sampling-settings"><strong>Sampling Settings</strong></h3><p>Controls interaction with the sampler.</p><h3 id="ad-keyframes"><strong>AD Keyframes</strong></h3><p>Used for advanced keyframe animation workflows.</p><hr><h2 id="32-node-properties">3.2 Node Properties</h2><p><strong>Model Name:</strong> Select a motion model such as <code>mm_sd_v15_v2.ckpt</code></p><p><strong>Scheduler:</strong> Recommended &#x2192; <code>slerp</code> or <code>slerp_linear</code></p><p><strong>Dynamic Scale:</strong> Motion intensity</p><p><code>&lt;1</code>: Smoother</p><p><code>1</code>: Stronger motion</p><p><strong>Use V2 Model:</strong> Enable only if using a V2 motion model</p><p>The node&#x2019;s <strong>Model Output</strong> connects directly to <strong>KSampler</strong>.</p><hr><h1 id="4-required-components-for-exporting-video">4. Required Components for Exporting Video</h1><p>AnimateDiff outputs image frames. To convert those frames into video, install:</p><hr><h2 id="41-videohelpersuite">4.1 videoHelperSuite</h2><p>Install via Manager.<br>Adds a <strong>&#x201C;Merge to Video&#x201D;</strong> node supporting common formats.</p><hr><h2 id="42-ffmpeg-system-level">4.2 FFmpeg (System-Level)</h2><p>FFmpeg is required for video encoding.</p><p>Installation steps:</p><p>Download FFmpeg</p><p>Extract the files</p><p>Add the <code>bin</code> directory to your system&#x2019;s <strong>PATH</strong></p><p>Ensure no Chinese characters exist in the file path</p><p>Without FFmpeg, ComfyUI cannot render animations into video.</p><hr><h1 id="5-example-workflow-anime-style-demonstration">5. Example Workflow (Anime-Style Demonstration)</h1><p>If you understand text-to-image workflows, AnimateDiff setup is simple.<br>You&#x2019;re essentially inserting a few additional nodes.</p><h3 id="recommended-workflow-structure"><strong>Recommended Workflow Structure:</strong></h3><p>Start with a standard T2I workflow</p><p>Add <strong>Dynamic Diffusion Loader</strong></p><p>Add <strong>Dynamic Diffusion Context Option</strong></p><p>Connect Checkpoint Loader &#x2192; Loader Model Input</p><p>Connect Context Option &#x2192; Loader</p><p>Loader Model Output &#x2192; KSampler</p><p>Add VAE Decoder &#x2192; Merge to Video</p><p>Feed frames into the video node</p><p>This produces a complete animation.</p><hr><h1 id="6-recommended-parameters-for-high-quality-results">6. 
Recommended Parameters for High-Quality Results</h1><h2 id="61-checkpoint-prompts">6.1 Checkpoint &amp; Prompts</h2><p><strong>Checkpoint:</strong> <code>counterfeitV30</code> or anime-style equivalents</p><p><strong>Positive prompts:</strong> Character + scene description</p><p><strong>Negative prompts:</strong> Remove blurriness, deformities, etc.</p><hr><h2 id="62-dynamic-diffusion-loader">6.2 Dynamic Diffusion Loader</h2><p>Motion Model: <code>mm_sd_v15_v2.ckpt</code></p><p>Scheduler: <code>slerp_linear</code></p><p>Dynamic Scale: <code>1</code></p><p>Use V2 Model: Enabled</p><hr><h2 id="63-context-options">6.3 Context Options</h2><p>Length: <code>16</code></p><p>Step: <code>1</code></p><p>Overlap: <code>4</code></p><p>Loop Context: Off</p><hr><h2 id="64-ksampler">6.4 KSampler</h2><p>Steps: <code>20</code></p><p>CFG: <code>7</code></p><p>Sampler: <code>dpmpp_2m</code></p><hr><h2 id="65-video-settings">6.5 Video Settings</h2><p>Batch Size: Depends on frame count<br>(Example: 48 frames &#x2248; 6 seconds @ 8fps)</p><p>Format: <code>mp4mpeg4</code></p><p>Resolution: <code>512&#xD7;512</code></p><hr><h1 id="7-generate-your-first-animation">7. Generate Your First Animation</h1><p>After confirming your settings, click <strong>Generate</strong>.<br>AnimateDiff will begin producing frames one by one.</p><p>When finished, you&#x2019;ll notice:</p><p>Natural blinking</p><p>Subtle body movement</p><p>Gentle tilting</p><p>Smooth motion transitions</p><p>The difference from a static image is dramatic.</p><hr><h1 id="8-practical-tips-for-better-animations">8. Practical Tips for Better Animations</h1><p><strong>Start small:</strong> Long clips consume a lot of VRAM</p><p><strong>Try different motion models:</strong> Each has a unique motion signature</p><p><strong>Mind your hardware:</strong> Low VRAM = more crash risk</p><p><strong>Learn from the community:</strong> Users often share LoRAs and workflows</p><hr><h1 id="9-troubleshooting-checklist">9. Troubleshooting Checklist</h1><p>If your workflow isn&#x2019;t running:</p><p>Did the Loader detect your motion model?</p><p>Is the file path correct?</p><p>Is videoHelperSuite installed?</p><p>Is FFmpeg properly added to PATH?</p><p>Are you using an <strong>SD1.5 checkpoint</strong>?</p><p>Most issues come from these points.</p><hr><h1 id="conclusion">Conclusion</h1><p>AnimateDiff brings a new dimension to ComfyUI:<br>From <strong>&#x201C;creating static images&#x201D;</strong> to <strong>&#x201C;making characters move.&#x201D;</strong></p><p>While the nodes may seem complex at first, once you produce your first animation, it becomes clear why AnimateDiff is so popular.<br>From there, you can explore longer scenes, cinematic shots, and multi-character interactions.<br><br><br><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br>Experience ComfyUI online instantly:&#x1F449; <u><a href="https://market.cephalon.ai/aigc" rel="noopener noreferrer">https://market.cephalon.ai/aigc</a></u><br>Join our global creator community:&#x1F449; <u><a href="https://discord.com/invite/KeRrXtDfjt" rel="noopener noreferrer">https://discord.gg/KeRrXtDfjt</a></u><br>Collaborate with creators worldwide &amp; get real-time admin support.</p>]]></content:encoded></item><item><title><![CDATA[SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI]]></title><description><![CDATA[<p>High-definition upscaling is a frequent requirement in image processing. 
While ComfyUI offers various methods for enlargement, many require complex node setups or yield limited results. The <strong>SD UpScale extension</strong> provides a more efficient solution by achieving high-quality upscaling through <strong>tiled processing</strong>, which saves both time and storage space. This guide</p>]]></description><link>https://blog.cephalon.ai/sd-upscale-for-high-definition-enlargement-a-practical-guide-to-boosting-image-quality-in-comfyui/</link><guid isPermaLink="false">692e6044729e4a0001f14f06</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Tue, 02 Dec 2025 03:53:57 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/12/20251201-180547.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/12/20251201-180547.jpeg" alt="SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI"><p>High-definition upscaling is a frequent requirement in image processing. While ComfyUI offers various methods for enlargement, many require complex node setups or yield limited results. The <strong>SD UpScale extension</strong> provides a more efficient solution by achieving high-quality upscaling through <strong>tiled processing</strong>, which saves both time and storage space. This guide will cover the working principles of SD UpScale, its installation process, a detailed explanation of its node parameters, and a practical case study.<br></p><h3 id="how-sd-upscale-works-why-its-ideal-for-hd-upscaling">How SD UpScale Works: Why It&apos;s Ideal for HD Upscaling</h3><p><br>The core concept behind SD UpScale is to segment the image into multiple smaller blocks, upscale each block independently, and then seamlessly stitch them back together to form the complete, enlarged image. This method offers two main advantages:</p><ul><li><strong>Preserves Image Quality:</strong> Local optimization prevents overall image distortion.</li><li><strong>Efficient Processing:</strong> Tiled processing reduces memory consumption and speeds up the generation time.It&apos;s important to note that tiling and re-painting blocks can sometimes lead to incoordination between adjacent areas. Therefore, adjusting the tile size and parameter settings is crucial to ensure global consistency.</li></ul><h3 id="installing-the-sd-upscale-extension">Installing the SD UpScale Extension</h3><p><br>SD UpScale is an extension node for ComfyUI. 
Follow these steps to install it:</p><ul><li>Open the ComfyUI interface and click the <strong>&quot;Manager&quot;</strong> button in the lower right corner.</li><li>In the pop-up window, click <strong>&quot;Install Custom Nodes,&quot;</strong> and then enter <strong>&quot;Upscale&quot;</strong> in the search bar on the top right.</li><li>Locate the <strong>&quot;SD Upscale&quot;</strong> extension in the list and click the <strong>&quot;Install&quot;</strong> button.</li><li>After the installation is complete, <strong>restart ComfyUI</strong> to start using it.Once installed, you can find the node under <strong>New Node</strong> &#x2192; <strong>Image</strong> &#x2192; <strong>Upscale</strong> &#x2192; <strong>SD UpScale</strong>.</li></ul><h3 id="case-study-using-sd-upscale-to-enlarge-a-realistic-portrait">Case Study: Using SD UpScale to Enlarge a Realistic Portrait</h3><p><br>The following steps demonstrate how to set up an SD UpScale workflow to enlarge a realistic-style portrait, assuming you have a base image ready.</p><p><strong>Step 1: Set Up the Base Workflow</strong><br>Load a Text-to-Image workflow and create a new <strong>&quot;SD UpScale&quot;</strong> node. Connect the following nodes to the SD UpScale node:</p><ul><li>The <strong>&quot;image&quot;</strong> output port of the <strong>&quot;VAE Decode&quot;</strong> node $\rightarrow$ SD UpScale node&apos;s <strong>&quot;image&quot;</strong> input port.</li><li>The <strong>&quot;model&quot;</strong> output port of the <strong>&quot;Checkpoint Loader&quot;</strong> node $\rightarrow$ SD UpScale node&apos;s <strong>&quot;model&quot;</strong> input port.</li><li>The <strong>&quot;conditioning&quot;</strong> output port of the positive and negative prompt nodes $\rightarrow$ SD UpScale node&apos;s <strong>&quot;positive&quot;</strong> and <strong>&quot;negative&quot;</strong> input ports.</li><li>The <strong>&quot;VAE&quot;</strong> output port of the <strong>&quot;Checkpoint Loader&quot;</strong> node $\rightarrow$ SD UpScale node&apos;s <strong>&quot;VAE&quot;</strong> input port.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-8.png" class="kg-image" alt="SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI" loading="lazy" width="1274" height="853" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-8.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-8.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-8.png 1274w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 2: Add Upscale Model and Preview</strong><br>Create an <strong>&quot;Upscale Model Loader&quot;</strong> node and connect its <strong>&quot;upscale_model&quot;</strong> output port to the SD UpScale node&apos;s <strong>&quot;upscale_model&quot;</strong> input port. 
Then, create a <strong>&quot;Preview Image&quot;</strong> node and connect the SD UpScale node&apos;s <strong>&quot;image&quot;</strong> output port to the Preview node&apos;s <strong>&quot;image&quot;</strong> input port.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-9.png" class="kg-image" alt="SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI" loading="lazy" width="1069" height="701" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-9.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-9.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-9.png 1069w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 3: Select Model and Prompts</strong><br>Since we are processing a realistic portrait, select a realistic-style model in the Checkpoint Loader (e.g., <code>realisticVisionV50</code>).</p><ul><li>In the <strong>Positive Prompt</strong> box, input quality descriptions such as &quot;best quality, masterpiece, white dress, lake, upper body.&quot;</li><li>In the <strong>Negative Prompt</strong> box, input phrases to avoid low quality, such as &quot;lowres, text, error, blurry.&quot;<br></li></ul><p><strong>Step 4: Set Base Parameters</strong><br>In the <strong>&quot;Empty Latent&quot;</strong> node, set the initial size of the generated image (e.g., $512 \times 768$) and set the batch size to <strong>1</strong>. In the <strong>&quot;Upscale Model Loader&quot;</strong> node, select an upscale model (e.g., <code>ESRGAN_4x.pth</code>).<br></p><p><strong>Step 5: Configure the K Sampler</strong><br>In the K Sampler, set the <strong>seed</strong> to <strong>0</strong>, the operation after running to <strong>&quot;randomize,&quot;</strong> <strong>steps</strong> to <strong>25</strong>, <strong>CFG</strong> to <strong>7</strong>, <strong>sampler</strong> to <strong>&quot;dpmpp_2m,&quot;</strong> <strong>scheduler</strong> to <strong>&quot;karras,&quot;</strong> and <strong>denoise</strong> to <strong>1</strong>.<br></p><p><strong>Step 6: Adjust SD UpScale Node Parameters</strong><br>In the SD UpScale node, set the following parameters:</p><ul><li><strong>Upscale Factor:</strong> 2</li><li><strong>Seed:</strong> 0</li><li><strong>Steps:</strong> 25</li><li><strong>CFG:</strong> 7</li><li><strong>Sampler:</strong> dpmpp_2m</li><li><strong>Scheduler:</strong> karras</li><li><strong>Denoise:</strong> 0.2</li><li><strong>Mode Type:</strong> chess</li><li><strong>Tile Width:</strong> 512</li><li><strong>Tile Height:</strong> 768</li><li><strong>Bluring:</strong> 12</li><li><strong>Tile Padding:</strong> 32</li><li><strong>Seam Fix Mode:</strong> None (Keep other seam parameters at default)</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-10.png" class="kg-image" alt="SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI" loading="lazy" width="1385" height="773" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-10.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-10.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-10.png 1385w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI 
platform.</figcaption></figure><p><strong>Step 7: Generate the Image</strong><br>Click the <strong>&quot;Queue Prompt&quot;</strong> button and wait for the processing to finish. The generated image will be displayed in the preview node, featuring clear person details without any visible seams or artifacts.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-11.png" class="kg-image" alt="SD UpScale for High-Definition Enlargement: A Practical Guide to Boosting Image Quality in ComfyUI" loading="lazy" width="1230" height="896" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-11.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-11.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-11.png 1230w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Summary: Advantages and Applicable Scenarios of SD UpScale</strong><br>SD UpScale is a powerful upscaling tool in ComfyUI, particularly suited for processing high-definition portraits, anime, or landscape images. By using tiled processing, it maintains detail richness while increasing resolution. For new users, the key is to adjust the tiling parameters and prompts based on the image type. As ComfyUI continues to update, SD UpScale may receive further optimizations; it is advisable to follow the official extension repository for the latest features.With this guide, you can quickly get started with SD UpScale and apply it to your daily image processing to achieve high-quality upscaling results.</p><p></p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling]]></title><description><![CDATA[<p><br>In the process of image generation and manipulation, a common challenge arises: when using the &quot;Img2Img&quot; function and increasing the Denoising Strength to enhance details, the image content can often change unpredictably. The Tile Model is precisely the tool designed to solve this problem. 
It is capable of</p>]]></description><link>https://blog.cephalon.ai/a-practical-guide-to-the-tile-model-in-comfyui-image-detail-restoration-and-high-definition-upscaling/</link><guid isPermaLink="false">692d693a729e4a0001f14ebe</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Mon, 01 Dec 2025 10:23:53 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/12/20251201-180540.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/12/20251201-180540.jpeg" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling"><p><br>In the process of image generation and manipulation, a common challenge arises: when using the &quot;Img2Img&quot; function and increasing the Denoising Strength to enhance details, the image content can often change unpredictably. The Tile Model is precisely the tool designed to solve this problem. It is capable of optimizing image details while maintaining the stability of the overall composition, making it ideal for high-definition image upscaling and detail restoration. This tutorial will introduce the fundamental principles of the Tile Model and provide a step-by-step operational guide to help beginner users easily set up a Tile workflow in ComfyUI.</p><h2 id="tile-model-overview-why-its-perfect-for-detail-restoration">Tile Model Overview: Why It&apos;s Perfect for Detail Restoration</h2><p><br>The core advantage of the Tile Model lies in its block-processing mechanism. It avoids the destruction of the overall structure by segmenting the image into multiple small blocks and optimizing the details within each block sequentially. This means that even when upscaling an image to a significantly higher resolution, the Tile Model can enhance details while ensuring the coherence of the original content remains intact.For instance, if you have a blurry image and use the Tile Model to boost its resolution from a base value to a higher one, you will notice a significant enhancement in image details (such as textures and edges), yet the main structure remains unchanged. This characteristic makes the Tile Model an ideal choice for processing anime, photographs, or artistic images.</p><h2 id="deep-dive-into-the-tile-preprocessor-key-parameter-explanation">Deep Dive into the Tile Preprocessor: Key Parameter Explanation</h2><p><br>The Tile Model utilizes only one preprocessor: the &quot;Tile Preprocessor&quot; (Tile Tiled Preprocessor). This node contains two main components:</p><p>Iterations: Controls the degree of blur applied to the image. A higher value results in a blurrier image. It is generally recommended to set this to 1 to prevent over-processing.</p><p>Resolution: Sets the reference resolution for the processing, which is typically set to match the height of the uploaded image.Tip for Usage: Adjust the Iterations based on the image type. 
For scenarios requiring slight blurring, you might try a higher value, but usually, keeping it at 1 is sufficient.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="884" height="643" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image.png 600w, https://blog.cephalon.ai/content/images/2025/12/image.png 884w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><h2 id="setting-up-the-tile-workflow-a-step-by-step-guide">Setting Up the Tile Workflow: A Step-by-Step Guide</h2><p><br>The following detailed steps for setting up a Tile workflow in ComfyUI are suitable for beginner users. We will assume you are familiar with the basic interface and node operations of ComfyUI.</p><p><strong>Step 1: Load the Base Workflow and Add the Preprocessor</strong><br>Navigate to the ComfyUI interface and load a Text-to-Image workflow. Create a new &quot;Tile Preprocessor&quot; node and connect it to the &quot;Load Image&quot; and &quot;Preview Image&quot; nodes. Click the &quot;choose file to upload&quot; button in the &quot;Load Image&quot; node to upload the blurry image you intend to process.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-2.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="1064" height="421" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-2.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-2.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-2.png 1064w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 2: Connect the ControlNet Nodes</strong><br>Create the &quot;ControlNet Apply&quot; and &quot;ControlNet Loader&quot; nodes and connect them. In the &quot;ControlNet Loader,&quot; select the Tile Model (e.g., <code>control_v1f_sd15_tile.pth</code>). Then, connect the &quot;image&quot; input port of the &quot;ControlNet Apply&quot; node to the &quot;image&quot; output port of the &quot;Tile Preprocessor&quot; node.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-4.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="886" height="314" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-4.png 600w, https://blog.cephalon.ai/content/images/2025/12/image-4.png 886w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 3: Set Up the Conditional Guidance</strong><br>The ControlNet Apply node serves as a positive condition for image generation. Therefore, its &quot;conditioning&quot; port needs to be connected in series between the &quot;CLIP Text Encoder&quot; and the &quot;K Sampler&quot; nodes. 
This allows the Tile Model to guide the detail optimization during the generation process.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-5.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="1897" height="1037" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-5.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-5.png 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/12/image-5.png 1600w, https://blog.cephalon.ai/content/images/2025/12/image-5.png 1897w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 4: Select the Model and Prompts</strong><br>Choose an appropriate Checkpoint Model based on the image type. For instance, when working with anime images, you might use counterfeitV30.130.safetensors.In the Positive Prompt box, enter phrases describing a high-quality image (e.g., &quot;best quality, masterpiece, smile, outdoors&quot;). In the Negative Prompt box, input phrases to avoid low quality (e.g., &quot;lowres, error, blurry&quot;).</p><p><strong>Step 5: Configure Parameters</strong><br>In the &quot;Tile Preprocessor&quot; node, set Iterations to 1 and the Resolution to match the uploaded image&apos;s height (e.g., 320). In the &quot;ControlNet Apply&quot; node, set the Strength to 0.8. In the &quot;Empty Latent&quot; node, set the desired output image dimensions (e.g., $1024 \times 1536$) and adjust the batch size.<br></p><p><strong>Step 6: Adjust K Sampler Settings</strong><br>In the &quot;K Sampler&quot; node, set the seed to 0, the operation after running to &quot;randomize,&quot; steps to 25, CFG to 7, sampler to &quot;dpmpp_2m,&quot; scheduler to &quot;karras,&quot; and denoise to 1.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-6.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="1494" height="862" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-6.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-6.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-6.png 1494w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>Step 7: Generate the Image</strong><br>Click the &quot;Queue Prompt&quot; button and wait for the processing to finish. 
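</p><p>If you prefer to trigger the same run from a script instead of the button, ComfyUI also exposes a small HTTP API. The sketch below is only illustrative: it assumes the default local server at <code>127.0.0.1:8188</code> and a workflow you have already exported from the interface in API (JSON) format &#x2014; the file name <code>tile_workflow_api.json</code> is a placeholder.</p><pre><code>import json
import urllib.request

# Load a workflow previously exported from ComfyUI in API (JSON) format.
# The file name below is just an example.
with open("tile_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Queue the workflow on the default local ComfyUI server.
# The JSON response includes a prompt_id that can be used to look up /history later.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
</code></pre><p>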
You will obtain a high-definition, upscaled image with visibly enhanced details, while the overall composition remains stable.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/12/image-7.png" class="kg-image" alt="A Practical Guide to the Tile Model in ComfyUI: Image Detail Restoration and High-Definition Upscaling" loading="lazy" width="1248" height="612" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/12/image-7.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/12/image-7.png 1000w, https://blog.cephalon.ai/content/images/2025/12/image-7.png 1248w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><h2 id="conclusion-practical-applications-of-the-tile-model">Conclusion: Practical Applications of the Tile Model<br></h2><p>The Tile Model is not just suitable for image upscaling; it is also a powerful tool for restoring details in blurry or low-quality images. With this tutorial, you can quickly get started with the Tile workflow in ComfyUI and apply it to process anime, photos, or other image types. Remember, the key is to adjust the preprocessor parameters and prompts to suit the needs of different scenarios. If you are interested in the latest features of ComfyUI, keep an eye on official updates, such as optimizations to the Tile Model or support for new nodes.Through practice, you will find the Tile Model to be a simple yet powerful tool that helps you easily achieve detail enhancement while maintaining image integrity.</p><p></p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[ComfyUI Practical Guide: Using the Inpaint Preprocessors]]></title><description><![CDATA[<p>The function and usage of Inpaint are similar to local redrawing, except that Inpaint essentially replaces the algorithm for local redrawing in the native text-to-image function. By means of a deep learning model, it analyzes the missing areas in the image and the information of surrounding pixels, intelligently predicts and</p>]]></description><link>https://blog.cephalon.ai/comfyui-practical-guide-using-the-inpaint-preprocessors/</link><guid isPermaLink="false">691d7c76729e4a0001f14e3b</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 20 Nov 2025 18:30:55 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/----2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/----2.png" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors"><p>The function and usage of Inpaint are similar to local redrawing, except that Inpaint essentially replaces the algorithm for local redrawing in the native text-to-image function. 
By means of a deep learning model, it analyzes the missing areas in the image and the information of surrounding pixels, intelligently predicts and fills in pixels that match the surrounding environment, thereby achieving natural restoration of the image.</p><h2 id="inpaint-workflow-construction">Inpaint Workflow Construction</h2><p>What differs from other ControlNet preprocessors is that Inpaint has only one preprocessor and no additional components. Meanwhile, the construction of the Inpaint workflow is quite different from other models&#x2014;since it requires a reference image, its workflow starts from the basic image generation workflow and swaps the empty latent for an encoded input image. The specific setup is as follows:</p><p><br>(1) Enter the ComfyUI interface, load the image generation workflow, delete the &quot;Empty Latent&quot; node, create a &quot;VAE Encode&quot; node, and connect the &quot;Latent&quot; output port of the &quot;VAE Encode&quot; node to the &quot;Latent&quot; input port of the &quot;K Sampler&quot; node. Then connect the &quot;VAE&quot; output port of the &quot;Checkpoint Loader&quot; node to the &quot;VAE&quot; input port of the &quot;VAE Encode&quot; node.</p><p><br>(2) Create and connect the &quot;ControlNet Apply&quot; and &quot;ControlNet Loader&quot; nodes. In the &quot;ControlNet Loader,&quot; select the Inpaint model &quot;control_v11p_sd15_inpaint_fp16.safetensors.&quot; Connect the &quot;image&quot; output port of the &quot;Inpaint Preprocessor&quot; node to the &quot;image&quot; input port of the &quot;ControlNet Apply&quot; node.</p><p>(3) In the workflow, the &quot;ControlNet Apply&quot; node acts as a positive condition to guide the generation, so connect the &quot;conditioning&quot; port of the &quot;ControlNet Apply&quot; node in series between the &quot;CLIP Text Encoder&quot; node and the &quot;K Sampler&quot; node, as shown in the figure below. With this, the Inpaint workflow is fully set up.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/---4.png" class="kg-image" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors" loading="lazy" width="415" height="279"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><h2 id="practical-operation">Practical Operation</h2><p>Although the Inpaint workflow has been set up, it still requires creating a mask image for use. Here, we&#x2019;ll walk through the Inpaint workflow step-by-step using a product scene case, and explain the case settings in detail. The specific operation steps are as follows:<br></p><p>(1) Enter the ComfyUI interface. In the &quot;Load Image&quot; node, click the &quot;choose file to upload&quot; button to upload the prepared product material image. Right-click on the &quot;Load Image&quot; node, and select the &quot;Open in mask editor&quot; option from the pop-up menu to open the mask editor window.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/---1-1.png" class="kg-image" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors" loading="lazy" width="415" height="215"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(2) In the mask editor window, use the brush to paint over the white areas of the image&#x2014;completely cover all white parts except the perfume bottle.
Click the &quot;Save to node&quot; button in the bottom right corner of the window. The masked image will then be displayed in the &quot;Load Image&quot; node.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/---2.png" class="kg-image" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors" loading="lazy" width="293" height="372"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(3) Since the image to be partially redrawn is a photorealistic e-commerce-style image: select the photorealistic Checkpoint model &#x201C;majicmixRealistic_v7.safetensors&#x201D;.</p><p><br>(4) In the positive prompt box, enter a description of the new background. Here, we input: &quot;still life, indoors, spot backdrop, pink flower, best quality, masterpiece, bottle, solo&quot;. In the negative prompt box, enter prompts for undesirable image quality: &quot;lowres, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg&quot;. Set the &quot;Strength&quot; of the &quot;ControlNet Apply&quot; node to 1, as shown in the figure below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/---3.png" class="kg-image" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors" loading="lazy" width="415" height="533"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(5) In the &quot;K Sampler&quot; node:<br>&#x2022;	Set &quot;Random Seed&quot; to 0<br>&#x2022;	Set &quot;Post-run Action&quot; to &quot;randomize&quot;<br>&#x2022;	Set &quot;Steps&quot; to 25<br>&#x2022;	Set &quot;CFG&quot; to 7<br>&#x2022;	Set &quot;Sampler&quot; to &quot;dpmpp_2m&quot;<br>&#x2022;	Set &quot;Scheduler&quot; to &quot;karras&quot;<br>&#x2022;	Set &quot;Denoise&quot; to 0.8<br>As shown in the figure below. 
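</p><p>For reference, these settings map one-to-one onto the K Sampler node if you look at the workflow in API format. The sketch below is a Python rendering of that single node entry; the numeric node references are placeholders for whatever IDs the upstream nodes have in your own graph:</p><pre><code class="language-python"># The K Sampler settings from step (5), written as an API-format node entry.
# The ["node_id", slot] pairs are placeholders for the upstream nodes in your
# graph (checkpoint model, positive/negative conditioning, encoded latent).
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "seed": 0,                 # the "randomize after run" behaviour is a UI-side setting
        "steps": 25,
        "cfg": 7,
        "sampler_name": "dpmpp_2m",
        "scheduler": "karras",
        "denoise": 0.8,            # keep this at 0.5 or above for a visible redraw
        "model": ["4", 0],
        "positive": ["12", 0],     # conditioning coming out of ControlNet Apply
        "negative": ["7", 0],
        "latent_image": ["13", 0], # latent from the VAE Encode of the product image
    },
}
</code></pre><p>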
Note that the &quot;Denoise&quot; value should not be set below 0.5, otherwise the redrawing effect will be very unnoticeable.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/---5.png" class="kg-image" alt="ComfyUI Practical Guide: Using the Inpaint Preprocessors" loading="lazy" width="332" height="300"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(6) Click the &quot;Add Prompt Queue&quot; button, and an image of the product with a new background will be generated.</p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[ComfyUI Practical Guide: Using the OpenPose Preprocessors]]></title><description><![CDATA[<p>OpenPose is a key model for controlling human poses. It can detect key points of the human body structure&#x2014;such as the position of the head, shoulders, elbows, and knees&#x2014;while ignoring detailed elements like the person&#x2019;s clothing, hairstyle, and background. By capturing the position of</p>]]></description><link>https://blog.cephalon.ai/comfyui-practical-guide-using-the-openpose-preprocessors/</link><guid isPermaLink="false">691d2d2a729e4a0001f14dae</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 20 Nov 2025 18:00:51 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/ChatGPT-Image-2025-11-19--10_50_39.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/ChatGPT-Image-2025-11-19--10_50_39.png" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors"><p>OpenPose is a key model for controlling human poses. It can detect key points of the human body structure&#x2014;such as the position of the head, shoulders, elbows, and knees&#x2014;while ignoring detailed elements like the person&#x2019;s clothing, hairstyle, and background. By capturing the position of the human structure in the frame, it restores the person&#x2019;s pose and expression.<br></p><h2 id="openpose-preprocessors">OpenPose Preprocessors</h2><p><br>OpenPose has 5 preprocessor nodes: <strong>&quot;Dense Pose Preprocessor&quot;, &quot;DW Pose Preprocessor&quot;, &quot;MediaPipe Facial Mesh Preprocessor&quot;, &quot;OpenPose Pose Preprocessor&quot;, and &quot;AnimalPose Animal Pose Preprocessor&quot;</strong>. </p><p>In the &quot;DW Pose Preprocessor&quot; and &quot;OpenPose Pose Preprocessor&quot; nodes, you can control which body parts are included in the skeleton map. The &quot;Dense Pose Preprocessor&quot; node differs from others: it uses different colors to distinguish body parts to achieve pose control. The &quot;MediaPipe Facial Mesh Preprocessor&quot; node can detect and track human faces in real time from input images or videos, then generate a dense mesh containing 468 key points. 
The &quot;AnimalPose Animal Pose Preprocessor&quot; node detects key points of animal body structures and generates corresponding skeleton maps.</p><p><br>These five nodes vary significantly in function, application scenarios, and features: </p><p>1. Dense and DW Pose Preprocessors focus on full-body pose analysis and recognition.<br>2. MediaPipe Facial Mesh Preprocessor specializes in extracting facial features.<br>3. OpenPose Pose Preprocessor provides full-body pose and key point detection.<br>4. AnimalPose Animal Pose Preprocessor is dedicated to animals.</p><h2 id="practical-operation">Practical Operation</h2><p><br>(1) Enter the ComfyUI interface, load the text-to-image workflow, create a new &quot;DW Pose Preprocessor&quot; node, and connect it to the &quot;Load Image&quot; and &quot;Preview Image&quot; nodes, as shown in the figure below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/---1.png" class="kg-image" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors" loading="lazy" width="415" height="191"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(2) Create new &quot;ControlNet Apply&quot; and &quot;ControlNet Loader&quot; nodes and connect them. In the &quot;ControlNet Loader&quot;, select the &quot;control_v11p_sd15_openpose.pth&quot; OpenPose model.</p><p><br>(3) In the workflow, the &quot;ControlNet Apply&quot; node acts as a positive condition to guide the generation. Therefore, connect the &quot;Condition&quot; port of the &quot;ControlNet Apply&quot; node between the &quot;CLIP Text Encoder&quot; node and the &quot;K Sampler&quot;, as shown in the figure below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_101721_238.png" class="kg-image" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors" loading="lazy" width="1424" height="1022" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/ScreenShot_2025-11-19_101721_238.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/ScreenShot_2025-11-19_101721_238.png 1000w, https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_101721_238.png 1424w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(4) Since we&#x2019;re generating an IP character image, select the architectural-style model &quot;IP DESIGN_3D Cute Style Model_V3.1.safetensors&quot; as the Checkpoint model.</p><p>(5) Enter the description of the refined style in the positive prompt box&#x2014;here we input: &quot;1girl, Sailor Moon, solo, earrings, upper body, portrait, looking at viewer&quot;. In the negative prompt box, enter prompts for poor image quality&#x2014;here we input: &quot;lowres, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg&quot;. 
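</p><p>To make the wiring concrete, here is roughly what steps (1) to (5) look like in API-format JSON, written as a Python dict. The node IDs are arbitrary, and the class_type string used for the DW Pose Preprocessor is an assumption based on a typical preprocessor pack, so check the names your install actually registers:</p><pre><code class="language-python"># Rough API-format sketch of the pose-control chain from steps (1)-(5).
# "DWPreprocessor" is the assumed class_type of the DW Pose Preprocessor node;
# its detection toggles are omitted here and left at their defaults.
pose_chain = {
    "10": {"class_type": "LoadImage",
           "inputs": {"image": "pose_reference.png"}},   # the uploaded reference image
    "11": {"class_type": "DWPreprocessor",
           "inputs": {"image": ["10", 0], "resolution": 768}},
    "12": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_openpose.pth"}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["4", 1],
                     "text": "1girl, Sailor Moon, solo, earrings, upper body, portrait, looking at viewer"}},
    "13": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["6", 0],
                      "control_net": ["12", 0],
                      "image": ["11", 0],    # the skeleton map produced by the preprocessor
                      "strength": 1.0}},
}
# The output of node "13" then feeds the "positive" input of the K Sampler.
</code></pre><p>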
</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_102219_004.png" class="kg-image" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors" loading="lazy" width="847" height="1097" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/ScreenShot_2025-11-19_102219_004.png 600w, https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_102219_004.png 847w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(6) For the &quot;DW Pose Preprocessor&quot; node, set all detection options to &quot;Enable&quot;, set the &quot;Resolution&quot; to 768 (this is the height of the uploaded image, so you can keep the default size of the image here). Set the image size in the &quot;Empty Latent&quot; node to 512&#xD7;768, and set the batch size for image generation to 1, as shown in the figure below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_102458_087.png" class="kg-image" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors" loading="lazy" width="350" height="478"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>(7) In the &quot;K Sampler&quot; node:<br>Set &quot;Randomize&quot; to 0<br>Set &quot;Post-run Action&quot; to &quot;Randomize&quot;<br>Set &quot;Steps&quot; to 25<br>Set &quot;CFG&quot; to 7<br>Set &quot;Sampler&quot; to &quot;dpmpp_2m&quot;<br>Set &quot;Scheduler&quot; to &quot;karras&quot;<br>Set &quot;Noise Reduction&quot; to 1</p><p>(8) Click the &quot;Add Prompt Queue&quot; button, and the IP character image with the same action as the uploaded image will be generated.<br></p><h2 id="openpose-skeleton-image">Openpose skeleton image</h2><p><br>In addition to uploading real human images, the Openpose Preprocessor node can also upload simple human action images or directly upload pre-extracted human skeleton images. 
You can download human action images directly from this website: <a href="www.posemaniacs.com/zh-Hans">www.posemaniacs.com/zh-Hans</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/ScreenShot_2025-11-19_102859_406.png" class="kg-image" alt="ComfyUI Practical Guide: Using the OpenPose Preprocessors" loading="lazy" width="2000" height="1038" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/ScreenShot_2025-11-19_102859_406.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/ScreenShot_2025-11-19_102859_406.png 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/11/ScreenShot_2025-11-19_102859_406.png 1600w, https://blog.cephalon.ai/content/images/size/w2400/2025/11/ScreenShot_2025-11-19_102859_406.png 2400w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p></p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[Scribble and Lineart in ComfyUI: A Beginner-Friendly Guide]]></title><description><![CDATA[<p>In ComfyUI, ControlNet models help users fine-tune and control how images are generated. Among them, <strong>Scribble</strong> and <strong>Lineart</strong> are two of the most commonly used edge-based sketch extraction models. They&#x2019;re ideal for hand-drawn or line-art&#x2013;style control.<br>This guide walks you through the basic concepts, how to</p>]]></description><link>https://blog.cephalon.ai/scribble-and-lineart/</link><guid isPermaLink="false">691153e2729e4a0001f14c00</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Mon, 10 Nov 2025 09:06:30 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/seo1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/seo1.png" alt="Scribble and Lineart in ComfyUI: A Beginner-Friendly Guide"><p>In ComfyUI, ControlNet models help users fine-tune and control how images are generated. Among them, <strong>Scribble</strong> and <strong>Lineart</strong> are two of the most commonly used edge-based sketch extraction models. They&#x2019;re ideal for hand-drawn or line-art&#x2013;style control.<br>This guide walks you through the basic concepts, how to choose the right preprocessor, and a simple workflow example to help beginners get started quickly.</p><hr><h2 id="1-scribble-model-intro-sketch-style-line-extraction">1. 
Scribble Model Intro: Sketch-Style Line Extraction</h2><p><strong>Scribble</strong> is an edge detection model that produces results resembling loose hand-drawn sketches.<br>Its lines are thicker and less precise, making it great for situations where you only need a rough outline and want Stable Diffusion to freely fill in the details.<br>For example, you can use it to turn a photo into a casual sketch and then generate a stylized image based on that outline.</p><hr><h2 id="2-scribble-preprocessors-four-types-and-how-to-choose">2. Scribble Preprocessors: Four Types and How to Choose</h2><p>Scribble provides four different preprocessor nodes, each with unique features and use cases:</p><ul><li><strong>FakeScribble Preprocessor</strong> &#x2013; Simulates a doodle-like effect using an approximate algorithm rather than real edge detection. Perfect for quick, rough sketch generation.</li><li><strong>Scribble Preprocessor</strong> &#x2013; Converts an image into a simplified and abstract sketch with cleaner lines.</li><li><strong>ScribbleXDoG Preprocessor</strong> &#x2013; This one uses the Extended Difference of Gaussian (XDoG) method for edge detection. The <strong>threshold</strong> setting controls how detailed the lines are &#x2014; a lower value picks up more edges (even the faint ones), while a higher value keeps things clean and minimal.</li><li><strong>ScribblePiDiNet Preprocessor</strong> &#x2013; Based on the <em>Pixel Difference Network (Pidinet)</em>, this one is good at detecting both curves and straight edges for more precise line control.</li></ul><p><strong>Recommendation:</strong> If you&#x2019;re new to ComfyUI, start with <strong>FakeScribble</strong> for its simplicity. For finer control over detail, try <strong>ScribbleXDoG</strong>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-17.png" class="kg-image" alt="Scribble and Lineart in ComfyUI: A Beginner-Friendly Guide" loading="lazy" width="1992" height="1150" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-17.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-17.png 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/11/image-17.png 1600w, https://blog.cephalon.ai/content/images/2025/11/image-17.png 1992w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="3-what-is-the-lineart-model-detailed-line-extraction">3. 
What Is the Lineart Model: Detailed Line Extraction</h2><p><strong>Lineart</strong> is another edge detection model, but it focuses on capturing clean, artistic lines.<br>Unlike <strong>Canny</strong>, which produces hard, computer-perfect outlines with uniform width, Lineart lines feel more hand-drawn &#x2014; you can see subtle variations in thickness and brush-like texture.<br>It includes two main styles: <em>realistic</em> and <em>anime</em>, allowing you to choose based on your source image type.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-20.png" class="kg-image" alt="Scribble and Lineart in ComfyUI: A Beginner-Friendly Guide" loading="lazy" width="1570" height="1176" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-20.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-20.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-20.png 1570w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="4-lineart-preprocessors-four-variants-and-when-to-use-them">4. Lineart Preprocessors: Four Variants and When to Use Them</h2><p>Lineart provides four preprocessor nodes, each suited for different artistic effects:</p><ul><li><strong>LineArt Preprocessor</strong> &#x2013; Includes a <em>Roughness</em> option to simulate the irregular texture of hand-drawn lines.</li><li><strong>LineArtStandard Preprocessor</strong> &#x2013; Controls Gaussian blur intensity via the <code>guassian_sigma</code> parameter. A smaller value results in sharper lines; a larger value makes lines smoother.</li><li><strong>AnimeLineArt Preprocessor</strong> &#x2013; Designed specifically for anime-style images, capturing clean and stylized linework.</li><li><strong>MangaAnime Preprocessor</strong> &#x2013; Emphasizes sharp outlines, ideal for manga-like results.</li></ul><p><strong>Recommendation:</strong><br>Use <strong>LineArt</strong> or <strong>LineArtStandard</strong> for real-world photos, and <strong>AnimeLineArt</strong> or <strong>MangaAnime</strong> for anime or comic images.</p><hr><h2 id="5-building-a-lineart-workflow-simple-steps-and-example">5. Building a Lineart Workflow: Simple Steps and Example</h2><p>Lineart workflows are built the same way as Scribble ones &#x2014; simply swap in a Lineart preprocessor.<br>Here&#x2019;s a basic example using anime-style character re-styling:</p><ol><li>Set up a text-to-image workflow in ComfyUI and add your chosen Lineart preprocessor node (e.g., AnimeLineArt).</li><li>Connect the ControlNet Loader and Apply nodes, and select the corresponding Lineart model.</li><li>Connect the preprocessor output to ControlNet Apply and integrate the condition port between the Text Encoder and Sampler.</li><li>Set prompts and parameters (like image size and sampling steps), then generate the image.</li></ol><p>For instance, you can upload an anime character, extract its line art using AnimeLineArt, and then change its visual style &#x2014; say, from casual attire to a fantasy outfit. Lineart helps preserve structure while allowing the model to freely reinterpret the style. 
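</p><p>To underline how small that swap is, the fragment below shows the only two entries that change in an API-format workflow when you move from a Scribble setup to an AnimeLineArt one. The class_type strings are assumptions based on a typical comfyui_controlnet_aux install, and the node IDs are arbitrary:</p><pre><code class="language-python"># Only the preprocessor node and the ControlNet model selection change when
# switching from a Scribble workflow to a Lineart one; the text encoders,
# ControlNet Apply and K Sampler stay wired exactly the same.
lineart_nodes = {
    "11": {"class_type": "AnimeLineArtPreprocessor",   # assumed class_type; check your install
           "inputs": {"image": ["10", 0], "resolution": 512}},
    "12": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_lineart.pth"}},
           # or the anime-specific lineart ControlNet model if you have it downloaded
}
</code></pre><p>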
</p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[ComfyUI Practical Guide: Using the MLSD and Depth Preprocessors]]></title><description><![CDATA[<p>In ComfyUI, preprocessors play a key role in controlling how your images are generated. This article introduces two commonly used ones&#x2014;<strong>MLSD</strong> and <strong>Depth</strong>&#x2014;and walks you through real examples to help you get started quickly.<br>Whether you&#x2019;re new to ComfyUI or looking to optimize your</p>]]></description><link>https://blog.cephalon.ai/comfyui-practical-guide-using-the-mlsd-and-depth-preprocessors-2/</link><guid isPermaLink="false">691198a3729e4a0001f14c9b</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Mon, 10 Nov 2025 09:06:12 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/seo2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/seo2.png" alt="ComfyUI Practical Guide: Using the MLSD and Depth Preprocessors"><p>In ComfyUI, preprocessors play a key role in controlling how your images are generated. This article introduces two commonly used ones&#x2014;<strong>MLSD</strong> and <strong>Depth</strong>&#x2014;and walks you through real examples to help you get started quickly.<br>Whether you&#x2019;re new to ComfyUI or looking to optimize your workflow, these tips will help you create more efficiently and with greater control.</p><hr><h2 id="1-mlsd-preprocessor-the-straight-line-extraction-specialist">1. <strong>MLSD Preprocessor: The Straight-Line Extraction Specialist</strong></h2><p>The <strong>MLSD (M-LSD) Preprocessor</strong> is designed to extract straight-line edges from an image.<br>It identifies and preserves linear features while ignoring curves&#x2014;perfect for tasks that require precise geometric boundaries, such as <strong>architectural design, interior decoration, and engineering drawings</strong>.</p><hr><h2 id="2-understanding-the-mlsd-preprocessor-node">2. <strong>Understanding the MLSD Preprocessor Node</strong></h2><p>ComfyUI includes one MLSD-related node:<br><strong>&#x201C;M-LSD Line Segment Preprocessor.&#x201D;</strong><br>This node provides two main parameters:</p><ul><li><strong>Score Threshold (0&#x2013;2):</strong> Controls the strength of line detection. The higher the value, the fewer lines will be kept.</li><li><strong>Distance Threshold (0&#x2013;20):</strong> Filters out lines that are too short, keeping only longer segments.</li></ul><hr><h2 id="3-practical-example-from-empty-room-to-finished-interior">3. <strong>Practical Example: From Empty Room to Finished Interior</strong></h2><p>Here&#x2019;s a simple walkthrough to help you use MLSD in practice:</p><ol><li>Open ComfyUI and load a text-to-image workflow. Add the <strong>&#x201C;M-LSD Line Segment Preprocessor&#x201D;</strong> node. 
Connect it to <strong>&#x201C;Load Image&#x201D;</strong> and <strong>&#x201C;Preview Image,&#x201D;</strong> then upload a raw interior photo.</li><li>Create <strong>&#x201C;ControlNet Apply&#x201D;</strong> and <strong>&#x201C;ControlNet Loader&#x201D;</strong> nodes. In the loader, select the model:<code>control_v11p_sd15_mlsd_fp16.safetensors</code>, and link the image output to the MLSD node.</li><li>Connect the <strong>&#x201C;Condition&#x201D;</strong> port from <strong>ControlNet Apply</strong> between the <strong>CLIP Text Encoder</strong> and the <strong>K Sampler</strong>.</li><li>Load a suitable checkpoint model, for example:<br><code>Interior_ModernStyle_Fine_2.0.safetensors</code>.</li><li>Enter prompts:</li></ol><ul><li><strong>Positive prompt:</strong> <code>Ceramic tiles, ceiling lights, doors, windows, sofas, fine decoration</code></li><li><strong>Negative prompt:</strong> <code>lowres, text, error, extra digit, cropped, low quality, jpeg</code></li></ul><p>6. &#xA0; Set parameters:</p><ul><li>MLSD <strong>Score Threshold</strong> and <strong>Distance Threshold</strong> = 0.1</li><li>Resolution = 512</li><li><strong>ControlNet Strength</strong> = 1</li></ul><p>7. &#xA0; In the <strong>Empty Latent</strong> node, set output size to 512&#xD7;512 and batch = 1.</p><p>8. &#xA0; Configure <strong>K Sampler</strong>: seed = 0, steps = 25, CFG = 7, sampler = <code>dpmpp_2m</code>, scheduler = <code>karras</code>, denoise = 1.</p><p>9. &#xA0; &#xA0;Click <strong>&#x201C;Add Prompt Queue&#x201D;</strong> to generate the image.</p><p>After completing these steps, you&#x2019;ll see how MLSD extracts clean structural lines from the raw photo, providing a solid base for refined interior design generation.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-22.png" class="kg-image" alt="ComfyUI Practical Guide: Using the MLSD and Depth Preprocessors" loading="lazy" width="2000" height="1115" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-22.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-22.png 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/11/image-22.png 1600w, https://blog.cephalon.ai/content/images/2025/11/image-22.png 2318w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="4-depth-preprocessor-controlling-3d-depth-and-perspective">4. 
<strong>Depth Preprocessor: Controlling 3D Depth and Perspective</strong></h2><p>The <strong>Depth Preprocessor</strong> generates a depth map from an image, showing the distance of objects in grayscale&#x2014;<strong>the closer an object is, the lighter it appears; the farther it is, the darker</strong>.<br>It&#x2019;s particularly useful for managing foreground and background relationships and enhancing the spatial depth of generated images.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-23.png" class="kg-image" alt="ComfyUI Practical Guide: Using the MLSD and Depth Preprocessors" loading="lazy" width="2000" height="862" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-23.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-23.png 1000w, https://blog.cephalon.ai/content/images/size/w1600/2025/11/image-23.png 1600w, https://blog.cephalon.ai/content/images/size/w2400/2025/11/image-23.png 2400w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="5-choosing-a-depth-preprocessor-node">5. <strong>Choosing a Depth Preprocessor Node</strong></h2><p>ComfyUI offers <strong>three Depth preprocessors</strong>, each suited to different needs:</p><ul><li><strong>LeReS Depth Preprocessor:</strong> Allows foreground/background removal. The <strong>&#x201C;Enhance&#x201D;</strong> option strengthens edge details and mid-range object separation.</li><li><strong>MiDaS Depth Preprocessor:</strong> The default option. Adjust the <strong>&#x201C;Angle&#x201D;</strong> for better depth interpretation from different viewpoints; <strong>&#x201C;Background Threshold&#x201D;</strong> separates foreground and background.</li><li><strong>Zoe Depth Preprocessor:</strong> Balances detail and stability, sitting between LeReS and MiDaS&#x2014;ideal for general use.</li></ul><hr><h3 id="conclusion"><strong>Conclusion</strong></h3><p>The <strong>MLSD</strong> and <strong>Depth</strong> preprocessors are powerful tools within ComfyUI that give you greater control over geometric structure and 3D composition.<br>By fine-tuning their parameters and combining them with the right models, you can easily move from basic outlines to complex visual designs.<br>Experiment with these tools in different projects&#x2014;you&#x2019;ll find endless creative possibilities waiting to be explored</p><p><strong><strong>Unlock Full-Powered AI Creation!</strong></strong><br><strong><strong>Experience ComfyUI online instantly:</strong></strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong><strong>Join our global creator community:</strong></strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></strong></p>]]></content:encoded></item><item><title><![CDATA[ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models]]></title><description><![CDATA[<p>When using <strong>ComfyUI</strong> for AI image generation, <strong>ControlNet</strong> plays a key role in enabling structured and controllable image outputs.<br>Among its many tools, <strong>Canny</strong> and <strong>SoftEdge</strong> are 
two of the most widely used edge detection models. They extract outlines in different ways, allowing creators to precisely guide the structure and</p>]]></description><link>https://blog.cephalon.ai/canny-and-softedge/</link><guid isPermaLink="false">690c5a6a729e4a0001f14b5b</guid><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 06 Nov 2025 08:51:55 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/ComfyUI_00029_.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/ComfyUI_00029_.png" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models"><p>When using <strong>ComfyUI</strong> for AI image generation, <strong>ControlNet</strong> plays a key role in enabling structured and controllable image outputs.<br>Among its many tools, <strong>Canny</strong> and <strong>SoftEdge</strong> are two of the most widely used edge detection models. They extract outlines in different ways, allowing creators to precisely guide the structure and details of their generated images.</p><p>This guide explains the principles, parameters, and workflow setups for both models, helping beginners quickly master their use in ComfyUI.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/2-1.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="512" height="658"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="1-introduction-to-the-canny-model">1. Introduction to the Canny Model</h2><p><strong>Canny</strong> is one of the most widely adopted and essential models in ControlNet.<br>Based on a classic edge detection algorithm, it captures image contours with great precision and uses them to guide the generation of new images.</p><p>The preprocessed output from Canny looks like a finely drawn sketch with clean outlines &#x2014; perfect for scenes that require clear structure and sharp edges.<br>For example, when converting a realistic portrait into anime style, Canny helps preserve facial proportions and overall structure.</p><h3 id="%F0%9F%93%8C-canny-preprocessor-node">&#x1F4CC; Canny Preprocessor Node</h3><p>Canny uses a single preprocessing node called the <strong>&#x201C;Canny Fine Edge Preprocessor&#x201D;</strong>, which consists of three main components:</p><ul><li><strong>Low Threshold</strong></li><li><strong>High Threshold</strong></li><li><strong>Resolution</strong></li></ul><h4 id="threshold-settings">Threshold Settings</h4><p>Thresholds control how detailed the sketch lines will appear, with values ranging from <strong>1 to 255</strong>:</p><ul><li><strong>Lower values </strong>&#x2192; more complex lines, capturing finer details.</li><li><strong>Higher values </strong>&#x2192; simpler lines, keeping only major outlines.</li></ul><p>Canny&#x2019;s &#x201C;dual-threshold&#x201D; logic works as follows:</p><ul><li><strong>Above high threshold</strong> &#x2192; strong edges, always kept.</li><li><strong>Between thresholds</strong> &#x2192; weak edges, kept only if connected to strong edges.</li><li><strong>Below low threshold</strong> &#x2192; ignored as noise.</li></ul><p>You can fine-tune the thresholds to achieve your preferred sketch complexity:<br>too detailed may lead to noisy results, while too simple can reduce 
precision.<br>A typical range is <strong>100&#x2013;200</strong>, keeping resolution consistent with the original image.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-6.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="1515" height="777" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-6.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-6.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-6.png 1515w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-8.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="1391" height="706" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-8.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-8.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-8.png 1391w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h3 id="%F0%9F%A7%A9-example-workflow-realistic-portrait-%E2%86%92-anime-style">&#x1F9E9; Example Workflow: Realistic Portrait &#x2192; Anime Style</h3><p>Here&#x2019;s how to build a Canny-based workflow in ComfyUI to turn a real portrait into anime-style artwork:</p><p><strong>1.Create Nodes</strong><br>Open ComfyUI, load your text-to-image workflow, and add a &#x201C;Canny Fine Edge Preprocessor&#x201D; node.<br>Connect it to &#x201C;Load Image&#x201D; and &#x201C;Preview Image&#x201D; nodes, then upload your source photo.</p><p><strong>2.Load ControlNet Model</strong><br>Add &#x201C;ControlNet Loader&#x201D; and &#x201C;ControlNet Apply&#x201D; nodes.<br>In the loader, select <code>control_v11p_sd15_canny</code>, and connect it to the preprocessor output.</p><p><strong>3.Integrate into Main Flow</strong><br>Connect &#x201C;ControlNet Apply&#x201D; between the &#x201C;CLIP Text Encoder&#x201D; and the &#x201C;K Sampler.&#x201D;</p><p><strong>4.Select Checkpoint Model</strong><br>Use an anime-style model, e.g., <code>counterfeitV20_v30.safetensors</code>.</p><p><strong>5.Enter Prompts</strong></p><ul><li><strong>Positive prompt:</strong><br> <code>anime, 1girl, outdoors, short hair, green shirt, depth of field...</code></li><li><strong>Negative prompt:</strong><br> <code>lowres, error, cropped, low quality...</code></li></ul><p><strong>6.Adjust Parameters</strong></p><ul><li>Threshold: 50&#x2013;80</li><li>Resolution: same as source (e.g. 
512)</li><li>Control Strength: 1</li><li>Size: 512&#xD7;512</li><li>Steps: 20 | CFG: 8</li><li>Sampler: dpmpp_2m | Scheduler: karras</li></ul><p><strong>7.Generate</strong><br>Click &#x201C;<strong>Add Prompt Queue</strong>&#x201D; to generate the anime-style result.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-12.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="1596" height="1144" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-12.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-12.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-12.png 1596w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-13.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="1054" height="526" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-13.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-13.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-13.png 1054w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><hr><h2 id="2introduction-to-the-softedge-model">2.Introduction to the SoftEdge Model</h2><p>Unlike Canny, <strong>SoftEdge</strong> extracts smoother, more natural edge transitions &#x2014; ideal for illustration, oil painting, or 3D anime-style works.<br>It generates blurred yet coherent outlines, giving the image a soft and artistic feel.</p><hr><h3 id="%E2%9A%99%EF%B8%8Fsoftedge-preprocessor-nodes">&#x2699;&#xFE0F;SoftEdge Preprocessor Nodes</h3><p>SoftEdge provides two preprocessor options:</p><ul><li><strong>HED Soft Edge Preprocessor</strong></li><li><strong>PidiNet Soft Edge Preprocessor</strong></li></ul><p>Both include:</p><ul><li><strong>Stabilize</strong> &#x2014; enhances contrast and reduces excessive blur.</li><li><strong>Resolution</strong> &#x2014; controls detail level in the output sketch.<br></li></ul><h4 id="algorithm-comparison">Algorithm Comparison</h4><!--kg-card-begin: html--><table data-start="4044" data-end="4362" class="w-fit min-w-(--thread-content-width)"><thead data-start="4044" data-end="4095"><tr data-start="4044" data-end="4095"><th data-start="4044" data-end="4059" data-col-size="sm">Preprocessor</th><th data-start="4059" data-end="4071" data-col-size="sm">Algorithm</th><th data-start="4071" data-end="4095" data-col-size="md">Features &amp; Use Cases</th></tr></thead><tbody data-start="4150" data-end="4362"><tr data-start="4150" data-end="4263"><td data-start="4150" data-end="4160" data-col-size="sm"><strong data-start="4152" data-end="4159">HED</strong></td><td data-start="4160" data-end="4197" data-col-size="sm">Holistically-Nested Edge Detection</td><td data-start="4197" data-end="4263" data-col-size="md">Produces smooth edges; best for hand-drawn or artistic styles.</td></tr><tr data-start="4264" data-end="4362"><td data-start="4264" data-end="4278" data-col-size="sm"><strong data-start="4266" 
data-end="4277">PidiNet</strong></td><td data-start="4278" data-end="4305" data-col-size="sm">Pixel Difference Network</td><td data-start="4305" data-end="4362" data-col-size="md">Extracts clearer edges; better for structured scenes.</td></tr></tbody></table><!--kg-card-end: html--><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-16.png" class="kg-image" alt="ControlNet Practical Guide: A Deep Dive into Canny and SoftEdge Models" loading="lazy" width="1319" height="885" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-16.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-16.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-16.png 1319w" sizes="(min-width: 720px) 720px"></a></figure><hr><h3 id="%F0%9F%A7%A9-example-workflow-stylized-portrait-generation">&#x1F9E9; Example Workflow: Stylized Portrait Generation</h3><p>The setup is similar to Canny &#x2014; just replace the preprocessing node with SoftEdge.<br>Here&#x2019;s how to create an &#x201C;oil painting portrait&#x201D; workflow:</p><p><strong>1.Create Nodes</strong><br>Add an &#x201C;HED Soft Edge Preprocessor,&#x201D; connect it to &#x201C;Load Image&#x201D; and &#x201C;Preview Image,&#x201D; and upload your material.</p><p><strong>2.Load Model</strong><br>Choose <code>control_v11p_sd15_softedge_ip</code> in the &#x201C;ControlNet Loader&#x201D; and link it to the preprocessor.</p><p><strong>3.Integrate into Main Flow</strong><br>Connect &#x201C;ControlNet Apply&#x201D; between &#x201C;CLIP Text Encoder&#x201D; and &#x201C;K Sampler.&#x201D;</p><p><strong>4.Select Checkpoint Model</strong><br>Use an oil-painting model like <code>SHIMILV_OilPainting_V2.1.safetensors</code>.</p><p><strong>5.Enter Prompts</strong></p><ul><li><strong>Positive:</strong><br> <code>1girl, jewelry, hanfu, outdoors, lake, red lips, upper body...</code></li><li><strong>Negative:</strong><code>lowres, error, cropped, low quality...</code></li></ul><p><strong>Set Parameters</strong></p><ul><li>Stabilize: enabled</li><li>Resolution: match source (e.g., 768)</li><li>Control strength: 1</li><li>Output: 512&#xD7;768, Steps: 25, CFG: 7, Sampler: <code>dpmpp_2m</code></li></ul><p><strong>Generate</strong><br>Click &#x201C;<strong>Add Prompt Queue</strong>&#x201D; to create an oil-painting-style portrait.</p><hr><h2 id="summary">Summary</h2><ul><li><strong>Canny</strong>: great for clear, structured images like realistic-to-anime conversions.</li><li><strong>SoftEdge</strong>: ideal for softer artistic transitions like illustrations or oil paintings.</li></ul><p>Both models give creators precise, controllable results in ControlNet.<br>Whether you&#x2019;re a beginner or an experienced artist, tuning parameters and combining models will help you find your ideal creative balance.<br><br><strong>Unlock Full-Powered AI Creation!<br>Experience ComfyUI online instantly:</strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong>Join our global creator community:</strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></p>]]></content:encoded></item><item><title><![CDATA[Complete Guide to Installing and Using ControlNet in ComfyUI 
(Beginner-Friendly)]]></title><description><![CDATA[<p>When working with <strong>Stable Diffusion (SD)</strong>, you may have noticed how unpredictable the results can be &#x2014; generating the same prompt several times often gives entirely different images.<br><strong>ControlNet</strong> was created to fix exactly that problem.</p><p>By using <strong>Conditional Generative Networks</strong>, ControlNet allows you to provide structured visual guidance &#x2014;</p>]]></description><link>https://blog.cephalon.ai/installing-and-using-controlnet/</link><guid isPermaLink="false">690b16cc729e4a0001f14aad</guid><category><![CDATA[Explanation of Common Node Operations]]></category><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 06 Nov 2025 07:52:40 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/11/ComfyUI_00025_.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/11/ComfyUI_00025_.png" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)"><p>When working with <strong>Stable Diffusion (SD)</strong>, you may have noticed how unpredictable the results can be &#x2014; generating the same prompt several times often gives entirely different images.<br><strong>ControlNet</strong> was created to fix exactly that problem.</p><p>By using <strong>Conditional Generative Networks</strong>, ControlNet allows you to provide structured visual guidance &#x2014; such as poses, sketches, or depth maps &#x2014; so the model can generate results that follow your intended structure instead of leaving everything to randomness.</p><p>This tutorial walks you through how to install and use ControlNet in <strong>ComfyUI</strong>, step by step:</p><ol><li>Install ControlNet preprocessors</li><li>Install ControlNet models</li><li>Set up and connect ControlNet nodes in ComfyUI</li></ol><hr><p>On the<a href="https://market.cephalon.ai/aigc"> Cephalon Cloud platform</a>, skip the complex installation process&#x2014;simply create a ComfyUI application to directly experience the powerful image control capabilities of the ControlNet plugin.</p><p>The platform offers multiple GPU configurations and abundant model resources to meet various creative needs. You can easily access high-performance RTX 5090 graphics cards in the cloud, benefiting from faster rendering and greater stability&#x2014;all for just $0.712 per hour. 
This makes creation more efficient and economical.</p><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/------_17624112652078.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="965" height="520" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/------_17624112652078.png 600w, https://blog.cephalon.ai/content/images/2025/11/------_17624112652078.png 965w" sizes="(min-width: 720px) 720px"></a></figure><hr><p>What Is ControlNet?</p><p>ControlNet is an extension for Stable Diffusion that adds <strong>extra control inputs</strong> to the generation process.<br>With it, you can guide your image using visual references such as:</p><ul><li>Human pose or body position</li><li>Object contours or edges</li><li>Depth maps, segmentation maps, sketches, or line art</li></ul><p>In other words, ControlNet helps the AI <strong>understand your intended composition and structure</strong>, giving you much more control over what the model generates.</p><hr><h2 id="installation-steps">Installation Steps</h2><p>To use ControlNet in <strong>ComfyUI</strong>, you&#x2019;ll need to install two parts:</p><ol><li><strong>ControlNet Preprocessors</strong> &#x2013; Convert reference images (e.g., sketches or poses) into structured data that the model can understand.</li><li><strong>ControlNet Models</strong> &#x2013; The actual models that process the structured data to influence image generation.</li></ol><p>Let&#x2019;s go through these one by one.</p><hr><h3 id="1-installing-controlnet-preprocessors">1. Installing ControlNet Preprocessors</h3><ol><li>Open <strong>ComfyUI</strong> and click the <strong>Manager</strong> button on the right sidebar.</li><li>In the <strong>ComfyUI Manager</strong> window, select <strong>Install Custom Nodes</strong>.</li><li>Use the search bar at the top right and type <strong>&#x201C;ControlNet&#x201D;</strong>.</li><li>Find the package named <strong><code>ControlNet-Aux</code></strong> and click <strong>Install</strong>.</li></ol><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-1.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1312" height="1066" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-1.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-1.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-1.png 1312w" sizes="(min-width: 720px) 720px"></a></figure><blockquote>&#x26A0;&#xFE0F; <strong>Important Notes:</strong></blockquote><ul><li><strong>Do not install</strong> the old <code>comfy_controlnet_preprocessors</code> plugin &#x2014; it&#x2019;s deprecated.</li><li>The new auxiliary preprocessors are actively maintained.</li><li>If you already installed the old one, uninstall it before installing the new version.</li></ul><p>After installation, <strong>restart ComfyUI</strong>.<br>You&#x2019;ll now see a &#x201C;ControlNet Preprocessors&#x201D; category when right-clicking to add new nodes, including popular options like <em>Canny</em>, <em>Depth</em>, and <em>Pose</em>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img 
src="https://blog.cephalon.ai/content/images/2025/11/image-2.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="874" height="543" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-2.png 600w, https://blog.cephalon.ai/content/images/2025/11/image-2.png 874w" sizes="(min-width: 720px) 720px"></a><figcaption>Plugin screenshots in the Cephalon Cloud ComfyUI image</figcaption></figure><hr><h3 id="2-installing-controlnet-models">2. Installing ControlNet Models</h3><ol><li>Go back to the <strong>ComfyUI Manager</strong> window and click <strong> Models Manager</strong>.</li><li>Search for <strong>&#x201C;ControlNet&#x201D;</strong> in the top-right bar.</li><li>Choose the model version that matches your base Stable Diffusion version (e.g., SD1.5 or SDXL).</li></ol><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/------_17624131256949.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1201" height="842" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/------_17624131256949.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/------_17624131256949.png 1000w, https://blog.cephalon.ai/content/images/2025/11/------_17624131256949.png 1201w" sizes="(min-width: 720px) 720px"></a></figure><p>You can also download the models manually:</p><p>Official model repository (Hugging Face):<br>&#x1F449; <a href="https://huggingface.co/lllyasviel/ControlNet-v1-1/">https://huggingface.co/lllyasviel/ControlNet-v1-1/</a></p><p>Download the model you need (e.g. <code>control_v11p_sd15_depth.pth</code>)</p><p>Place the file in:</p><pre><code class="language-bash">ComfyUI/models/controlnet/
</code></pre><p>Once placed, create a <strong>Load ControlNet Model</strong> node in ComfyUI &#x2014; your new models should appear automatically in the dropdown.</p><blockquote>&#x1F4A1; <strong>Tip:</strong><br>If you&#x2019;ve previously installed ControlNet models for <em>Automatic1111 WebUI</em>, ComfyUI can reuse those same files: copy (or symlink) them into <code>ComfyUI/models/controlnet/</code>, or point ComfyUI at the WebUI folders via ComfyUI&#x2019;s <code>extra_model_paths.yaml</code> config file.</blockquote><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-3.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="815" height="1090" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-3.png 600w, https://blog.cephalon.ai/content/images/2025/11/image-3.png 815w" sizes="(min-width: 720px) 720px"></a><figcaption>Pre-installed ControlNet models in the Cephalon Cloud ComfyUI image</figcaption></figure><h2 id="controlnet-11-model-version-overview">ControlNet 1.1 Model Version Overview</h2><p>In <strong>ControlNet 1.1</strong>, the number of models has increased from the previous 1.0 version to <strong>14</strong>, with a more standardized naming convention.<br>The naming rule is:<br><strong>&#x201C;Version + Model Status + Stable Diffusion Version + Model Type&#x201D;</strong><br>For example: <code>control_v11p_sd15_canny</code>.</p><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/------_17624143208039.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1109" height="691" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/------_17624143208039.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/------_17624143208039.png 1000w, https://blog.cephalon.ai/content/images/2025/11/------_17624143208039.png 1109w" sizes="(min-width: 720px) 720px"></a></figure><hr><h3 id="model-status-description">Model Status Description</h3><p>ControlNet 1.1 introduces three model statuses to help users distinguish between stability and use cases:</p><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Status</th>
<th>Meaning</th>
<th>Features &amp; Use Cases</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>p</strong></td>
<td>Production</td>
<td>Stable and recommended for most users and beginners. Naming format: <code>control_v11p_...</code>.</td>
</tr>
<tr>
<td><strong>e</strong></td>
<td>Experimental</td>
<td>Still in testing; results may vary. Suitable for research or exploration. Naming format: <code>control_v11e_...</code>.</td>
</tr>
<tr>
<td><strong>u</strong></td>
<td>Unfinished</td>
<td>Incomplete and not recommended for production use. Naming format: <code>control_v11u_...</code>.</td>
</tr>
</tbody>
</table><!--kg-card-end: html--><hr><h3 id="naming-examples">Naming Examples</h3><p>Here are some common examples to help you quickly understand version differences:</p><ul><li><code>control_v11p_sd15_canny</code> &#x2192; <strong>Version 1.1 Production</strong>, based on SD1.5, using the Canny edge detection model</li><li><code>control_v11e_sd15_shuffle</code> &#x2192; <strong>Version 1.1 Experimental</strong>, based on SD1.5, using the Shuffle (content reshuffle) model</li><li><code>control_v11u_sd15_tile</code> &#x2192; <strong>Version 1.1 Unfinished</strong> (the early Tile release), based on SD1.5, used for tiled upscaling</li></ul><hr><h2 id="using-controlnet-in-comfyui">Using ControlNet in ComfyUI</h2><p>In ComfyUI, ControlNet works through a few interconnected nodes. Here&#x2019;s how they function and connect.</p><hr><h3 id="1-controlnet-loader">1. ControlNet Loader</h3><p><strong>Path:</strong> <code>Loaders &gt; ControlNet Loader</code></p><p><strong>Purpose:</strong> Loads a ControlNet model file.</p><p><strong>Output:</strong> <code>ControlNet</code></p><p><strong>Connects to:</strong> The <code>ControlNet</code> input of the <em>ControlNet Apply</em> node.</p><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/------_17624155077545.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1192" height="437" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/------_17624155077545.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/------_17624155077545.png 1000w, https://blog.cephalon.ai/content/images/2025/11/------_17624155077545.png 1192w" sizes="(min-width: 720px) 720px"></a></figure><hr><h3 id="2-controlnet-apply">2. ControlNet Apply</h3><p><strong>Path:</strong> <code>Conditioning &gt; ControlNet Apply</code></p><p><strong>Purpose:</strong> Combines your text prompt, ControlNet model, and preprocessed image into a conditioning signal that guides image generation.</p><p><strong>Typical setup:</strong> <code>CLIP Text Encoder &#x2192; ControlNet Apply &#x2192; KSampler</code></p><p><strong>Inputs:</strong></p><ul><li>Condition (from the CLIP Text Encoder)</li><li>ControlNet (from the ControlNet Loader)</li><li>Image (from the ControlNet Preprocessor)</li></ul><p><strong>Output:</strong></p><p>Condition (connect this to the <em>positive conditioning</em> input of your sampler)</p><p>You can chain multiple <em>ControlNet Apply</em> nodes to combine multiple conditions &#x2014; for example, <strong>Pose + Edges + Depth</strong> &#x2014; giving you multi-layered control over the generation process.</p><figure class="kg-card kg-image-card"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-4.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1315" height="521" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-4.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-4.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-4.png 1315w" sizes="(min-width: 720px) 720px"></a></figure>
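<hr><p>To make the wiring concrete, here is a minimal sketch of that Loader &#x2192; Apply &#x2192; KSampler hookup written as a Python dict in ComfyUI&#x2019;s API-format workflow layout (the JSON you get from &#x201C;Save (API Format)&#x201D;). This is an illustration, not a full workflow: the node ids, file names, prompt text, and the Canny preprocessor&#x2019;s class name are assumptions and may differ in your install.</p><pre><code class="language-python"># Sketch of the ControlNet wiring in ComfyUI's API-format graph.
# Each key is a node id; an input like ["1", 0] means "output 0 of node 1".
controlnet_fragment = {
    "1": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_v11p_sd15_canny.pth"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},        # your reference picture
    "3": {"class_type": "CannyEdgePreprocessor",        # from the ControlNet-Aux pack; exact name may vary
          "inputs": {"image": ["2", 0],
                     "low_threshold": 100, "high_threshold": 200, "resolution": 512}},
    "4": {"class_type": "CLIPTextEncode",               # positive prompt
          "inputs": {"text": "a girl in a garden",
                     "clip": ["9", 1]}},                 # "9" = your checkpoint loader (not shown here)
    "5": {"class_type": "ControlNetApply",
          "inputs": {"conditioning": ["4", 0],           # Condition  (from the CLIP Text Encoder)
                     "control_net": ["1", 0],            # ControlNet (from the ControlNet Loader)
                     "image": ["3", 0],                  # Image      (from the preprocessor)
                     "strength": 1.0}},
    # A KSampler node would then take ["5", 0] as its *positive* conditioning input.
}
</code></pre><p>Chaining a second ControlNet (say, Depth on top of the edges) just means adding another ControlNet Apply node whose conditioning input is <code>["5", 0]</code> and pointing the KSampler at that node instead.</p><hr><h3 id="3-controlnet-preprocessor">3. 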
ControlNet Preprocessor</h3><p><strong>Path:</strong> <code>ControlNet Preprocessors</code></p><p><strong>Purpose:</strong> Converts an image into a structural guide (e.g., edge map, pose, depth map).</p><p><strong>Input:</strong> <code>Image</code></p><p><strong>Output:</strong> <code>Image</code></p><p><strong>Connects to:</strong> The <code>Image</code> input of the <em>ControlNet Apply</em> node.</p><p>You can preview the preprocessed result by connecting it to a &#x201C;Preview Image&#x201D; node.</p><figure class="kg-card kg-image-card"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/11/image-5.png" class="kg-image" alt="Complete Guide to Installing and Using ControlNet in ComfyUI (Beginner-Friendly)" loading="lazy" width="1283" height="567" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/11/image-5.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/11/image-5.png 1000w, https://blog.cephalon.ai/content/images/2025/11/image-5.png 1283w" sizes="(min-width: 720px) 720px"></a></figure><hr><h2 id="what%E2%80%99s-next">What&#x2019;s Next?</h2><p>ControlNet includes a wide range of model types &#x2014; such as pose, line art, segmentation, normal map, and depth.<br>In upcoming tutorials, we&#x2019;ll explore each of these with practical ComfyUI workflow examples so you can learn how to build advanced, controllable generation pipelines step by step.</p><hr><h2 id="%E2%9C%85-summary">&#x2705; Summary</h2><p>ControlNet transforms Stable Diffusion from a <em>random generator</em> into a <em>controllable creative tool</em>.<br>Whether you want to lock character poses, maintain scene structure, or refine image consistency, ControlNet is one of the most essential tools to learn.</p><p>With ComfyUI&#x2019;s visual, node-based workflow, using ControlNet becomes much easier and intuitive &#x2014; perfect for both beginners and advanced creators alike.</p><hr><p><strong>Unlock Full-Powered AI Creation!<br>Experience ComfyUI online instantly:</strong><br> <a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong>Join our global creator community: </strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></p>]]></content:encoded></item><item><title><![CDATA[Understanding and Using SD Base Models]]></title><description><![CDATA[<p>When starting your AI art journey in ComfyUI, the first and most crucial choice you&apos;ll encounter is selecting the Base Model (often called the &quot;checkpoint&quot; or &quot;large model&quot;). 
Positioned at the very beginning of a typical workflow, its placement alone hints at its fundamental</p>]]></description><link>https://blog.cephalon.ai/sd-base-models/</link><guid isPermaLink="false">69008938729e4a0001f14a72</guid><category><![CDATA[ComfyUI Basic Operations]]></category><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Wed, 29 Oct 2025 02:12:36 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/10/1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/10/1.jpeg" alt="Understanding and Using SD Base Models"><p>When starting your AI art journey in ComfyUI, the first and most crucial choice you&apos;ll encounter is selecting the Base Model (often called the &quot;checkpoint&quot; or &quot;large model&quot;). Positioned at the very beginning of a typical workflow, its placement alone hints at its fundamental role.</p><h4 id="what-is-a-base-model-think-of-it-as-the-ais-artistic-brain">What is a Base Model? Think of It as the AI&apos;s &quot;Artistic Brain&quot;</h4><p>You can think of a base model as the AI&apos;s &quot;knowledge repository&quot; and &quot;artistic style amalgamation&quot;&#x2014;the result of extensive training on massive datasets. It doesn&apos;t store thousands of images itself; instead, it contains the visual patterns and artistic principles distilled from that data.Because they encapsulate vast amounts of learned information, base model files are typically very large. Current mainstream models, often based on architectures like SD 1.5 or SDXL, usually range from around 2GB to over 7GB in size&#x2014;a direct reflection of the &quot;knowledge&quot; they contain.</p><h4 id="why-do-you-need-multiple-models-its-like-hiring-different-specialist-artists">Why Do You Need Multiple Models? It&apos;s Like Hiring Different Specialist Artists</h4><p>Once you understand that a base model represents a specific set of styles and capabilities, it becomes clear why no single model can do everything perfectly.</p><ul><li>Specializations Vary: Some models are specifically trained to excel at creating photorealistic portraits, while others shine in producing anime-style artwork. You&apos;ll find specialists in architectural visualizations, fantasy landscapes, and more.</li><li>Choose the Right Tool: This is why many AI creators build extensive libraries of different base models. It&apos;s akin to hiring a portrait painter for a portrait and a cartoonist for a comic&#x2014;in ComfyUI, you select the most suitable &quot;specialist artist&quot; (base model) for your specific creative task.</li></ul><h4 id="a-key-difference-between-sd-and-midjourney-open-ecosystem-vs-unified-service">A Key Difference Between SD and Midjourney: Open Ecosystem vs. Unified Service</h4><p>This distinction is crucial for understanding the SD ecosystem:</p><ul><li>Midjourney operates more like a single, highly capable &quot;master artist&quot; with a consistent style. You guide this artist with prompts but cannot fundamentally change its core approach.</li><li>Stable Diffusion (via ComfyUI), in contrast, provides you with an open studio filled with diverse &quot;artist brains&quot; (base models), each with unique specialties. You have the freedom to choose which artist works for you, or even combine their talents.</li></ul><h4 id="see-the-difference-a-practical-demonstration">See the Difference: A Practical Demonstration</h4><p>The best way to grasp the impact of a base model is through comparison. 
Using the exact same prompt and generation settings while only swapping the base model will yield strikingly different results.<br>For instance, if you use a base model specifically trained for realistic portraits (like <code>majicmixRealistic</code>), even a simple prompt can produce a figure with convincing skin textures and lifelike lighting. Feed that same prompt into an anime-style model, and the output will be entirely different in character.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/10/image-49.png" class="kg-image" alt="Understanding and Using SD Base Models" loading="lazy" width="831" height="412" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/image-49.png 600w, https://blog.cephalon.ai/content/images/2025/10/image-49.png 831w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p>The Key Takeaway: Selecting a base model aligned with your desired artistic style is the first&#x2014;and most critical&#x2014;step toward successfully generating your envisioned image.<br></p><p><strong>Unlock Full-Powered AI Creation!<br>Experience ComfyUI online instantly:</strong><br><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong>Join our global creator community:</strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></p>]]></content:encoded></item><item><title><![CDATA[A Deep Dive into the 3 Core Techniques for Controlling Prompts in ComfyUI]]></title><description><![CDATA[<p>When creating images in ComfyUI, your prompts are the main language for communicating with the AI model. How can you precisely express your ideas to generate images that match your vision? Mastering these three core techniques will significantly boost your control over the results.</p><p><em><strong>Technique 1: Control Element Importance with</strong></em></p>]]></description><link>https://blog.cephalon.ai/a-deep-dive-into-the-3-core-techniques-for-controlling-prompts-in-comfyui-2/</link><guid isPermaLink="false">6900814f729e4a0001f14a07</guid><category><![CDATA[ComfyUI Basic Operations]]></category><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Tue, 28 Oct 2025 09:02:27 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/10/20251028-170131.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/10/20251028-170131.jpeg" alt="A Deep Dive into the 3 Core Techniques for Controlling Prompts in ComfyUI"><p>When creating images in ComfyUI, your prompts are the main language for communicating with the AI model. How can you precisely express your ideas to generate images that match your vision? 
Mastering these three core techniques will significantly boost your control over the results.</p><p><em><strong>Technique 1: Control Element Importance with Word Order</strong></em><br>Many beginners overlook a crucial detail: Words placed closer to the front of your prompt are generally treated as more important by the AI.<br>If you find a specific element isn&apos;t appearing prominently in your generated image, besides using weight brackets, the most straightforward method is to **move it toward the beginning of your prompt.</p><p><br><em>Practical Example</em></p><ul><li><strong>Initial Prompt:</strong> <code>masterpiece, best quality, girl, shining eyes, pure girl, solo, long hair... (details omitted) ...teddy bear</code></li><li><strong>Result:</strong> The &quot;teddy bear,&quot; mentioned at the very end, was barely noticeable or ignored.</li><li><strong>Adjusted Prompt:</strong> <code>masterpiece, best quality, teddy bear, girl, shining eyes... (other words follow)...</code></li><li><strong>Result:</strong> By moving &quot;teddy bear&quot; to the front, its presence and clarity in the image improved dramatically.<br><em><strong>Key Takeaway:</strong></em> Place your most important subjects and core characteristics at the beginning of your prompt.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/10/image-47.png" class="kg-image" alt="A Deep Dive into the 3 Core Techniques for Controlling Prompts in ComfyUI" loading="lazy" width="978" height="360" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/image-47.png 600w, https://blog.cephalon.ai/content/images/2025/10/image-47.png 978w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><em><strong>Technique 2: Isolate Element Descriptions with Prompt Comments</strong></em><br>Sometimes, even after adjusting weights, certain descriptions (especially colors) can &quot;bleed&quot; and affect the entire scene. For instance, a word describing clothing color might incorrectly apply to hair or the background.<br>This is where <strong>prompt comments</strong> come in handy, allowing you to create isolated description zones for different subjects. The basic format is: <code>subject\(comment1, comment2)</code></p><p><br><em><strong>Practical Example</strong></em></p><ul><li><strong>Initial Prompt:</strong> <code>1girl, silver hair, blue eyes, (yellow business suit:1.4), slim body... black handbag...</code></li><li><strong>Result:</strong> Even though the prompt specified a &quot;black handbag,&quot; the highly weighted &quot;yellow business suit&quot; often caused the bag to render in yellow too.</li><li><strong>Adjusted Prompt:</strong> <code>1girl\(silver hair, blue eyes, (yellow business suit:1.4)), slim body... 
black handbag...</code></li><li><strong>Result:</strong> By encapsulating the girl&apos;s description (including her yellow suit) within comments, the handbag correctly appeared black, isolated from the subject&apos;s color influence.<br><em><strong>Key Takeaway:</strong></em> Use the <code>\( )</code> comment syntax to isolate descriptions when you need to assign independent attributes to different parts of your image.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.cephalon.ai/content/images/2025/10/image-48.png" class="kg-image" alt="A Deep Dive into the 3 Core Techniques for Controlling Prompts in ComfyUI" loading="lazy" width="832" height="542" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/image-48.png 600w, https://blog.cephalon.ai/content/images/2025/10/image-48.png 832w" sizes="(min-width: 720px) 720px"><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong><em>Technique 3: Integrate a Translator Node - Stop App Switching</em></strong><br>For non-native English speakers, constantly switching between translation apps and ComfyUI to craft prompts can be a cumbersome workflow. While some plugins offer dictionary-based translation, they fail with words outside their predefined lists.<br>You can solve this by installing the translation node from the <strong>AlekPet Node Pack</strong>, which handles Chinese-to-English translation directly within your ComfyUI workflow.</p><p><em><strong>Setup Steps</strong></em></p><p><strong>1. Install the Node:</strong></p><blockquote>Click &quot;Manager&quot; at the bottom right of the ComfyUI interface.</blockquote><blockquote>In the pop-up window, click &quot;Install Node,&quot; search for &quot;AlekPet.&quot;</blockquote><blockquote>Find the node pack, click &quot;Install,&quot; and restart ComfyUI after installation.</blockquote><p><strong>2. Build the Translation Workflow:</strong></p><blockquote>From the node menu, navigate to &quot;Add Node &gt; AlekPet Nodes &gt; Text &gt; Translate Text (Argos Translate).&quot;</blockquote><blockquote>To preview the translation, it&apos;s helpful to also add a &quot;Preview Text&quot; node (found under &quot;Add Node &gt; AlekPet Nodes &gt; Extras&quot;).</blockquote><p><strong>3. Configure and Connect:</strong></p><blockquote>In the &quot;Translate Text&quot; node, set the source language to Chinese (zh) and the target language to English (en).</blockquote><blockquote>Connect the node&apos;s &quot;text&quot; output to the &quot;text&quot; input of the &quot;Preview Text&quot; node.</blockquote><p><strong>4. Translate and Use the Result:</strong></p><blockquote>Type your Chinese prompt into the &quot;Translate Text&quot; node&apos;s input box, e.g., &quot;&#x6700;&#x4F73;&#x8D28;&#x91CF;&#xFF0C;&#x6770;&#x4F5C;&#xFF0C;1&#x5973;&#x751F;&#xFF0C;&#x886C;&#x886B;&#xFF0C;&#x725B;&#x4ED4;&#x88E4;&#xFF0C;&#x957F;&#x53D1;&quot; (best quality, masterpiece, 1girl, shirt, jeans, long hair).</blockquote><blockquote>Click &quot;Queue Prompt&quot; to execute the translation. The translated English text will appear in the &quot;Preview Text&quot; node.</blockquote><p>To use this translated text for image generation, connect it to a &quot;CLIP Text Encode&quot; node. 
By default, this node lacks a direct text input. <strong>Right-click on the &quot;CLIP Text Encode&quot; node, select &quot;Convert To Input &gt; Convert Text to Input&quot;</strong> &#x2013; this adds a &quot;text&quot; input port, allowing you to connect it to the &quot;Preview Text&quot; node&apos;s output.</p><p><br><em><strong>Key Takeaway:</strong></em> The translator node seamlessly integrates Chinese prompts into your workflow, significantly improving efficiency and prompt accuracy for non-English users.<br></p><p><strong>Unlock Full-Powered AI Creation!<br>Experience ComfyUI online instantly:</strong><br> <a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong>Join our global creator community: </strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></p>]]></content:encoded></item><item><title><![CDATA[Five Practical Tips for Mastering Prompt Weights]]></title><description><![CDATA[<p>When writing prompts, you can influence the local effects in an image by adjusting the weights of words in the prompt. This is typically done using different symbols and numbers, as follows.</p><p><strong>1.Adjusting weights with curly braces &quot;{}&quot;</strong><br>If you add &#x201C;{}&#x201D; to a word, you can</p>]]></description><link>https://blog.cephalon.ai/mastering-prompt-weights/</link><guid isPermaLink="false">68f74ab7729e4a0001f1492c</guid><category><![CDATA[ComfyUI Basic Operations]]></category><dc:creator><![CDATA[Willow]]></dc:creator><pubDate>Thu, 23 Oct 2025 12:00:15 GMT</pubDate><media:content url="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-21--17_14_34.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-21--17_14_34.png" alt="Five Practical Tips for Mastering Prompt Weights"><p>When writing prompts, you can influence the local effects in an image by adjusting the weights of words in the prompt. This is typically done using different symbols and numbers, as follows.</p><p><strong>1.Adjusting weights with curly braces &quot;{}&quot;</strong><br>If you add &#x201C;{}&#x201D; to a word, you can increase its weight by a factor of 1.05 to enhance its presence in the image.</p><p><strong>2.Adjusting weights with parentheses &quot;()&quot;</strong><br>If you add &#x201C;()&#x201D; to a word, you can increase its weight by a factor of 1.1.</p><p><strong>3.Adjusting weights with double parentheses &quot;(())&quot;</strong><br>If you use double parentheses, the weights are multiplied, increasing the word&apos;s weight to 1.21 times (1.1 &#xD7; 1.1). You can nest these up to three levels (i.e., &quot;((()))&quot;), for a maximum multiplier of 1.331 (1.1 &#xD7; 1.1 &#xD7; 1.1).</p><p>For example, when generating an image with the prompt &quot;1girl, shining eyes, pure girl, (full body:0.5), luminous petals, short hair, Hidden in the light yellow flowers, Many flying drops of water, Many scattered leaves, branch, angle, contour deepening, cinematic angle&quot;, you get the first image shown on the bottom.<br></p><p>However, if you apply three levels of nesting to &quot;Many flying drops of water&quot; (i.e., &quot;(((Many flying drops of water)))&quot;), you get the second image shown on the bottom. 
As you can see, the number of water drops has significantly increased.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--16_50_55.png" class="kg-image" alt="Five Practical Tips for Mastering Prompt Weights" loading="lazy" width="1024" height="1536" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/ChatGPT-Image-2025-10-20--16_50_55.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/10/ChatGPT-Image-2025-10-20--16_50_55.png 1000w, https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--16_50_55.png 1024w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--16_50_57.png" class="kg-image" alt="Five Practical Tips for Mastering Prompt Weights" loading="lazy" width="1024" height="1536" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/ChatGPT-Image-2025-10-20--16_50_57.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/10/ChatGPT-Image-2025-10-20--16_50_57.png 1000w, https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--16_50_57.png 1024w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>4.Adjusting weights with square brackets &quot;[]&quot;</strong><br>The symbols introduced earlier all add weight. If you want to decrease the weight, you can use square brackets to reduce the prominence of that word in the image. When &#x201C;[]&#x201D; is added, the inherent weight of the word is reduced by 0.9. Similarly, you can use up to three nested square brackets.</p><p><br>For example, the first image below was generated using the prompt &apos;1girl, shining eyes, pure girl,(full body:0.5),<strong>(((falling leaves)))</strong>,luminous petals, short hair, Hidden in the light yellow flowers, branch, angle, contour deepening, cinematic angle&apos;. </p><p>The second image below shows the effect after applying three nested square brackets <strong>&quot;[[[]]]&quot;</strong> to the prompt &apos;falling leaves&apos;. 
It is clearly visible that the number of falling leaves has decreased.</p><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW"><img src="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--17_04_46.png" class="kg-image" alt="Five Practical Tips for Mastering Prompt Weights" loading="lazy" width="1024" height="1536" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/ChatGPT-Image-2025-10-20--17_04_46.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/10/ChatGPT-Image-2025-10-20--17_04_46.png 1000w, https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--17_04_46.png 1024w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://discord.gg/MSEkCDfNSW"><img src="https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--17_04_48.png" class="kg-image" alt="Five Practical Tips for Mastering Prompt Weights" loading="lazy" width="1024" height="1536" srcset="https://blog.cephalon.ai/content/images/size/w600/2025/10/ChatGPT-Image-2025-10-20--17_04_48.png 600w, https://blog.cephalon.ai/content/images/size/w1000/2025/10/ChatGPT-Image-2025-10-20--17_04_48.png 1000w, https://blog.cephalon.ai/content/images/2025/10/ChatGPT-Image-2025-10-20--17_04_48.png 1024w" sizes="(min-width: 720px) 720px"></a><figcaption>This image was created using ComfyUI on the Cephalon AI platform.</figcaption></figure><p><strong>5.Adjusting weights with colon &#x201C;:&#x201D;</strong><br>In addition to using the brackets above, you can also use a colon followed by a number to modify weights. For example, (fractal art:1.6) gives &#x201C;fractal art&#x201D; 1.6 times its normal weight.</p><p><strong><em>Tips and Strategies for Adjusting Weights</em></strong><br><em>Tips for Adjusting Weights</em><br>After selecting a word in the positive or negative prompts, hold down the Ctrl key and press the up or down arrow keys to quickly add brackets to the word and adjust its weight.</p><p><br><em>Strategies for Adjusting Weights</em><br>When adjusting weights, you can first generate an image with no weights assigned to the positive prompts. Then, based on the image&apos;s effects, increase or decrease the weights of certain words to precisely modify the image&apos;s effect. However, be careful not to reduce a weight so much that the element disappears; increasing weights too much can completely alter the image&apos;s overall appearance.</p><p><strong>Unlock Full-Powered AI Creation!<br>Experience ComfyUI online instantly:</strong><br> <a href="https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW" rel="noopener noreferrer">https://market.cephalon.ai/share/register-landing?invite_id=RS3EwW</a><br><strong>Join our global creator community: </strong><br><a href="https://discord.gg/MSEkCDfNSW" rel="noopener noreferrer">https://discord.gg/MSEkCDfNSW</a><br><strong>Collaborate with creators worldwide &amp; get real-time admin support.</strong></p>]]></content:encoded></item></channel></rss>