Stable Diffusion Cheat Sheet: Troubleshooting & Optimization

May 04, 2023

Updated March 2026. The original version of this cheat sheet was written for SD 1.5 in May 2023. Almost everything has changed since then -- new architectures (SDXL, SD 3.5, Flux), new UIs (ComfyUI), new hardware (RTX 5090), and a complete reversal on negative prompt philosophy. This is the current version.

This is my working reference for Stable Diffusion parameters. Not a tutorial -- just the settings I reach for when things aren't working or when I want to push quality.

Which Model to Use

This is the first decision now, and it matters more than any parameter tweak.

| Model | Best For | Resolution | Notes |
|---|---|---|---|
| Flux 2 | Photorealism, prompt adherence | 1024x1024+ | Best open-weight model for photorealism in 2026. Integrated into Adobe Photoshop [1] |
| SDXL | General use | 1024x1024 | Massive ecosystem of fine-tunes: Juggernaut XL, Realistic Vision, DreamShaper |
| SD 3.5 Large | Top quality (Stability's flagship) | 1024x1024 | MMDiT architecture. SD 3.0 was deprecated April 2025 [2] |
| SDXL Lightning | Speed | 1024x1024 | 2-8 step generation. Better quality than SDXL Turbo at higher resolution [3] |
| SD 1.5 | Legacy workflows | 512x512 | Huge fine-tune library but being phased out. SD 2.0/2.1 officially deprecated |

If you're starting fresh: Flux 2 for photorealism, SDXL for everything else. SD 3.5 is good but the ecosystem is smaller.

Which UI to Use

| UI | Best For |
|---|---|
| ComfyUI | Power users. Node-based, better VRAM management, ~15% faster, best Flux support. Industry standard for serious work as of 2025 [4] |
| Automatic1111 | Beginners. Simpler interface, huge extension library. Still works fine for SDXL |
| Fooocus | One-click generation. Minimal configuration. Good for quick results |

I use ComfyUI. The learning curve is steeper (expect 10-20 hours to get comfortable), but the VRAM management alone is worth it -- it runs SDXL on 8GB where A1111 crashes.

Samplers

The sampler debate is mostly settled.

Go-to choices:

  • DPM++ 2M Karras -- best speed-to-quality ratio. This is my default for almost everything.
  • DPM++ SDE Karras -- slightly better at low step counts. Good when you're iterating fast.
  • Euler a -- still reliable. More variety in outputs, good for exploration.

When to switch:

  • Lack of diversity in outputs? Try DPM++ SDE or Euler a.
  • Artifacts or oversaturation? Try DPM++ 2M Karras or plain Euler.
  • Need speed above all? Euler a or DPM++ 2M (non-Karras).
  • Want maximum quality? DPM++ 3M SDE Karras or UniPC.

Step counts: 20-30 steps for most samplers. Lightning models need only 2-8.
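The "when to switch" heuristics above can be sketched as a simple lookup. This is a hypothetical helper -- SAMPLER_FIXES and suggest_sampler are names made up here, and the mappings just encode the bullets. (For reference, in diffusers the "DPM++ 2M Karras" sampler corresponds to DPMSolverMultistepScheduler with use_karras_sigmas=True.)

```python
# Hypothetical lookup encoding the sampler-switching heuristics above.
# Nothing here is a library API -- it's the bullet list as data.

SAMPLER_FIXES = {
    "low_diversity": ["DPM++ SDE Karras", "Euler a"],
    "artifacts": ["DPM++ 2M Karras", "Euler"],
    "need_speed": ["Euler a", "DPM++ 2M"],
    "max_quality": ["DPM++ 3M SDE Karras", "UniPC"],
}

def suggest_sampler(symptom: str, default: str = "DPM++ 2M Karras") -> str:
    """First suggested sampler for a symptom; the all-round default otherwise."""
    return SAMPLER_FIXES.get(symptom, [default])[0]
```

The default mirrors the advice above: when nothing is obviously wrong, stay on DPM++ 2M Karras.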

CFG (Classifier-Free Guidance)

CFG controls how strictly the model follows your prompt versus its own interpretation -- higher means tighter adherence.

| Range | Effect |
|---|---|
| 1-4 | Very creative, loose interpretation. Often incoherent |
| 5-7 | Good balance for most work |
| 7-10 | Strong prompt adherence. Sweet spot for SDXL photorealism |
| 10-15 | Risk of artifacts and overcooked colors |
| 15+ | Almost always too much. Artifacts guaranteed |

Note: SD 3.5 uses a different guidance mechanism. The CFG concept still applies but the scale behaves differently -- start lower (3-5) and adjust.
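The table reduces to a per-model starting range. A minimal sketch, assuming the families and ranges above -- CFG_RANGES and clamp_cfg are hypothetical names, and the ranges are this cheat sheet's rules of thumb, not model-enforced limits:

```python
# Rule-of-thumb CFG starting ranges from the table above.
CFG_RANGES = {
    "sd15": (5.0, 7.0),   # balanced default
    "sdxl": (7.0, 10.0),  # photorealism sweet spot
    "sd35": (3.0, 5.0),   # different guidance mechanism: start lower
}

def clamp_cfg(model: str, cfg: float) -> float:
    """Pull a CFG value back into the model's recommended starting range."""
    lo, hi = CFG_RANGES[model]
    return min(max(cfg, lo), hi)
```

So a CFG of 15 on SDXL gets pulled back to 10, and 7 on SD 3.5 gets pulled down to 5 -- starting points to adjust from, not hard caps.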

Resolution

The days of 512x512 are over.

| Model | Native Resolution | Practical Range |
|---|---|---|
| SD 1.5 | 512x512 | 512x512 to 768x768 |
| SDXL | 1024x1024 | 1024x1024 (standard), 1024x768, 768x1024 |
| SD 3.5 | 1024x1024 | 1024x1024+ |
| Flux | 1024x1024 | 1024x1024+, 4K possible on high-end GPUs |

Going above the native resolution risks artifacts and composition issues. Use hi-res fix or upscaling instead of generating at 2048x2048 directly.
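The two checks implied above can be sketched as small helpers (the function names are made up here). The hard requirement for latent-space models is that each side be a multiple of 8 (the VAE downsampling factor); snapping to multiples of 64 is a common, safer convention:

```python
def snap_resolution(width: int, height: int, multiple: int = 64) -> tuple[int, int]:
    """Round each side to the nearest multiple. 8 is the hard requirement
    for SD-family latents; 64 is the usual safe granularity."""
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

def exceeds_native(width: int, height: int, native: int = 1024) -> bool:
    """True when the pixel count goes beyond the model's native budget --
    the point where hi-res fix / upscaling beats direct generation."""
    return width * height > native * native
```

For example, a requested 1000x760 snaps to the standard 1024x768, while 2048x2048 trips the native-budget check and should go through hi-res fix instead.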

Clip Skip

Less relevant than it used to be.

  • SD 1.5: Clip skip 1-2 matters a lot. Anime models often use clip skip 2.
  • SDXL: Uses dual text encoders (CLIP + OpenCLIP). Clip skip is mostly ignored -- the architecture handles it differently.
  • SD 3.5 / Flux: Not applicable in the same way. These models use transformer-based text encoding.

If you're on SDXL or newer: don't worry about clip skip. If you're on SD 1.5: keep it at 1 for photorealism, 2 for anime.

Negative Prompts

The philosophy has flipped. In 2023, the advice was to use long negative prompt lists. In 2026, the consensus is: start with nothing and add only what you need to fix.

Why the change:

  • SDXL and Flux understand natural language much better than SD 1.5
  • Long negative prompts can actually restrict creativity and produce worse results
  • "bad anatomy" is too vague to be useful. "ugly" doesn't work because SD wasn't trained on labeled "ugly" images
  • Some models perform demonstrably worse with long negatives [5]

Current approach:

  1. Generate without any negative prompt first.
  2. If you see a specific problem (extra fingers, blurry background), add a targeted negative for that.
  3. Use emphasis weighting: (blurry:1.3) instead of just blurry.
  4. Keep it short -- 5-10 terms max.
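The (term:weight) emphasis syntax belongs to A1111/ComfyUI-style prompt parsers, not the model itself. A minimal sketch of a builder for such strings -- weighted_negative is a name invented here:

```python
def weighted_negative(terms: dict[str, float]) -> str:
    """Build an A1111/ComfyUI-style negative prompt, emitting (term:weight)
    emphasis only when the weight differs from 1.0. Per the advice above,
    keep the dict short -- a handful of targeted terms."""
    return ", ".join(
        term if weight == 1.0 else f"({term}:{weight})"
        for term, weight in terms.items()
    )
```

For example, weighted_negative({"blurry": 1.3, "extra fingers": 1.0}) produces "(blurry:1.3), extra fingers".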

GPU Quick Reference

| GPU | VRAM | Good For |
|---|---|---|
| RTX 3060 12GB | 12GB | SD 1.5, basic SDXL |
| RTX 4070 Ti | 12GB | SDXL, some Flux |
| RTX 4090 | 24GB | Everything. The workhorse |
| RTX 5090 | 32GB | Everything including 4K and batch generation |
| 8GB cards | 8GB | Minimum viable. ComfyUI helps with VRAM management |

The 24GB mark is where things get comfortable for SDXL and Flux without constant VRAM juggling.
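The VRAM thresholds follow from simple arithmetic on parameter counts. A rough sketch -- weights_gib is a made-up helper, and the outputs are lower bounds, since text encoders, VAE, activations, and batch size all add on top:

```python
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1}

def weights_gib(n_params: float, dtype: str = "fp16") -> float:
    """GiB needed for the model weights alone at a given precision.
    Actual VRAM use is higher: activations, text encoders, VAE, batches."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

# An SDXL-class UNet (~2.6B params) needs ~4.8 GiB in fp16, while a
# 12B-parameter model like Flux.1 dev needs ~22 GiB -- which is why
# 24GB is where things get comfortable.
```

Halving precision halves the footprint, which is why "use fp16" (or fp8 quantization) is the standard out-of-VRAM fix.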

Troubleshooting Quick Fixes

| Problem | Try |
|---|---|
| Blurry output | Increase steps. Check that the resolution matches the model's native res |
| Extra fingers/limbs | Add extra fingers, extra limbs to the negative prompt, or use ControlNet |
| Oversaturated colors | Lower CFG. Switch to DPM++ 2M Karras |
| Composition is wrong | Use ControlNet (depth, canny, pose) instead of fighting the prompt |
| Generation is slow | Use a Lightning model, reduce steps, use ComfyUI for better VRAM handling |
| Out of VRAM | Switch to ComfyUI, reduce batch size, use fp16 |

References

1. Flux 2 and NVIDIA RTX AI Integration -- NVIDIA's coverage of Flux 2 with ComfyUI.
2. Stability AI Release Notes -- SD 3.0 deprecation and 3.5 release.
3. SDXL-Lightning by ByteDance -- 2-8 step generation at 1024px.
4. ComfyUI vs Automatic1111 2026 Comparison -- Performance and feature comparison.
5. How to Use Negative Prompts Effectively -- Updated guide on minimal negative prompt philosophy.
6. Understanding Stable Diffusion Samplers -- Sampler comparison and selection guide.
7. Best Stable Diffusion Models for 2026 -- Current model landscape.


Boris D. Teoharov
