PaperCodex

Controllable Diffusion Models

Uni-ControlNet: Unified Visual Control for Text-to-Image Generation Without Retraining Everything

Generating high-quality images from text prompts has become remarkably powerful thanks to diffusion models like Stable Diffusion. Yet, for many…

01/13/2026 · Controllable Diffusion Models, Multimodal Conditioning, Text-to-Image Generation
Less-to-More Generalization: Unlock Controllable, Consistent Multi-Subject Image Generation with UNO

Subject-driven image generation—where users provide one or more reference images of specific objects to guide the creation of new scenes—is…

12/19/2025 · Controllable Diffusion Models, Multi-subject Image Synthesis, Subject-driven Image Generation
Flow-GRPO: Boost Text-to-Image Accuracy with Online RL—Without Sacrificing Quality or Diversity

If you’ve ever struggled with diffusion models failing to follow detailed prompts—like “a golden retriever sitting to the left of…

12/19/2025 · Controllable Diffusion Models, Reinforcement Learning For Generative Models, Text-to-Image Generation
Copyright © 2026 PaperCodex.