Diffusion models have rapidly become the method of choice for generative modeling of images and videos, offering state-of-the-art results in synthesis, manipulation, and editing. These models now serve as foundational technology for visual content generation in both academic research and industrial creative pipelines. From generating hyper-realistic portraits and dynamic textures to simulating motion in videos and synthesizing 3D-aware scenes, diffusion-based techniques are reshaping how graphics professionals conceptualize and produce content.
Building on the 2024 edition of this course, our updated SIGGRAPH course is significantly revised and expanded. This year, we go beyond the basics to focus on practical, real-world applications and the latest developments in both image and video diffusion models. While a brief theoretical overview grounds attendees in the fundamental concepts, the tutorial is primarily designed to equip graphics researchers, technical artists, and developers with immediately applicable knowledge and workflows. We also augment the background material to cover the latest advances in flow matching, sampling techniques, and VAE formulations.
Join us as we navigate the fascinating intersection of visual computing and diffusion models.
@inproceedings{Mitra:2025:DiffusionModels4ContentCreation_SG25,
  author    = {Mitra, Niloy J. and Patashnik, Or and Cohen-Or, Danny and Guerrero, Paul and Koo, Juil and Sung, Minhyuk},
  title     = {Diffusion Models for Image and Video Generation: From Foundations to Emerging Directions},
  booktitle = {ACM SIGGRAPH 2025 Courses},
  year      = {2025},
  location  = {Vancouver, Canada},
}