Diffusion models are now the state of the art for image generation. Trained on vast datasets, these models are increasingly repurposed for a variety of image processing and conditional image generation tasks, and we expect them to be widely used in computer graphics and related research areas. Image generation has opened up a rich set of new possibilities, and in this tutorial we will guide you through the intricacies of understanding and using diffusion models.

This course is targeted at graphics researchers with an interest in image/video synthesis and manipulation. Attending the tutorial will enable participants to build a working knowledge of the core formulation, understand how to get started in this area, and study practical use cases of this new tool. Our goal is to encourage more researchers with expertise in computer graphics to tackle the open challenges in this topic and to explore innovative use cases in CG contexts, spanning image synthesis and other media formats.

Join us as we navigate the fascinating intersection of visual computing and diffusion models.

Slides


BibTex

@inproceedings{Mitra:2024:DiffusionModels4ContentCreation_SG24,
author = {Mitra, Niloy J. and Ceylan, Duygu and Patashnik, Or and Cohen-Or, Danny and Guerrero, Paul and Huang, Chun-Hao and Sung, Minhyuk},
title = {Diffusion Models for Visual Content Creation},
booktitle = {SIGGRAPH Tutorial},
year = {2024},
location = {Denver, USA},
}