Monte Carlo Tree Diffusion and Loopholing Diffusion

KAIST/New York University
Monday, December 8, 2025
1:00 pm to 2:00 pm
4013 Donald Bren Hall
Abstract
Diffusion models have rapidly advanced to become a central generative engine in modern AI. Yet two major challenges remain: enhancing test-time compute scalability and developing diffusion-based language models that can rival or ultimately replace autoregressive approaches. In this talk, I present two recent advances addressing these directions. To enable scalable test-time reasoning and planning, I introduce Monte Carlo Tree Diffusion, a method that integrates diffusion models with Monte Carlo Tree Search for substantial performance gains under increased computation. To advance diffusion-based language modeling, I introduce Loopholing Discrete Diffusion Models, a new framework that overcomes key limitations of discrete diffusion and offers a promising path toward competitive alternatives to autoregressive language models.
Speaker Bio
Sungjin Ahn is currently an Associate Professor in the School of Computing and the Graduate School of AI at KAIST, and also holds a joint appointment at New York University. Before joining KAIST, he was an Assistant Professor of Computer Science at Rutgers University, where he was affiliated with the Center for Cognitive Science. At KAIST, he directs the Machine Learning and Mind Lab and the KAIST-Mila Prefrontal AI Research Center. He received his Ph.D. from the University of California, Irvine, under the supervision of Prof. Max Welling, and subsequently completed a postdoctoral fellowship at Mila, conducting deep learning research under the mentorship of Prof. Yoshua Bengio.