
FIELD: AI and Natural Sciences
DATE: Apr 09 (Thu), 2026
TIME: 14:00 ~ 16:00
PLACE: 7323
SPEAKER: Sangwoong Yoon
HOST: Jinseong Park
INSTITUTE: UNIST
TITLE: Generative Modeling and Reinforcement Learning: Energy, Reward, and Value
ABSTRACT
Generative modeling and reinforcement learning (RL) are two major paradigms in machine learning. Although they may appear disjoint, they can be understood from a unified probabilistic perspective. In this talk, I will share my research journey exploring the connection between generative modeling and RL, centered around the concepts of energy, reward, and value.

Energy: My journey began with energy-based models (EBMs), a class of probabilistic models that are naturally suited to evaluating the relative likelihood of samples. I will present the Normalized Autoencoder, an EBM that achieves strong anomaly detection performance by leveraging the manifold structure of data (https://arxiv.org/abs/2105.05735, https://arxiv.org/abs/2310.18677).

Reward: I then realized that the energy in EBMs is equivalent to the reward in maximum entropy RL. This insight allows the problem of learning energy functions to be framed as an inverse RL problem. I applied an inverse RL approach to diffusion models, enabling high-quality few-step sampling (https://arxiv.org/abs/2407.00626).

Value: Many modern generative models produce samples through a sequential process. By computing the energy of intermediate distributions in this process, we obtain a value function over the generation trajectory. This value-function perspective enables the application of RL techniques to generative modeling (https://arxiv.org/abs/2502.13280) and allows us to steer the generation process at inference time (https://arxiv.org/abs/2503.08796).

I will conclude the talk by discussing future research directions that I am currently exploring.
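For context, here is a minimal sketch of the energy-reward-value correspondence the abstract describes, written in standard EBM and maximum entropy RL notation. The symbols (the energy E_theta, partition function Z, temperature alpha, soft Q-function, and per-step value V_t) are generic textbook conventions, not necessarily the formulation used in the talk or the linked papers.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Energy: an EBM defines a density through an energy function E_theta,
% so lower energy means higher (relative) likelihood of a sample.
\[
  p_\theta(x) = \frac{\exp\bigl(-E_\theta(x)\bigr)}{Z(\theta)},
  \qquad
  Z(\theta) = \int \exp\bigl(-E_\theta(x)\bigr)\, dx .
\]

% Reward: the maximum entropy RL optimal policy is also a Boltzmann
% distribution, with the soft Q-function in the role of negative energy.
% Identifying E_theta(x) with -r(x) is what lets energy learning be
% read as inverse RL (recover the reward that explains the data).
\[
  \pi^*(a \mid s) \propto \exp\!\left(\tfrac{1}{\alpha}\, Q^{*}_{\mathrm{soft}}(s, a)\right),
  \qquad
  E_\theta(x) \leftrightarrow -r(x).
\]

% Value: for a sequential sampler x_0 -> x_1 -> ... -> x_T, the energy
% of each intermediate marginal p_t yields a value function over the
% generation trajectory.
\[
  p_t(x_t) \propto \exp\bigl(-E_t(x_t)\bigr),
  \qquad
  V_t(x_t) := -E_t(x_t).
\]

\end{document}

In this reading, inference-time steering amounts to biasing each step of the sampler toward higher V_t along the trajectory, which is what makes standard RL machinery applicable to generation.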