Seminars
- FIELD
- AI and Natural Sciences
- DATE
- May 16 (Thu), 2024
- TIME
- 14:00 ~ 16:00
- PLACE
- 7323
- SPEAKER
- Donghwan Kim (김동환)
- HOST
- Choi, Jaewoong
- INSTITUTE
- KAIST
- TITLE
- How to make the gradient descent-ascent converge to local minimax optima
- ABSTRACT
- Can we effectively train a generative adversarial network (GAN), or equivalently optimize a minimax problem, by gradient methods, just as we successfully train a classification neural network (or equivalently, minimize a function)? The answer to this question, at the moment, is "No".
The remarkable success of gradient descent in minimization is supported by theoretical results: under mild conditions, gradient descent converges to a local minimum and almost surely avoids strict saddle points. However, comparable theory is currently lacking in minimax optimization, and this talk will discuss recent progress in addressing this gap using dynamical systems theory. Specifically, the talk will present new variants of gradient descent-ascent that, under mild conditions, converge to local minimax optima, to which the standard gradient descent-ascent fails to converge.
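The failure of standard gradient descent-ascent mentioned in the abstract can already be seen on the simplest bilinear minimax problem. The sketch below is illustrative only and does not reproduce the speaker's variants: on f(x, y) = x*y, whose unique minimax point is (0, 0), simultaneous gradient descent-ascent spirals outward, while the classical extragradient method (shown purely for contrast, as a well-known alternative) spirals inward.

```python
# Toy bilinear minimax problem f(x, y) = x * y with minimax point (0, 0).
# Illustrative sketch only; the talk's own GDA variants are not shown here.

def gda_step(x, y, eta):
    # Simultaneous gradient descent-ascent: descend in x, ascend in y.
    return x - eta * y, y + eta * x

def extragradient_step(x, y, eta):
    # Extragradient: take a half-step lookahead, then update using the
    # gradient evaluated at the lookahead point.
    xh, yh = x - eta * y, y + eta * x
    return x - eta * yh, y + eta * xh

def run(step, x, y, eta=0.1, iters=200):
    for _ in range(iters):
        x, y = step(x, y, eta)
    return (x * x + y * y) ** 0.5  # distance from the minimax point (0, 0)

gda_dist = run(gda_step, 1.0, 1.0)           # grows: GDA spirals outward
eg_dist = run(extragradient_step, 1.0, 1.0)  # shrinks: extragradient converges
```

On this example each GDA step multiplies the distance to the origin by sqrt(1 + eta^2) > 1, whereas each extragradient step multiplies it by sqrt(1 - eta^2 + eta^4) < 1, which is the basic contrast the toy run exhibits.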