Robust Transfer Learning for Out-of-Distribution Datasets
ABSTRACT
Transfer learning with pre-trained models has achieved remarkable results across a wide range of tasks. However, it still struggles on out-of-distribution datasets and faces numerous problems when deployed in real-world scenarios. In this talk, I will present three research themes for robust transfer learning on out-of-distribution datasets: 1) embedding properties, 2) model merging, and 3) causal relationships. First, I will discuss embedding properties that preserve robustness to out-of-distribution data. Second, I will introduce work on improving robustness by leveraging multiple diverse models simultaneously through model merging. Finally, I will examine methods that achieve robust performance even under distribution shift by thoroughly learning the causal relationships within the given data.