About me
I’m an independent machine learning researcher. Previously, I was a postdoctoral researcher at Mila and Université de Montréal under the supervision of Prof. Yoshua Bengio. I earned my Ph.D. from KAIST under the supervision of Prof. Sung Ju Hwang.
Research Interest
My research interests include:
- System 2 Deep Learning
- Generative Flow Networks (GFlowNets)
- Bayesian Inference and Learning
- Meta-Learning / Multi-Task Learning / Transfer Learning
- AutoML
Contact
haebeom dot lee at kaist dot ac dot kr
Awards
- Global Ph.D. Fellowship Program, 2019-2021
- Google Ph.D. Fellowship, 2021
- Outstanding Reviewer (ICML 2020, top 33%; ICML 2022, top 10%)
New Preprints
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing
[paper]
Hae Beom Lee*, Dong Bok Lee*, Sung Ju Hwang
(*: equal contribution)
arXiv, 2022
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation
[paper]
Dong Bok Lee, Aoxuan Silvia Zhang*, Byungjoo Kim*, Junhyeon Park*, Juho Lee, Sung Ju Hwang, Hae Beom Lee
(*: equal contribution)
arXiv, 2024
Conference Publications
- Delta-AI: Local Objectives for Amortized Inference in Sparse Graphical Models
[paper]
Jean-Pierre René Falet*, Hae Beom Lee*, Nikolay Malkin*, Chen Sun, Dragos Secrieru, Dinghuai Zhang, Guillaume Lajoie, Yoshua Bengio
(*: equal contribution)
ICLR 2024
- Online Hyperparameter Meta-Learning with Hypergradient Distillation
[paper]
Hae Beom Lee, Hayeon Lee, Jaewoong Shin, Eunho Yang, Timothy M. Hospedales, Sung Ju Hwang
ICLR 2022 (spotlight)
- Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning
[paper]
Seanie Lee*, Hae Beom Lee*, Juho Lee, Sung Ju Hwang
(*: equal contribution)
ICLR 2022
- Meta-Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty
[paper]
Jeffrey Ryan Willette, Hae Beom Lee, Juho Lee, Sung Ju Hwang
ICLR 2022
- Large-Scale Meta-Learning with Continual Trajectory Shifting
[paper] [code]
Jaewoong Shin*, Hae Beom Lee*, Boqing Gong, Sung Ju Hwang
(*: equal contribution)
ICML 2021
- MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
[paper] [code]
Jeongun Ryu*, Jaewoong Shin*, Hae Beom Lee*, Sung Ju Hwang
(*: equal contribution)
NeurIPS 2020 (spotlight)
- Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs
[paper] [code]
Seong Min Kye, Youngmoon Jung, Hae Beom Lee, Sung Ju Hwang, Hoirin Kim
Interspeech 2020
- Meta Variance Transfer: Learning to Augment from the Others
[paper]
Seong Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang
ICML 2020
- Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
[paper] [code]
Hae Beom Lee*, Hayeon Lee*, Donghyun Na*, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang
(*: equal contribution)
ICLR 2020 (oral presentation)
- Meta Dropout: Learning to Perturb Latent Features for Generalization
[paper] [code]
Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
ICLR 2020
- DropMax: Adaptive Variational Softmax
[paper] [code]
Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, Sung Ju Hwang
NeurIPS 2018
- Uncertainty-Aware Attention for Reliable Interpretation and Prediction
[paper] [code]
Jay Heo*, Hae Beom Lee*, Saehoon Kim, Juho Lee, Kwang Joon Kim, Eunho Yang, Sung Ju Hwang
(*: equal contribution)
NeurIPS 2018
- Deep Asymmetric Multi-task Feature Learning
[paper] [code]
Hae Beom Lee, Eunho Yang, Sung Ju Hwang
ICML 2018
Old Preprints
- Meta-Learned Confidence for Few-shot Learning
[paper] [code]
Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang
arXiv, 2020
- Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout
[paper] [code]
Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
arXiv, 2018