About me
I’m an assistant professor in the School of Electrical Engineering at Korea University. Prior to this, I was a postdoctoral researcher at KAIST under the supervision of Prof. Juho Lee, and at Mila and Université de Montréal under the supervision of Prof. Yoshua Bengio. I earned my Ph.D. from KAIST under the supervision of Prof. Sung Ju Hwang.
Research Interests
My research interests include
- Large Language Model Reasoning
- System 2 Deep Learning
- Meta-Learning / Bi-level Optimization
- AutoML / Hyperparameter Optimization
- Bayesian Inference and Learning
- Generative Flow Networks (GFlowNet)
- Transfer Learning / Multi-Task Learning / Continual Learning
For prospective students
I am looking for self-motivated students from various backgrounds, such as electrical engineering, computer science, or mathematics. If you are interested in the topics above and in doing research together, please feel free to reach out via email.
- For prospective undergraduate interns, you don’t need a strong background in AI or machine learning; your enthusiasm is probably the most important factor.
- For prospective master’s students, I may expect you to have some background in AI or machine learning, e.g., having interned at other AI/ML labs or companies, or having good grades in AI-related courses.
- For prospective Ph.D. students, I expect you to have published at least one first-authored paper at a top-tier machine learning conference, e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, CVPR, ICCV, etc.
If you are applying to our lab as a graduate student, I strongly recommend completing an internship of at least 3 months in our lab before applying.
For students taking Engineering Design I or II
Please feel free to reach out if you are interested or would like to gain experience in machine learning or deep learning. I can provide the following for one semester:
- An opportunity to do actual research together on some recent topics in machine learning and deep learning.
- An opportunity to write a research paper.
- A Google Colab Pro+ account for 3 months, provided to all students.
Contact
haebeomlee at korea dot ac dot kr
Awards
- Global Ph.D. Fellowship Program, 2019-2021
- Google Ph.D. Fellowship Program, 2021
- Outstanding Reviewer (ICML 2020 - top 33%, ICML 2022 - top 10%)
New Preprints
- Bayesian Neural Scaling Laws Extrapolation with Prior-Fitted Networks
Dongwoo Lee*, Dong Bok Lee*, Steven Adriaensen, Juho Lee, Sung Ju Hwang, Frank Hutter, Seon Joo Kim, Hae Beom Lee
(*: equal contribution)
2025
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing
[paper]
Hae Beom Lee*, Dong Bok Lee*, Sung Ju Hwang
(*: equal contribution)
arXiv, 2022
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation
[paper]
Dong Bok Lee, Aoxuan Silvia Zhang*, Byungjoo Kim*, Junhyeon Park*, Juho Lee, Sung Ju Hwang, Hae Beom Lee
(*: equal contribution)
arXiv, 2024
Conference Publications
- Delta-AI: Local Objectives for Amortized Inference in Sparse Graphical Models
[paper]
Jean-Pierre René Falet*, Hae Beom Lee*, Nikolay Malkin*, Chen Sun, Dragos Secrieru, Dinghuai Zhang, Guillaume Lajoie, Yoshua Bengio
(*: equal contribution)
ICLR 2024
- Online Hyperparameter Meta-Learning with Hypergradient Distillation
[paper]
Hae Beom Lee, Hayeon Lee, Jaewoong Shin, Eunho Yang, Timothy M. Hospedales, Sung Ju Hwang
ICLR 2022 (spotlight)
- Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning
[paper]
Seanie Lee*, Hae Beom Lee*, Juho Lee, Sung Ju Hwang
(*: equal contribution)
ICLR 2022
- Meta-Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty
[paper]
Jeffrey Ryan Willette, Hae Beom Lee, Juho Lee, Sung Ju Hwang
ICLR 2022
- Large-Scale Meta-Learning with Continual Trajectory Shifting
[paper] [code]
Jaewoong Shin*, Hae Beom Lee*, Boqing Gong, Sung Ju Hwang
(*: equal contribution)
ICML 2021
- MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
[paper] [code]
Jeongun Ryu*, Jaewoong Shin*, Hae Beom Lee*, Sung Ju Hwang
(*: equal contribution)
NeurIPS 2020 (spotlight)
- Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs
[paper] [code]
Seong Min Kye, Youngmoon Jung, Hae Beom Lee, Sung Ju Hwang, and Hoirin Kim
Interspeech 2020
- Meta Variance Transfer: Learning to Augment from the Others
[paper]
Seong Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han and Sung Ju Hwang
ICML 2020
- Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
[paper] [code]
Hae Beom Lee*, Hayeon Lee*, Donghyun Na*, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang
(*: equal contribution)
ICLR 2020 (oral presentation)
- Meta Dropout: Learning to Perturb Latent Features for Generalization
[paper] [code]
Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
ICLR 2020
- DropMax: Adaptive Variational Softmax
[paper] [code]
Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, Sung Ju Hwang
NeurIPS 2018
- Uncertainty-Aware Attention for Reliable Interpretation and Prediction
[paper] [code]
Jay Heo*, Hae Beom Lee*, Saehoon Kim, Juho Lee, Kwang Joon Kim, Eunho Yang, Sung Ju Hwang
(*: equal contribution)
NeurIPS 2018
- Deep Asymmetric Multi-task Feature Learning
[paper] [code]
Hae Beom Lee, Eunho Yang, Sung Ju Hwang
ICML 2018
Old Preprints
- Meta-Learned Confidence for Few-shot Learning
[paper] [code]
Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang
arXiv, 2020
- Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout
[paper] [code]
Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
arXiv, 2018