Research

🔬 Research Focus

My research centers on building (1) trustworthy and (2) scalable machine learning systems.

1) Trustworthy Machine Learning
I am particularly interested in uncertainty estimation in LLMs. My recent work, “Reconsidering LLM Uncertainty Estimation Methods in the Wild,” focuses on quantifying the uncertainty of LLM generations in realistic scenarios. I am currently exploring how knowledge conflict affects model uncertainty, specifically in cases where a model's parametric knowledge contradicts the contextual knowledge provided in the prompt.
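
As a concrete illustration, here is a minimal sketch of one common sampling-based baseline for this kind of uncertainty estimation: sample several generations for the same prompt and measure their disagreement via entropy. This is a generic baseline shown for intuition, not the method proposed in the paper.

```python
import math
from collections import Counter

def predictive_entropy(samples: list[str]) -> float:
    """Entropy of the empirical distribution over sampled generations.

    Higher entropy means the sampled answers disagree more,
    which we read as higher model uncertainty.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Five sampled answers to the same (hypothetical) prompt:
print(predictive_entropy(["Paris"] * 4 + ["Lyon"]))            # low: answers mostly agree
print(predictive_entropy(["Paris", "Lyon", "Nice", "Lille"]))  # high: answers disagree
```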

2) Scalable Machine Learning
I study how to make learning more efficient by leveraging gradient signals. My recent work proposes a parameter-efficient fine-tuning method that identifies and updates only the most important parameters, where importance is assessed via gradient-based signals (a sketch follows below). I have also been researching how to reduce communication costs in federated learning by recycling gradient information (paper).
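
A minimal sketch of the gradient-based selection idea: score each tensor's parameters by gradient magnitude on a small batch, then update only those above a top-k threshold. The top-k-per-tensor rule, toy model, and calibration batch here are illustrative assumptions, not the exact method from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical toy model and calibration batch (stand-ins for a real LLM and data).
model = nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

# Score parameters by gradient magnitude on the calibration batch.
nn.functional.cross_entropy(model(x), y).backward()

sparsity = 0.1  # update only the top 10% of each tensor's parameters
for p in model.parameters():
    scores = p.grad.abs().flatten()
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.topk(k).values.min()
    p.grad.mul_((p.grad.abs() >= threshold).float())  # zero out "unimportant" gradients

torch.optim.SGD(model.parameters(), lr=1e-2).step()  # sparse parameter update
```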

Previously, I earned my B.S. in Electronic Engineering from Sogang University in South Korea, where I worked with Hongseok Kim on distributed optimization and federated learning, covering both algorithmic design and practical implementation.


🌱 Ongoing Research (Coming soon!)

Scale-Aware and Distribution-Sensitive Fine-Tuning
A parameter-efficient fine-tuning framework that is aware of parameter scale and sensitive to layer-wise distributions

In-Context Uncertainty Estimation
Uncertainty quantification in knowledge-conflict scenarios

Foundational Modeling for AC-OPF with Federated Learning
Building a foundational GNN-based model for solving AC Optimal Power Flow (AC-OPF) problems using federated learning