Overview

We study optimization methods for the efficient training and adaptation of deep neural networks and large language models (LLMs), with the goal of improving convergence, training stability, and compute efficiency.
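
As a point of reference for the kind of first-order update rules this line of work builds on, the sketch below implements a generic Adam-style optimizer step on a toy quadratic objective. It is purely illustrative and not a method proposed or evaluated here; the function name `adam_step` and the toy objective are assumptions made for the example.

```python
# Minimal sketch of a first-order optimizer (Adam-style update) on a toy
# quadratic objective. Illustrative only; not a method from this work.
import math

def adam_step(params, grads, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """Apply one Adam update and return the new parameter values."""
    state["t"] += 1
    t = state["t"]
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m = state["m"][i] = betas[0] * state["m"][i] + (1 - betas[0]) * g
        v = state["v"][i] = betas[1] * state["v"][i] + (1 - betas[1]) * g * g
        m_hat = m / (1 - betas[0] ** t)  # bias-corrected first moment
        v_hat = v / (1 - betas[1] ** t)  # bias-corrected second moment
        new_params.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_params

# Toy objective f(x, y) = x^2 + 10*y^2 with gradient (2x, 20y).
params = [3.0, -2.0]
state = {"t": 0, "m": [0.0, 0.0], "v": [0.0, 0.0]}
for step in range(2000):
    grads = [2 * params[0], 20 * params[1]]
    params = adam_step(params, grads, state, lr=0.05)
print(params)  # both coordinates approach 0
```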

Outcomes

Publications