Tianhao Wang (王天浩)
3234 Matthews Lane
La Jolla, CA 92093
tianhaowang@ucsd.edu
I am an Assistant Professor in the Halıcıoğlu Data Science Institute at the University of California, San Diego. I am broadly interested in machine learning, optimization, and statistics.
Prior to UCSD, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago from 2024 to 2025, working with Zhiyuan Li and Nathan Srebro. Before that, I received my Ph.D. from the Department of Statistics and Data Science at Yale University, where I was fortunate to be advised by Zhou Fan.
CV

Recent papers (*: equal contribution)
- A Tale of Two Geometries: Adaptive Optimizers and Non-Euclidean Descent. arXiv:2511.20584, 2025
- Honesty over Accuracy: Trustworthy Language Models through Reinforced Hesitation. arXiv:2511.11500, 2025
- Through the Judge's Eyes: Inferred Thinking Traces Improve Reliability of LLM Raters. arXiv:2510.25860, 2025
- Provable Benefit of Sign Descent: A Minimal Model Under Heavy-Tail Class Imbalance. In NeurIPS 2025 Workshop on Optimization for Machine Learning (OPT 2025), 2025 (Oral)
- On Universality of Non-Separable Approximate Message Passing Algorithms. arXiv:2506.23010, 2025
- Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders. arXiv:2506.14002, 2025
- Structured Preconditioners in Adaptive Optimization: A Unified Analysis. In International Conference on Machine Learning (ICML), 2025
- Can Neural Networks Achieve Optimal Computational-Statistical Tradeoff? An Analysis on Single-Index Model. In International Conference on Learning Representations (ICLR), 2025 (Oral). Presented at the NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning
- How Well Can Transformers Emulate In-Context Newton's Method? In International Conference on Artificial Intelligence and Statistics (AISTATS), 2025. Presented at the ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
- Implicit Regularization of Gradient Flow on One-Layer Softmax Attention. arXiv:2403.08699, 2024. Presented at the ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning