Tianhao Wang (王天浩)


3234 Matthews Lane
La Jolla, CA 92093
tianhaowang@ucsd.edu

I am an Assistant Professor in the Halıcıoğlu Data Science Institute at the University of California, San Diego. I am broadly interested in machine learning, optimization, and statistics.

Before joining UCSD, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago from 2024 to 2025, working with Zhiyuan Li and Nathan Srebro. Before that, I received my Ph.D. from the Department of Statistics and Data Science at Yale University, where I was fortunate to be advised by Zhou Fan.

CV


Recent papers (*: equal contribution)

  1. A Tale of Two Geometries: Adaptive Optimizers and Non-Euclidean Descent
    Shuo Xie, Tianhao Wang, Beining Wu, and Zhiyuan Li
    arXiv:2511.20584, 2025
  2. Honesty over Accuracy: Trustworthy Language Models through Reinforced Hesitation
    Mohamad Amin Mohamadi, Tianhao Wang, and Zhiyuan Li
    arXiv:2511.11500, 2025
  3. Through the Judge’s Eyes: Inferred Thinking Traces Improve Reliability of LLM Raters
Xingjian Zhang, Tianhong Gao, Suliang Jin, Tianhao Wang, Teng Ye, Eytan Adar, and 1 more author
    arXiv:2510.25860, 2025
  4. Provable Benefit of Sign Descent: A Minimal Model Under Heavy-Tail Class Imbalance
    Robin Yadav, Shuo Xie, Tianhao Wang, and Zhiyuan Li
    In NeurIPS 2025 Workshop on Optimization for Machine Learning (OPT 2025), 2025  (Oral)
  5. On Universality of Non-Separable Approximate Message Passing Algorithms
    Max Lovig, Tianhao Wang, and Zhou Fan
    arXiv:2506.23010, 2025
  6. Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders
    Siyu Chen, Heejune Sheen, Xuyuan Xiong, Tianhao Wang, and Zhuoran Yang
    arXiv:2506.14002, 2025
  7. Structured Preconditioners in Adaptive Optimization: A Unified Analysis
    Shuo Xie, Tianhao Wang, Sashank Reddi, Sanjiv Kumar, and Zhiyuan Li
    In International Conference on Machine Learning (ICML), 2025
  8. Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model
    Siyu Chen*, Beining Wu*, Miao Lu, Zhuoran Yang, and Tianhao Wang
    In International Conference on Learning Representations (ICLR), 2025  (Oral)
    Presented at NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning
  9. How well can Transformers emulate in-context Newton’s method?
    Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, and Jason D. Lee
    In International Conference on Artificial Intelligence and Statistics (AISTATS), 2025
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  10. Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
    Presented at ICML 2024 Workshop on Theoretical Foundations of Foundation Models
  11. Implicit regularization of gradient flow on one-layer softmax attention
    Heejune Sheen, Siyu Chen, Tianhao Wang, and Harrison H. Zhou
    arXiv:2403.08699, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning