About Me
I'm a Research Scientist at Salesforce AI Research. I earned my PhD at Georgia Tech, where I was fortunate to be advised by
Mark Davenport and to collaborate closely with Ashwin Pananjady. Prior to that, I earned my BSE at the
University of Michigan. During graduate school, I spent time interning at Duolingo, where I worked with Will Monroe, and at Amazon, where I worked with Arjun Seshadri,
Mariya Vasileva, and Achal Dave.
My current research broadly focuses on how to improve the reasoning ability of foundation models, in particular large language models. I'm also interested in the role humans play in the era of large models: when
are human responses necessary, and when can we avoid collecting human feedback?
In my free time, I enjoy cooking (and eating), reading, running, and watching basketball (NBA and college).
Selected Publications and Preprints
* Denotes equal contribution
- Direct Judgement Preference Optimization
Peifeng Wang*, Austin Xu*, Yilun Zhou, Caiming Xiong, Shafiq Joty
arXiv 2024
- SFR-RAG: Towards Contextually Faithful LLMs
Xuan-Phi Nguyen, Shrey Pandit, Senthil Purushwalkam, Austin Xu, Hailin Chen, Yifei Ming, Zixuan Ke, Silvio Savarese, Caiming Xiong, Shafiq Joty
arXiv 2024
- HandsOff: Labeled dataset generation with no additional human annotations
Austin Xu, Mariya I. Vasileva, Achal Dave, and Arjun Seshadri
CVPR 2023
Highlight Award (top 2.5% of submissions, 26% conference acceptance rate)
Short version in the NeurIPS 2022 SyntheticData4ML Workshop
[arxiv] [website] [code]
- Perceptual adjustment queries and an inverted measurement paradigm for low-rank metric learning
Austin Xu, Andrew D. McRae, Jingyan Wang, Mark A. Davenport, and Ashwin Pananjady
NeurIPS 2023
Short version in the ICML 2023 Many Facets of Preference Learning Workshop
[arxiv - extended version] [code]
- Simultaneous Preference and Metric Learning from Paired Comparisons
Austin Xu and Mark A. Davenport
NeurIPS 2020
Spotlight Presentation (top 4% of submissions, 20% conference acceptance rate)
[arxiv] [website] [talk]
PhD Thesis: Learning with and without human feedback. Georgia Institute of Technology, 2024.
[local copy][defense slides]
Experience
Work Experience
- AI Research Intern at Duolingo (Summer 2023)
- Applied Scientist Intern at Amazon (Summer, Fall 2022)
- R&D Summer Intern at Sandia National Laboratories (Summer 2018)
- Student Intern at General Motors (Summer 2017)
Teaching Experience
- Spring 2022: Head TA, Statistical Machine Learning (ECE 6254) [website]
- Fall 2019/Spring 2020/Summer 2020: TA, Professional and Technical Communications (ECE 3005)
- Fall 2018/Spring 2019: IA, Discrete Mathematics (University of Michigan -- EECS 203)