Yilun Zhou

I currently work at Salesforce as a research scientist. Before that, I worked at Amazon as an applied scientist. I received my Ph.D. from the MIT Department of Electrical Engineering and Computer Science (EECS), advised by Prof. Julie Shah, from which I also received my Master of Science degree in 2019. I received my Bachelor of Science in Engineering degree from Duke University, with a double major in Computer Science and Electrical & Computer Engineering. As an undergraduate, I worked with Prof. George Konidaris and Prof. Kris Hauser on research in robotics.

My long-term research goal is to enable trustworthy and responsible machine learning. These days, I am interested in anything related to interpretability and/or large language models (LLMs). My recent research explores LLM-generated free-text self-explanations and LLMs' mathematical reasoning abilities, as well as the evaluation, utility, and societal implications of model explanations in general.

[Email: yilun@csail.mit.edu] [CV (PDF)] [Google Scholar] [YouTube (paper summary videos)]

Publications

* Equal Contribution

AAAI 2023 Tutorial: Trustworthy and Responsible AI: Fairness, Interpretability, Transparency and Their Interactions [Website]
CHAMP: A Competition-level Dataset for Fine-Grained Analyses of LLMs' Mathematical Reasoning Capabilities
Yujun Mao, Yoon Kim, Yilun Zhou
arXiv preprint: 2401.06961, 2024
Preliminary version in NeurIPS 2023 Workshop on Mathematical Reasoning and AI (MATH-AI)
[Paper] [Code] [Website]
Evaluating the Utility of Model Explanations for Model Development
Shawn Im, Jacob Andreas, Yilun Zhou
NeurIPS Workshop on Attributing Model Behavior at Scale (ATTRIB), 2023
[Paper]
Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin
arXiv preprint: 2310.11207, 2023
[Paper]
Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks
Yilun Zhou
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2023
[Paper]
Improving Generalization in Language Model-Based Text-to-SQL Semantic Parsing: Two Simple Semantic Boundary-Based Techniques
Daking Rai, Bailin Wang, Yilun Zhou, Ziyu Yao
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
[Paper] [Code]
Techniques for Interpretability and Transparency of Black-Box Models
Yilun Zhou
MIT Ph.D. Thesis, 2023
[Thesis]
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou, Julie Shah
Conference of the European Chapter of the Association for Computational Linguistics (EACL) Findings, 2023
[Paper] [Code] [Website]
Explaining Large Language Model-Based Neural Semantic Parsers
Daking Rai, Yilun Zhou, Bailin Wang, Ziyu Yao
AAAI Conference on Artificial Intelligence: Student Abstract and Poster Program, 2023
[Paper]
ExSum: From Local Explanations to Model Understanding
Yilun Zhou, Marco Tulio Ribeiro, Julie Shah
Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT), 2022
[Paper] [Code] [Video] [Website] [MIT News (Featured on 5/5/2022 MIT Homepage)]
Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, Julie Shah
AAAI Conference on Artificial Intelligence (AAAI), 2022
Preliminary version in NeurIPS 2021 Workshop on Explainable AI Approaches for Debugging and Diagnosis
[Paper] [Poster] [Code] [Video] [Website] [MIT News]
The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, Julie Shah, Yilun Zhou
NAACL Workshop on Trustworthy Natural Language Processing (TrustNLP), 2022
[Paper] [Poster] [Code]
Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment
Ganesh Ghalme*, Vineet Nair*, Vishakha Patil*, Yilun Zhou*
International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022
[Paper] [Code] [Website]
Latent Space Alignment Using Adversarially Guided Self-Play
Mycal Tucker, Yilun Zhou, Julie Shah
International Journal of Human-Computer Interaction (IJHCI), 2022
[Paper]
RoCUS: Robot Controller Understanding via Sampling
Yilun Zhou, Serena Booth, Nadia Figueroa, Julie Shah
Conference on Robot Learning (CoRL), 2021
[Paper] [Poster] [Code] [Video] [Website]
Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example
Serena Booth*, Yilun Zhou*, Ankit Shah, Julie Shah
AAAI Conference on Artificial Intelligence (AAAI), 2021
Preliminary version in AAAI 2020 Workshop on Statistical Relational AI
[Paper] [Poster] [Code] [MIT News]
Towards Understanding the Behaviors of Optimal Deep Active Learning Algorithms
Yilun Zhou, Adithya Renduchintala, Xian Li, Sida Wang, Yashar Mehdad, Asish Ghoshal
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
[Paper] [Poster] [Code] [Video]
Learning Household Task Knowledge from WikiHow Descriptions
Yilun Zhou, Julie Shah, Steven Schockaert
International Joint Conference on Artificial Intelligence (IJCAI) Workshop on Semantic Deep Learning, 2019
[Paper] [Code]
Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness
Yilun Zhou, Steven Schockaert, Julie Shah
The Web Conference (WWW), 2019
[Paper] [Code]
Representing, Learning, and Controlling Complex Object Interactions
Yilun Zhou, Benjamin Burchfiel, George Konidaris
Autonomous Robots (AuRo), 2018
Original version in Robotics: Science and Systems (RSS), 2016
[Paper] [Video]
6DOF Grasp Planning by Optimizing a Deep Learning Scoring Function
Yilun Zhou, Kris Hauser
Robotics: Science and Systems (RSS) Workshop on Revisiting Contact - Turning a Problem into a Solution, 2017
[Paper] [Poster]
Incorporating Side-Channel Information into Convolutional Neural Networks for Robotic Tasks
Yilun Zhou, Kris Hauser
IEEE International Conference on Robotics and Automation (ICRA), 2017
[Paper] [Code]
Asymptotically Optimal Planning by Feasible Kinodynamic Planning in a State-Cost Space
Kris Hauser, Yilun Zhou
IEEE Transactions on Robotics (TRO), 2016
[Paper] [Code] [Website]