I aim to facilitate informed decision-making and to empower machine learning practitioners, data scientists, and policymakers with the tools needed to learn from data effectively, emphasizing interpretability, robustness, and trustworthiness. My goals are both to provide theoretical foundations that explain common phenomena observed in data (especially for interpretable ML) and to design practical tools for reliable and trustworthy AI. The applications of my work are typically in high-stakes decision domains such as healthcare, finance, criminal justice, and governance.
In my recent research, I have established a theoretical foundation that explains when and why accurate interpretable/simple models exist. To do so, I leveraged the Rashomon effect, the phenomenon in which multiple models perform equally well, and proposed the first approach to quantifying it. It turns out that when the measure of the Rashomon effect is large, well-performing simpler models are more likely to exist.
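To make this concrete: for a hypothesis class F with empirical loss L̂, the ε-Rashomon set collects all models within ε of the best, R(ε) = {f ∈ F : L̂(f) ≤ L̂(f*) + ε}, and the associated measure is the fraction of F that falls inside it. The sketch below is a minimal illustration of that idea on a toy problem; the decision-stump class, the dataset, and the value of ε are illustrative assumptions, not the constructions used in my papers.

```python
# Illustrative sketch of a Rashomon-ratio-style quantity for a finite
# hypothesis class (toy example; not the method from the papers).
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: one feature with noisy threshold labels.
X = rng.uniform(-1.0, 1.0, size=200)
y = (X + rng.normal(scale=0.3, size=200) > 0).astype(int)

# Finite hypothesis class: decision stumps f_t(x) = 1[x > t].
thresholds = np.linspace(-1.0, 1.0, 401)
losses = np.array([np.mean((X > t).astype(int) != y) for t in thresholds])

# Rashomon set: all stumps whose empirical loss is within epsilon of the best.
epsilon = 0.02  # illustrative tolerance
best = losses.min()
in_rashomon = losses <= best + epsilon

# Rashomon ratio: fraction of the hypothesis class inside the Rashomon set.
rashomon_ratio = in_rashomon.mean()
print(f"best loss = {best:.3f}, Rashomon ratio = {rashomon_ratio:.3f}")
```

A large ratio here means many near-optimal models coexist, which is the regime in which a simple, interpretable model is likely to be among them.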
Publications
Google Scholar | dblp | * denotes equal contribution
2024
Advances in Neural Information Processing Systems (NeurIPS), 2024
Proceedings of the International Conference on Machine Learning (ICML), 2024 (spotlight)
INFORMS Journal on Data Science, 2024
Workshop on Interpretable Policies in Reinforcement Learning @ RLC 2024, 2024 (oral)
2023
Advances in Neural Information Processing Systems (NeurIPS), 2023
Medical Imaging meets NeurIPS Workshop, 2023 (oral)
arXiv preprint arXiv:2311.13015, 2023
The Journal of Infectious Diseases (JID), 2023
2022
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
NeurIPS 2022 Workshop on Causality for Real-world Impact, 2022 (won the 2022 American Statistical Association Data Challenge Expo Student Competition)
Statistics Surveys, 2022
2021
Second Workshop on Scholarly Document Processing at NAACL, 2021 (oral; won third place in the 3C Shared Task Competition)