Lesia Semenova

Safe, Trustworthy, and Interpretable AI


Assistant Professor at Rutgers University
New York Metropolitan Area
E-mail: lesia.semenova [at] rutgers.edu


I am an Assistant Professor of Computer Science at Rutgers University, where I lead a research group that advances the foundations, algorithms, and applied practice for safe, trustworthy, and interpretable AI through model and representation multiplicity. My work formalizes the Rashomon Effect—the existence of many equally accurate but behaviorally different models—to move the field beyond a single-model mindset.

A key question my work addresses is how to use model multiplicity in practice. By characterizing Rashomon sets, or sets of near-optimal models, I develop methods to navigate these spaces and identify models that satisfy additional desiderata, such as interpretability or robustness. This approach leverages model diversity to enable new algorithmic tools for robust recourse, personalized alignment, and stable decision-making in high-stakes fields like healthcare and public policy. Ultimately, my research aims to transform uncertainty from a source of instability into a resource for trust. I am increasingly extending these ideas to foundation models and LLMs, where multiplicity naturally arises through internal representations and reasoning paths.
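To make the idea concrete, here is a minimal sketch of a Rashomon set under toy assumptions (the synthetic dataset, candidate models, and tolerance epsilon below are illustrative placeholders, not drawn from any particular paper):

```python
# Illustrative sketch: a Rashomon set is the set of models whose loss is
# within epsilon of the best observed loss. Dataset, candidates, and
# epsilon here are placeholders chosen only for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=3, random_state=0),
    DecisionTreeClassifier(random_state=0),
    RandomForestClassifier(n_estimators=100, random_state=0),
]

losses = []
for model in candidates:
    model.fit(X_tr, y_tr)
    losses.append(1.0 - model.score(X_te, y_te))  # 0-1 loss on held-out data

epsilon = 0.01  # tolerance around the best observed loss
best = min(losses)
rashomon_set = [m for m, l in zip(candidates, losses) if l <= best + epsilon]
# Every model in rashomon_set is near-optimal; secondary criteria such as
# sparsity or robustness can then be used to choose among them.
```

The actual methods characterize Rashomon sets over entire hypothesis classes rather than a handful of fitted models, but the selection principle is the same: first find everything near-optimal, then optimize for interpretability, robustness, or other desiderata within that set.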

Before joining Rutgers, I was a postdoctoral researcher at Microsoft Research (NYC) and received my PhD in Computer Science from Duke University. Earlier, I worked on augmented reality at Samsung R&D Institute Ukraine and earned my MS and BS in Applied Mathematics from Taras Shevchenko National University of Kyiv.

I am currently recruiting students at Rutgers. If you're interested in collaborating or joining my group, please take a look at this page.


Recent News

Dec 21, 2025 Proud to share this recap of our journey to NeurIPS through Ira’s eyes (article in Ukrainian). It’s a raw look at the student side of research: from the foundational months of deep-diving into literature to the high-intensity push for the final submission and the paper’s presentation in San Diego.
Sep 23, 2025 Our paper This EEG Looks Like These EEGs: Interpretable Interictal Epileptiform Discharge Detection With ProtoEEG-kNN has been accepted to MICCAI 2025 and will be presented at the conference this week. Congratulations to Dennis!
Sep 18, 2025 We got two papers accepted to NeurIPS this year on the trustworthiness of Rashomon sets: ElliCE: Efficient and Provably Robust Algorithmic Recourse via the Rashomon Sets (Spotlight) and The Rashomon Set Has It All: Analyzing Trustworthiness of Trees under Multiplicity. Congratulations to Bohdan, Iryna, Ethan, and Tony!
Sep 15, 2025 I’m co-organizing a workshop and a tutorial on the Rashomon Effect at AAAI 2026. If you’re planning to attend the conference, I’d love to see you there—and encourage you to consider submitting to the workshop.
Sep 01, 2025 I am excited to be starting a new position as Assistant Professor in the Department of Computer Science at Rutgers University. Looking forward to continuing research and teaching in responsible and trustworthy AI.
Aug 06, 2025 I presented at the JSM Annual Meeting in Nashville. The work proves that simple, interpretable models can often achieve accuracy comparable to complex ones, which has implications for AI policy.
Jul 31, 2025 Our paper on evaluating equitable transit subsidy programs has been accepted to Harvard Data Science Review. We propose an interpretable causal inference pipeline to study the long-term ridership impacts of King County Metro's programs.
May 28, 2025 I have been selected as a Top Reviewer at ICML 2025.
Feb 17, 2025 New preprint! We identified distinct immune profiles in people with HIV on ART, linked to CD4:CD8 ratio (a key marker of immune recovery). Our work is now on bioRxiv.
Jan 06, 2025 Congratulations to Harry Chen for being selected as a finalist for the 2024-2025 Outstanding Undergraduate Researcher Award from the Computing Research Association!
Dec 15, 2024 Presented at the 18th International Joint Conference CFE-CMStatistics 2024.
Dec 12, 2024 Our paper Fast and Interpretable Mortality Risk Scores for Critical Care Patients has been accepted to JAMIA. Congrats to Tony and Chloe!
Nov 12, 2024 I received the PhD Dissertation Award from the Department of Computer Science at Duke University.
Nov 08, 2024 I will serve as a mentor at the WiML Workshop at NeurIPS 2024. Come join us!
Nov 07, 2024 I will attend the AI-Mediated Society Mixer at Rutgers University on November 20.
Oct 20, 2024 Presented the density trees and lists paper at the 2024 INFORMS Annual Meeting. I also chaired a session on interpretable ML.
Sep 25, 2024 Our paper on Using Noise to Infer Aspects of Simplicity Without Learning was accepted to NeurIPS 2024.
Sep 05, 2024 Presented at the Theory of Interpretable AI Seminar. Please see the recording here.
Aug 02, 2024 We are organizing a NeurIPS 2024 workshop on Interpretable AI. Our topics span from classical interpretability to modern methods for foundation models and mechanistic interpretability.
Jul 26, 2024 Presented at the 25th International Symposium on Mathematical Programming in Montreal, Canada.
Jul 01, 2024 I started my postdoctoral research position at Microsoft Research, NYC.
Jun 12, 2024 Our position paper Amazing Things Come From Having Many Good Models was selected as a spotlight paper at ICML 2024.
May 14, 2024 Presented at the AI/ML seminar for the Johns Hopkins Applied Physics Lab.
Mar 07, 2024 I was named one of the 2024 Rising Stars in Computational and Data Sciences.
Jan 24, 2024 Congratulations to Dennis Tang and Harry Chen for being selected for Honorable Mention for the 2023-2024 Outstanding Undergraduate Researcher Award from the Computing Research Association!