Thank you for your interest in my research group!

Prospective Ph.D. Students

I am currently looking for motivated students. Please apply through the Rutgers Ph.D. Program in Computer Science and mention my name in your application. The Ph.D. application deadline for students starting in Fall 2026 is January 1, 2026. Feel free to send me a short email describing your research interests and experience. Be sure to include your CV and transcript, and add “Prospective Ph.D. Student” to the subject line.

I am especially looking for students to work on the following topics:

  • Diversity/Personalization/Uncertainty in LLMs - Developing methods that allow large language models to produce diverse yet valid answers. Such diversity reflects natural human reasoning and supports personalization, alignment, and uncertainty estimation. It can arise from different reasoning paths, internal representations, or parameter configurations, and connects directly to the Rashomon Effect (the existence of many equally good but behaviorally different solutions).
  • Interpretable AI - Understanding the internal computation of large language models, including how they organize information across layers, neurons, and attention heads. This includes studying sparse or modular circuits, identifying functional subcomponents, and developing transformer-specific tools that trace how information flows through the model. A key challenge is faithfulness—ensuring that the explanations these tools provide genuinely reflect the computations the model relies on, rather than artifacts of the analysis method.
  • Stability in LLMs and Foundation Models - Examining how sensitive LLMs are to small changes in prompts, data, fine-tuning, or random seeds. Even models with identical performance can vary internally across training runs. My goal is to identify which aspects of model behavior are stable and reproducible, and which ones vary across alternative solutions.
  • Interpretable ML and Theory-Driven ML - Designing interpretable machine learning algorithms and studying practical ML phenomena from a theoretical perspective. This includes understanding when simple models can match the performance of black-box systems, and developing interpretable methods for high-stakes domains such as medical and financial data. This line of work builds on my prior research on the Rashomon Effect and interpretable ML; see my publications for more details on these topics.

Rutgers Undergraduate or Graduate Students

Please visit the Research page to learn more about my projects. If you are interested in collaborating, send me an email describing your background and what you would like to work on. Include your CV and transcript, indicate how much time you can dedicate to the project, and add “Rutgers Student Collaborator” to the email subject line.

Visiting Student Researchers or Interns

We have various projects that would benefit from collaboration with interns or visiting students who have a strong background in one or more of the following: machine learning, Python programming, optimization, machine learning theory, human-computer interaction, mechanistic interpretability, or foundation models. If you are interested in working together, send me a brief email outlining your research interests and experience. Include your CV and a note on how much time you can dedicate to the project, and add “Visiting Researcher/Intern” to the email subject line. I may have limited bandwidth, but I will do my best to reply to your message.