Thank you for your interest in my research group!
Prospective Ph.D. Students
I am currently looking for motivated Ph.D. students. Please apply through the Rutgers Ph.D. Program in Computer Science and mention my name in your application. The Ph.D. application deadline for students starting in Fall 2025 is January 1, 2025. Feel free to email me with a brief description of your research interests and experience. Be sure to include your CV and transcript, and add “Prospective Ph.D. Student” to the subject line.
Rutgers Undergraduate or Graduate Students
Please visit the Research page to learn more about my projects. If you are interested in collaborating, send me an email describing your background and what you would like to work on. Include your CV and transcript, indicate how much time you can dedicate to the project, and add “Rutgers Student Collaborator” to the email subject line.
Visiting Student Researchers or Interns
We have various projects that would benefit from collaboration with interns or visiting students who have a strong background in one or more of the following areas: machine learning, optimization, machine learning theory, human-computer interaction, mechanistic interpretability, foundation models, or Python programming. If you are interested in working together, send me a brief email outlining your research interests and experience. Include your CV, a note on how much time you can dedicate to the project, and add “Visiting Researcher/Intern” to the email subject line. I may have limited bandwidth, but I will do my best to reply to your message.
Some of the topics that we are currently working on or plan to work on include:
- Quantifying model multiplicity and uncertainty in machine learning
- Studying properties of the set of models that perform approximately equally well
- Learning under noise
- Robustness, distribution shifts, and dataset shifts
- Designing interpretable machine learning models
- Improving fairness and safety of machine learning models and data science pipelines
- Evaluating trustworthiness and uncertainty of LLMs
- Interpretability in reinforcement learning
- Applications of machine learning in high-stakes decision domains (such as healthcare, finance, criminal justice, and governance), focusing on responsible AI/ML