Harvard University AI4LIFE research group
My research interests lie in the broad area of trustworthy machine learning. More specifically, I focus on improving the interpretability, fairness, privacy, robustness, and reasoning capabilities of a variety of ML models, including large language models and other pre-trained models. My research addresses the following fundamental questions pertaining to human and algorithmic decision-making:
- How can we build interpretable and accurate models to assist in human decision-making?
- How do we identify and correct underlying biases in both human decisions and model predictions?
- How can we ensure that models and their interpretations are robust to adversarial and privacy attacks?
- How do we train and evaluate models when faced with missing counterfactuals?
These questions have far-reaching implications in domains involving high-stakes decisions such as health care, policy, law, and business.
I lead the AI4LIFE research group at Harvard, and I recently co-founded the Trustworthy ML Initiative (TrustML) to help lower entry barriers into trustworthy ML and bring together researchers and practitioners working in the field. My research is generously supported by NSF, Google, Amazon, JP Morgan, Adobe, Bayer, the Harvard Data Science Initiative, and the D^3 Institute at Harvard. My work has been featured in major media outlets including the New York Times, TIME magazine, Fortune, Forbes, MIT Technology Review, and Harvard Business Review.
Please check out my CV for more details about me and my research.
NOTE: I am looking for motivated graduate and undergraduate students and postdocs who are broadly interested in trustworthy machine learning and large pre-trained models. If you are excited about this line of research and would like to work with me, please read this before contacting me.