
Chandan Singh

Senior Researcher

About

👋 Hello! I work in the deep learning group on interpretable machine learning, with the broad goal of improving science & medicine with data. For details, see my personal website or my Google Scholar profile. Here's what I'm excited about these days:
🔎 Interpretability. I'm interested in rethinking interpretability in the context of LLMs.

augmented imodels – use LLMs to build a transparent model
imodels – build interpretable models in the style of scikit-learn
explanation penalization – regularize explanations to align models with prior knowledge
adaptive wavelet distillation – replace neural nets with simple, performant wavelet models

🚗 LLM steering. Interpretability tools can provide ways to better guide and use LLMs.

tree prompting – improve black-box few-shot text classification with decision trees
attention steering – guide LLMs by emphasizing specific input spans
interpretable autoprompting – automatically find fluent natural-language prompts

🧠 Neuroscience. Since joining MSR, I have been focused on leveraging LLMs to understand how the human brain represents language (using fMRI in collaboration with the Huth lab at UT Austin).

explanation-mediated validation – build and test fMRI explanations using LLM-generated stimuli
qa embeddings – build interpretable fMRI encoding models by asking yes/no questions to LLMs
summarize & score explanations – generate natural-language explanations of fMRI encoding models

💊 Healthcare. I'm also actively working to improve clinical decision instruments by drawing on information spread across diverse sources in the medical literature (in collaboration with many folks, including Dr. Aaron Kornblith at UCSF and the MSR Health Futures team).

clinical self-verification – self-verification improves the performance and interpretability of clinical information extraction
clinical rule vetting – stress-testing the performance of a clinical decision instrument for intra-abdominal injury

My PhD at UC Berkeley (advised by Bin Yu) focused on working with scientists and doctors to develop interpretations for scientific domains.


Internships / collaborations

If you'd like to chat about research (or are interested in interning at MSR), feel free to reach out over email!

Previously, I've been lucky to help mentor some wonderful students: