Nitish Joshi


PhD Student
Courant Institute
New York University

I'm a PhD student in computer science at New York University's Courant Institute of Mathematical Sciences advised by Prof. He He. I work on Machine Learning and Natural Language Processing, and I am affiliated with the ML2 research group.

I am broadly interested in robust language understanding --- ensuring that LLMs do not optimize objectives unintended by their designers (e.g., spurious correlations in finetuning, reward hacking). I am also interested in empirically understanding how LLMs learn certain phenomena (e.g., truthfulness) from data as models scale. My research is graciously supported by an NSF Graduate Research Fellowship and an NYU Dean's Dissertation Fellowship.

During my PhD, I have spent summers interning at Google Gemini/Bard and at Amazon AWS AI. Previously, I completed my undergraduate degree in Computer Science at IIT Bombay, where I did research with Preethi Jyothi and with Mohit Bansal (at UNC Chapel Hill).


Email: nitish@nyu.edu / joshinh@gmail.com


Links: [CV] [Twitter] [Github] [Google Scholar]



Publications

  • LLMs Are Prone to Fallacies in Causal Inference
    Nitish Joshi, Abulhair Saparov, Yixin Wang, He He
    EMNLP 2024
    [bib] [abstract]

  • Personas as a Way to Model Truthfulness in Language Models
    Nitish Joshi*, Javier Rando*, Abulhair Saparov, Najoung Kim, He He
    EMNLP 2024
    [bib] [abstract]

  • Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation
    Aahlad Puli, Nitish Joshi, Yoav Wald, He He, and Rajesh Ranganath
    TMLR, 2024
    [bib] [abstract]

  • Improving Multi-Hop Reasoning in LLMs by Learning from Rich Human Feedback
    Nitish Joshi, Koushik Kalyanaraman, Zhiting Hu, Kumar Chellapilla, He He, Li Erran Li
    NucLeaR Workshop, AAAI 2024
    [bib] [abstract]

  • Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples
    Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim*, He He*
    NeurIPS 2023
    [bib] [abstract]

  • Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
    Chenglei Si*, Dan Friedman*, Nitish Joshi, Shi Feng, Danqi Chen, He He
    ACL 2023
    [bib] [abstract]

  • Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
Nitish Joshi*, Xiang Pan*, and He He
    EMNLP 2022
    [bib] [abstract]

  • QuALITY: Question Answering with Long Input Texts, Yes!
    Richard Yuanzhe Pang*, Alicia Parrish*, Nitish Joshi*, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, Samuel R. Bowman
    NAACL 2022
    [bib] [abstract]

  • An Investigation of the (In)effectiveness of Counterfactually Augmented Data
Nitish Joshi and He He
    ACL 2022
    [bib] [abstract]

  • Coupled Training of Sequence-to-Sequence Models for Accented Speech Recognition
Vinit Unni*, Nitish Joshi*, and Preethi Jyothi
    ICASSP 2020
    [bib] [abstract]

  • Explore, Propose and Assemble: An Interpretable Model for Multi-hop Reading Comprehension
Yichen Jiang*, Nitish Joshi*, Yen-Chun Chen, and Mohit Bansal
    ACL 2019
    [bib] [abstract] [code]

  • Cross-lingual Training for Automatic Question Generation
Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, and Preethi Jyothi
    ACL 2019
    [bib] [abstract] [dataset]


Miscellany

  • In my free time, I like to go for runs and read books. I also enjoy hiking and birdwatching.
  • The source code for this website was borrowed from Nelson Liu (https://nelsonliu.me).