Nitish Joshi

PhD Student
Courant Institute
New York University

I'm a PhD student in computer science at New York University's Courant Institute of Mathematical Sciences, advised by Prof. He He. I work on Machine Learning and Natural Language Processing, and I am affiliated with the Machine Learning for Language (ML2) research group.

I am broadly interested in robust language understanding: making sure that our models do not rely on spurious correlations and generalize well to out-of-distribution data. I am also excited about recent developments in large language models and scaling, and I am trying to empirically understand some of the phenomena we observe. My research is graciously supported by an NSF Graduate Research Fellowship.

Previously, I spent four amazing years as an undergraduate at IIT Bombay, where I majored in Computer Science and Engineering. During that time, I did research at the CSALT Lab, working with Prof. Preethi Jyothi. I have also spent time at NEC Labs Princeton and at UNC Chapel Hill, where I worked with Prof. Mohit Bansal.


Email: nitish@nyu.edu / joshinh@gmail.com


Links: [CV] [Twitter] [Github] [Google Scholar]



Publications

  • Personas as a Way to Model Truthfulness in Language Models
    Nitish Joshi*, Javier Rando*, Abulhair Saparov, Najoung Kim, and He He
    Preprint
    [bib] [abstract]

  • Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples
    Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim*, and He He*
    NeurIPS 2023
    [bib] [abstract]

  • Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
    Chenglei Si*, Dan Friedman*, Nitish Joshi, Shi Feng, Danqi Chen, and He He
    ACL 2023
    [bib] [abstract]

  • Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
    Nitish Joshi*, Xiang Pan*, and He He
    EMNLP 2022
    [bib] [abstract]

  • Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation
    Aahlad Puli, Nitish Joshi, He He, and Rajesh Ranganath
    Preprint
    [bib] [abstract]

  • QuALITY: Question Answering with Long Input Texts, Yes!
    Richard Yuanzhe Pang*, Alicia Parrish*, Nitish Joshi*, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, and Samuel R. Bowman
    NAACL 2022
    [bib] [abstract]

  • An Investigation of the (In)effectiveness of Counterfactually Augmented Data
    Nitish Joshi and He He
    ACL 2022
    [bib] [abstract]

  • Coupled Training of Sequence-to-Sequence Models for Accented Speech Recognition
    Vinit Unni*, Nitish Joshi*, and Preethi Jyothi
    ICASSP 2020
    [bib] [abstract]

  • Explore, Propose and Assemble: An Interpretable Model for Multi-hop Reading Comprehension
    Yichen Jiang*, Nitish Joshi*, Yen-Chun Chen, and Mohit Bansal
    ACL 2019
    [bib] [abstract] [code]

  • Cross-lingual Training for Automatic Question Generation
    Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, and Preethi Jyothi
    ACL 2019
    [bib] [abstract] [dataset]


Miscellany

  • In my free time, I like going for runs and reading books. I also enjoy hiking and birdwatching.
  • The source code for this website was borrowed from Nelson Liu (https://nelsonliu.me).