Prasanna Parthasarathi
Extending neural generative conversational model using external knowledge sources
P Parthasarathi, J Pineau
arXiv preprint arXiv:1809.05524, 2018
Cited by 98
Learning an unreferenced metric for online dialogue evaluation
K Sinha, P Parthasarathi, J Wang, R Lowe, WL Hamilton, J Pineau
arXiv preprint arXiv:2005.00583, 2020
Cited by 87
Unnatural language inference
K Sinha, P Parthasarathi, J Pineau, A Williams
arXiv preprint arXiv:2101.00010, 2020
Cited by 86
Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain
J Rajendran, A Srinivas, MM Khapra, P Prasanna, B Ravindran
arXiv preprint arXiv:1510.02879, 2015
Cited by 61
Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning
A Neelakantan, S Yavuz, S Narang, V Prasad, B Goodrich, D Duckworth, ...
arXiv preprint arXiv:1910.14613, 2019
Cited by 15
Local structure matters most: Perturbation study in NLU
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Cited by 13
Sometimes we want ungrammatical translations
P Parthasarathi, K Sinha, J Pineau, A Williams
Findings of the Association for Computational Linguistics: EMNLP 2021, 3205-3227, 2021
Cited by 12*
ADAAPT: A deep architecture for adaptive policy transfer from multiple sources
J Rajendran, P Prasanna, B Ravindran, MM Khapra
arXiv preprint arXiv 1510, 2015
Cited by 12
MACA: A modular architecture for conversational agents
HP Truong, P Parthasarathi, J Pineau
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, 93-102, 2017
Cited by 11
Demystifying neural language models’ insensitivity to word-order
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955 5, 45-67, 2021
Cited by 10
Deep learning on a healthy data diet: Finding important examples for fairness
A Zayed, P Parthasarathi, G Mordido, H Palangi, S Shabanian, S Chandar
Proceedings of the AAAI Conference on Artificial Intelligence 37 (12), 14593 …, 2023
Cited by 5
Memory Augmented Optimizers for Deep Learning
PA McRae, P Parthasarathi, M Assran, S Chandar
arXiv preprint arXiv:2106.10708, 2021
Cited by 5
Do encoder representations of generative dialogue models have a sufficient summary of the information about the task?
P Parthasarathi, J Pineau, S Chandar
Proceedings of the 22nd Annual Meeting of the Special Interest Group on …, 2021
Cited by 4*
On task-level dialogue composition of generative transformer model
P Parthasarathi, A Neelakantan, S Narang
arXiv preprint arXiv:2010.04826, 2020
Cited by 3
Variational encoder decoder for image generation conditioned on captions
J Romoff, N Angelard-Gontier, P Parthasarathi
2016
Cited by 2
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
A Kazemnejad, M Rezagholizadeh, P Parthasarathi, S Chandar
arXiv preprint arXiv:2305.14775, 2023
Cited by 1
Practical Takes on Federated Learning with Pretrained Language Models
A Agarwal, M Rezagholizadeh, P Parthasarathi
Findings of the Association for Computational Linguistics: EACL 2023, 454-471, 2023
Cited by 1
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
L Clouâtre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2211.05015, 2022
Cited by 1
Local Structure Matters Most in Most Languages
L Clouâtre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2211.05025, 2022
Cited by 1
A brief study on the effects of training generative dialogue models with a semantic loss
P Parthasarathi, M Abdelsalam, J Pineau, S Chandar
arXiv preprint arXiv:2106.10619, 2021
Cited by 1