Document ranking with a pretrained sequence-to-sequence model R Nogueira, Z Jiang, J Lin arXiv preprint arXiv:2003.06713, 2020 | 119 | 2020 |
Describing a knowledge base Q Wang, X Pan, L Huang, B Zhang, Z Jiang, H Ji, K Knight arXiv preprint arXiv:1809.01797, 2018 | 38 | 2018 |
Paperrobot: Incremental draft generation of scientific ideas Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan arXiv preprint arXiv:1905.07870, 2019 | 32 | 2019 |
Investigating the limitations of transformers with simple arithmetic tasks R Nogueira, Z Jiang, J Lin arXiv preprint arXiv:2102.13019, 2021 | 12 | 2021 |
Chengyu Cloze Test Z Jiang, B Zhang, L Huang, H Ji BEA@NAACL-HLT, 154-158, 2018 | 9 | 2018 |
Navigation-based candidate expansion and pretrained language models for citation recommendation R Nogueira, Z Jiang, K Cho, J Lin Scientometrics 125 (3), 3001-3016, 2020 | 7 | 2020 |
Evaluating pretrained transformer models for citation recommendation R Nogueira, Z Jiang, K Cho, J Lin CEUR Workshop Proceedings 2591, 89-100, 2020 | 6 | 2020 |
Inserting information bottlenecks for attribution in transformers Z Jiang, R Tang, J Xin, J Lin arXiv preprint arXiv:2012.13838, 2020 | 4 | 2020 |
How Does BERT Rerank Passages? An Attribution Analysis with Information Bottlenecks Z Jiang, R Tang, J Xin, J Lin Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021 | 1 | 2021 |