On the opportunities and risks of foundation models. R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, et al. arXiv preprint arXiv:2108.07258, 2021. Cited by 4280.
Prefix-tuning: Optimizing continuous prompts for generation. XL Li, P Liang. arXiv preprint arXiv:2101.00190, 2021. Cited by 3859.
Diffusion-LM improves controllable text generation. X Li, J Thickstun, I Gulrajani, PS Liang, TB Hashimoto. Advances in Neural Information Processing Systems 35, 4328-4343, 2022. Cited by 655.
Contrastive decoding: Open-ended text generation as optimization. XL Li, A Holtzman, D Fried, P Liang, J Eisner, T Hashimoto, L Zettlemoyer, et al. arXiv preprint arXiv:2210.15097, 2022. Cited by 224.
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP. O Khattab, K Santhanam, XL Li, D Hall, P Liang, C Potts, M Zaharia. arXiv preprint arXiv:2212.14024, 2022. Cited by 192.
Learning to compress prompts with gist tokens. J Mu, X Li, N Goodman. Advances in Neural Information Processing Systems 36, 2024. Cited by 139.
Evaluating human-language model interaction. M Lee, M Srivastava, A Hardy, J Thickstun, E Durmus, A Paranjape, et al. arXiv preprint arXiv:2212.09746, 2022. Cited by 101.
Specializing word embeddings (for parsing) by information bottleneck. XL Li, J Eisner. arXiv preprint arXiv:1910.00163, 2019. Cited by 78.
Posterior control of blackbox generation. XL Li, AM Rush. arXiv preprint arXiv:2005.04560, 2020. Cited by 28.
Decoding methods for neural narrative generation. A DeLucia, A Mueller, XL Li, J Sedoc. arXiv preprint arXiv:2010.07375, 2020. Cited by 26.
On the learnability of watermarks for language models. C Gu, XL Li, P Liang, T Hashimoto. arXiv preprint arXiv:2312.04469, 2023. Cited by 25.
Benchmarking and improving generator-validator consistency of language models. XL Li, V Shrivastava, S Li, T Hashimoto, P Liang. arXiv preprint arXiv:2310.01846, 2023. Cited by 22.
Ensembles and cocktails: Robust finetuning for natural language generation. J Hewitt, XL Li, SM Xie, B Newman, P Liang. 2021. Cited by 10.
TempLM: Distilling language models into template-based generators. T Zhang, M Lee, L Li, E Shen, TB Hashimoto. arXiv preprint arXiv:2205.11055, 2022. Cited by 6.
A generative model for punctuation in dependency trees. XL Li, D Wang, J Eisner. Transactions of the Association for Computational Linguistics 7, 357-373, 2019. Cited by 5.
AutoBencher: Creating salient, novel, difficult datasets for language models. XL Li, EZ Liu, P Liang, T Hashimoto. arXiv preprint arXiv:2407.08351, 2024. Cited by 4.
Few-Shot Recalibration of Language Models. XL Li, U Khandelwal, K Guu. arXiv preprint arXiv:2403.18286, 2024. Cited by 4.
Calibrated on Average, but not Within Each Slice: Few-shot Calibration for All Slices of a Distribution. XL Li, U Khandelwal, K Guu.