Xiang Lisa Li
Verified email at stanford.edu
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 4280 · 2021
Prefix-tuning: Optimizing continuous prompts for generation
XL Li, P Liang
arXiv preprint arXiv:2101.00190, 2021
Cited by 3859 · 2021
Diffusion-LM improves controllable text generation
X Li, J Thickstun, I Gulrajani, PS Liang, TB Hashimoto
Advances in Neural Information Processing Systems 35, 4328-4343, 2022
Cited by 655 · 2022
Contrastive decoding: Open-ended text generation as optimization
XL Li, A Holtzman, D Fried, P Liang, J Eisner, T Hashimoto, L Zettlemoyer, ...
arXiv preprint arXiv:2210.15097, 2022
Cited by 224 · 2022
Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP
O Khattab, K Santhanam, XL Li, D Hall, P Liang, C Potts, M Zaharia
arXiv preprint arXiv:2212.14024, 2022
Cited by 192 · 2022
Learning to compress prompts with gist tokens
J Mu, X Li, N Goodman
Advances in Neural Information Processing Systems 36, 2024
Cited by 139 · 2024
Evaluating human-language model interaction
M Lee, M Srivastava, A Hardy, J Thickstun, E Durmus, A Paranjape, ...
arXiv preprint arXiv:2212.09746, 2022
Cited by 101 · 2022
Specializing word embeddings (for parsing) by information bottleneck
XL Li, J Eisner
arXiv preprint arXiv:1910.00163, 2019
Cited by 78 · 2019
Posterior control of blackbox generation
XL Li, AM Rush
arXiv preprint arXiv:2005.04560, 2020
Cited by 28 · 2020
Decoding methods for neural narrative generation
A DeLucia, A Mueller, XL Li, J Sedoc
arXiv preprint arXiv:2010.07375, 2020
Cited by 26 · 2020
On the learnability of watermarks for language models
C Gu, XL Li, P Liang, T Hashimoto
arXiv preprint arXiv:2312.04469, 2023
Cited by 25 · 2023
Benchmarking and improving generator-validator consistency of language models
XL Li, V Shrivastava, S Li, T Hashimoto, P Liang
arXiv preprint arXiv:2310.01846, 2023
Cited by 22 · 2023
Ensembles and cocktails: Robust finetuning for natural language generation
J Hewitt, XL Li, SM Xie, B Newman, P Liang
Cited by 10 · 2021
TempLM: Distilling language models into template-based generators
T Zhang, M Lee, L Li, E Shen, TB Hashimoto
arXiv preprint arXiv:2205.11055, 2022
Cited by 6 · 2022
A generative model for punctuation in dependency trees
XL Li, D Wang, J Eisner
Transactions of the Association for Computational Linguistics 7, 357-373, 2019
Cited by 5 · 2019
Autobencher: Creating salient, novel, difficult datasets for language models
XL Li, EZ Liu, P Liang, T Hashimoto
arXiv preprint arXiv:2407.08351, 2024
Cited by 4 · 2024
Few-Shot Recalibration of Language Models
XL Li, U Khandelwal, K Guu
arXiv preprint arXiv:2403.18286, 2024
Cited by 4 · 2024
Calibrated on Average, but not Within Each Slice: Few-shot Calibration for All Slices of a Distribution
XL Li, U Khandelwal, K Guu