Xuechen Li
Isolating sources of disentanglement in variational autoencoders
RTQ Chen, X Li, RB Grosse, DK Duvenaud
Advances in neural information processing systems 31, 2018
Cited by 910
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 517
Inference Suboptimality in Variational Autoencoders
C Cremer, X Li, D Duvenaud
International Conference on Machine Learning, 2018
Cited by 224
Scalable gradients for stochastic differential equations
X Li, TKL Wong, RTQ Chen, D Duvenaud
International Conference on Artificial Intelligence and Statistics, 3870-3882, 2020
Cited by 173
Stochastic Runge-Kutta accelerates Langevin Monte Carlo and beyond
X Li, Y Wu, L Mackey, MA Erdogdu
Advances in neural information processing systems 32, 2019
Cited by 47
Large language models can be strong differentially private learners
X Li, F Tramer, P Liang, T Hashimoto
arXiv preprint arXiv:2110.05679, 2021
Cited by 46
Neural SDEs as infinite-dimensional GANs
P Kidger, J Foster, X Li, TJ Lyons
International Conference on Machine Learning, 5453-5463, 2021
Cited by 38
When does preconditioning help or hurt generalization?
S Amari, J Ba, R Grosse, X Li, A Nitanda, T Suzuki, D Wu, J Xu
arXiv preprint arXiv:2006.10732, 2020
Cited by 22
Scalable gradients and variational inference for stochastic differential equations
X Li, TKL Wong, RTQ Chen, DK Duvenaud
Symposium on Advances in Approximate Bayesian Inference, 1-28, 2020
Cited by 18
Infinitely deep Bayesian neural networks with stochastic differential equations
W Xu, RTQ Chen, X Li, D Duvenaud
International Conference on Artificial Intelligence and Statistics, 721-738, 2022
Cited by 17
Efficient and accurate gradients for neural SDEs
P Kidger, J Foster, XC Li, T Lyons
Advances in Neural Information Processing Systems 34, 18747-18761, 2021
Cited by 9
Isolating sources of disentanglement in VAEs
RTQ Chen, X Li, R Grosse, D Duvenaud
Proceedings of the 32nd International Conference on Neural Information …
Cited by 5
When Does Differentially Private Learning Not Suffer in High Dimensions?
X Li, D Liu, T Hashimoto, HA Inan, J Kulkarni, YT Lee, AG Thakurta
arXiv preprint arXiv:2207.00160, 2022
Cited by 1
Holistic Evaluation of Language Models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe
X Yue, HA Inan, X Li, G Kumar, J McAnallen, H Sun, D Levitan, R Sim
arXiv preprint arXiv:2210.14348, 2022
A Closer Look at the Calibration of Differentially Private Learners
H Zhang, X Li, P Sen, S Roukos, T Hashimoto
arXiv preprint arXiv:2210.08248, 2022
Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing
X Li, F Tramer, P Liang, T Hashimoto
NeurIPS 2021 Workshop Privacy in Machine Learning, 2021
Learning to Extend Program Graphs to Work-in-Progress Code
X Li, CJ Maddison, D Tarlow
arXiv preprint arXiv:2105.14038, 2021
The idemetric property: when most distances are (almost) the same
G Barmpalias, N Huang, A Lewis-Pye, A Li, X Li, Y Pan, T Roughgarden
Proceedings of the Royal Society A, 2019
Isolating Sources of Disentanglement in VAEs
RTQ Chen, X Li, R Grosse, D Duvenaud
Proceedings of the 32nd International Conference on Neural Information …, 2019