Gen Li
Title · Cited by · Year
Breaking the sample size barrier in model-based reinforcement learning with a generative model
G Li, Y Wei, Y Chi, Y Gu, Y Chen
Advances in neural information processing systems 33, 12861-12872, 2020
Cited by 128 · 2020
Nonconvex low-rank tensor completion from noisy data
C Cai, G Li, HV Poor, Y Chen
Advances in neural information processing systems 32, 2019
Cited by 106 · 2019
Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction
G Li, Y Wei, Y Chi, Y Gu, Y Chen
IEEE Transactions on Information Theory 68 (1), 448-473, 2021
Cited by 104 · 2021
Phase transitions of spectral initialization for high-dimensional non-convex estimation
YM Lu, G Li
Information and Inference: A Journal of the IMA 9 (3), 507-541, 2020
Cited by 92 · 2020
Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity
L Shi, G Li, Y Wei, Y Chen, Y Chi
International conference on machine learning, 19967-20025, 2022
Cited by 76 · 2022
Is Q-learning minimax optimal? A tight sample complexity analysis
G Li, C Cai, Y Chen, Y Wei, Y Chi
Operations Research, 2023
Cited by 66 · 2023
Subspace estimation from unbalanced and incomplete data matrices: ℓ2,∞ statistical guarantees
C Cai, G Li, Y Chi, HV Poor, Y Chen
The Annals of Statistics 49 (2), 944-967, 2021
Cited by 66 · 2021
Settling the sample complexity of model-based offline reinforcement learning
G Li, L Shi, Y Chen, Y Chi, Y Wei
The Annals of Statistics 52 (1), 233-260, 2024
Cited by 65 · 2024
Softmax policy gradient methods can take exponential time to converge
G Li, Y Wei, Y Chi, Y Gu, Y Chen
Conference on Learning Theory, 3107-3110, 2021
Cited by 46 · 2021
Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning
G Li, L Shi, Y Chen, Y Gu, Y Chi
Advances in Neural Information Processing Systems 34, 17762-17776, 2021
Cited by 43 · 2021
The efficacy of pessimism in asynchronous Q-learning
Y Yan, G Li, Y Chen, J Fan
IEEE Transactions on Information Theory, 2023
Cited by 40 · 2023
Active orthogonal matching pursuit for sparse subspace clustering
Y Chen, G Li, Y Gu
IEEE Signal Processing Letters 25 (2), 164-168, 2017
Cited by 38 · 2017
Restricted isometry property of Gaussian random projection for finite set of subspaces
G Li, Y Gu
IEEE Transactions on Signal Processing 66 (7), 1705-1720, 2017
Cited by 34 · 2017
Sample-efficient reinforcement learning is feasible for linearly realizable MDPs with limited revisiting
G Li, Y Chen, Y Chi, Y Gu, Y Wei
Advances in Neural Information Processing Systems 34, 16671-16685, 2021
Cited by 31 · 2021
Phase retrieval using iterative projections: Dynamics in the large systems limit
G Li, Y Gu, YM Lu
2015 53rd Annual Allerton Conference on Communication, Control, and …, 2015
Cited by 28 · 2015
Minimax-optimal multi-agent RL in Markov games with a generative model
G Li, Y Chi, Y Wei, Y Chen
Advances in Neural Information Processing Systems 35, 15353-15367, 2022
Cited by 22* · 2022
A non-asymptotic framework for approximate message passing in spiked models
G Li, Y Wei
arXiv preprint arXiv:2208.03313, 2022
Cited by 22 · 2022
Towards faster non-asymptotic convergence for diffusion-based generative models
G Li, Y Wei, Y Chen, Y Chi
arXiv preprint arXiv:2306.09251, 2023
Cited by 21 · 2023
Model-based reinforcement learning is minimax-optimal for offline zero-sum Markov games
Y Yan, G Li, Y Chen, J Fan
arXiv preprint arXiv:2206.04044, 2022
Cited by 21 · 2022
Regularization in Two-Layer Neural Networks
G Li, Y Gu, J Ding
IEEE Signal Processing Letters 29, 135-139, 2021
Cited by 17* · 2021