Luke Metz
Google Brain
Verified email at google.com
Title · Cited by · Year
Unsupervised representation learning with deep convolutional generative adversarial networks
A Radford, L Metz, S Chintala
ICLR 2016, 2015
Cited by 8747 · 2015
BEGAN: Boundary equilibrium generative adversarial networks
D Berthelot, T Schumm, L Metz
arXiv preprint arXiv:1703.10717, 2017
Cited by 937 · 2017
Unrolled generative adversarial networks
L Metz, B Poole, D Pfau, J Sohl-Dickstein
arXiv preprint arXiv:1611.02163, 2016
Cited by 660 · 2016
Adversarial spheres
J Gilmer, L Metz, F Faghri, SS Schoenholz, M Raghu, M Wattenberg, ...
arXiv preprint arXiv:1801.02774, 2018
Cited by 224 · 2018
Meta-Learning Update Rules for Unsupervised Representation Learning
L Metz, N Maheswaranathan, B Cheung, J Sohl-Dickstein
ICLR 2019, 2018
Cited by 70* · 2018
Discrete Sequential Prediction of Continuous Actions for Deep RL
L Metz, J Ibarz, N Jaitly, J Davidson
arXiv preprint arXiv:1705.05035, 2017
Cited by 52 · 2017
Guided evolutionary strategies: Augmenting random search with surrogate gradients
N Maheswaranathan, L Metz, G Tucker, D Choi, J Sohl-Dickstein
International Conference on Machine Learning, 2019
Cited by 42* · 2019
Understanding and correcting pathologies in the training of learned optimizers
L Metz, N Maheswaranathan, J Nixon, D Freeman, J Sohl-Dickstein
International Conference on Machine Learning, 4556-4565, 2019
Cited by 30* · 2019
Towards GAN Benchmarks Which Require Generalization
I Gulrajani, C Raffel, L Metz
ICLR 2019, 2018
Cited by 19 · 2018
Learning to predict without looking ahead: World models without forward prediction
CD Freeman, L Metz, D Ha
arXiv preprint arXiv:1910.13038, 2019
Cited by 12 · 2019
Using learned optimizers to make models robust to input noise
L Metz, N Maheswaranathan, J Shlens, J Sohl-Dickstein, ED Cubuk
ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning, 2019
Cited by 9 · 2019
Learning an adaptive learning rate schedule
Z Xu, AM Dai, J Kemp, L Metz
arXiv preprint arXiv:1909.09712, 2019
Cited by 6 · 2019
Using a thousand optimization tasks to learn hyperparameter search strategies
L Metz, N Maheswaranathan, R Sun, CD Freeman, B Poole, ...
arXiv preprint arXiv:2002.11887, 2020
Cited by 5 · 2020
On linear identifiability of learned representations
G Roeder, L Metz, DP Kingma
arXiv preprint arXiv:2007.00810, 2020
Cited by 3 · 2020
Meta-learning biologically plausible semi-supervised update rules
K Gu, S Greydanus, L Metz, N Maheswaranathan, J Sohl-Dickstein
bioRxiv, 2019
Cited by 3 · 2019
Visualizing with t-SNE
L Metz
Indico. Web 25, 2015
Cited by 3 · 2015
Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
L Metz, N Maheswaranathan, CD Freeman, B Poole, J Sohl-Dickstein
arXiv preprint arXiv:2009.11243, 2020
Cited by 2 · 2020
Training optimizer neural networks
J Nixon, JN Sohl-Dickstein, LS Metz, CD Freeman, N Maheswaranathan
US Patent App. 16/586,220, 2020
Cited by 1 · 2020
Training Learned Optimizers with Randomly Initialized Learned Optimizers
L Metz, CD Freeman, N Maheswaranathan, J Sohl-Dickstein
arXiv preprint arXiv:2101.07367, 2021
2021
Parallel Training of Deep Networks with Local Updates
M Laskin, L Metz, S Nabarro, M Saroufim, B Noune, C Luschi, ...
arXiv preprint arXiv:2012.03837, 2020
2020