Blake Woodworth
Inria (SIERRA)
Verified email at inria.fr
Title · Cited by · Year
Learning non-discriminatory predictors
B Woodworth, S Gunasekar, MI Ohannessian, N Srebro
Conference on Learning Theory, 1920-1953, 2017
Cited by 224 · 2017
Implicit regularization in matrix factorization
S Gunasekar, B Woodworth, S Bhojanapalli, B Neyshabur, N Srebro
2018 Information Theory and Applications Workshop (ITA), 1-10, 2018
Cited by 222 · 2018
Tight complexity bounds for optimizing composite objectives
BE Woodworth, N Srebro
Advances in neural information processing systems 29, 3639-3647, 2016
Cited by 168 · 2016
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Cited by 89* · 2020
Lower bounds for non-convex stochastic optimization
Y Arjevani, Y Carmon, JC Duchi, DJ Foster, N Srebro, B Woodworth
arXiv preprint arXiv:1912.02365, 2019
Cited by 83 · 2019
Graph oracle models, lower bounds, and gaps for parallel stochastic optimization
B Woodworth, J Wang, A Smith, B McMahan, N Srebro
arXiv preprint arXiv:1805.10222, 2018
Cited by 75 · 2018
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
Cited by 73 · 2020
Training well-generalizing classifiers for fairness metrics and other data-dependent constraints
A Cotter, M Gupta, H Jiang, N Srebro, K Sridharan, S Wang, B Woodworth, ...
International Conference on Machine Learning, 1397-1405, 2019
Cited by 55 · 2019
Minibatch vs local SGD for heterogeneous distributed learning
B Woodworth, KK Patel, N Srebro
arXiv preprint arXiv:2006.04735, 2020
Cited by 44 · 2020
Implicit bias in deep linear classification: Initialization scale vs training accuracy
E Moroshko, S Gunasekar, B Woodworth, JD Lee, N Srebro, D Soudry
arXiv preprint arXiv:2007.06738, 2020
Cited by 25 · 2020
The complexity of making the gradient small in stochastic convex optimization
DJ Foster, A Sekhari, O Shamir, N Srebro, K Sridharan, B Woodworth
Conference on Learning Theory, 1319-1345, 2019
Cited by 22 · 2019
Lower bound for randomized first order convex optimization
B Woodworth, N Srebro
arXiv preprint arXiv:1709.03594, 2017
Cited by 19 · 2017
The gradient complexity of linear regression
M Braverman, E Hazan, M Simchowitz, B Woodworth
Conference on Learning Theory, 627-647, 2020
Cited by 10 · 2020
On the implicit bias of initialization shape: Beyond infinitesimal mirror descent
S Azulay, E Moroshko, MS Nacson, B Woodworth, N Srebro, A Globerson, ...
arXiv preprint arXiv:2102.09769, 2021
Cited by 8 · 2021
Mirrorless mirror descent: A more natural discretization of Riemannian gradient flow
S Gunasekar, B Woodworth, N Srebro
arXiv preprint arXiv:2004.01025, 2020
Cited by 6 · 2020
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 5 · 2021
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
B Woodworth, B Bullins, O Shamir, N Srebro
arXiv preprint arXiv:2102.01583, 2021
Cited by 5 · 2021
Guaranteed validity for empirical approaches to adaptive data analysis
R Rogers, A Roth, A Smith, N Srebro, O Thakkar, B Woodworth
International Conference on Artificial Intelligence and Statistics, 2830-2840, 2020
Cited by 5 · 2020
The everlasting database: Statistical validity at a fair price
B Woodworth, V Feldman, S Rosset, N Srebro
arXiv preprint arXiv:1803.04307, 2018
Cited by 2 · 2018
Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent
S Gunasekar, B Woodworth, N Srebro
International Conference on Artificial Intelligence and Statistics, 2305-2313, 2021
Cited by 1 · 2021