Reza Babanezhad
Samsung AI Lab
Title · Cited by · Year
Stop Wasting My Gradients: Practical SVRG
RB Harikandeh, MO Ahmed, A Virani, M Schmidt, J Konečný, S Sallinen
Advances in Neural Information Processing Systems, 2251-2259, 2015
118 · 2015
Non-uniform stochastic average gradient method for training conditional random fields
M Schmidt, R Babanezhad, M Ahmed, A Defazio, A Clifton, A Sarkar
Artificial Intelligence and Statistics, 819-828, 2015
72 · 2015
Faster stochastic variational inference using proximal-gradient methods with general divergence functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
25 · 2015
M-ADDA: Unsupervised domain adaptation with deep metric learning
IH Laradji, R Babanezhad
Domain Adaptation for Visual Understanding, 17-31, 2020
13 · 2020
A Generic Top-N Recommendation Framework For Trading-off Accuracy, Novelty, and Coverage
Z Zolaktaf, R Babanezhad, R Pottinger
2018 IEEE 34th International Conference on Data Engineering (ICDE), 149-160, 2018
9 · 2018
Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
7 · 2015
Process patterns for web engineering
R Babanezhad, YM Bibalan, R Ramsin
2010 IEEE 34th Annual Computer Software and Applications Conference, 477-486, 2010
6 · 2010
Reducing the variance in online optimization by transporting past gradients
S Arnold, PA Manzagol, RB Harikandeh, I Mitliagkas, N Le Roux
Advances in Neural Information Processing Systems, 5391-5402, 2019
3 · 2019
MASAGA: A linearly-convergent stochastic first-order method for optimization on manifolds
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2018
3 · 2018
To Each Optimizer a Norm, To Each Norm its Generalization
S Vaswani, R Babanezhad, J Gallego, A Mishkin, S Lacoste-Julien, ...
arXiv preprint arXiv:2006.06821, 2020
1 · 2020
Semantics Preserving Adversarial Learning
OA Dia, E Barshan, R Babanezhad
arXiv preprint arXiv:1903.03905, 2019
1 · 2019
Semantics Preserving Adversarial Attacks
OA Dia, E Barshan, R Babanezhad
2019
Manifold Preserving Adversarial Learning.
OA Dia, E Barshan, R Babanezhad
CoRR, 2019
2019
Online variance-reducing optimization
N Le Roux, R Babanezhad, PA Manzagol
2018
MASAGA: A Stochastic Algorithm for Manifold Optimization
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Stochastic Variational Inference
R Babanezhad
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields: Supplementary Material
M Schmidt, R Babanezhad, MO Ahmed, A Defazio, A Clifton, A Sarkar