Jörn-Henrik Jacobsen
Apple AI/ML Research
Verified email at vectorinstitute.ai
Title · Cited by · Year
Why musical memory can be preserved in advanced Alzheimer’s disease
JH Jacobsen, J Stelzer, TH Fritz, G Chételat, R La Joie, R Turner
Brain: a Journal of Neurology 138 (8), 2438-2450, 2015
Cited by 171 · 2015
Invertible residual networks
J Behrmann, W Grathwohl, RTQ Chen, D Duvenaud, JH Jacobsen
International Conference on Machine Learning, 573-582, 2019
Cited by 139 · 2019
i-RevNet: Deep Invertible Networks
JH Jacobsen, A Smeulders, E Oyallon
International Conference on Learning Representations (ICLR), 2018
Cited by 139 · 2018
Residual flows for invertible generative modeling
RTQ Chen, J Behrmann, DK Duvenaud, JH Jacobsen
Advances in Neural Information Processing Systems, 9916-9926, 2019
Cited by 62 · 2019
Excessive invariance causes adversarial vulnerability
JH Jacobsen, J Behrmann, R Zemel, M Bethge
arXiv preprint arXiv:1811.00401, 2018
Cited by 59 · 2018
Structured Receptive Fields in CNNs
JH Jacobsen, J van Gemert, Z Lou, AWM Smeulders
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2610-2619, 2016
Cited by 52 · 2016
Your classifier is secretly an energy based model and you should treat it like one
W Grathwohl, KC Wang, JH Jacobsen, D Duvenaud, M Norouzi, ...
arXiv preprint arXiv:1912.03263, 2019
Cited by 48 · 2019
Flexibly fair representation learning by disentanglement
E Creager, D Madras, JH Jacobsen, MA Weis, K Swersky, T Pitassi, ...
arXiv preprint arXiv:1906.02589, 2019
Cited by 48 · 2019
Shortcut Learning in Deep Neural Networks
R Geirhos, JH Jacobsen, C Michaelis, R Zemel, W Brendel, M Bethge, ...
arXiv preprint arXiv:2004.07780, 2020
Cited by 29 · 2020
Fundamental tradeoffs between invariance and sensitivity to adversarial perturbations
F Tramèr, J Behrmann, N Carlini, N Papernot, JH Jacobsen
arXiv preprint arXiv:2002.04599, 2020
Cited by 21* · 2020
How to train your neural ODE: the world of Jacobian and kinetic regularization
C Finlay, JH Jacobsen, L Nurbekyan, AM Oberman
International Conference on Machine Learning, 2020
Cited by 19* · 2020
Hierarchical Attribute CNNs
JH Jacobsen, E Oyallon, S Mallat, AWM Smeulders
ICML 2017 Workshop on Principled Approaches to Deep Learning, 2017
Cited by 17* · 2017
Dynamic steerable blocks in deep residual networks
JH Jacobsen, B De Brabandere, AWM Smeulders
arXiv preprint arXiv:1706.00598, 2017
Cited by 17 · 2017
Out-of-distribution generalization via risk extrapolation (REx)
D Krueger, E Caballero, JH Jacobsen, A Zhang, J Binas, RL Priol, ...
arXiv preprint arXiv:2003.00688, 2020
Cited by 12 · 2020
Understanding the Limitations of Conditional Generative Models
E Fetaya, JH Jacobsen, W Grathwohl, R Zemel
arXiv preprint arXiv:1906.01171, 2019
Cited by 11* · 2019
Preventing gradient attenuation in Lipschitz constrained convolutional networks
Q Li, S Haque, C Anil, J Lucas, RB Grosse, JH Jacobsen
Advances in neural information processing systems, 15390-15402, 2019
Cited by 6 · 2019
Understanding and mitigating exploding inverses in invertible neural networks
J Behrmann, P Vicol, KC Wang, R Grosse, JH Jacobsen
arXiv preprint arXiv:2006.09347, 2020
Cited by 3* · 2020
Cutting out the Middle-Man: Training and Evaluating Energy-Based Models without Sampling
W Grathwohl, KC Wang, JH Jacobsen, D Duvenaud, R Zemel
arXiv preprint arXiv:2002.05616, 2020
Cited by 3 · 2020
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization
E Creager, JH Jacobsen, R Zemel
arXiv preprint arXiv:2010.07249, 2020
2020
Scaling RBMs to High Dimensional Data with Invertible Neural Networks
W Grathwohl, X Li, K Swersky, M Hashemi, JH Jacobsen, M Norouzi, ...
2020
Articles 1–20