Harm van Seijen
Microsoft Research
Verified email at microsoft.com
Title / Cited by / Year
Hybrid reward architecture for reinforcement learning
H Van Seijen, M Fatemi, J Romoff, R Laroche, T Barnes, J Tsang
Advances in Neural Information Processing Systems 30, 2017
Cited by 222 (2017)
A theoretical and empirical analysis of Expected Sarsa
H Van Seijen, H Van Hasselt, S Whiteson, M Wiering
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement …, 2009
Cited by 208 (2009)
Reducing network agnostophobia
AR Dhamija, M Günther, T Boult
Advances in Neural Information Processing Systems 31, 2018
Cited by 186 (2018)
True online TD(λ)
H van Seijen, RS Sutton
International Conference on Machine Learning, 692-700, 2014
Cited by 113 (2014)
True online temporal-difference learning
H Van Seijen, AR Mahmood, PM Pilarski, MC Machado, RS Sutton
The Journal of Machine Learning Research 17 (1), 5057-5096, 2016
Cited by 102 (2016)
A Deeper Look at Planning as Learning from Replay
H van Seijen, RS Sutton
International Conference on Machine Learning, 2015
Cited by 57 (2015)
Systematic generalisation with group invariant predictions
F Ahmed, Y Bengio, H van Seijen, A Courville
International Conference on Learning Representations, 2020
Cited by 50 (2020)
Planning by prioritized sweeping with small backups
H Van Seijen, R Sutton
International Conference on Machine Learning, 361-369, 2013
Cited by 48* (2013)
Exploiting Best-Match Equations for Efficient Reinforcement Learning
H van Seijen, S Whiteson, H van Hasselt, M Wiering
Journal of Machine Learning Research 12 (6), 2011
Cited by 26 (2011)
Multi-advisor reinforcement learning
R Laroche, M Fatemi, J Romoff, H van Seijen
arXiv preprint arXiv:1704.00756, 2017
Cited by 25 (2017)
Using a logarithmic mapping to enable lower discount factors in reinforcement learning
H Van Seijen, M Fatemi, A Tavakoli
Advances in Neural Information Processing Systems 32, 2019
Cited by 23 (2019)
On value function representation of long horizon problems
L Lehnert, R Laroche, H van Seijen
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
Cited by 19 (2018)
Effective multi-step temporal-difference learning for non-linear function approximation
H van Seijen
arXiv preprint arXiv:1608.05151, 2016
Cited by 17 (2016)
Efficient abstraction selection in reinforcement learning
H Van Seijen, S Whiteson, L Kester
Computational Intelligence 30 (4), 657-699, 2014
Cited by 16 (2014)
Separation of concerns in reinforcement learning
H van Seijen, M Fatemi, J Romoff, R Laroche
arXiv preprint arXiv:1612.05159, 2016
Cited by 12 (2016)
Learning invariances for policy generalization
R Tachet, P Bachman, H van Seijen
arXiv preprint arXiv:1809.02591, 2018
Cited by 11 (2018)
Forward actor-critic for nonlinear function approximation in reinforcement learning
V Veeriah, H van Seijen, RS Sutton
Proceedings of the 16th Conference on Autonomous Agents and MultiAgent …, 2017
Cited by 9 (2017)
Switching between representations in reinforcement learning
H van Seijen, S Whiteson, L Kester
Interactive Collaborative Information Systems, 65-84, 2010
Cited by 9 (2010)
Dead-ends and secure exploration in reinforcement learning
M Fatemi, S Sharma, H Van Seijen, SE Kahou
International Conference on Machine Learning, 1873-1881, 2019
Cited by 8 (2019)
Modular lifelong reinforcement learning via neural composition
JA Mendez, H van Seijen, E Eaton
arXiv preprint arXiv:2207.00429, 2022
Cited by 7 (2022)
Articles 1–20