Dilip Arumugam
Title · Cited by · Year
State abstractions for lifelong reinforcement learning
D Abel, D Arumugam, L Lehnert, M Littman
International Conference on Machine Learning, 10-19, 2018
Cited by 42 · 2018
Accurately and efficiently interpreting human-robot instructions of varying granularities
D Arumugam, S Karamcheti, N Gopalan, LLS Wong, S Tellex
arXiv preprint arXiv:1704.06616, 2017
Cited by 40 · 2017
Grounding English Commands to Reward Functions
J MacGlashan, M Babes-Vroman, M desJardins, ML Littman, ...
Robotics: Science and Systems, 2015
Cited by 36 · 2015
Deep reinforcement learning from policy-dependent human feedback
D Arumugam, JK Lee, S Saskin, ML Littman
arXiv preprint arXiv:1902.04257, 2019
Cited by 24 · 2019
State abstraction as compression in apprenticeship learning
D Abel, D Arumugam, K Asadi, Y Jinnai, ML Littman, LLS Wong
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3134-3142, 2019
Cited by 23 · 2019
Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications
N Gopalan, D Arumugam, LLS Wong, S Tellex
Robotics: Science and Systems, 2018
Cited by 21 · 2018
A tale of two DRAGGNs: A hybrid approach for interpreting action-oriented and goal-oriented instructions
S Karamcheti, EC Williams, D Arumugam, M Rhee, N Gopalan, LLS Wong, ...
arXiv preprint arXiv:1707.08668, 2017
Cited by 15 · 2017
Grounding natural language instructions to semantic goal representations for abstraction and generalization
D Arumugam, S Karamcheti, N Gopalan, EC Williams, M Rhee, LLS Wong, ...
Autonomous Robots 43 (2), 449-468, 2019
Cited by 12 · 2019
Toward good abstractions for lifelong learning
D Abel, D Arumugam, L Lehnert, ML Littman
Proceedings of the NIPS workshop on hierarchical reinforcement learning, 92, 2017
Cited by 7 · 2017
Value preserving state-action abstractions
D Abel, N Umbanhowar, K Khetarpal, D Arumugam, D Precup, M Littman
International Conference on Artificial Intelligence and Statistics, 1639-1650, 2020
Cited by 6 · 2020
Modeling latent attention within neural networks
C Grimm, D Arumugam, S Karamcheti, D Abel, LLS Wong, ML Littman
arXiv preprint arXiv:1706.00536, 2017
Cited by 6* · 2017
Mitigating planner overfitting in model-based reinforcement learning
D Arumugam, D Abel, K Asadi, N Gopalan, C Grimm, JK Lee, L Lehnert, ...
arXiv preprint arXiv:1812.01129, 2018
Cited by 3 · 2018
Randomized Value Functions via Posterior State-Abstraction Sampling
D Arumugam, B Van Roy
arXiv preprint arXiv:2010.02383, 2020
Cited by 1 · 2020
An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning
D Arumugam, P Henderson, PL Bacon
arXiv preprint arXiv:2103.06224, 2021
2021
Deciding What to Learn: A Rate-Distortion Approach
D Arumugam, B Van Roy
arXiv preprint arXiv:2101.06197, 2021
2021
Flexible and Efficient Long-Range Planning Through Curious Exploration
A Curtis, M Xin, D Arumugam, K Feigelis, D Yamins
International Conference on Machine Learning, 2238-2249, 2020
2020
Interpreting human-robot instructions
S Tellex, D Arumugam, S Karamcheti, N Gopalan, LLS Wong
US Patent App. 16/806,706, 2020
2020
Reparameterized Variational Divergence Minimization for Stable Imitation
D Arumugam, D Dey, A Agarwal, A Celikyilmaz, E Nouri, B Dolan
arXiv preprint arXiv:2006.10810, 2020
2020
Interpreting human-robot instructions
S Tellex, D Arumugam, S Karamcheti, N Gopalan, LLS Wong
US Patent 10,606,898, 2020
2020
Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications
S Tellex, D Arumugam, N Gopalan, LLS Wong
US Patent App. 16/388,651, 2020
2020