Michael B. Chang
UC Berkeley, Swiss AI Lab IDSIA, Massachusetts Institute of Technology
Verified email at berkeley.edu
A Compositional Object-Based Approach To Learning Physical Dynamics
MB Chang, T Ullman, A Torralba, JB Tenenbaum
International Conference on Learning Representations 5, 2016
Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions
S van Steenkiste, M Chang, K Greff, J Schmidhuber
International Conference on Learning Representations 6, 2018
MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies
XB Peng, M Chang, G Zhang, P Abbeel, S Levine
arXiv preprint arXiv:1905.09808, 2019
Entity Abstraction in Visual Model-Based Reinforcement Learning
R Veerapaneni*, JD Co-Reyes*, M Chang*, M Janner, C Finn, J Wu, ...
Conference on Robot Learning, 2019
Automatically composing representation transformations as a means for generalization
MB Chang, A Gupta, S Levine, TL Griffiths
International Conference on Learning Representations 7, 2018
Understanding visual concepts with continuation learning
WF Whitney, M Chang, T Kulkarni, JB Tenenbaum
arXiv preprint arXiv:1602.06822, 2016
Doing more with less: Meta-reasoning and meta-learning in humans and machines
TL Griffiths, F Callaway, MB Chang, E Grant, PM Krueger, F Lieder
Current Opinion in Behavioral Sciences 29, 24-30, 2019
Representational efficiency outweighs action efficiency in human program induction
S Sanborn, DD Bourgin, M Chang, TL Griffiths
Annual Meeting of the Cognitive Science Society (CogSci), 2018
Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions
M Chang, S Kaushik, SM Weinberg, TL Griffiths, S Levine
International Conference on Machine Learning 37, 2020
Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment
M Chang, S Kaushik, S Levine, TL Griffiths
International Conference on Machine Learning 139, 1452-1462, 2021