Men also like shopping: Reducing gender bias amplification using corpus-level constraints. J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang. arXiv preprint arXiv:1707.09457, 2017. Cited by 371.
Neural motifs: Scene graph parsing with global context. R Zellers, M Yatskar, S Thomson, Y Choi. Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018. Cited by 275.
QuAC: Question Answering in Context. E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer. arXiv preprint arXiv:1808.07036, 2018. Cited by 247.
Gender bias in coreference resolution: Evaluation and debiasing methods. J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang. arXiv preprint arXiv:1804.06876, 2018. Cited by 191.
Neural AMR: Sequence-to-sequence models for parsing and generation. I Konstas, S Iyer, M Yatskar, Y Choi, L Zettlemoyer. arXiv preprint arXiv:1704.08381, 2017. Cited by 187.
For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. M Yatskar, B Pang, C Danescu-Niculescu-Mizil, L Lee. arXiv preprint arXiv:1008.1986, 2010. Cited by 168.
VisualBERT: A simple and performant baseline for vision and language. LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang. arXiv preprint arXiv:1908.03557, 2019. Cited by 162.
Situation Recognition: Visual Semantic Role Labeling for Image Understanding. M Yatskar, L Zettlemoyer, A Farhadi. Conference on Computer Vision and Pattern Recognition, 2016. Cited by 147.
Gender bias in contextualized word embeddings. J Zhao, T Wang, M Yatskar, R Cotterell, V Ordonez, KW Chang. arXiv preprint arXiv:1904.03310, 2019. Cited by 107.
Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. C Clark, M Yatskar, L Zettlemoyer. arXiv preprint arXiv:1909.03683, 2019. Cited by 62.
Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. T Wang, J Zhao, M Yatskar, KW Chang, V Ordonez. Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019. Cited by 58*.
A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. M Yatskar. arXiv preprint arXiv:1809.10735, 2018. Cited by 55.
See No Evil, Say No Evil: Description Generation from Densely Labeled Images. M Yatskar, M Galley, L Vanderwende, L Zettlemoyer. Lexical and Computational Semantics (*SEM 2014), 110, 2014. Cited by 51.
Stating the obvious: Extracting visual common sense knowledge. M Yatskar, V Ordonez, A Farhadi. Proceedings of the 2016 Conference of the North American Chapter of the …, 2016. Cited by 37.
Commonly uncommon: Semantic sparsity in situation recognition. M Yatskar, V Ordonez, L Zettlemoyer, A Farhadi. Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2017. Cited by 21.
RoboTHOR: An open simulation-to-real embodied AI platform. M Deitke, W Han, A Herrasti, A Kembhavi, E Kolve, R Mottaghi, J Salvador, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020. Cited by 12.
What Does BERT with Vision Look At? LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang. Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020. Cited by 3.
Learning to relate literal and sentimental descriptions of visual properties. M Yatskar, S Volkova, A Celikyilmaz, WB Dolan, L Zettlemoyer. Proceedings of the 2013 Conference of the North American Chapter of the …, 2013. Cited by 2.
Grounded Situation Recognition. S Pratt, M Yatskar, L Weihs, A Farhadi, A Kembhavi. European Conference on Computer Vision, 314-332, 2020. Cited by 1.
Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. C Clark, M Yatskar, L Zettlemoyer. arXiv preprint arXiv:2011.03856, 2020.