Yash Kant
Verified email at mail.utoronto.ca
Title / Cited by / Year

Spatially Aware Multimodal Transformers for TextVQA
Y Kant, D Batra, P Anderson, A Schwing, D Parikh, J Lu, H Agrawal
ECCV, 2020
Cited by: 85

Housekeep: Tidying Virtual Households using Commonsense Reasoning
Y Kant, A Ramachandran, S Yenamandra, I Gilitschenski, D Batra, A Szot, ...
ECCV, 2022
Cited by: 31

Contrast and Classify: Training Robust VQA Models
Y Kant, A Moudgil, D Batra, D Parikh, H Agrawal
ICCV, 2021
Cited by: 26

LaTeRF: Label and Text Driven Object Radiance Fields
A Mirzaei, Y Kant, J Kelly, I Gilitschenski
ECCV, 2022
Cited by: 19

Automated Video Description for Blind and Low Vision Users
A Bodi, P Fazli, S Ihorn, YT Siu, AT Scott, L Narins, Y Kant, A Das, I Yoon
CHI Extended Abstracts, 1–7, 2021
Cited by: 13

ICLR Reproducibility Challenge Report (Padam: Closing The Generalization Gap Of Adaptive Gradient Methods in Training Deep Neural Networks)
H Mittal, K Pandey, Y Kant
ICLR Reproducibility Challenge, 2019
Cited by: 2

Invertible Neural Skinning
Y Kant, A Siarohin, RA Guler, M Chai, J Ren, S Tulyakov, I Gilitschenski
CVPR, 2023
Cited by: 1

iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis
Y Kant, A Siarohin, M Vasilkovsky, RA Guler, J Ren, S Tulyakov, ...
SIGGRAPH Asia 2023 Conference Papers

CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos
T Kuai, A Karthikeyan, Y Kant, A Mirzaei, I Gilitschenski
CVPR 2023 DynaVis Workshop

Building Scalable Video Understanding Benchmarks through Sports
A Agarwal, A Zhang, K Narasimhan, I Gilitschenski, V Murahari, Y Kant
arXiv preprint arXiv:2301.06866, 2023