Sandro Pezzelle
Assistant Professor at ILLC, University of Amsterdam
Verified email at uva.nl
Title · Cited by · Year
The LAMBADA dataset: Word prediction requiring a broad discourse context
D Paperno, G Kruszewski, A Lazaridou, QN Pham, R Bernardi, S Pezzelle, ...
arXiv preprint arXiv:1606.06031, 2016
417 · 2016
Foil it! find one mismatch between image and language caption
R Shekhar, S Pezzelle, Y Klimovich, A Herbelot, M Nabi, E Sangineto, ...
arXiv preprint arXiv:1705.01359, 2017
126 · 2017
Probing the mental representation of quantifiers
S Pezzelle, R Bernardi, M Piazza
Cognition 181, 117-126, 2018
30 · 2018
Refer, reuse, reduce: Generating subsequent references in visual and conversational contexts
E Takmaz, M Giulianelli, S Pezzelle, A Sinclair, R Fernández
arXiv preprint arXiv:2011.04554, 2020
27 · 2020
Generating image descriptions via sequential cross-modal alignment guided by human gaze
E Takmaz, S Pezzelle, L Beinborn, R Fernández
arXiv preprint arXiv:2011.04592, 2020
26 · 2020
“Look, some green circles!”: Learning to quantify from images
I Sorodoc, A Lazaridou, G Boleda Torrent, AGG Herbelot, S Pezzelle, ...
ACL 2016 5th Workshop on Vision and Language (VL’16): Proceedings of the …, 2016
21 · 2016
Be different to be better! A benchmark to leverage the complementarity of language and vision
S Pezzelle, C Greco, G Gandolfi, E Gualdoni, R Bernardi
Findings of the association for computational linguistics: EMNLP 2020, 2751-2767, 2020
20 · 2020
Is the red square big? MALeViC: Modeling adjectives leveraging visual contexts
S Pezzelle, R Fernández
arXiv preprint arXiv:1908.10285, 2019
19 · 2019
Linguistic issues behind visual question answering
R Bernardi, S Pezzelle
Language and Linguistics Compass 15 (6), elnc3.12417, 2021
16 · 2021
Vision and language integration: Moving beyond objects
R Shekhar, S Pezzelle, A Herbelot, M Nabi, E Sangineto, R Bernardi
Proceedings of the 12th International Conference on Computational Semantics …, 2017
15 · 2017
Word representation learning in multimodal pre-trained transformers: An intrinsic evaluation
S Pezzelle, E Takmaz, R Fernández
Transactions of the Association for Computational Linguistics 9, 1563-1579, 2021
13 · 2021
Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision
S Pezzelle, IT Sorodoc, R Bernardi
arXiv preprint arXiv:1804.05018, 2018
12 · 2018
Dealing with semantic underspecification in multimodal NLP
S Pezzelle
arXiv preprint arXiv:2306.05240, 2023
11 · 2023
Be precise or fuzzy: Learning the meaning of cardinals and quantifiers from vision
S Pezzelle, M Marelli, R Bernardi
arXiv preprint arXiv:1702.05270, 2017
10 · 2017
Less descriptive yet discriminative: Quantifying the properties of multimodal referring utterances via CLIP
E Takmaz, S Pezzelle, R Fernández
Proceedings of the workshop on cognitive modeling and computational …, 2022
9 · 2022
Learning quantification from images: A structured neural architecture
I Sorodoc, S Pezzelle, A Herbelot, M Dimiccoli, R Bernardi
Natural Language Engineering 24 (3), 363-392, 2018
9 · 2018
Building a bagpipe with a bag and a pipe: Exploring conceptual combination in vision
S Pezzelle, R Shekhar, R Bernardi
Proceedings of the 5th Workshop on Vision and Language, 60-64, 2016
7 · 2016
Quantifiers in a multimodal world: Hallucinating vision with language and sound
A Testoni, S Pezzelle, R Bernardi
Proceedings of the workshop on cognitive modeling and computational …, 2019
6 · 2019
The wisdom of masses: majority, subjectivity, and semantic similarity in the evaluation of VQA
S Jolly, S Pezzelle, T Klein, A Dengel, M Nabi
arXiv preprint arXiv:1809.04344, 2018
6 · 2018
Articles 1–20