Ethan Perez
Anthropic; New York University
Verified email at anthropic.com - Homepage
Title
Cited by
Year
FiLM: Visual Reasoning with a General Conditioning Layer
E Perez, F Strub, H De Vries, V Dumoulin, A Courville
AAAI 2018, 2018
Cited by 1783 · 2018
Retrieval-augmented generation for knowledge-intensive NLP tasks
P Lewis, E Perez, A Piktus, F Petroni, V Karpukhin, N Goyal, H Küttler, ...
NeurIPS 2020, 2020
Cited by 1771 · 2020
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 539 · 2022
ELI5: Long form question answering
A Fan, Y Jernite*, E Perez*, D Grangier, J Weston, M Auli
Association for Computational Linguistics (ACL) 2019, 2019
Cited by 379 · 2019
True Few-Shot Learning with Language Models
E Perez, D Kiela, K Cho
NeurIPS 2021, 2021
Cited by 303 · 2021
Red teaming language models with language models
E Perez, S Huang, F Song, T Cai, R Ring, J Aslanides, A Glaese, ...
EMNLP 2022, 2022
Cited by 283 · 2022
Supervised multimodal bitransformers for classifying images and text
D Kiela, S Bhooshan, H Firooz, E Perez, D Testuggine
arXiv preprint arXiv:1909.02950, 2019
Cited by 218 · 2019
Language models (mostly) know what they know
S Kadavath, T Conerly, A Askell, T Henighan, D Drain, E Perez, ...
arXiv preprint arXiv:2207.05221, 2022
Cited by 213 · 2022
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 196 · 2022
Feature-wise transformations
V Dumoulin, E Perez, N Schucher, F Strub, H Vries, A Courville, Y Bengio
Distill 3 (7), e11, 2018
Cited by 184* · 2018
Unsupervised Question Decomposition for Question Answering
E Perez, P Lewis, W Yih, K Cho, D Kiela
EMNLP 2020, 2020
Cited by 152 · 2020
HoME: a Household Multimodal Environment
S Brodeur, E Perez*, A Anand*, F Golemo*, L Celotti, F Strub, J Rouat, ...
ICLR 2018 Workshop, 2017
Cited by 128 · 2017
Discovering language model behaviors with model-written evaluations
E Perez, S Ringer, K Lukošiūtė, K Nguyen, E Chen, S Heiner, C Pettit, ...
arXiv preprint arXiv:2212.09251, 2022
Cited by 119 · 2022
Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting
M Turpin, J Michael, E Perez, S Bowman
Advances in Neural Information Processing Systems 36, 2024
Cited by 117 · 2024
The capacity for moral self-correction in large language models
D Ganguli, A Askell, N Schiefer, TI Liao, K Lukošiūtė, A Chen, A Goldie, ...
arXiv preprint arXiv:2302.07459, 2023
Cited by 87 · 2023
Pretraining language models with human preferences
T Korbak, K Shi, A Chen, R Bhalerao, CL Buckley, J Phang, SR Bowman, ...
ICML 2023, 2023
Cited by 74 · 2023
Learning Visual Reasoning Without Strong Priors
E Perez, H De Vries, F Strub, V Dumoulin, A Courville
ICML 2017 Workshop, 2017
Cited by 72 · 2017
Training language models with language feedback at scale
J Scheurer, JA Campos, T Korbak, JS Chan, A Chen, K Cho, E Perez
arXiv preprint arXiv:2303.16755, 2023
Cited by 54 · 2023
Case-based reasoning for natural language queries over knowledge bases
R Das, M Zaheer, D Thai, A Godbole, E Perez, JY Lee, L Tan, ...
EMNLP 2021, 2021
Cited by 51 · 2021
Training Language Models with Language Feedback
J Scheurer, JA Campos, JS Chan, A Chen, K Cho, E Perez
ACL 2022 Workshop on Learning with Natural Language Supervision, 2022
Cited by 50* · 2022
Articles 1–20