Ariel Herbert-Voss
Verified email at g.harvard.edu

Title · Cited by · Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in neural information processing systems 33, 1877-1901, 2020
Cited by 27042 · 2020
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, R Child, A Ramesh, DM Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, B Chess, J Clark, C Berner, S McCandlish, A Radford, ..., 2020
Cited by 7056 · 2020
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 2233 · 2021
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1334 · 2021
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 431 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 346 · 2020
Language models are few-shot learners. 2020. doi: 10.48550
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv, 5-7, 2020
Cited by 163 · 2020
Language models are few-shot learners. arXiv
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Computer Science, Computation and Language, 2020
Cited by 151 · 2020
Language models are few-shot learners. CoRR abs/2005.14165 (2020)
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
URL: https://arxiv.org/abs/2005.14165, 2020
Cited by 74 · 2020
Language models are few-shot learners
B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 64 · 2020
& Amodei, D. (2020)
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Language models are few-shot learners, 2020
Cited by 61 · 2020
Evaluating large language models trained on code. arXiv 2021
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 50 · 2021
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
Rev. Mat. Iberoam 33 (1), 29-66, 2017
Cited by 17 · 2017
Language models are few-shot learners [cs]
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Proceedings of 2020 Neural Information Processing Systems, 2020
Cited by 12 · 2020
The WMDP benchmark: Measuring and reducing malicious use with unlearning
N Li, A Pan, A Gopal, S Yue, D Berrios, A Gatti, JD Li, AK Dombrowski, ...
arXiv preprint arXiv:2403.03218, 2024
Cited by 8 · 2024
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
arXiv preprint arXiv:1411.5668, 2014
Cited by 1 · 2014
2. A. Bordes, Y. Boureau, and J. Weston. Learning end-to-end goal-oriented dialog. In 5th
GS Shyam, A Askell, S Agarwal, A Herbert-Voss, G Krueger, T Henighan, ...
Articles 1–17