Alethea Power
Member of Technical Staff, OpenAI
Verified email at openai.com - Homepage
Title · Cited by · Year
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 1863 · 2021
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 693 · 2022
GPT-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 510 · 2023
Grokking: Generalization beyond overfitting on small algorithmic datasets
A Power, Y Burda, H Edwards, I Babuschkin, V Misra
arXiv preprint arXiv:2201.02177, 2022
Cited by 204 · 2022
Evaluating large language models trained on code. arXiv 2021
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 40 · 2021
Articles 1–5