Sheng Shen
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
BLOOM: A 176B-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
arXiv preprint arXiv:2211.05100, 2022
Q-BERT: Hessian based ultra low precision quantization of BERT
S Shen, Z Dong, J Ye, L Ma, Z Yao, A Gholami, MW Mahoney, K Keutzer
AAAI 2020, 2019
How Much Can CLIP Benefit Vision-and-Language Tasks?
S Shen*, LH Li*, H Tan, M Bansal, A Rohrbach, KW Chang, Z Yao, ...
ICLR 2022, 2021
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z Yao, A Gholami, S Shen, K Keutzer, MW Mahoney
AAAI 2021, 2020
An annotated dataset of literary entities
D Bamman, S Popat, S Shen
NAACL 2019, 2019
Ermes: Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
Z Chen*, S Shen*, Z Hu, X Lu, Q Mei, X Liu
WWW 2019, 2018
Through a gender lens: An empirical study of emoji usage over large-scale android users
Z Chen, X Lu, S Shen, W Ai, X Liu, Q Mei
arXiv preprint arXiv:1705.05546, 2017
PowerNorm: Rethinking batch normalization in transformers
S Shen, Z Yao, A Gholami, M Mahoney, K Keutzer
ICML 2020, 2020
Learned token pruning for transformers
S Kim*, S Shen*, D Thorsley, A Gholami, W Kwon, J Hassoun, K Keutzer
KDD 2022, 2021
Pragmatically Informative Text Generation
S Shen, D Fried, J Andreas, D Klein
NAACL 2019, 2019
K-LITE: Learning transferable visual models with external knowledge
S Shen, C Li, X Hu, Y Xie, J Yang, P Zhang, A Rohrbach, Z Gan, L Wang, ...
NeurIPS 2022, 2022
What Language Model to Train if You Have One Million GPU Hours?
T Le Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
EMNLP 2022, 2022
An Effective Framework for Weakly-Supervised Phrase Grounding
Q Wang, H Tan, S Shen, M Mahoney, Z Yao
EMNLP 2020, 2020
Noisy Self-Knowledge Distillation for Text Summarization
Y Liu, S Shen, M Lapata
NAACL 2021, 2020
Reservoir Transformers
S Shen, A Baevski, AS Morcos, K Keutzer, M Auli, D Kiela
ACL 2021, 2020
Towards Release Strategy Optimization for Apps in Google Play
S Shen, X Lu, Z Hu, X Liu
Internetware 2017, 2017
On the Generation of Medical Question-Answer Pairs
S Shen, Y Li, N Du, X Wu, Y Xie, S Ge, T Yang, K Wang, X Liang, W Fan
AAAI 2020, 2018