An Yang
Alibaba Group/Peking University
Verified email at alibaba-inc.com
Title · Cited by · Year
Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework
P Wang, A Yang, R Men, J Lin, S Bai, Z Li, J Ma, C Zhou, J Zhou, H Yang
ICML 2022, 2022
Cited by 765 · 2022
Qwen technical report
J Bai, S Bai, Y Chu, Z Cui, K Dang, X Deng, Y Fan, W Ge, Y Han, F Huang, ...
arXiv preprint arXiv:2309.16609, 2023
Cited by 255 · 2023
Enhancing pre-trained language representations with rich knowledge for machine reading comprehension
A Yang, Q Wang, J Liu, K Liu, Y Lyu, H Wu, Q She, S Li
ACL 2019 (Long Paper), 2019
Cited by 173 · 2019
M6: A Chinese multimodal pretrainer
A Yang, J Lin, R Men, C Zhou, M Ding, Y Zhang, P Wang, A Wang, ...
Cited by 122* · 2021
InterBERT: Vision-and-language interaction for multi-modal pretraining
J Lin, A Yang, Y Zhang, J Liu, J Zhou, H Yang
arXiv preprint arXiv:2003.13198, 2020
Cited by 72 · 2020
Chinese CLIP: Contrastive vision-language pretraining in Chinese
A Yang, J Pan, J Lin, R Men, Y Zhang, J Zhou, C Zhou
arXiv preprint arXiv:2211.01335, 2022
Cited by 58 · 2022
SciDTB: Discourse dependency treebank for scientific abstracts
A Yang, S Li
ACL 2018 (Short Paper), 2018
Cited by 47 · 2018
M6-T: Exploring sparse expert models and beyond
A Yang, J Lin, R Men, C Zhou, L Jiang, X Jia, A Wang, J Zhang, J Wang, ...
arXiv preprint arXiv:2105.15082, 2021
Cited by 45 · 2021
Machine reading comprehension: a literature review
X Zhang, A Yang, S Li, Y Wang
arXiv preprint arXiv:1907.01686, 2019
Cited by 45 · 2019
A Robust Adversarial Training Approach to Machine Reading Comprehension
K Liu, X Liu, A Yang, J Liu, J Su, S Li, Q She
AAAI 2020, 2020
Cited by 41 · 2020
ExpertPrompting: Instructing large language models to be distinguished experts
B Xu, A Yang, J Lin, Q Wang, C Zhou, Y Zhang, Z Mao
arXiv preprint arXiv:2305.14688, 2023
Cited by 37 · 2023
Adaptations of ROUGE and BLEU to Better Evaluate Machine Reading Comprehension Task
A Yang, K Liu, J Liu, Y Lyu, S Li
MRQA Workshop@ACL 2018, 2018
Cited by 36 · 2018
M6: Multi-Modality-to-Multi-Modality Multitask Mega-transformer for Unified Pretraining
A Yang, J Lin, R Men, C Zhou, Y Zhang, P Wang, J Zhou, J Tang, H Yang
KDD 2021, 2021
Cited by 31 · 2021
Prompt Tuning for Generative Multimodal Pretrained Models
H Yang, J Lin, A Yang, P Wang, C Zhou, H Yang
ACL 2023 (Findings), 2022
Cited by 27 · 2022
M6-10T: A sharing-delinking paradigm for efficient multi-trillion parameter pretraining
J Lin, A Yang, J Bai, C Zhou, L Jiang, X Jia, A Wang, J Zhang, Y Li, W Lin, ...
arXiv preprint arXiv:2110.03888, 2021
Cited by 24 · 2021
Learning Relation Alignment for Calibrated Cross-modal Retrieval
S Ren, J Lin, G Zhao, R Men, A Yang, J Zhou, X Sun, H Yang
ACL 2021 (Long Paper), 2021
Cited by 23 · 2021
Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation
P Wang, J Lin, A Yang, C Zhou, Y Zhang, J Zhou, H Yang
ACL 2021 (Findings), 2021
Cited by 17 · 2021
OFASys: A multi-modal multi-task learning system for building generalist models
J Bai, R Men, H Yang, X Ren, K Dang, Y Zhang, X Zhou, P Wang, S Tan, ...
arXiv preprint arXiv:2212.04408, 2022
Cited by 6 · 2022
Domain ontology learning enhanced by optimized relation instance in DBpedia
L Xiao, C Ruan, A Yang, J Zhang, J Hu
LREC 2016, 1452-1456, 2016
Cited by 6 · 2016
Transferring General Multimodal Pretrained Models to Text Recognition
J Lin, X Ren, Y Zhang, G Liu, P Wang, A Yang, C Zhou
ACL 2023 (Findings), 2022
Cited by 4 · 2022
Articles 1–20