Mo Yu
Title
Cited by
Year
A structured self-attentive sentence embedding
Z Lin, M Feng, CN Santos, M Yu, B Xiang, B Zhou, Y Bengio
arXiv preprint arXiv:1703.03130, 2017
Cited by 1113 · 2017
Target-dependent Twitter sentiment classification
L Jiang, M Yu, M Zhou, X Liu, T Zhao
Proceedings of the 49th annual meeting of the association for computational …, 2011
Cited by 997 · 2011
Comparative study of CNN and RNN for natural language processing
W Yin, K Kann, M Yu, H Schütze
arXiv preprint arXiv:1702.01923, 2017
Cited by 445 · 2017
Improving lexical embeddings with semantic knowledge
M Yu, M Dredze
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
Cited by 306 · 2014
R³: Reinforced Reader-Ranker for Open-Domain Question Answering
S Wang, M Yu, X Guo, Z Wang, T Klinger, W Zhang, S Chang, G Tesauro, ...
arXiv preprint arXiv:1709.00023, 2017
Cited by 181 · 2017
Factor-based compositional embedding models
M Yu, M Gormley, M Dredze
NIPS Workshop on Learning Semantics, 95-101, 2014
Cited by 168* · 2014
Improved neural relation detection for knowledge base question answering
M Yu, W Yin, KS Hasan, C Santos, B Xiang, B Zhou
arXiv preprint arXiv:1704.06194, 2017
Cited by 146 · 2017
Dilated recurrent neural networks
S Chang, Y Zhang, W Han, M Yu, X Guo, W Tan, X Cui, M Witbrock, ...
Advances in Neural Information Processing Systems, 77-87, 2017
Cited by 133 · 2017
Simple question answering by attentive convolutional neural network
W Yin, M Yu, B Xiang, B Zhou, H Schütze
arXiv preprint arXiv:1606.03391, 2016
Cited by 114 · 2016
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
S Wang, M Yu, J Jiang, W Zhang, X Guo, S Chang, Z Wang, T Klinger, ...
International Conference on Learning Representations (ICLR), 2018
Cited by 99 · 2018
Image super-resolution via dual-state recurrent networks
W Han, S Chang, D Liu, M Yu, M Witbrock, TS Huang
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 95 · 2018
Leveraging sentence-level information with encoder LSTM for natural language understanding
G Kurata, B Xiang, B Zhou, M Yu
arXiv preprint arXiv:1601.01530, 2016
Cited by 92* · 2016
End-to-end answer chunk extraction and ranking for reading comprehension
Y Yu, W Zhang, K Hasan, M Yu, B Xiang, B Zhou
arXiv preprint arXiv:1610.09996, 2016
Cited by 81* · 2016
Diverse few-shot text classification with multiple metrics
M Yu, X Guo, J Yi, S Chang, S Potdar, Y Cheng, G Tesauro, H Wang, ...
arXiv preprint arXiv:1805.07513, 2018
Cited by 74 · 2018
Learning composition models for phrase embeddings
M Yu, M Dredze
Transactions of the Association for Computational Linguistics 3, 227-242, 2015
Cited by 72 · 2015
Accelerated mini-batch randomized block coordinate descent method
T Zhao, M Yu, Y Wang, R Arora, H Liu
Advances in neural information processing systems, 3329-3337, 2014
Cited by 63 · 2014
A co-matching model for multi-choice reading comprehension
S Wang, M Yu, S Chang, J Jiang
arXiv preprint arXiv:1806.04068, 2018
Cited by 51 · 2018
Evidence Integration for Multi-hop Reading Comprehension with Graph Neural Networks
L Song, Z Wang, M Yu, Y Zhang, R Florian, D Gildea
IEEE Transactions on Knowledge and Data Engineering, 2020
Cited by 48* · 2020
Cross-lingual knowledge graph alignment via graph matching neural network
K Xu, L Wang, M Yu, Y Feng, Y Song, Z Wang, D Yu
arXiv preprint arXiv:1905.11605, 2019
Cited by 41 · 2019
DAG-GNN: DAG structure learning with graph neural networks
Y Yu, J Chen, T Gao, M Yu
arXiv preprint arXiv:1904.10098, 2019
Cited by 40 · 2019