Raphael Shu
The University of Tokyo
Verified email at uaca.com
Title · Cited by · Year
Compressing word embeddings via deep compositional code learning
R Shu, H Nakayama
arXiv preprint arXiv:1711.01068, 2017
Cited by 119 · 2017
Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference
R Shu, J Lee, H Nakayama, K Cho
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020
Cited by 78* · 2020
Generating diverse translations with sentence codes
R Shu, H Nakayama, K Cho
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 28 · 2019
Generating video description using sequence-to-sequence model with temporal attention
N Laokulrat, S Phan, N Nishida, R Shu, Y Ehara, N Okazaki, Y Miyao, ...
Proceedings of COLING 2016, the 26th International Conference on …, 2016
Cited by 24 · 2016
Iterative refinement in the continuous space for non-autoregressive neural machine translation
J Lee, R Shu, K Cho
arXiv preprint arXiv:2009.07177, 2020
Cited by 13 · 2020
Improving beam search by removing monotonic constraint for neural machine translation
R Shu, H Nakayama
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 11 · 2018
Later-stage minimum Bayes-risk decoding for neural machine translation
R Shu, H Nakayama
arXiv preprint arXiv:1704.03169, 2017
Cited by 11 · 2017
Residual stacking of RNNs for neural machine translation
R Shu, A Miura
Proceedings of the 3rd Workshop on Asian Translation (WAT2016), 223-229, 2016
Cited by 11 · 2016
An empirical study of adequate vision span for attention-based neural machine translation
R Shu, H Nakayama
arXiv preprint arXiv:1612.06043, 2016
Cited by 4 · 2016
GraphPlan: Story generation by planning with event graph
H Chen, R Shu, H Takamura, H Nakayama
arXiv preprint arXiv:2102.02977, 2021
Cited by 3 · 2021
Reward optimization for neural machine translation with learned metrics
R Shu, KM Yoo, JW Ha
arXiv preprint arXiv:2104.07541, 2021
Cited by 1 · 2021
Real-time Neural-based Input Method
J Yao, R Shu, X Li, K Ohtsuki, H Nakayama
arXiv preprint arXiv:1810.09309, 2018
Cited by 1 · 2018
Discrete structural planning for neural machine translation
R Shu, H Nakayama
arXiv preprint arXiv:1808.04525, 2018
Cited by 1 · 2018
Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks
R Yang, R Shu, H Nakayama
IEICE TRANSACTIONS on Information and Systems 105 (9), 1537-1545, 2022
2022
Federated Semi-Supervised Learning with Prototypical Networks
W Kim, K Park, K Sohn, R Shu, HS Kim
arXiv preprint arXiv:2205.13921, 2022
2022
Enabling Real-time Neural IME with Incremental Vocabulary Selection
J Yao, R Shu, X Li, K Ohtsuki, H Nakayama
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
2019
Discrete Structural Planning for Generating Diverse Translations
R Shu, H Nakayama
2018
Single-Queue Decoding for Neural Machine Translation
R Shu, H Nakayama
arXiv preprint arXiv:1707.01830, 2017
2017
Reducing Redundant Computations with Flexible Attention
R Shu, H Nakayama
CoRR, 2016
2016