Kazuki Irie
The Swiss AI Lab - IDSIA, Università della Svizzera italiana (USI) & SUPSI
Verified email at idsia.ch
Title
Cited by
Year
Improved training of end-to-end attention models for speech recognition
A Zeyer, K Irie, R Schlüter, H Ney
arXiv preprint arXiv:1805.03294, 2018
202 · 2018
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention--w/o Data Augmentation
C Lüscher, E Beck, K Irie, M Kitza, W Michel, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.03072, 2019
144 · 2019
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
88 · 2019
Language modeling with deep transformers
K Irie, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.04226, 2019
72 · 2019
LSTM, GRU, highway and a bit of attention: an empirical overview for language modeling in speech recognition
K Irie, Z Tüske, T Alkhouli, R Schlüter, H Ney
Interspeech 2016, 3519-3523
61 · 2016
A Comparison of Transformer and LSTM Encoder Decoder Models for ASR
A Zeyer, P Bahar, K Irie, R Schlüter, H Ney
IEEE Automatic Speech Recognition and Understanding Workshop, Sentosa, Singapore, 2019
54 · 2019
On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition
K Irie, R Prabhavalkar, A Kannan, A Bruguier, D Rybach, P Nguyen
Proc. Interspeech 2019, 3800-3804, 2019
51* · 2019
The RWTH/UPB/FORTH system combination for the 4th CHiME challenge evaluation
T Menne, J Heymann, A Alexandridis, K Irie, A Zeyer, M Kitza, P Golik, ...
Universitätsbibliothek der RWTH Aachen, 2016
33 · 2016
Training language models for long-span cross-sentence evaluation
K Irie, A Zeyer, R Schlüter, H Ney
IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019
20 · 2019
The RWTH ASR System for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment
W Zhou, W Michel, K Irie, M Kitza, R Schlüter, H Ney
ICASSP, Barcelona, Spain, 2020
19 · 2020
RADMM: Recurrent Adaptive Mixture Model with Applications to Domain Robust Language Modeling
K Irie, S Kumar, M Nirschl, H Liao
IEEE International Conference on Acoustics, Speech, and Signal Processing …, 2018
18 · 2018
On efficient training of word classes and their application to recurrent neural network language models
R Botros, K Irie, M Sundermeyer, H Ney
Sixteenth Annual Conference of the International Speech Communication …, 2015
17 · 2015
Prediction of LSTM-RNN Full Context States as a Subtask for N-gram Feedforward Language Models
K Irie, Z Lei, R Schlüter, H Ney
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2018
13 · 2018
Investigation on log-linear interpolation of multi-domain neural network language model
Z Tüske, K Irie, R Schlüter, H Ney
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
13 · 2016
Bag-of-words input for long history representation in neural network-based language models for speech recognition
K Irie, R Schlüter, H Ney
Interspeech 2015
13 · 2015
How Much Self-Attention Do We Need? Trading Attention for Feed-Forward Layers
K Irie, A Gerstenberger, R Schlüter, H Ney
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
7 · 2020
Investigations on byte-level convolutional neural networks for language modeling in low resource speech recognition
K Irie, P Golik, R Schlüter, H Ney
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2017
5 · 2017
Automatic speech recognition based on neural networks
R Schlüter, P Doetsch, P Golik, M Kitza, T Menne, K Irie, Z Tüske, A Zeyer
International Conference on Speech and Computer, 3-17, 2016
5 · 2016
Linear transformers are secretly fast weight memory systems
I Schlag, K Irie, J Schmidhuber
arXiv preprint arXiv:2102.11174, 2021
4 · 2021
Investigation on Estimation of Sentence Probability by Combining Forward, Backward and Bi-directional LSTM-RNNs
K Irie, Z Lei, L Deng, R Schlüter, H Ney
INTERSPEECH, 392-395, 2018
4 · 2018