Shrikanth (Shri) Narayanan
Niki & C. L. Max Nikias Chair in Engineering & Professor, University of Southern California
Verified email at sipi.usc.edu
Title · Cited by · Year
Toward detecting emotions in spoken dialogs
CM Lee, SS Narayanan
IEEE Transactions on Speech and Audio Processing 13 (2), 293-303, 2005
Cited by 964 · 2005
Analysis of emotion recognition using facial expressions, speech and multimodal information
C Busso, Z Deng, S Yildirim, M Bulut, CM Lee, A Kazemzadeh, S Lee, ...
Proceedings of the 6th International Conference on Multimodal Interfaces …, 2004
Cited by 829 · 2004
Acoustics of children’s speech: Developmental changes of temporal and spectral parameters
S Lee, A Potamianos, S Narayanan
The Journal of the Acoustical Society of America 105 (3), 1455-1468, 1999
Cited by 783 · 1999
IEMOCAP: Interactive emotional dyadic motion capture database
C Busso, M Bulut, CC Lee, A Kazemzadeh, E Mower, S Kim, JN Chang, ...
Language resources and evaluation 42 (4), 335, 2008
Cited by 750 · 2008
Environmental sound recognition with time–frequency audio features
S Chu, S Narayanan, CCJ Kuo
IEEE Transactions on Audio, Speech, and Language Processing 17 (6), 1142-1158, 2009
Cited by 554 · 2009
A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle
H Wang, D Can, A Kazemzadeh, F Bar, S Narayanan
Proceedings of the ACL 2012 system demonstrations, 115-120, 2012
Cited by 545 · 2012
Method of using a natural language interface to retrieve information from one or more data resources
IZ E. Levin, S. Narayanan, R. Pieraccini
US Patent 6,173,279, 2009
Cited by 540 · 2009
The INTERSPEECH 2010 paralinguistic challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Eleventh Annual Conference of the International Speech Communication Association, 2010
Cited by 422 · 2010
The Geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing
F Eyben, KR Scherer, BW Schuller, J Sundberg, E André, C Busso, ...
IEEE Transactions on Affective Computing 7 (2), 190-202, 2015
Cited by 388 · 2015
The Vera am Mittag German audio-visual emotional speech database
M Grimm, K Kroschel, S Narayanan
2008 IEEE International Conference on Multimedia and Expo, 865-868, 2008
Cited by 370 · 2008
Emotion recognition using a hierarchical binary decision tree approach
CC Lee, E Mower, C Busso, S Lee, S Narayanan
Speech Communication 53 (9-10), 1162-1171, 2011
Cited by 327 · 2011
An approach to real-time magnetic resonance imaging for speech production
S Narayanan, K Nayak, S Lee, A Sethy, D Byrd
The Journal of the Acoustical Society of America 115 (4), 1771-1776, 2004
Cited by 323 · 2004
System and method for providing a compensated speech recognition model for speech recognition
RC Rose, S Parthasarathy, AE Rosenberg, SS Narayanan
US Patent 7,451,085, 2008
Cited by 316 · 2008
Primitives-based evaluation and estimation of emotions in speech
M Grimm, K Kroschel, E Mower, S Narayanan
Speech Communication 49 (10-11), 787-800, 2007
Cited by 314 · 2007
Analysis of emotionally salient aspects of fundamental frequency for emotion detection
C Busso, S Lee, S Narayanan
IEEE Transactions on Audio, Speech, and Language Processing 17 (4), 582-596, 2009
Cited by 268 · 2009
An articulatory study of fricative consonants using magnetic resonance imaging
SS Narayanan, AA Alwan, K Haker
The Journal of the Acoustical Society of America 98 (3), 1325-1347, 1995
Cited by 255 · 1995
Paralinguistics in speech and language—State-of-the-art and the challenge
B Schuller, S Steidl, A Batliner, F Burkhardt, L Devillers, C Müller, ...
Computer Speech & Language 27 (1), 4-39, 2013
Cited by 239 · 2013
Emotion recognition based on phoneme classes
CM Lee, S Yildirim, M Bulut, A Kazemzadeh, C Busso, Z Deng, S Lee, ...
Eighth International Conference on Spoken Language Processing, 2004
Cited by 235 · 2004
Combining acoustic and language information for emotion recognition
CM Lee, SS Narayanan, R Pieraccini
7th International Conference on Spoken Language Processing, ICSLP2002 …, 2002
Cited by 222 · 2002
Emotion recognition system
SS Narayanan
US Patent 8,209,182, 2012
Cited by 217 · 2012
Articles 1–20