Yu Ding
Artificial Intelligence Expert, Netease Fuxi AI Lab, China
Verified email at corp.netease.com
Laughter animation synthesis
Y Ding, K Prepin, J Huang, C Pelachaud, T Artières
Proceedings of the 2014 international conference on Autonomous agents and …, 2014
Modeling multimodal behaviors from speech prosody
Y Ding, C Pelachaud, T Artieres
International Conference on Intelligent Virtual Agents, 217-228, 2013
Rhythmic body movements of laughter
R Niewiadomski, M Mancini, Y Ding, C Pelachaud, G Volpe
Proceedings of the 16th international conference on multimodal interaction …, 2014
Speech-driven eyebrow motion synthesis with contextual markovian models
Y Ding, M Radenen, T Artieres, C Pelachaud
2013 IEEE International Conference on Acoustics, Speech and Signal …, 2013
Laughing with a virtual agent
F Pecune, M Mancini, B Biancardi, G Varni, Y Ding, C Pelachaud, G Volpe, ...
AAMAS, 1817-1818, 2015
FaceSwapNet: Landmark guided many-to-many face reenactment
J Zhang, X Zeng, Y Pan, Y Liu, Y Ding, C Fan
arXiv preprint arXiv:1905.11805, 2019
Implementing and evaluating a laughing virtual character
M Mancini, B Biancardi, F Pecune, G Varni, Y Ding, C Pelachaud, G Volpe, ...
ACM Transactions on Internet Technology (TOIT) 17 (1), 1-22, 2017
Real-time visual prosody for interactive virtual agents
H Van Welbergen, Y Ding, K Sattler, C Pelachaud, S Kopp
International Conference on Intelligent Virtual Agents, 139-151, 2015
FReeNet: Multi-identity face reenactment
J Zhang, X Zeng, M Wang, Y Pan, L Liu, Y Liu, Y Ding, C Fan
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Laugh when you’re winning
M Mancini, L Ach, E Bantegnie, T Baur, N Berthouze, D Datta, Y Ding, ...
International Summer Workshop on Multimodal Interfaces, 50-79, 2013
Vers des agents conversationnels animés socio-affectifs [Towards socio-affective animated conversational agents]
M Ochs, Y Ding, N Fourati, M Chollet, B Ravenet, F Pecune, N Glas, ...
Proceedings of the 25th Conference on l'Interaction Homme-Machine, 69-78, 2013
Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments
J Huang, M Fratarcangeli, Y Ding, C Pelachaud
The Visual Computer 33 (12), 1541-1553, 2017
Lip animation synthesis: a unified framework for speaking and laughing virtual agent
Y Ding, C Pelachaud
AVSP, 78-83, 2015
A multifaceted study on eye contact based speaker identification in three-party conversations
Y Ding, Y Zhang, M Xiao, Z Deng
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
Perception of intensity incongruence in synthesized multimodal expressions of laughter
R Niewiadomski, Y Ding, M Mancini, C Pelachaud, G Volpe, A Camurri
2015 International Conference on Affective Computing and Intelligent …, 2015
Upper body animation synthesis for a laughing character
Y Ding, J Huang, N Fourati, T Artieres, C Pelachaud
International Conference on Intelligent Virtual Agents, 164-173, 2014
Prior aided streaming network for multi-task affective recognition at the 2nd ABAW2 competition
W Zhang, Z Guo, K Chen, L Li, Z Zhang, Y Ding
arXiv preprint arXiv:2107.03708, 2021
Low-level characterization of expressive head motion through frequency domain analysis
Y Ding, L Shi, Z Deng
IEEE Transactions on Affective Computing 11 (3), 405-418, 2018
Perceptual enhancement of emotional mocap head motion: An experimental study
Y Ding, L Shi, Z Deng
2017 Seventh International Conference on Affective Computing and Intelligent …, 2017
LOL: Laugh out loud
F Pecune, B Biancardi, Y Ding, C Pelachaud, M Mancini, G Varni, ...
Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015