Emotion Detection from Speech Signals
Despite the great progress made in artificial intelligence, we are still far from natural interaction between humans and machines, because machines do not understand the emotional state of the speaker. Speech emotion detection, which aims to recognize emotional states from speech signals, has been drawing increasing attention. The task is challenging because it is not clear which speech features are most powerful for distinguishing between emotions. We use deep neural networks to detect the emotional state of each speech segment in an utterance and then combine the segment-level results to form the final emotion recognition result. The system produces promising results on both clean speech and speech in a gaming scenario.
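The two-stage pipeline described above (segment-level DNN classification followed by utterance-level combination) can be sketched as follows. This is a minimal illustration, not the speaker's actual system: the function name `utterance_emotion`, the emotion set, and the combination rule (averaging the segment posteriors) are all assumptions, since the talk abstract does not specify how segment-level results are merged.

```python
import numpy as np

def utterance_emotion(segment_probs, emotions):
    """Combine per-segment emotion posteriors into one utterance-level label.

    segment_probs: (num_segments, num_emotions) array, where each row is the
    DNN's posterior distribution over emotions for one speech segment.
    emotions: list of emotion labels, one per column.
    """
    # Average the segment-level posteriors across the utterance (one common
    # combination rule; max-pooling or voting are alternatives).
    utterance_scores = segment_probs.mean(axis=0)
    # Pick the emotion with the highest combined score.
    return emotions[int(np.argmax(utterance_scores))]

# Hypothetical DNN outputs for three segments of one utterance.
emotions = ["neutral", "happy", "sad", "angry"]
probs = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.4, 0.2, 0.1],
])
print(utterance_emotion(probs, emotions))  # -> happy
```

Averaging posteriors lets short, ambiguous segments be outvoted by the rest of the utterance, which is why segment-level detectors are usually pooled rather than trusted individually.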
Speaker Details
Kun Han is a Ph.D. candidate in the Department of Computer Science and Engineering at The Ohio State University. His research interests lie in machine learning for speech processing; in particular, his work focuses on classification-based speech separation. He is doing an internship at Microsoft Research this summer, working on speech emotion detection.
- Series: Microsoft Research Talks
- Date:
- Speakers: Kun Han
- Affiliation: Ohio State University

Jeff Running
Series: Microsoft Research Talks

Decoding the Human Brain – A Neurosurgeon’s Experience
Speakers: Pascal Zinn, Ivan Tashev

Galea: The Bridge Between Mixed Reality and Neurotechnology
Speakers: Eva Esteban, Conor Russomanno

Current and Future Application of BCIs
Speakers: Christoph Guger

Challenges in Evolving a Successful Database Product (SQL Server) to a Cloud Service (SQL Azure)
Speakers: Hanuma Kodavalla, Phil Bernstein

Improving text prediction accuracy using neurophysiology
Speakers: Sophia Mehdizadeh

DIABLo: a Deep Individual-Agnostic Binaural Localizer
Speakers: Shoken Kaneko

Recent Efforts Towards Efficient And Scalable Neural Waveform Coding
Speakers: Kai Zhen

Audio-based Toxic Language Detection
Speakers: Midia Yousefi

From SqueezeNet to SqueezeBERT: Developing Efficient Deep Neural Networks
Speakers: Sujeeth Bharadwaj

Hope Speech and Help Speech: Surfacing Positivity Amidst Hate
Speakers: Monojit Choudhury

'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project
Speakers: Peter Clark

Checkpointing the Un-checkpointable: the Split-Process Approach for MPI and Formal Verification
Speakers: Gene Cooperman

Learning Structured Models for Safe Robot Control
Speakers: Ashish Kapoor