Software Listens In: Emotional Intelligence Through Affective Computing and Mobile Sensing

As an essential approach to understanding human interactions, emotion classification is a vital component of behavioral studies and health care, as well as of the design of context-aware systems. Speech carries rich information about emotion, but classification performance still falls short of what real systems require. Moreover, the impact of noise is not well studied, especially for emotion sensing in noisy mobile environments. In this talk, I will present an emotion classification system that uses support vector machines with a threshold-based fusion mechanism, which increases classification accuracy at the expense of rejecting some samples as unclassified. The system adopts BaNa, a novel noise-resilient pitch detection algorithm, which will be briefly introduced. The emotion classification system is evaluated on 1) a standard emotional speech database, 2) noisy speech data, and 3) data from real users. The talk will also envision the broad applications that emotional intelligence can enable, and discuss the design challenges of mobile emotion sensing.
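To illustrate the idea of threshold-based rejection on top of an SVM classifier, here is a minimal Python sketch. It is not the system presented in the talk: the emotion label set, the 0.6 confidence threshold, and the use of scikit-learn's SVC with probability estimates are all illustrative assumptions.

    # Sketch: SVM emotion classification with a confidence threshold.
    # Samples whose top-class probability falls below the threshold are
    # rejected as unclassified, trading coverage for accuracy.
    import numpy as np
    from sklearn.svm import SVC

    EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

    def train_classifier(features, labels):
        """Train an SVM with probability estimates enabled."""
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(features, labels)
        return clf

    def classify_with_rejection(clf, sample, threshold=0.6):
        """Return an emotion label, or None if confidence is too low."""
        probs = clf.predict_proba(sample.reshape(1, -1))[0]
        best = int(np.argmax(probs))
        if probs[best] < threshold:
            return None  # leave the utterance unclassified
        return clf.classes_[best]

Raising the threshold rejects more utterances but labels the remaining ones more reliably, which is the accuracy-versus-coverage trade-off described above.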

Speaker Bio

Na Yang is a doctoral candidate in the Department of Electrical and Computer Engineering at the University of Rochester. Her research interests lie in the areas of affective computing, speech signal processing, mobile sensing, and sensor networks. She has collaborated with researchers in both engineering and psychology, conducting user studies with college students and on interactions among family members. She interned at Microsoft Research in 2011; the 'Listen-n-feel' mobile emotion sensor she developed at Microsoft was presented at the Microsoft Research TechFest and won the Award for Business Value at Microsoft's Garage Science Fair in 2012.

Date:
Speaker:
Na Yang
Affiliation:
University of Rochester