Emotion Recognition takes an image containing faces as input and returns, for each face, a confidence score across a set of emotions together with the face's bounding box (detected using the Microsoft Face API). The algorithm infers emotion from facial appearance using a custom deep convolutional neural network. To improve label quality, each image was annotated by more than 10 crowdsourced taggers, which allowed us to learn a probability distribution over emotions for each image.
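To illustrate how multiple tagger annotations can become a training target, here is a minimal sketch: vote counts are normalized into a probability distribution, and the network can be trained against it with a soft-label cross-entropy. The eight-class emotion order and the loss choice are illustrative assumptions, not the exact training recipe.

```python
import numpy as np

def votes_to_distribution(votes):
    """Normalize per-emotion tagger vote counts into a probability
    distribution that serves as the soft training label."""
    votes = np.asarray(votes, dtype=np.float64)
    total = votes.sum()
    if total == 0:
        raise ValueError("image received no votes")
    return votes / total

def soft_cross_entropy(pred_probs, target_dist, eps=1e-12):
    """Cross-entropy between predicted class probabilities and the
    crowd-derived label distribution (lower is better)."""
    pred_probs = np.clip(np.asarray(pred_probs, dtype=np.float64), eps, 1.0)
    return -float(np.sum(np.asarray(target_dist) * np.log(pred_probs)))

# Example: 10 taggers voting over 8 hypothetical emotion classes
# (e.g. neutral, happiness, surprise, sadness, anger, disgust, fear, contempt)
votes = [1, 7, 1, 0, 1, 0, 0, 0]
target = votes_to_distribution(votes)   # e.g. happiness gets weight 0.7
loss = soft_cross_entropy([0.125] * 8, target)  # uniform prediction as a baseline
```

Training against the full distribution rather than a single majority-vote label lets the network model genuine ambiguity, such as faces that taggers split between surprise and fear.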
We are also publicly sharing an enhanced version of the FER emotion dataset, called FER+, with the research community: https://github.com/Microsoft/FERPlus
People
Cha Zhang
Principal Researcher