Purdue University Graduate School

Feature Fusion Deep Learning Method for Video and Audio Based Emotion Recognition

posted on 2021-12-20, 18:24 authored by Yanan Song
In this thesis, we propose a deep learning based emotion recognition system designed to improve the classification success rate. We first use transfer learning to extract visual features and Mel-frequency cepstral coefficients (MFCC) to extract audio features, and then apply recurrent neural networks (RNN) with an attention mechanism to process the sequential inputs. The outputs of both channels are then fused in a concatenation layer, which is processed with batch normalization to reduce internal covariate shift. Finally, the classification result is obtained from a softmax layer. In our experiments on the RAVDESS dataset with eight emotion classes, the video and audio subsystems achieve 78% and 77% accuracy, respectively, while the feature fusion system combining video and audio achieves 92% accuracy. Our proposed feature fusion system outperforms conventional methods in classification performance.
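The two-channel architecture described above can be sketched as follows. This is a minimal illustration in PyTorch, not the thesis implementation: the feature dimensions, the choice of GRU as the recurrent unit, and the simple additive attention are all assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionEmotionNet(nn.Module):
    """Sketch of the video/audio feature fusion pipeline (dims are hypothetical)."""

    def __init__(self, video_feat_dim=512, audio_feat_dim=40,
                 hidden=128, n_classes=8):
        super().__init__()
        # RNN over per-frame visual features (assumed pre-extracted
        # by a pretrained CNN via transfer learning)
        self.video_rnn = nn.GRU(video_feat_dim, hidden, batch_first=True)
        # RNN over per-frame MFCC vectors from the audio channel
        self.audio_rnn = nn.GRU(audio_feat_dim, hidden, batch_first=True)
        # simple additive attention: one scalar score per time step
        self.video_attn = nn.Linear(hidden, 1)
        self.audio_attn = nn.Linear(hidden, 1)
        # batch normalization over the concatenated fusion vector,
        # as in the abstract, to reduce internal covariate shift
        self.bn = nn.BatchNorm1d(2 * hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    @staticmethod
    def _attend(outputs, attn_layer):
        # outputs: (batch, time, hidden) -> attention-weighted sum over time
        weights = F.softmax(attn_layer(outputs), dim=1)  # (batch, time, 1)
        return (weights * outputs).sum(dim=1)            # (batch, hidden)

    def forward(self, video_seq, audio_seq):
        v_out, _ = self.video_rnn(video_seq)
        a_out, _ = self.audio_rnn(audio_seq)
        v = self._attend(v_out, self.video_attn)
        a = self._attend(a_out, self.audio_attn)
        fused = torch.cat([v, a], dim=1)      # concatenation (fusion) layer
        fused = self.bn(fused)                # batch normalization
        return F.softmax(self.classifier(fused), dim=1)  # class probabilities
```

For a batch of 30-frame clips, `FusionEmotionNet()(torch.randn(4, 30, 512), torch.randn(4, 30, 40))` yields a `(4, 8)` tensor of probabilities over the eight emotion classes.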


Degree Type

  • Master of Science in Electrical and Computer Engineering


Department

  • Electrical and Computer Engineering

Campus location

  • Hammond

Advisor/Supervisor/Committee Chair

Lizhe Tan

Additional Committee Member 2

Colin Elkin

Additional Committee Member 3

Xiaoli Yang

Additional Committee Member 4

Chenn Zhou