EEG emotion recognition framework based on invariant wavelet scattering convolution network

Document Type

Article

Publication Date

Winter 1-24-2024

Abstract

EEG signals for real-time emotion identification are crucial for affective computing and human-computer interaction. Current emotion recognition models, which rely on a small number of emotion classes and on stimuli such as music and images presented under controlled laboratory conditions, have poor ecological validity. Furthermore, identifying relevant EEG signal features is essential for efficient emotion identification. Given the complex, non-stationary, and variable nature of EEG signals, which makes it challenging to identify relevant features for categorizing and identifying emotions, a novel approach for EEG feature extraction and classification is proposed based on the invariant wavelet scattering transform (WST) and the support vector machine (SVM) algorithm. The WST is a new time-frequency representation equivalent to a deep convolutional network. It produces scattering feature matrices that are stable against the time-warping deformations, noise, and time shifts present in EEG signals, so small, difficult-to-measure variations in the amplitude and duration of EEG signals can be captured. As a result, it addresses the limitations of previous feature extraction approaches, which are unstable and sensitive to time-shift variations. In this paper, zeroth-, first-, and second-order features are extracted from the DEAP dataset by applying the WST with two deep layers. Principal component analysis (PCA) is then used to reduce the high dimensionality of the scattering features and to increase the computational efficiency of the classifiers. Finally, the extracted features are fed to different classifiers: the SVM classifier is compared against k-nearest neighbours (KNN), random forest (RF), and AdaBoost.
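As a rough illustration of the invariance property described above, the following toy sketch computes zeroth-, first-, and second-order scattering coefficients for a 1-D signal. It is not the paper's implementation: it substitutes crude Haar-like band-pass filters and a global average for proper Morlet wavelet filter banks, and all names (`circ_conv`, `haar_psi`, `scattering`) are illustrative. Because each stage is a circular convolution followed by a pointwise modulus, and the final averaging is global, the resulting features are exactly invariant to circular time shifts of the input:

```python
import math

def circ_conv(x, h):
    # Circular convolution of signal x with filter h.
    n = len(x)
    return [sum(x[(i - k) % n] * h[k] for k in range(len(h))) for i in range(n)]

def haar_psi(scale):
    # Haar-like band-pass filter at a dyadic scale (stand-in for a wavelet).
    return [1.0 / scale] * scale + [-1.0 / scale] * scale

def scattering(x, scales=(1, 2, 4)):
    # Zeroth order: global average of the raw signal.
    feats = [sum(x) / len(x)]
    for j1 in scales:
        # First layer: band-pass, then modulus (the nonlinearity of the network).
        u1 = [abs(v) for v in circ_conv(x, haar_psi(j1))]
        feats.append(sum(u1) / len(u1))               # first-order coefficient
        for j2 in scales:
            if j2 > j1:                               # only coarser second scales
                # Second layer: band-pass the modulus envelope, modulus again.
                u2 = [abs(v) for v in circ_conv(u1, haar_psi(j2))]
                feats.append(sum(u2) / len(u2))       # second-order coefficient
    return feats
```

With three first-layer scales this yields 1 + 3 + 3 = 7 coefficients per signal; shifting the input circularly leaves every coefficient unchanged, which is the time-shift invariance the abstract relies on. In practice, a library such as Kymatio (`Scattering1D`) provides the full wavelet filter banks and windowed averaging used for real EEG segments.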
The proposed method is evaluated on four emotional classification models based on the valence, arousal, dominance, and liking dimensions of the DEAP dataset. It achieves over 98% accuracy for two emotional classes and over 97% for three, four, and eight emotional classes. These results demonstrate the efficacy of the proposed WST-, PCA-, and SVM-based approach for EEG emotion recognition.
