Background

Emotion detection using physiological signals has gained increasing attention in various fields, including healthcare, human-computer interaction, and affective computing. This technology aims to recognize and interpret human emotions by analyzing data from sensors that measure physiological responses such as heart rate, skin conductance, and brain activity. The ability to accurately detect emotions can potentially enhance personalized experiences, improve mental health monitoring, and enable more intuitive human-machine interfaces.

Current approaches to emotion detection face challenges in achieving robust and generalizable results across different individuals and contexts. Machine learning models often struggle with overfitting to specific datasets, while deep learning techniques may be heavily dependent on large amounts of labeled data. Additionally, the complex and multifaceted nature of human emotions presents difficulties in developing accurate and reliable detection methods that can account for individual variations and contextual factors. As research in this field progresses, there is a growing need for innovative approaches that can overcome these limitations and provide more effective emotion detection capabilities.

Summary

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

According to an aspect of the present disclosure, a system for emotion detection using physiological signals is provided. The system includes a data gathering module configured to record physiological signal data from a subject when exposed to emotion-stimulating stimuli. The system also includes a feature extraction module configured to apply a Local Binary Pattern (LBP) operator to plots of the recorded signal data for each of multiple color channels to extract features. The system further includes a feature ranking and selection module configured to rank the extracted features based on Fisher’s Discriminant Ratio (FDR) and select a subset of the ranked features. Additionally, the system includes a classification module configured to train a Support Vector Machine (SVM) with a linear kernel using the selected subset of features.
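
By way of a non-limiting sketch, the ranking and classification stages described above could be implemented roughly as follows. The pairwise FDR formulation, the number of retained features, and the use of scikit-learn are assumptions made for illustration rather than features of the claimed system.

```python
# Illustrative sketch only: rank features by Fisher's Discriminant Ratio (FDR)
# and train a linear-kernel SVM on the top-ranked subset. The FDR form used
# here (squared mean gap over summed variances, accumulated over class pairs)
# and n_features=64 are assumptions for the example.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def fisher_discriminant_ratio(X, y):
    """Per-feature FDR score, summed over all pairs of emotion classes."""
    fdr = np.zeros(X.shape[1])
    for a, b in combinations(np.unique(y), 2):
        Xa, Xb = X[y == a], X[y == b]
        mean_gap = (Xa.mean(axis=0) - Xb.mean(axis=0)) ** 2
        var_sum = Xa.var(axis=0) + Xb.var(axis=0) + 1e-12  # guard against zero variance
        fdr += mean_gap / var_sum
    return fdr

def train_linear_svm(X, y, n_features=64):
    """Keep the n_features highest-FDR features and fit a linear-kernel SVM."""
    top_idx = np.argsort(fisher_discriminant_ratio(X, y))[::-1][:n_features]
    clf = SVC(kernel="linear").fit(X[:, top_idx], y)
    return clf, top_idx
```

In this sketch the ranking and selection steps are collapsed into a simple top-k cut; the forward selection variant described below refines that choice.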

According to other aspects of the present disclosure, the system may include sensors for recording physiological signal data, among other features. The physiological signal data may be recorded using sensors including electrocardiogram (ECG), blood volume pulse (BVP), galvanic skin response (GSR), respiration (RSP), skin temperature, and electromyography (EMG) sensors. The feature extraction module may apply LBP to signal plots of each color channel separately, extracting 256 features per channel. The feature extraction module may create a combined feature vector by horizontally concatenating the features from each channel, resulting in a feature vector of size 768. The feature ranking and selection module may use forward feature selection to select, as the subset, the features most relevant for emotion classification.
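
A minimal sketch of the per-channel LBP extraction and forward selection described above follows, assuming the signal plots are stored as RGB images and using scikit-image and scikit-learn; the LBP neighborhood (8 points, radius 1), the histogram normalization, and the linear-SVM wrapper inside the selector are illustrative assumptions.

```python
# Illustrative sketch: a 256-bin LBP histogram per color channel of a signal
# plot, concatenated into a single 768-dimensional feature vector, followed by
# greedy forward feature selection. Neighborhood size, radius, and the wrapper
# estimator are assumptions for the example.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

def lbp_feature_vector(plot_rgb, n_points=8, radius=1):
    """plot_rgb: (H, W, 3) array holding an RGB rendering of a signal plot."""
    histograms = []
    for c in range(3):  # red, green, blue channels processed separately
        codes = local_binary_pattern(plot_rgb[:, :, c], n_points, radius, method="default")
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # 256 features per channel
        histograms.append(hist / max(hist.sum(), 1))
    return np.hstack(histograms)  # horizontal concatenation -> shape (768,)

def forward_select(X, y, n_keep=32):
    """Greedy forward selection of n_keep features around a linear SVM."""
    selector = SequentialFeatureSelector(
        SVC(kernel="linear"), n_features_to_select=n_keep, direction="forward")
    selector.fit(X, y)
    return selector.get_support(indices=True)
```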

According to another aspect of the present disclosure, the system may further include a deep learning module configured to extract features using at least one of a Long Short-Term Memory (LSTM) network and a one-dimensional Convolutional Neural Network (1D CNN). The deep learning module may be configured to capture temporal dependencies and significant spatial patterns from the physiological signals.
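
A minimal Keras sketch of such a deep feature extractor is shown below, assuming windowed multi-channel signal input; the layer sizes, kernel width, and the particular CNN-then-LSTM ordering are illustrative assumptions rather than the claimed architecture.

```python
# Illustrative sketch: a 1D CNN captures significant local patterns within each
# window of the physiological signals, and an LSTM models temporal dependencies
# across the window. All layer sizes are assumptions for the example.
import tensorflow as tf

def build_deep_feature_extractor(window_len, n_signal_channels, feature_dim=64):
    inputs = tf.keras.Input(shape=(window_len, n_signal_channels))
    x = tf.keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
    x = tf.keras.layers.LSTM(feature_dim)(x)              # temporal dependencies
    outputs = tf.keras.layers.Dense(feature_dim, activation="relu")(x)
    return tf.keras.Model(inputs, outputs, name="deep_feature_extractor")
```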

According to other aspects of the present disclosure, the system may include an ensemble module configured to combine predictions from both the SVM classification and the deep learning module to make a final decision on emotion classification. The system may also include a performance evaluation module configured to evaluate the performance of the trained ensemble model using classification metrics such as F1 score or accuracy.
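
One way such an ensemble and evaluation step might look is sketched below, assuming both models expose class-probability outputs; the soft-voting (probability-averaging) rule is an illustrative choice, not the only combination contemplated.

```python
# Illustrative sketch: average the class probabilities from the SVM and the
# deep model (soft voting) and score the result with accuracy and F1.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def ensemble_predict(svm_proba, dl_proba):
    """Both inputs are (n_samples, n_classes) class-probability arrays."""
    return np.argmax((svm_proba + dl_proba) / 2.0, axis=1)

def evaluate(y_true, y_pred):
    return {"accuracy": accuracy_score(y_true, y_pred),
            "f1_macro": f1_score(y_true, y_pred, average="macro")}
```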

According to yet another aspect of the present disclosure, a method for emotion detection using physiological signals is provided. The method includes recording physiological signal data from a subject when exposed to emotion-stimulating stimuli, applying a Local Binary Pattern (LBP) operator to plots of the recorded signal data for each of multiple color channels to extract features, ranking the extracted features based on Fisher’s Discriminant Ratio (FDR), selecting a subset of the ranked features, and training a Support Vector Machine (SVM) with a linear kernel using the selected subset of features.

According to other aspects of the present disclosure, the method may include one or more of the following steps. The method may include extracting features using at least one of a Long Short-Term Memory (LSTM) network and a one-dimensional Convolutional Neural Network (1D CNN). The method may also include combining predictions from both the SVM classification and the deep learning feature extraction to make a final decision on emotion classification. Additionally, the method may include evaluating the performance of the trained ensemble model using classification metrics.

The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.