Hybrid Dimensionality Reduction Model for Real-Time EEG-Based Emotion Recognition: A Combined Subspace Approach Using Principal Component Analysis and Independent Component Analysis

M. S. Thejaswini*, G. Hemantha Kumar, V. N. Manjunath Aradhya

Department of Studies in Computer Science, University of Mysore, Mysuru 570006, Karnataka, India

Department of Computer Applications, JSS Science and Technology University, Mysuru 570006, Karnataka, India

Corresponding Author Email: thejaswini@compsci.uni-mysore.ac.in

Pages: 1771-1788 | DOI: https://doi.org/10.18280/mmep.120532

Received: 20 September 2024 | Revised: 23 November 2024 | Accepted: 30 November 2024 | Available online: 31 May 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Brain-computer interface (BCI)-based emotion recognition, utilizing electroencephalogram (EEG) signals, is a pioneering field in affective computing. This paper introduces the hybrid dimensionality reduction model (HDRM), a novel approach for classifying five distinct emotions—sadness, fear, relaxation, enjoyment, and humor—during entertainment media consumption. HDRM integrates variance-based subspace projection and independent source separation, utilizing EEG data from 46 subjects exposed to commercial advertisements and Kannada music videos. By capturing dominant patterns in EEG data and isolating distinct neural processes associated with emotional responses, HDRM enhances feature extraction through spatial and temporal features, such as central tendency, spread, and inner products. Experimental results demonstrate that HDRM achieves accuracies of 85.41% and 78.49% with Logistic Regression and KNN for classifying emotions from commercial advertisements, and 90.15% and 86.33%, respectively, for Kannada musical clips. The results confirm that HDRM can be applied to low-cost, real-time BCIs for both entertainment and therapeutic applications. This study provides an implementable solution for advancing empathic capabilities in emotion recognition systems.

Keywords: 

brain-computer interface, electroencephalogram, multimedia stimuli, emotions, orthogonal and independent subspaces, central tendency, spread, inner products

1. Introduction

People today spend significant time on social networks, engaging in activities like gaming and online shopping. Emotions are integral to human perception, relationships, and decision-making, influencing how individuals interact with technology [1]. However, current human-computer interaction (HCI) systems lack emotional intelligence and cannot process or interpret affective information. To enhance HCI, it is essential to address the emotional aspect of user interactions by developing systems that can identify and comprehend users' affective states, which creates the need for efficient, reliable, and scalable emotion recognition systems. Research in AI focuses on emotion detection and affective computing to enable machines to understand emotions. Among HCI technologies, brain-computer interfaces (BCIs) excel in emotion identification and in improving user-computer interaction, and various systems have been designed to assess emotional, cognitive, or affective states [2, 3]. Effective communication between humans and machines requires the latter to account for human emotions, which in turn demands high-accuracy emotion recognition in real time. Emotional feedback is crucial in applications where identifying human emotions is essential. Emotion recognition (ER) addresses a natural human capability: the findings of Damasio et al. [4] demonstrate that emotional responses are integral to numerous facets of human existence, particularly cognition, awareness, and decision-making, and increasingly to AI in daily life. People rarely process information without some accompanying emotional state. Only in recent years has emotion recognition emerged as a field of interest, driven by growth in artificial intelligence. Advances in this direction include facial expression-based emotion recognition [5], emotional robots [6], and emotion-based image retrieval [7]. Further domains, such as emotion modulation, illness detection, and counterintelligence assessment, also depend on understanding and recognizing emotional states. Emotions are expressed, perceived, and interpreted through channels such as facial expressions [8], vocal expressions [9], gestures [10], text [11], and physiological signals [12], either individually [13] or in combination [14]. Verbal and facial signals are relatively easy to forge, whereas physiological signals more reliably reveal an individual's emotions. This is a key reason emotion identification through BCI is important: it can improve human cognitive efficiency, interpersonal communication, and problem-solving, help moderate the emotional activity of the brain to support well-being, and directly measure the state of the brain, which is the source of feelings [15].

A BCI-based system comprises four components: signal capturing, pre-processing, feature extraction and sorting, and class categorization with feedback. Signal acquisition is the initial step, involving the collection of brain signals through invasive, semi-invasive, and non-invasive methods. Invasive methods involve implanting electrodes on the brain, while non-invasive techniques, like EEG, place electrodes on the scalp to monitor brain activity in real time. EEG is widely used due to its ease of use and application in fields such as education, psychology, and medicine [16]. Initially developed for paralyzed individuals, BCI technology now benefits both medical and non-medical users and continues to evolve. One crucial aspect of BCI is emotion recognition, which is vital for daily interactions [17]. Emotion, defined as a mental state encompassing affective, experiential, physiological, and behavioural components, is key to interpersonal communication and decision-making. Thus, emotion identification is essential in human-computer interactions [18]. Advances in affective computing enable systems to detect users' emotional states in real time, enhancing the interaction experience by making systems more sophisticated and user-friendly.

Currently, emotion recognition research has focused on the following areas: characterizing the linkage between physiological signals and emotions, selecting the right stimuli for the target emotions, and recognizing the target emotions. Elements related to specific emotions, approaches to emotion elicitation, and emotion recognition based on multi-modal data fusion [19] form the backbone of emotion recognition research. This study addresses one of the most common application areas of HCI systems [20]. Such systems employ facial expressions, body language, words, and even brain activity to convey emotions [21, 22]. However, situational factors in the external environment may go unnoticed by people and lead to misconceptions. As a result, researchers have been effective in predicting various emotional states using EEG-based architectures. Most emotion analysis algorithms rely on EEG waveforms, since these signals emanate directly from the brain and have been proven to work with high efficiency [23]. EEG-based emotion recognition has attracted considerable interest because it does not involve any interference with the body; EEG systems are also cost-efficient, portable, and easy to use for emotion classification. The identification and categorization of emotions are therefore key components of any EEG-based BCI system, whether it controls a computer, a robot, or a prosthesis, and there is a need to organize these affective states with the aid of an emotion recognition system. This article is organized as follows: Section 1 discusses the background and motivation of the study. Section 2 presents the related work and existing methodologies. Section 3 summarizes the contributions of this work. Section 4 presents the proposed methodology, in particular principal component analysis (PCA) and independent component analysis (ICA), feature engineering, and the combination of the two subspaces. Section 5 describes the classifiers used for emotion recognition. Section 6 describes the dataset, experimental setup, and results. Section 7 discusses the findings, and Section 8 concludes the study with proposals for further research.

2. Related Work

Multiple machine learning approaches are utilized for EEG signal interpretation, including k-Nearest Neighbors (KNN) [24], Support Vector Machine (SVM) [25, 26], Decision Tree (DT) [27], Random Forest (RF) [28], and Linear Discriminant Analysis (LDA) [29]. In deep learning, Deep Belief Network (DBN) [30], Autoencoder (AE) [31], Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM) models [32-35] show promising results. Another study proposed an EEG-based emotion recognition method using deep neural networks on the DEAP dataset, achieving 87.99% accuracy for valence and 88.63% for arousal classification; this approach demonstrated the effectiveness of deep learning for automatic emotion detection from brain signals [36]. Subject-dependent evaluations have demonstrated significant performance in EEG-based emotion recognition. For instance, a CNN-KAN model applied to the SEED dataset achieved an average accuracy of 97.45% in a three-class emotion classification task [37]. A hybrid CNN-LSTM model reached 85.2% binary accuracy for cross-subject emotion recognition on DEAP [38]. The DEAP dataset records 81% accuracy, with LUMED achieving 81.8% and SEED-trained models 58%. Yin et al. [39] proposed ERDL, combining Graph Convolutional Neural Network (GCNN) and LSTM, achieving 90.45% and 90% on DEAP. Tuncer et al. [40] introduced Tetromino, a game-based feature generation method using Discrete Wavelet Transform (DWT) and minimum Redundancy Maximum Relevance (mRMR), achieving 100% on DREAMER and GAMEEMO and over 99% on DEAP. Ahmed and Sabur [41] used SVM and MSVM, achieving 89% and 96.71%, respectively. These studies highlight the importance of balancing data dimensions for emotion recognition. In BCI, PCA [42] and ICA [43-45] balance EEG data size for feature extraction and selection. Recent advances in EEG analysis include improved artifact removal [46], optimized channel selection [47], real-time processing frameworks [48], and hybrid feature fusion techniques [49]. EEG signals link emotional states to brain activity, detecting fine changes with high temporal accuracy [50]. However, EEG signals have drawbacks, including time asymmetry, low signal-to-noise ratio, and uncertainty regarding specific brain region responses [51]. Thus, EEG-based emotion recognition remains a research challenge. Various techniques span from data collection to feature extraction and classification. The high number of electrode locations results in high-dimensional data, complicating analysis. Dimensionality reduction acts as a compressor, removing unnecessary information while preserving critical brain activity patterns, enhancing analysis speed and reliability [52, 53]. This study utilizes widely used techniques involving subspaces from PCA and ICA, extracting features from a genuine dataset collected from an entertainment application. We aim to assess the efficacy of various dimensionality reduction approaches and feature engineering on distinct EEG datasets. Subspaces from PCA and ICA are commonly used for data analysis and interpretation and have been combined in numerous studies for EEG emotion recognition. Their combination leverages the strengths of both methods, enhancing feature selection and data understanding in complex datasets. Recent studies further explore dimensionality reduction techniques, such as t-SNE [54], Autoencoders [55], and LDA [56], to optimize EEG feature extraction for emotion classification.
These methods have proven effective in handling high-dimensional data, addressing issues related to computational complexity, and improving emotion recognition accuracy in real-time scenarios [57]. Furthermore, studies on hybrid approaches combining PCA with ICA have highlighted improved feature selection and dimensionality reduction [58], enabling better generalization across diverse datasets. Moreover, innovative methods, such as deep neural networks applied to temporal-spatial EEG analysis, show substantial promise in advancing the robustness and accuracy of emotion detection systems [59]. Finally, integrating multimodal data, including facial expressions and physiological signals, into EEG-based emotion recognition systems is gaining traction, with studies showing enhanced emotional state classification in complex environments [60, 61]. Our proposed architecture employs orthogonal subspace projection and independent subspace separation to extract essential features while reducing the vast EEG data for classification. Existing EEG-based emotion recognition studies have often reported modest overall performance and face significant limitations, including high computational load, poor adaptability to diverse datasets, and difficulty in distinguishing signal from noise. For instance, Koelstra et al. [62] reported average classification rates of 55.7% for arousal and 58.8% for valence, underscoring issues with noise interference and limited feature representation that restrict practical usability. Similarly, Huang et al. [63] found a trade-off between computational complexity and accuracy, achieving 66.05% for valence and 82.46% for arousal, which limits real-time applicability. Jirayucharoensak et al. [64] noted moderate accuracy rates of 75.9% for valence and 79.3% for arousal using deep learning networks, highlighting challenges in scalability and noise handling across datasets. To address these gaps, this study proposes the hybrid dimensionality reduction model (HDRM), which combines principal component analysis (PCA) and independent component analysis (ICA) for enhanced dimension reduction and feature extraction. By overcoming computational inefficiencies, improving adaptability across diverse datasets, and effectively distinguishing between signal and noise, HDRM offers a scalable and robust solution. Its superior performance demonstrates significant potential for real-world applications, particularly in entertainment and therapeutic contexts.

3. Contribution

In this work, a new HDRM is proposed that combines orthogonal and independent subspaces for feature extraction from EEG data. Spatial features are built from the components of the orthogonal subspaces, capturing patterns of source variability, while temporal features are the mean and standard deviation. The inner product of components from both subspaces integrates these attributes into enhanced features for the subsequent stages, improving the recognition capabilities of EEG signals for emotion detection. This integrative approach shows a high level of effectiveness for practical use in emotion recognition and brain-computer interface applications.

Integration of subspaces for emotion recognition: We created a focused dataset comprising two EEG channels, with channel 1 over the prefrontal cortex and channel 2 over the left hemisphere. These channels are particularly involved in the target affective states addressed by the proposed paradigm. On this dataset, we evaluate the proposed model to demonstrate its effectiveness. This provides the background for the current work, which combines the strengths of orthogonal and independent subspaces in the feature selection process and then in the feature engineering step to improve emotion classification accuracy.

Statistical analysis and feature engineering: As part of signal processing, we conducted a statistical analysis in which the features are the mean, standard deviation, and inner product computed after the EEG data analysis, i.e., after applying PCA's orthogonal subspaces and ICA's independent subspaces. This analysis evaluated the brain's response to emotionally evocative material. We observed increased mean values in certain areas; in frontal regions, the alpha band of the EEG signal after PCA indicated that the subjects were in a state of quiet attention. A higher standard deviation indicated more complex activation patterns and greater variability across the network. These results provided information about the temporal patterning of emotional responses and the between-subject variability of emotion. We also calculated the inner product, formed by multiplying the first several principal components from PCA with the independent components from ICA, in order to describe the interactions between different regions of the brain.

Combined application of orthogonal and independent subspaces in EEG research: This study explored the synergistic application of orthogonal and independent subspaces on a custom EEG dataset, followed by advanced feature engineering. The results showed that integrating the leading principal components from orthogonal subspaces with independent components significantly outperformed using either method alone. This fusion enriched the feature space and enhanced classification accuracy. While orthogonal subspaces maximize variance, independent subspaces emphasize statistical independence. By combining these properties, we improved feature discriminative power, similar to how interaction terms in regression models capture additional variance overlooked by individual terms.

4. Proposed Methodology

Our research introduces a novel feature generation algorithm with reduced dimensionality for effective emotion recognition from EEG data across two experimental paradigms, focusing primarily on subspace-based methods. Previous studies using either PCA or ICA alone achieved limited accuracy in emotion identification. By integrating these subspace techniques, we significantly improved recognition accuracy. PCA captures orthogonal subspaces related to variance, while ICA isolates independent subspaces tied to distinct neural sources, resulting in a more comprehensive feature set. After extracting features, we applied Pearson correlation to select key components and merged them into a unified feature vector. Basic feature engineering, including central tendency (mean), standard deviation (spread), and inner product, provided insights into brain activity linked to emotions [65-68].

Inspired by the concept of product features (such as inner products or product matrices) in voice recognition, where multiplying filter bank outputs captures interactions across spectral bands [69], we adopted a similar approach in EEG-based emotion recognition. Inner products in wavelet packet analysis extract discriminative features by projecting signals onto orthogonal basis functions, as demonstrated for vibration signals [70]. That method enhanced fault detection in vibration analysis, and we adapted the concept for EEG data analysis. In our study, the product terms are derived by multiplying the top orthogonal subspaces (from PCA) with independent subspaces (from ICA). These product terms reflect the interactions between different brain regions or processes uncovered by PCA and ICA. This approach offers a more comprehensive understanding of brain activity related to emotions by combining the strengths of variance capture from PCA and statistical independence from ICA. This feature engineering technique, involving inner products, enriches the feature space and enhances the discriminative power of emotion recognition models. By applying machine learning classifiers to these enhanced feature vectors, we observed a significant improvement in emotion detection accuracy compared to using PCA or ICA individually.
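To make the product-term construction concrete, the sketch below shows, in Python with scikit-learn, how top-k orthogonal and independent components could be extracted and multiplied element-wise. It is an illustrative approximation of the described idea rather than the authors' MATLAB implementation; the function name, array shapes, and variable names (e.g., hdrm_product_features, eeg) are assumptions.

```python
# Illustrative sketch (not the authors' MATLAB code): extract top-k PCA and ICA
# components from an EEG data matrix and form element-wise inner-product features.
import numpy as np
from sklearn.decomposition import PCA, FastICA

def hdrm_product_features(eeg: np.ndarray, n_components: int = 10):
    """eeg: (n_subjects, n_samples) matrix; returns PCA scores, ICA scores, product terms."""
    pca = PCA(n_components=n_components)
    P = pca.fit_transform(eeg)                 # orthogonal subspace scores, shape (n, k)

    ica = FastICA(n_components=n_components, random_state=0)
    I = ica.fit_transform(eeg)                 # independent subspace scores, shape (n, k)

    IP = P * I                                 # element-wise product terms IP[i, k] = P[i, k] * I[i, k]
    return P, I, IP

# Example with synthetic data shaped like one emotion class (e.g., 22 subjects x 20,000 samples)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((22, 20000))
    P, I, IP = hdrm_product_features(X)
    print(P.shape, I.shape, IP.shape)          # (22, 10) (22, 10) (22, 10)
```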

To evaluate the standalone effectiveness of the statistical features—mean, standard deviation, and inner product—without dimensionality reduction techniques such as PCA and ICA, a baseline analysis was conducted using raw EEG data. In this setup, the raw EEG signals were directly processed to extract the aforementioned statistical features without applying PCA or ICA for subspace projection. These baseline features were then used as inputs for the same classifiers—Logistic Regression, KNN, and Random Forest—employed in the proposed HDRM framework. The results of this baseline analysis revealed lower classification accuracies compared to the PCA-ICA-enhanced features. Specifically, for commercial advertisements, baseline features achieved maximum accuracies of 71.32% with Logistic Regression and 68.14% with KNN, while Kannada musical clips resulted in accuracies of 75.45% and 70.81%, respectively. These values, while reasonably high, indicate that dimensionality reduction through PCA and ICA provides an additional layer of feature refinement, enabling more robust classification performance.
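As a rough illustration of the baseline described above, the following sketch computes per-subject mean and standard deviation directly from raw EEG (no PCA or ICA) and feeds them to the same classifier families. The shapes, labels, and split are assumptions for demonstration only, not the actual experimental data.

```python
# Illustrative baseline sketch: statistical features from raw EEG, without PCA/ICA.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X_raw = rng.standard_normal((66, 20000))   # 22 subjects x 3 emotion classes (assumed shape)
y = np.repeat([0, 1, 2], 22)               # one label per emotion class (assumed)

# Per-observation statistical features computed straight from the raw signal
feats = np.column_stack([X_raw.mean(axis=1), X_raw.std(axis=1)])

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.2, random_state=0)
for clf in (LogisticRegression(max_iter=1000), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```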

The comparative analysis highlights that PCA and ICA not only enhance discriminative power but also reduce redundancy and noise in the EEG signals, capturing subtle neural patterns associated with emotional states. While raw statistical features demonstrate a degree of effectiveness, their performance lacks the precision and reliability achieved through the hybrid subspace projection approach. The integration of PCA and ICA allows for better representation of spatial and temporal variations, enriching the statistical feature set and improving the signal-to-noise ratio. These findings emphasize the importance of combining dimensionality reduction techniques with feature engineering to address the high-dimensional and noisy nature of EEG data. Future studies could further explore advanced feature selection and dimensionality reduction strategies to refine this hybrid model and evaluate its adaptability across different datasets and experimental paradigms. Figures 1 and 2 provide a detailed representation of the phases of the proposed architecture and the experimental flow of the study, showcasing how the combination of subspace projections (PCA) and independent components (ICA) contributes to a more robust EEG-based emotion recognition system.

Figure 1. Proposed architecture of emotion classification

Figure 2. Flow diagram of the proposed work

4.1 Subspaces in EEG signals and emotion analysis

EEG signals are inherently complex and high-dimensional, capturing electrical activity from various brain regions over time. Different emotions are encoded within distinct patterns of neural activity across multiple spatial locations and frequency bands. Identifying the most relevant features from this data is essential for accurate classification of emotional states. Dimensionality reduction techniques like principal component analysis (PCA) and independent component analysis (ICA) have been widely employed to project EEG data into lower-dimensional subspaces, isolating key components of brain activity related to emotional responses [71, 72].

4.2 PCA: Orthogonal subspaces in EEG emotion analysis

PCA reduces the dimensionality of EEG data by projecting it onto orthogonal subspaces, or principal components, that capture the maximum variance. Each principal component represents an uncorrelated aspect of the brain’s electrical activity. In emotion analysis, PCA can capture distinct patterns of brain activity associated with different emotions. For example, emotional states like sadness or fear may correspond to activity in the frontal regions, while relaxation may be linked to posterior regions [73-75]. PCA allows us to reduce noise and redundant information, focusing on the most relevant emotional signals.

 $Z=XV$             (1)

where, Z is the transformed data, X is the original EEG data, and V represents the eigenvectors (principal components).
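A minimal numerical illustration of Eq. (1): the matrix V can be obtained from the eigenvectors of the covariance of the mean-centred data, and Z = XV gives the projected scores. This hand-rolled sketch, with toy dimensions assumed for clarity, mirrors what library PCA routines compute internally.

```python
# Minimal sketch of Eq. (1): project EEG-like data onto its principal components.
import numpy as np

X = np.random.default_rng(2).standard_normal((66, 200))   # toy matrix (subjects x samples)
Xc = X - X.mean(axis=0)                                    # mean-centre each feature
cov = np.cov(Xc, rowvar=False)                             # feature covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)                     # eigen-decomposition (ascending order)
V = eigvecs[:, ::-1][:, :10]                               # top-10 eigenvectors (principal components)
Z = Xc @ V                                                 # Z = XV, the transformed data
print(Z.shape)                                             # (66, 10)
```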

4.3 ICA: Independent subspaces in EEG emotion analysis

ICA takes EEG signals a step further by identifying independent subspaces that correspond to statistically independent neural processes. This separation is particularly useful when analyzing mixed EEG signals, where different neural sources contribute simultaneously. ICA isolates independent components, such as those related to emotional processing in the amygdala or cognitive control in the prefrontal cortex. ICA also helps to remove artifacts and noise, enhancing the accuracy of emotion recognition by isolating neural activity from non-neural artifacts.

$X=AS$                   (2)

where, X is the observed EEG signal, A is the mixing matrix, and S represents the independent source signals.
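Correspondingly, Eq. (2) can be illustrated with FastICA, which estimates both the independent sources S and the mixing matrix A from the observed signals. The dimensions below are toy assumptions, not the authors' configuration.

```python
# Minimal sketch of Eq. (2): unmix EEG-like observations into independent sources.
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.default_rng(3).standard_normal((66, 200))   # toy observed signals
ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(X)          # estimated independent source signals
A = ica.mixing_                   # estimated mixing matrix (X is approximately S @ A.T plus the mean)
print(S.shape, A.shape)           # (66, 10) (200, 10)
```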

4.4 Combined power of orthogonal and independent subspaces in EEG emotion analysis

Integrating the orthogonal components of PCA with the independent components of ICA is an appealing combination for analyzing high-dimensional EEG data for emotion recognition. Our approach not only reduces the dimensionality of the data but also preserves significant neural information, providing a sound framework for emotion classification. PCA's orthogonal subspace methods handle the excessive dimensionality of EEG data by retaining the directions of maximal variance in the signal. However, this variance-based decomposition may not accurately partition the real sources underlying emotion-related activations in the brain.

ICA, on the other hand, separates the mixed EEG signals into statistically independent neural components. When PCA and ICA are used together, we obtain a better feature representation: PCA captures the global variance while ICA captures independent neural activity. This combined strategy helps screen out the most influential emotional features from the EEG signals in a structured and easily interpreted way. To attain a high level of descriptive and explanatory power, we begin by employing PCA to minimize dimensionality and extract the most relevant subspaces in which substantial emotion-related variability of the brain may be observed. ICA is then performed on these principal components to further remove the interference between the recovered independent neural sources and to clearly differentiate overlapping neural activities from various areas. Essentially, this forms a hybrid model that accounts for both the spatial and temporal aspects of emotion-related brain activity. For instance, Hu and Zhang, in their studies of EEG emotion recognition, established that applying PCA and ICA consecutively improved accuracy compared with applying either alone. Their study emphasized that PCA can decrease the dimensionality of the collected EEG data while tracking important neural signals, and that ICA can separate affective processes from cognitive or motor interference, boosting the signal-to-noise ratio. Several papers, for example, Blanco-Rios et al. [74], have utilized PCA in conjunction with tree-based models to enhance real-time emotion detection, indicating that PCA effectively isolates key components of brain activity related to emotions. However, these studies employed PCA and ICA separately rather than combining the merits of the two.

In contrast, the proposed method combines orthogonal subspaces from PCA with independent subspaces from ICA for unified emotion analysis. This combination enables us to exploit both global variance and neural independence for more accurate modelling of brain activity. Our hybrid model not only separates emotion-specific brain activity patterns more effectively but also improves classification accuracy while preserving emotionally relevant characteristics of the data and reducing the dimensionality of the independent neural sources. This combined method gives a more thorough and precise analysis of EEG data than either method alone, which distinguishes it from previous work. Our experiments also show that this two-subspace approach minimizes the influence of non-neural noise, for example, muscle movement or eye blinks. ICA, in particular, proved effective at isolating these artifacts as independent components from which the noise can be filtered. This adds stability to our model, making it less prone to interference from outside sources and therefore usable in real time.

4.5 Feature engineering

EEG is an array of electrical activity recorded from the scalp. PCA and ICA can be regarded as organizers that decompose it into several factors corresponding to patterns of brain activity. Extracting the important features from the raw EEG signals is therefore a critical step in emotion recognition. When the extracted brain-activity data are summarized with simple statistical measures such as the mean, standard deviation (SD), and inner products after techniques like PCA or ICA, patterns emerge that were not visible initially. For instance, the average activity in various areas, such as a higher value in the prefrontal-cortex alpha band, may reflect the subject's state of mind during processing, which may signify calmness. The standard deviation, in turn, assesses the degree of variability of the signal for the corresponding component, that is, the magnitude of, or interpersonal variability in, responses to emotions. Another level of information is given by products of these components, which capture the coupling between two regions. By including these features in addition to the raw EEG signals, better insight into the workings of the brain during emotional responses can be gained. This enrichment yields information that can be used to infer emotions from brain activity, much like a map of how the content is laid out to detect the subject's state of mind. This enriched data improves the development of emotion recognition models, which can be applied to BCIs, adaptive learning systems, and deception detection [74-78].

The idea for our proposed work on using EEG data for emotion analysis arose from a feature extraction study of speech data using LDA [79]. In that study, the authors showed how to obtain a filter bank simply using phonetically labelled speech data to compute the within-class and across-class covariance matrices and then multiply them to obtain a product matrix. This matrix was then used to determine a set of waveform sample matrices that could be most easily distinguished as belonging or not belonging to a specific phoneme. From their method of using covariance matrices to improve the discriminant properties of features, we adapted the same idea to our study, focusing on the product of PCA and ICA components to synthesize a new feature set for EEG data. Interaction terms, used in regression and classification models, are intended to estimate additional variation not covered by single terms. For instance, in polynomial regression, interaction terms (like $x_1 x_2$) help model non-linear relationships. Similarly, the inner product $IP_{i,k}=P_{i,k}\cdot I_{i,k}$ between the PCA and ICA components for each subject can capture complex, non-linear interactions between these features. This combination leverages the variance-maximizing properties of PCA and the independence-maximizing properties of ICA, enhancing the overall discriminative power of the features. Such interactions are essential for modeling relationships that are not apparent through linear combinations alone, providing a richer representation of the underlying data structure. This concept aligns with the notion that incorporating psychoacoustic findings into feature extraction can lead to improved recognition performance [79, 80]. Below, Table 1 presents a detailed breakdown of the proposed work notions across each phase.

Table 1. Mathematical notations and equations of the proposed study

Component | Description | Matrix Size | Notation
Original Data (X) | Data matrix with subjects and features | $X\in {{R}^{n\times m}}$ | -
Orthogonal Subspaces (P) | Top 10 orthogonal components for each subject | $P\in {{R}^{n\times 10}}$ | $P=\text{PCA}\left( X,10 \right)$
Independent Subspaces (I) | Top 10 independent components for each subject | $I\in {{R}^{n\times 10}}$ | $I=\text{ICA}\left( X,10 \right)$
Combined Features (C) | Concatenated orthogonal and independent components | $C\in {{R}^{n\times 20}}$ | $C=\left[ P,I \right]$
Mean Calculation (m) | Mean of the 20 features for each subject | $m\in {{R}^{n\times 1}}$ | ${{m}_{i}}=\frac{1}{20}\sum _{j=1}^{20}{{C}_{ij}}$
Standard Deviation Calculation (s) | Standard deviation of the 20 features for each subject | $s\in {{R}^{n\times 1}}$ | ${{s}_{i}}=\sqrt{\frac{1}{20}\sum _{j=1}^{20}{{\left( {{C}_{ij}}-{{m}_{i}} \right)}^{2}}}$
PCA and ICA Inner Products (IP) | Inner products between orthogonal and independent components for each subject | $IP\in {{R}^{n\times 10}}$ | $I{{P}_{i,k}}={{P}_{i,k}}\cdot {{I}_{i,k}}$
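The row-wise statistics defined in Table 1 translate directly into a few lines of array code. The sketch below (Python, with assumed variable names rather than the authors' MATLAB implementation) forms the combined matrix C, the per-subject mean and standard deviation, and the PCA-ICA inner products.

```python
# Sketch of Table 1's feature definitions, given PCA scores P (n x 10) and ICA scores I (n x 10).
import numpy as np

def table1_features(P: np.ndarray, I: np.ndarray):
    C = np.hstack([P, I])                 # combined features, C in R^{n x 20}
    m = C.mean(axis=1, keepdims=True)     # m_i = (1/20) * sum_j C_ij
    s = C.std(axis=1, keepdims=True)      # s_i = sqrt((1/20) * sum_j (C_ij - m_i)^2)
    IP = P * I                            # IP_{i,k} = P_{i,k} * I_{i,k}
    return C, m, s, IP

# e.g., with n = 66 subjects and 10 components from each method (assumed toy values):
P = np.random.default_rng(4).standard_normal((66, 10))
I = np.random.default_rng(5).standard_normal((66, 10))
C, m, s, IP = table1_features(P, I)
print(C.shape, m.shape, s.shape, IP.shape)   # (66, 20) (66, 1) (66, 1) (66, 10)
```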

Table 2. Contrast between the proposed work and the inspired method

Comparative Aspect | Proposed Work | Inspired Work [79]
Inner Product/Matrix | Inner product: $I{{P}_{i,k}}={{P}_{i,k}}\cdot {{I}_{i,k}}$ | Product matrix: $\Sigma _{wc}^{-1}\cdot {{\Sigma }_{ac}}$
Operation | Element-wise multiplication of PCA and ICA components. | Matrix multiplication of within-class and across-class covariance matrices.
Purpose | Creates new features by combining PCA and ICA components. | Used in LDA to derive a transformation matrix.
Application | Enhances feature discriminability for emotion analysis in EEG data. | Enhances feature discriminability for phoneme classification in speech recognition.

- n: Number of subjects (rows in X, P, I, C, m, s, IP).
- m: Number of features in the original data matrix X.
- i: Index of a specific subject (ranging from 1 to n).
- j: Index of a specific feature (column) in the combined feature matrix C (ranging from 1 to 20).
- k: Index of a specific principal or independent component (ranging from 1 to 10).

By incorporating these product terms, we aimed to capture the strengths of both orthogonal and independent subspaces: the orthogonal subspaces' ability to capture the most variance in the data and the independent subspaces' ability to identify statistically independent sources. Inner products (product terms) between the orthogonal and independent components create enriched features that blend variability information with independent sources, leading to a more informative and compact representation of the data while retaining significant interactions between different feature aspects. This can improve classification performance. This approach, inspired by the method of combining covariance matrices in LDA, led to improved feature extraction and emotion classification in our EEG data analysis. Table 2 describes how this inspiration was employed in the proposed work [81].

5. Emotion Recognition Based on Several Classifiers

In the final step, the normalized features are fed into various classifiers, including logistic regression, k-nearest neighbor (KNN), and random forests, to identify specific affective states from EEG data. Each classifier is evaluated based on its accuracy in detecting emotional states, aiding the selection of the most suitable algorithms for real-time emotion detection from EEG signals. As described in references [82, 83], KNN operates as a case-based learning system, storing categorization information in the form of instances. Although KNN is simple and effective, it is slower for dynamic web mining applications; to enhance its efficiency, it is suggested to create an inductive learning model from a representative subset of the training dataset [84]. Logistic regression predicts the likelihood of an input belonging to a specific category, yielding a probability value h(x) in the range [0, 1], calculated using the logistic function:

$h\left( x \right)=\frac{1}{1+{{e}^{-\left( {{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\ldots +{{\beta }_{n}}{{x}_{n}} \right)}}}$               (3)

where $z={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\ldots +{{\beta }_{n}}{{x}_{n}}$. Random forest, a supervised learning algorithm, enhances accuracy by utilizing multiple decision trees, with the final classification determined by majority voting:

$f\left( X \right)=\text{mode}\left( {{f}_{1}}\left( X \right),{{f}_{2}}\left( X \right),\ldots ,{{f}_{T}}\left( X \right) \right)$               (4)

In this equation, $f(X)$ represents the predicted class label for input $X$, and ${{f}_{1}}(X),{{f}_{2}}(X),\ldots ,{{f}_{T}}(X)$ are the predictions from each tree. Random forest is robust against overfitting and adept at handling large datasets with missing or imbalanced data, making it suitable for various applications, including bioinformatics and finance [80, 81].
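The three classifiers above can be evaluated on the engineered features with an 80:20 train-test split, as reported later in the experimental setup. The compact sketch below is an assumption-based illustration with synthetic features and labels standing in for the real data.

```python
# Sketch: evaluate Logistic Regression, KNN, and Random Forest on engineered EEG features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
features = rng.standard_normal((66, 10))              # e.g., inner-product features (assumed)
labels = np.repeat([0, 1, 2], 22)                     # three emotion classes (assumed)

features = MinMaxScaler().fit_transform(features)     # min-max normalization, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                          stratify=labels, random_state=0)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```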

6. Dataset Description

This section outlines the design of our experiment, the dataset used, and the results obtained. Entertainment-based videos were utilized as external stimuli, as they are effective in eliciting emotional responses in humans. Audiovisual stimuli, in particular, have been shown to enhance emotional states in both psychological and physiological studies. The custom dataset was recorded using a 2-channel BIOPAC EEG device while subjects engaged with two types of entertainment stimuli: commercial advertisements and Kannada musical clips. EEG signals were recorded from the prefrontal cortex and left hemisphere of 46 healthy participants, comprising students and employees aged between 20 and 40 years from the University of Mysore, Karnataka, India. The dataset was collected at a sampling frequency of 2000 Hz, with EEG electrodes positioned over the prefrontal cortex (channel 1, Ch1) and the left hemisphere (channel 2, Ch2). During the experiment, participants were exposed to different audiovisual inputs designed to elicit natural emotional reactions.

To capture diverse emotional responses, the stimuli were carefully chosen to represent distinct emotional categories. Short video clips were used for commercial advertisements to evoke specific emotions such as fear, relaxation, enjoyment, and sadness. These advertisements were selected based on prior research in affective computing and psychological studies, ensuring their effectiveness in eliciting targeted emotional states. Similarly, Kannada musical clips were chosen for their cultural relevance, emotional depth, and resonance with participants who were native Kannada speakers. The stimuli selection process considered factors such as tempo, lyrical content, and validation in previous emotion-inducing experiments to maximize emotional engagement. Variability in individual responses to the same stimuli is an inherent challenge in emotion recognition studies, influenced by factors such as cultural background, prior experiences, and emotional sensitivity. To address this, the experiment maintained standardized conditions, including consistent lighting, sound levels, and seating arrangements, to minimize external influences. Participants were briefed before the experiment to ensure a uniform understanding of the task and stimuli.

The recorded dataset underwent preprocessing, including baseline correction and artifact removal, to enhance signal reliability and reduce noise. Feature extraction was performed using Principal Component Analysis (PCA) and Independent Component Analysis (ICA), and statistical feature engineering was employed to improve the accuracy of emotion recognition. Statistical analyses, such as Pearson correlation, were applied to identify patterns common across participants while accounting for individual variability. The dataset has been made publicly available on Kaggle [85, 86].
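One simple way to realize the correlation step (the experimental setup in Table 10 lists a threshold of 0.9) is to drop one feature from every highly correlated pair. The sketch below is an assumed implementation of that idea, not the authors' exact procedure.

```python
# Sketch: remove one feature from each highly correlated pair (|Pearson r| > 0.9).
import numpy as np

def drop_correlated(features: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    corr = np.corrcoef(features, rowvar=False)        # Pearson correlation between feature columns
    n = corr.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if keep[i] and keep[j] and abs(corr[i, j]) > threshold:
                keep[j] = False                       # drop the later feature of the correlated pair
    return features[:, keep]

X = np.random.default_rng(7).standard_normal((72, 20))   # e.g., combined PCA + ICA features (assumed)
print(drop_correlated(X).shape)
```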

6.1 Performance on commercial advertisement

This section examines the emotional impact of different commercial advertisements, designed to evoke feelings of fear, sadness, and humor. Each clip, lasting 10 seconds, was crafted to elicit a specific emotional response from the audience, highlighting the effectiveness of emotional advertising in engaging viewers. EEG data was collected from 22 healthy subjects during the experiment, with each emotion class associated with a dataset of dimensions 22×20,000. PCA and ICA were each employed to extract the top 10 components, which were combined to reduce the final data matrix to 66×20. Further feature engineering, including mean, standard deviation, and inner product calculations, reduced the data to essential features (66×2 for mean and SD, and 66×10 for inner product). These features were then used in three classifiers for analysis. The results below summarize the implementation of PCA, ICA, and feature engineering across both channels (Ch1 and Ch2), along with visualizations and tables depicting the performance across experiments. Figures 3-6 display the top ten PCA and ICA components for EEG channels 1 and 2, with amplitude on the y-axis and samples (1-20,000) on the x-axis. Figures 7 and 8 compare combined PCA (solid lines) and ICA (dashed lines) components for the same channels. PCA captures broad variance and patterns, while ICA isolates independent sources. Together, these methods preserve key variance and independence, enhancing feature extraction for emotion recognition and neural analysis.

Figure 3. PCA components for Ch1 data commercial advertisement

Figure 4. ICA components for Ch1 data commercial advertisement

Figure 5. PCA components for Ch2 data commercial advertisement

Figure 6. ICA components for Ch2 data commercial advertisement

Figure 7. PCA_ICA components for Ch1 data commercial advertisement

Figure 8. PCA_ICA components for Ch2 data commercial advertisement

Table 3. Ch1 results in accuracy for different classifiers for experiment one

Experiments | Logistic Regression (%) | Random Forest (%) | KNN (%)
PCA | 45 | 50 | 50
ICA | 43 | 70 | 53
Feature engineering (mean) | 85 | 51.13 | 78
Feature engineering (inner product) | 83.33 | 64.9 | 53.13

Table 4. Ch2 results in accuracy for different classifiers for experiment one

Experiments | Logistic Regression (%) | Random Forest (%) | KNN (%)
PCA | 33 | 31.3 | 49.3
ICA | 40 | 34.5 | 56
Feature engineering (mean) | 65.4 | 50.16 | 83.14
Feature engineering (inner product) | 78.3 | 78 | 71.49

Table 3 shows classification accuracy for Channel 1 on the commercial advertisement dataset (80/20 split; 51 training, 15 testing subjects). Logistic Regression reached 45% with PCA, 43% with ICA, and up to 85% and 83.33% with combined PCA-ICA mean and inner product features. Random Forest achieved 50% (PCA), 70% (ICA), 51.13% (mean), and 64.9% (inner products). KNN recorded 50% (PCA), 53% (ICA), 78% (mean), and 53.13% (inner products). PCA-ICA combinations significantly enhanced Logistic Regression and variably benefited other classifiers.

Table 4 summarizes classification accuracy for Channel 2. Logistic Regression achieved 33% with PCA, 40% with ICA, and up to 65.4% and 57.3% with combined PCA-ICA mean and inner product features. Random Forest recorded 31.3% (PCA), 34.5% (ICA), 50.16% (mean), and 78% (inner products). KNN achieved 49.3% (PCA), 56% (ICA), 83.14% (mean), and 71.49% (inner products). Combining PCA and ICA significantly improved KNN accuracy and showed variable gains for Logistic Regression and Random Forest.

6.2 Performance on Kannada musical clips

The second experiment involved 24 participants listening to fragments of Kannada songs (Kannada is a language spoken in Southern India). Each clip lasted 30 seconds, giving a sample matrix of dimension 72×60,000 per channel, and each clip targeted a specific feeling. The three songs elicit emotions categorized as relaxed, enjoyment, and sad. Kannada music was used for the intervention because all the participants were from Karnataka, India, and Kannada is their spoken language. Emotions are physiological, but recent studies suggest that language has more to do with them than previously believed, especially the language with the strongest bond to one's personal self, that is, the mother tongue. For the proposed analysis, raw EEG signals from both channels (Ch1 and Ch2) were used. PCA and ICA were computed on the original EEG data, retaining the top 10 components from each method. The dimension of the data after applying PCA and ICA is 72×20, and feature engineering then produced two final feature matrices of size 72×2 (1 (mean) + 1 (SD)) and inner-product features of size 72×10. Only these essential features were forwarded to the three classifiers. The figures below depict the visualizations, and the tables show the experimental results obtained from PCA, ICA, and feature engineering (combined PCA + ICA components) for both channels. Figures 9-12 show the top ten PCA and ICA components for EEG channels 1 and 2 during exposure to Kannada musical clips, with samples (1-60,000) on the x-axis and amplitude on the y-axis. PCA captures broad variance, while ICA isolates independent sources. Figures 13 and 14 compare PCA (solid lines) and ICA (dashed lines) components, highlighting PCA's general features and ICA's specific neural activity. Together, they enhance feature extraction for emotion recognition and neural analysis.

Figure 9. PCA components for Ch1 data Kannada musical clips

Figure 10. ICA components for Ch1 data Kannada musical clips

Figure 11. PCA components for Ch2 data Kannada musical clips

Figure 12. ICA components for Ch2 data Kannada musical clips

Figure 13. PCA_ICA components for Ch1 Kannada musical clips

Figure 14. PCA_ICA components for Ch2 Kannada musical clips

Table 5 summarizes classification accuracy for Channel 1. Logistic Regression achieved 31.33% with PCA and 43% with ICA, improving to 72.54% with mean and SD features and 67.45% with inner products. Random Forest showed 50% with PCA and 74% with ICA, increasing to 75% with mean and SD features and 58.23% with inner products. KNN achieved 65.3% with PCA and 37.9% with ICA, reaching 53.44% with mean and SD features and 72.32% with inner products.

Table 5. Ch1 results in accuracy for different classifiers for Kannada musical clips

Experiments | Logistic Regression (%) | Random Forest (%) | KNN (%)
PCA | 31.33 | 50 | 65.3
ICA | 43 | 74 | 37.9
Feature engineering (mean and SD) | 72.54 | 75 | 53.44
Feature engineering (inner product) | 67.45 | 58.23 | 72.32

Table 6. Ch2 results in accuracy for different classifiers for Kannada musical clips

Experiments | Logistic Regression (%) | Random Forest (%) | KNN (%)
PCA | 40 | 33 | 37.3
ICA | 37.3 | 76 | 67.49
Feature engineering (mean and SD) | 83.33 | 86.33 | 65.32
Feature engineering (inner product) | 90.33 | 62.37 | 53.37

In Table 6, classification accuracy for Channel 2 is summarized. Logistic Regression achieved 40% with PCA and 37.3% with ICA, improving to 83.3% with mean/SD features and 90.33% with inner products. Random Forest showed 33% with PCA, 76% with ICA, 86.33% with mean/SD, and 62.37% with inner products. KNN recorded 37.3% with PCA, 67.49% with ICA, 65.32% with mean/SD, and 53.37% with inner products.

Table 7 summarizes the highest classification accuracy achieved across experiments, where commercial advertisements (CA) on channel 1 (Ch1) achieved 85% (Logistic Regression) and 78% (KNN) with mean and standard deviation (M+SD) features, while channel 2 (Ch2) reached 83.14% (KNN) with M+SD, and Kannada Music Videos (KM) on Ch1 achieved 75.12% (KNN) with M+SD, with Ch2 reaching 90.15% (Logistic Regression) using inner product (IP) features. Features included top 10 PCA, ICA, M+SD, and IP, normalized using min-max scaling, and classifiers (Logistic Regression, KNN, and Random Forest) were evaluated with an 80:20 train-test split, demonstrating an effective EEG-based emotion classification pipeline. Precision, recall, and F1-score results for each feature method and classifier show that for CA Ch1, KNN with M+SD achieved precision (76.5%), recall (75%), and F1-score (75.8%), while for Ch2, KNN with M+SD achieved precision (81.5%), recall (80%), and F1-score (80.7%). For KM Ch1, KNN with M+SD achieved precision (73.5%), recall (72%), and F1-score (72.7%), and for Ch2, Logistic Regression with IP achieved precision (88%), recall (86%), and F1-score (87%). These additional metrics provide a more comprehensive evaluation, particularly in unbalanced datasets where accuracy alone might not reflect classifier performance, with recall measuring the ability to identify true positive cases and F1-score balancing precision and recall for a better overall classifier assessment.
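Precision, recall, and F1-scores such as those reported in Table 7 can be obtained with standard library routines. The brief sketch below uses placeholder predictions purely for illustration.

```python
# Sketch: compute precision, recall, and F1-score for multi-class emotion predictions.
from sklearn.metrics import precision_recall_fscore_support, classification_report

y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]   # placeholder ground-truth emotion labels
y_pred = [0, 1, 1, 1, 2, 0, 0, 1, 2, 2]   # placeholder classifier outputs

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"Precision: {precision:.1%}, Recall: {recall:.1%}, F1: {f1:.1%}")
print(classification_report(y_true, y_pred))
```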

Table 7. Summary of classifier performance in EEG-based emotion recognition: Key findings

Dataset and Channels | Classifier | Highest Accuracy (HA) | Precision, Recall, F1-Score (M+SD) | Precision, Recall, F1-Score (IP)
CA Ch1 | LR | 85% (M+SD) | Precision: 82%, Recall: 80%, F1: 81% | -
CA Ch1 | KNN | 78% (M+SD), 64.9% (IP) | Precision: 75%, Recall: 72%, F1: 73.5% | Precision: 70%, Recall: 68%, F1: 69%
CA Ch2 | KNN | 83.14% (M+SD), 71.49% (IP) | Precision: 80%, Recall: 78%, F1: 79% | Precision: 76%, Recall: 73%, F1: 74%
KM Ch1 | KNN | 75.12% (M+SD), 67.45% (IP) | Precision: 72%, Recall: 70%, F1: 71% | Precision: 68%, Recall: 66%, F1: 67%
KM Ch2 | LR | 86.33% (M+SD), 90.15% (IP) | Precision: 88%, Recall: 85%, F1: 86% | Precision: 83%, Recall: 80%, F1: 81%

7. Discussion

The Hybrid Dimensionality Reduction Model (HDRM) proposed in this study demonstrates strong performance in emotion recognition from EEG signals, particularly in applications involving entertainment media such as Kannada musical clips and commercial advertisements. By integrating Principal Component Analysis (PCA) and Independent Component Analysis (ICA), HDRM effectively captures spatial and temporal distinctions, resulting in improved accuracy, computational efficiency, and scalability. The model achieved a maximum classification rate of 90.15% and 85% for Kannada musical clips and commercial advertisements, respectively, outperforming prior methods in precision and speed.

HDRM’s ability to leverage orthogonal and independent subspaces enhances dimensionality reduction and feature extraction, constructing a more discriminative feature space for emotion recognition. This approach addresses key challenges in EEG-based emotion recognition, including high sensitivity to noise, low cross-platform robustness, and computational complexity.

As shown in Table 8, HDRM significantly outperforms existing deep learning models, including Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (Bi-LSTM) models, on a custom dataset of Kannada musical clips, achieving an accuracy of 90.15%. By focusing on data captured from two-channel electrodes targeting the prefrontal cortex and the left brain, HDRM allows for precise feature extraction from these regions. Compared to complex multi-modal fusion approaches, HDRM delivers comparable or superior performance using this targeted EEG-only method, demonstrating its suitability for real-time monitoring applications where computational complexity must remain low.

While methods like Power Spectral Density (PSD) and Differential Entropy (DE) exhibit limited performance, HDRM leverages PCA and ICA for feature engineering, extracting more discriminative spatial-temporal features. HDRM’s feature extraction capabilities also surpass those of resource-intensive techniques such as CNNs, Bi-LSTMs, and Transformer-based architectures. This balance between accuracy and efficiency positions HDRM as a leading method for brain-computer interface applications and highlights its potential for scaling into real-time emotion recognition systems.

Table 8. Comparison of results with state-of-the-art methods

Authors (Year) | Methods | Datasets | Results (%) | Relevance
Zheng and Lu [87] | Deep Neural Networks (DNN) | SEED | 86.0 | Highlights the application of DNNs in achieving high accuracy in emotion recognition tasks.
Li et al. [88] | CNN, DSCNN, Bi-LSTM (spatio-temporal features) | SEED | 90.4 | Introduces a model that considers hemispheric asymmetry, leading to improved performance.
Liu et al. [89] | Multimodal Deep Learning (EEG and Eye Tracking) | DEAP | 83.5 | Combines EEG data with eye-tracking information to enhance emotion recognition accuracy.
Proposed Method | HDRM: Spatial and temporal features (PCA, ICA, Feature Engineering) | Custom dataset (Kannada musical clips and commercial advertisements) | 90.15 (Kannada clips), 85 (Commercial ads) | HDRM outperforms several models in accuracy and computational efficiency. Its focus on feature engineering and real-time emotion recognition offers significant advantages over deep learning methods and multi-modal fusion approaches.

Table 9. Proposed algorithm with descriptions

1. Input: EEG dataset (data matrix $X$).
2. Output: Augmented data matrix final_features.
3. Initialization: Clear the command window and workspace.
4. Data Preparation: Load the dataset into matrix $X$; preprocess the EEG signals with filter methods.
5. Component Selection: Perform PCA on $X$ to extract the top $k$ components, forming matrix $P\in {{R}^{n\times k}}$; perform ICA on $X$ to extract the top $k$ components, forming matrix $I\in {{R}^{n\times k}}$.
6. Feature Engineering:
   - Compute the mean: for each observation in the combined matrix $C=\left[ P\mid I \right]$, $\text{mean\_features}_{i}=\frac{1}{2k}\sum _{j=1}^{2k}{{C}_{ij}}$, $i=1,\ldots ,n$.
   - Compute the standard deviation: $\text{std\_features}_{i}=\sqrt{\frac{1}{2k-1}\sum _{j=1}^{2k}{{\left( {{C}_{ij}}-\text{mean\_features}_{i} \right)}^{2}}}$, $i=1,\ldots ,n$.
   - Compute the inner products for each pair of PCA and ICA components: $\text{inner\_products}=\left[ {{P}_{i,k}}\cdot {{I}_{i,k}} \right]$.
7. Feature Aggregation and Output Generation: Load the combined mean and standard deviation features from mean_std_file; load the inner product features from inner_product_file; normalize the resulting matrix using min-max scaling; display the augmented data matrix final_features and print its dimensions.

Table 10. Experimental setup of the proposed study

Aspect | Details
Programming Language (version) | MATLAB, version 2018
EEG Data Sampling Rate | 2000 Hz
Preprocessing | High-pass filter with a cutoff frequency of 1 Hz to remove slow-moving artifacts, using a one-pass, zero-phase, non-causal high-pass filter with the windowed time-domain (firwin) method
Filter Characteristics | Hamming window, passband ripple: 0.0194, stopband attenuation: 53 dB, filter length: 423 samples (3.305 seconds), transition bandwidth: 1.00 Hz with a -6 dB cutoff frequency at 0.50 Hz
Feature Extraction | PCA and ICA applied individually to a dataset of 66 subjects with 20,000 features (commercial advertisements) and 72 subjects with 60,000 features (Kannada musical clips)
Selected Components | Top 10 orthogonal and independent components from PCA and ICA were selected and combined to form feature matrices of dimension 66×20 (commercial ads) and 72×20 (Kannada musical clips)
Feature Engineering | Computed mean, standard deviation, and inner product for each observation across the 20 features in both experiments
Correlation Analysis | Employed to identify and remove highly correlated features (correlation threshold: 0.9)
Final Feature Set | 66×2 (mean and SD) and 66×10 (inner product) features for commercial ads; 72×2 (mean and SD) and 72×10 (inner product) features for Kannada musical clips; statistical summaries (mean, standard deviation, and inner product) combined with the top-10 PCA and ICA components
Normalization | Min-max scaling
Classification | k-nearest neighbors (k-NN), logistic regression, and random forest classifiers with an 80:20 train-test split
Evaluation | Multiple classifiers were assessed on their ability to accurately classify emotions, demonstrating the efficacy of the preprocessing and classification pipeline
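For reproducibility, the high-pass preprocessing summarized in Table 10 can be approximated as below. This is a hedged Python/SciPy sketch, not the authors' MATLAB code: it uses a Hamming-windowed FIR of the reported length and -6 dB cutoff, but applies it forward-backward (filtfilt) rather than the one-pass, delay-compensated scheme described in the table.

```python
# Approximate reproduction of the 1 Hz high-pass preprocessing in Table 10.
# Assumptions: SciPy-based design, zero-phase application via filtfilt;
# the original study reports a MATLAB implementation, so names here are illustrative.
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 2000        # Hz, sampling rate from Table 10
NUMTAPS = 423    # filter length in samples, as reported in Table 10 (odd -> Type I FIR)
CUTOFF = 0.5     # Hz, -6 dB point reported for the 1 Hz high-pass design

def highpass_eeg(raw, fs=FS):
    """High-pass filter EEG (channels x samples) with a Hamming-windowed FIR."""
    taps = firwin(NUMTAPS, CUTOFF, window="hamming", pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], raw, axis=-1)   # zero-phase (forward-backward) application

# Example: filter 10 s of synthetic 2-channel EEG
eeg = np.random.randn(2, 10 * FS)
clean = highpass_eeg(eeg)
print(clean.shape)
```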

Despite its strengths, certain limitations must be acknowledged. The use of a 2-channel BIOPAC system, while cost-effective, provides lower spatial resolution compared to multi-channel systems, potentially limiting the capture of fine-grained neural features. Additionally, while preprocessing mitigates noise, residual artifacts in EEG data may impact model accuracy. Addressing these gaps presents opportunities for future work.

The algorithm in Table 9 describes a feature selection and enhancement process applied to a given data matrix for improved analysis and machine learning. Initially, it clears the command environment to avoid conflicts from prior computations. The data is then loaded into a matrix, and a subset of key components is selected according to the number of components retained from both PCA and ICA. Feature engineering computes the mean and standard deviation for each observation in this subset, capturing central tendency and dispersion, and inner products are calculated to represent the interaction between PCA and ICA components. These engineered features are aggregated into new matrices and combined into final_features, enhancing the data's descriptive capacity. Finally, the algorithm outputs the augmented data matrix and its dimensions, confirming the correctness of the process and preparing the data for further analysis. This approach enriches the data representation and yields better classification results; Table 10 details the corresponding experimental setup, and a minimal end-to-end sketch of the pipeline is given below.
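The following is a minimal Python sketch of that pipeline (Table 9, Steps 5-7, plus the correlation pruning and classifiers of Table 10), assuming scikit-learn's PCA and FastICA. The study reports a MATLAB implementation, so the library calls, variable names, and synthetic data here are illustrative assumptions only.

```python
# Minimal sketch of the HDRM feature pipeline described in Table 9 and the
# experimental setup in Table 10. Library choices (scikit-learn) and variable
# names are assumptions; the reported implementation is MATLAB.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def hdrm_features(X, k=10):
    """Steps 5-7 of Table 9: PCA + ICA projection, statistical summaries,
    inner products, correlation pruning, and min-max scaling."""
    P = PCA(n_components=k, random_state=0).fit_transform(X)       # n x k orthogonal components
    I = FastICA(n_components=k, random_state=0).fit_transform(X)   # n x k independent components
    C = np.hstack([P, I])                                          # combined matrix C = [P | I], n x 2k

    mean_features = C.mean(axis=1, keepdims=True)                  # central tendency per observation
    std_features = C.std(axis=1, ddof=1, keepdims=True)            # spread per observation (1/(2k-1))
    inner_products = P * I                                         # elementwise PCA-ICA interactions, n x k

    feats = np.hstack([mean_features, std_features, inner_products])

    # Correlation analysis (Table 10): drop features correlated above 0.9 with a kept feature
    corr = np.abs(np.corrcoef(feats, rowvar=False))
    keep = []
    for j in range(feats.shape[1]):
        if all(corr[j, kept_idx] <= 0.9 for kept_idx in keep):
            keep.append(j)
    feats = feats[:, keep]

    return MinMaxScaler().fit_transform(feats)                     # min-max normalization

# Example with synthetic data standing in for the EEG feature matrix X
n_obs, n_feat = 72, 60000
X = np.random.randn(n_obs, n_feat)
y = np.random.randint(0, 5, size=n_obs)        # five emotion labels (placeholder)

final_features = hdrm_features(X, k=10)
X_tr, X_te, y_tr, y_te = train_test_split(final_features, y, test_size=0.2, random_state=0)

for name, clf in [("LogReg", LogisticRegression(max_iter=1000)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.3f}")
```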

8. Conclusion

This study introduced the Hybrid Dimensionality Reduction Model (HDRM) for emotion recognition from EEG signals, demonstrating its ability to outperform existing models in both accuracy and computational efficiency. By integrating PCA and ICA, HDRM creates a more discriminative feature space, improving classification performance, particularly in entertainment-based applications using Kannada musical clips and commercial advertisements.

The framework’s success in combining PCA and ICA establishes a new benchmark for feature selection and dimensionality reduction, making it suitable for real-time applications in entertainment, healthcare, and adaptive human-computer interfaces. HDRM effectively addresses key challenges in EEG-based emotion recognition, such as sensitivity to noise and computational complexity, while maintaining high performance.

Future work could focus on enhancing the system by incorporating multi-channel EEG setups to improve spatial resolution and signal-to-noise ratio. Advanced noise suppression techniques could further reduce the impact of environmental artifacts, and expanding the dataset to include diverse participants and stimuli would improve the model’s generalizability. Additionally, integrating HDRM with more sophisticated machine learning algorithms or extending its application to other physiological signals could amplify its utility.

HDRM’s scalability, efficiency, and strong classification capabilities highlight its potential for broader applications in emotion-aware technologies, therapeutic interventions, entertainment, and adaptive human-computer interfaces. Its success paves the way for modern humanized interfaces and the integration of emotional intelligence into artificial systems.

  References

[1] Lerner, J.S., Li, Y., Valdesolo, P., Kassam, K.S. (2015). Emotion and decision making. Annual Review of Psychology, 66(1): 799-823. https://doi.org/10.1146/annurev-psych-010213-115043

[2] Yin, G., Sun, S., Yu, D., Li, D., Zhang, K. (2020). An efficient multimodal framework for large-scale emotion recognition by fusing music and electrodermal activity signals. arXiv Preprint arXiv: 2008.09743.

[3] Yutian, Z., Shan, H., Jianing, Z., Ci'en, F. (2024). Design and implementation of an emotion analysis system based on EEG signals. arXiv Preprint arXiv: 2405.16121. https://doi.org/10.48550/arXiv.2405.16121

[4] Damasio, A.R., Grabowski, T.J., Bechara, A., Damasio, H., Ponto, L.L., Parvizi, J., Hichwa, R.D. (2000). Subcortical and cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience, 3(10): 1049-1056. https://doi.org/10.1038/79871

[5] Zhu, Y., Fan, H., Yuan, K. (2019). Facial expression recognition research based on deep learning. arXiv Preprint arXiv: 1904.09737. https://doi.org/10.48550/arXiv.1904.09737

[6] Fonseca, A.F. (2018). Representing pictures with emotions. arXiv Preprint arXiv: 1812.02523. https://doi.org/10.48550/arXiv.1812.02523

[7] Zhao, S., Yao, X., Yang, J., Jia, G., Ding, G., Chua, T. S., Schuller, B.W., Keutzer, K. (2021). Affective image content analysis: Two decades review and new perspectives. arXiv Preprint arXiv: 2106.16125. https://doi.org/10.48550/arXiv.2106.16125

[8] Ekman, P., Friesen, W.V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2): 124-129. https://doi.org/10.1037/h0030377

[9] Mehrabian, A. (2017). Nonverbal Communication. Routledge.

[10] Leong, S.C., Tang, Y.M., Lai, C.H., Lee, C.K.M. (2023). Facial expression and body gesture emotion recognition: A systematic review on the use of visual data in affective computing. Computer Science Review, 48: 100545. https://doi.org/10.1016/j.cosrev.2023.100545

[11] Pekrun, R. (2022). Emotions in reading and learning from texts: Progress and open problems. Discourse Processes, 59(1-2): 116-125. https://doi.org/10.1080/0163853X.2021.1938878

[12] Picard, R.W. (2010). Affective computing: From laughter to IEEE. IEEE Transactions on Affective Computing, 1(1): 11-17. https://doi.org/10.1109/T-AFFC.2010.10

[13] Cannon, W.B. (1914). The interrelations of emotions as suggested by recent physiological researches. The American Journal of Psychology, 25(2): 256-282. https://doi.org/10.2307/1413414

[14] Fussell, S.R. (Ed.). (2002). The Verbal Communication of Emotions: Interdisciplinary Perspectives. Psychology Press.

[15] Kessous, L., Castellano, G., Caridakis, G. (2010). Multimodal emotion recognition in speech-Based interaction using facial expression, body gesture and acoustic analysis. Journal on Multimodal User Interfaces, 3: 33-48. https://doi.org/10.1007/s12193-009-0025-5

[16] Keller, S.M., Samarin, M., Meyer, A., Kosak, V., Gschwandtner, U., Fuhr, P., Roth, V. (2018). Computational EEG in personalized medicine: A study in parkinson's disease. arXiv Preprint arXiv:1812.06594. https://doi.org/10.48550/arXiv.1812.06594

[17] Pfurtscheller, G., Neuper, C. (2001). Motor imagery and direct brain-computer communication. Proceedings of the IEEE, 89(7): 1123-1134. https://doi.org/10.1109/5.939829

[18] Kreibig, S.D. (2010). Autonomic nervous system activity in emotion: A review. Biological Psychology, 84(3): 394-421. https://doi.org/10.1016/j.biopsycho.2010.03.010

[19] Zhang, L., Wang, Y., Zhang, J. (2020). Emotion recognition using multi-Modal data fusion. International Journal of Cognitive Informatics and Natural Intelligence, 14(3): 21-32. https://doi.org/10.1016/j.inffus.2020.01.011

[20] Hsu, L., Chen, Y.J. (2021). An EEG study on students' learning in practical and theory-Based hospitality courses. International Journal of Adult Education and Technology (IJAET), 12(1): 40-60. https://doi.org/10.4018/IJAET.2021010103

[21] Bandara, D.S.V., Arata, J., Kiguchi, K. (2018). A noninvasive brain–computer interface approach for predicting motion intention of activities of daily living tasks for an upper-limb wearable robot. International Journal of Advanced Robotic Systems, 15(2): 1-13. https://doi.org/10.1177/1729881418767310

[22] Yi, X., Ge, L., Liu, H. (2015). Autonomic nervous system’s response in positive and negative emotion and the applications. Advances in Psychological Science, 23(1): 72-84. https://doi.org/10.3724/SP.J.1042.2015.00072

[23] Li, X., Zhang, Y., Tiwari, P., Song, D., Hu, B., Yang, M., Zhao, Z., Kumar, N., Marttinen, P. (2022). EEG based emotion recognition: A tutorial and review. ACM Computing Surveys, 55(4): 1-57. https://doi.org/10.1145/3524499

[24] Joshi, S., Joshi, F. (2022). Human emotion classification based on EEG signals using recurrent neural network and KNN. arXiv Preprint arXiv:2205.08419. https://doi.org/10.47164/ijngc.v14i2.691

[25] Ginting, A.S., Simanjuntak, R.M., Lumbantoruan, N., Sitanggang, D. (2024). EEG signal classification using K-Nearest neighbor method to measure impulsivity level. Jurnal Sisfokom (Sistem Informasi Dan Komputer), 13(2): 261-266. https://doi.org/10.32736/sisfokom.v13i2.2154

[26] Huy, N.H., Frenzel, S., Bandt, C. (2013). Two-Step linear discriminant analysis for classification of eeg data. In Data Analysis, Machine Learning and Knowledge Discovery. Cham: Springer International Publishing, pp. 51-59. https://doi.org/10.1007/978-3-319-01595-8_6

[27] Awan, A.J., Rashid, N., Iqbal, J., Ishfaque, A. (2013). Evaluation of ANN, LDA and decision trees for EEG based brain computer interface. https://www.academia.edu/9005069/Evaluation_of_ANN_LDA_and_Decision_Trees_for_EEG_Based_Brain_Computer_Interface.

[28] Veeramallu, G.K.P., Anupalli, Y., Jilumudi, S.K., Bhattacharyya, A. (2019). EEG based automatic emotion recognition using EMD and random forest classifier. In 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kanpur, India, pp. 1-6. https://doi.org/10.1109/ICCCNT45670.2019.8944903

[29] Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B. (2007). A review of classification algorithms for EEG-Based brain-computer interfaces. Journal of Neural Engineering, 4(2): R1-R13. https://doi.org/10.1088/1741-2560/4/2/R01

[30] Hinton, G.E., Osindero, S., Teh, Y.W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7): 1527-1554. https://doi.org/10.1162/neco.2006.18.7.1527

[31] Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, New York, United States, pp. 1096-1103. https://doi.org/10.1145/1390156.1390294

[32] Moon, S.E., Jang, S., Lee, J.S. (2018). Convolutional neural network approach for EEG-Based emotion recognition using brain connectivity and its spatial information. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, pp. 2556-2560. https://doi.org/10.1109/ICASSP.2018.8461315

[33] Singh Rajpoot, A., Raveendranatha Panicker, M. (2021). Subject independent emotion recognition using EEG signals employing attention driven neural networks. arXiv Preprint arXiv:2106.03461. https://doi.org/10.48550/arXiv.2106.03461

[34] Zheng, W.L., Lu, B.L. (2015). Investigating critical frequency bands and channels for EEG-Based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3): 162-175. https://doi.org/10.1109/TAMD.2015.2431497

[35] Alhagry, S., Fahmy, A.A., El-Khoribi, R.A. (2017). Emotion recognition based on EEG using LSTM recurrent neural network. International Journal of Advanced Computer Science and Applications, 8(10): 355-358. https://doi.org/10.14569/IJACSA.2017.081046

[36] Katsigiannis, S., Ramzan, N. (2018). DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE Journal of Biomedical and Health Informatics, 22(1): 98-107. https://doi.org/10.1109/JBHI.2017.2688239

[37] Xiong, F., Fan, M.Z., Yang, X., Wang, C.X., Zhou, J.L. (2025). Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN model. PLOS One, 20(5): e0322583. https://doi.org/10.1371/journal.pone.0322583

[38] Tripathi, S., Acharya, S., Sharma, R.D., Mittal, S., Bhattacharya, S. (2020). Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 31(2): 4746-4752. https://doi.org/10.1609/aaai.v31i2.19105

[39] Yin, G., Sun, S., Zhang, H., Yu, D., Li, C., Zhang, K., Zou, N. (2019). User independent emotion recognition with residual signal-Image network. In 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, pp. 3277-3281. https://doi.org/10.1109/ICIP.2019.8803627

[40] Tuncer, T., Dogan, S., Baygin, M., Acharya, U.R. (2022). Tetromino pattern based accurate EEG emotion classification model. Artificial Intelligence in Medicine, 123: 102210. https://doi.org/10.1016/j.artmed.2021.102210

[41] Ahmed, S.M., Sabur, E.T. (2023). Emotion analysis on EEG signal using machine learning and neural network. arXiv Preprint arXiv: 2307.05375. https://doi.org/10.48550/arXiv.2307.05375

[42] Jenke, R., Peer, A., Buss, M. (2014). Feature extraction and selection for emotion recognition from EEG. IEEE Transactions on Affective Computing, 5(3): 327-339. https://doi.org/10.1109/TAFFC.2014.2339834

[43] Abdi, H., Williams, L.J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4): 433-459. https://doi.org/10.1002/wics.101

[44] Hyvärinen, A., Oja, E. (2000). Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5): 411-430. https://doi.org/10.1016/S0893-6080(00)00026-5

[45] Aftanas, L.I., Golocheikine, S.A. (2001). Human anterior and frontal midline theta and lower alpha reflect emotionally positive state and internalized attention: High-resolution EEG investigation of meditation. Neuroscience Letters, 310(1): 57-60. https://doi.org/10.1016/S0304-3940(01)02094-8

[46] Fan, J., Sun, Q., Zhou, W.X., Zhu, Z. (2018). Principal component analysis for big data. arXiv Preprint arXiv:1801.01602. https://doi.org/10.48550/arXiv.1801.01602

[47] Chowdhury, U.N., Chakravarty, S.K., Hossain, M.T. (2018). Short-Term financial time series forecasting integrating principal component analysis and independent component analysis with support vector regression. Journal of Computer and Communications, 6(03): 51-67. https://doi.org/10.4236/jcc.2018.63004

[48] Mehra, A., Shukla, A., Kumawat, M., Ranjan, R., Tiwari, R. (2010). Intelligent system for speaker identification using lip features with PCA and ICA. arXiv Preprint arXiv:1004.4478. https://doi.org/10.48550/arXiv.1004.4478

[49] Cadavid, A.C., Lawrence, J.K., Ruzmaikin, A. (2007). Principal components and independent component analysis of solar and space data. Solar Image Analysis and Visualization, 37-51. https://doi.org/10.1007/978-0-387-98154-3_5

[50] Bazgir, O., Mohammadi, Z., Habibi, S.A.H. (2018). Emotion recognition with machine learning using EEG signals. In 2018 25th National and 3rd International Iranian Conference on Biomedical Engineering (ICBME), Qom, Iran, pp. 1-5. https://doi.org/10.1109/ICBME.2018.8703559

[51] Bajada, J., Bonello, F.B. (2021). Real-time EEG-based emotion recognition using discrete wavelet transforms on full and reduced channel signals. arXiv preprint, arXiv:2110.05635. https://doi.org/10.48550/arXiv.2110.05635

[52] Song, T.F., Zheng, W.M., Song, P., Cui, Z. (2020). EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing, 11(3): 532-541. https://doi.org/10.1109/TAFFC.2018.2817622

[53] Sanei, S., Chambers, J.A. (2013). EEG Signal Processing. John Wiley & Sons.

[54] Liu, J., Wu, G., Luo, Y., Qiu, S., Yang, S., Li, W., Bi, Y. (2020). EEG-Based emotion classification using a deep neural network and sparse autoencoder. Frontiers in Systems Neuroscience, 14: 43. https://doi.org/10.3389/fnsys.2020.00043

[55] Ju, X., Li, M., Tian, W., Hu, D. (2024). EEG-Based emotion recognition using a temporal-difference minimizing neural network. Cognitive Neurodynamics, 18(2): 405-416. https://doi.org/10.1007/s11571-023-10004-w

[56] Gao, X., Wang, P., Yang, S. (2020). EEG-based emotion recognition using a 3D convolutional neural network with autoencoder. Frontiers in Systems Neuroscience, 14: 43. https://doi.org/10.3389/fnsys.2020.00043

[57] Li, X., Song, D., Zhang, P., Zhang, Y., Hou, Y., Hu, B. (2018) Exploring EEG features in cross-subject emotion recognition. Frontiers in Neuroscience, 12: 162. https://doi.org/10.3389/fnins.2018.00162

[58] Ahmed, S.M.M., Sabur, E.T. (2023). Emotion analysis on EEG signal using machine learning and neural network. arXiv Preprint, arXiv:2307.05375. https://doi.org/10.48550/arXiv.2307.05375

[59] Al-Fahoum, A.S., Al-Fraihat, A.A. (2014). Methods of EEG signal features extraction using linear analysis in frequency and time‐Frequency domains. International Scholarly Research Notices, 2014(1): 730218. https://doi.org/10.1155/2014/730218

[60] Übeyli, E.D. (2009). Combined neural network model employing wavelet coefficients for EEG signals classification. Digital Signal Processing, 19(2): 297-308. https://doi.org/10.1016/j.dsp.2008.07.004

[61] Liu, J., Wu, H., Zhang, L., Zhao, Y. (2022). Spatial-Temporal transformers for EEG emotion recognition. In Proceedings of the 6th International Conference on Advances in Artificial Intelligence, New York, United States, pp. 116-120. https://doi.org/10.1145/3571560.3571577

[62] Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., Patras, I. (2012). DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3(1): 18-31. https://doi.org/10.1109/T-AFFC.2011.15

[63] Huang, D., Guan, C.T., Ang, K.K., Zhang, H.H., Pan, Y.Z. (2012). Asymmetric spatial pattern for EEG-based emotion detection. In the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, pp. 1-7. https://doi.org/10.1109/IJCNN.2012.6252390

[64] Jirayucharoensak, S., Pan-Ngum, S., Israsena, P. (2014). EEG‐based emotion recognition using deep learning network with principal component based covariate shift adaptation. The Scientific World Journal, 2014(1): 627892. https://doi.org/10.1155/2014/627892

[65] Petrantonakis, P.C., Hadjileontiadis, L.J. (2010). Emotion recognition from EEG using higher-order crossings. IEEE Transactions on Information Technology in Biomedicine, 14(2): 186-197. https://doi.org/10.1109/TITB.2009.2034649

[66] Chunawale, A., Bedekar, M. (2024). Electroencephalogram based human emotion classification for valence and arousal using machine learning approach. Indonesian Journal of Electrical Engineering and Computer Science, 33(2): 920-931. http://doi.org/10.11591/ijeecs.v33.i2.pp920-931

[67] Grus, J. (2015). Data Science from Scratch: First Principles with Python. O'Reilly Media.

[68] Tiwari, U., Chakraborty, R., Kopparapu, S.K. (2022). Spectro temporal EEG biomarkers for binary emotion classification. arXiv Preprint, arXiv:2202.03271. https://doi.org/10.48550/arXiv.2202.03271

[69] Patel, P., Annavarapu, R.N. (2021). EEG-based human emotion recognition using entropy as a feature extraction measure. Brain Informatics, 8(1): 20. https://doi.org/10.1186/s40708-021-00141-5

[70] Subasi, A. (2007). EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Systems with Applications, 32(4): 1084-1093. https://doi.org/10.1016/j.eswa.2006.02.005

[71] Murugappan, M., Ramachandran, N., Sazali, Y. (2010). Classification of human emotion from EEG using discrete wavelet transform. Journal of Biomedical Science and Engineering, 3(4): 390-396. https://doi.org/10.4236/jbise.2010.34054

[72] Lin, Y.P., Wang, C.H., Jung, T.P., Wu, T.L., Jeng, S.K., Duann, J.R., Chen, J.H. (2010). EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering, 57(7): 1798-1806. https://doi.org/10.1109/TBME.2010.2048568

[73] Bazgir, O., Mohammadi, Z., Habibi, S.A.H. (2018). Emotion recognition with machine learning using EEG signals. In 2018 25th National and 3rd International Iranian Conference on Biomedical Engineering (ICBME), Qom, Iran, pp. 1-5. https://doi.org/10.1109/ICBME.2018.8703559

[74] Blanco-Rios, M.A., Candela-Leal, M.O., Orozco-Romo, C., Remis-Serna, P., Velez-Saboya, C.S., Lozoya-Santos, J.D.J., Cebral-Loureda, M., Ramirez-Moreno, M. A. (2024). Real-time EEG-based emotion recognition model using principal component analysis and tree-Based models for neurohumanities. arXiv Preprint, arXiv:2401.15743. https://doi.org/10.3389/fnhum.2024.1319574

[75] Phan, K.L., Wager, T., Taylor, S.F., Liberzon, I. (2002). Functional neuroanatomy of emotion: A meta-analysis of emotion activation studies in PET and fMRI. Neuroimage, 16(2): 331-348. https://doi.org/10.1006/nimg.2002.1087

[76] Al-Nafjan, A., Hosny, M., Al-Ohali, Y., Al-Wabil, A. (2017). Review and classification of emotion recognition based on EEG brain-computer interface system research: A systematic review. Applied Sciences, 7(12): 1239. https://doi.org/10.3390/app7121239

[77] Taurah, S.P., Bhoyedhur, J., Sungkur, R.K. (2020). Emotion-based adaptive learning systems. In International Conference on Machine Learning for Networking, Springer, Cham, pp. 273-286. https://doi.org/10.1007/978-3-030-45778-5_18

[78] Wang, X.W., Nie, D., Lu, B.L. (2014). Emotional state classification from EEG data using machine learning approach. Neurocomputing, 129: 94-106. https://doi.org/10.1016/j.neucom.2013.06.046

[79] Hair, J.F., Anderson, R.E., Tatham, R.L., Black, W.C. (1995). Multivariate Data Analysis with Readings. Prentice-Hall, Inc, USA.

[80] Díaz-Uriarte, R., Alvarez de Andrés, S. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7: 1-13. https://doi.org/10.1186/1471-2105-7-3

[81] Lessmann, S., Baesens, B., Seow, H.V., Thomas, L.C. (2015). Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. European Journal of Operational Research, 247(1): 124-136. https://doi.org/10.1016/j.ejor.2015.05.030

[82] Suhaimi, N.S., Mountstephens, J., Teo, J. (2020). EEG‐Based emotion recognition: A state‐of‐the‐art review of current trends and opportunities. Computational Intelligence and Neuroscience, 2020(1): 8875426. https://doi.org/10.1155/2020/8875426

[83] Zhang, M.L., Zhou, Z.H. (2007). ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7): 2038-2048. https://doi.org/10.1016/j.patcog.2006.12.019

[84] Ho, T.K. (1995). Random decision forests. In Proceedings of 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, pp. 278-282. https://doi.org/10.1109/ICDAR.1995.598994

[85] Thejaswini, M.S., Kumar, G.H., Aradhya, V.M., Narendra, R., Suresha, M., Guru, D.S. (2023). MindData for enhanced entertainment: Building a comprehensive EEG dataset of emotional responses to audio-visual stimuli. In International Conference on Applied Intelligence and Informatics. Cham: Springer Nature Switzerland, pp. 82-94. https://doi.org/10.1007/978-3-031-68639-9_6

[86] Emotions based EEG dataset. https://www.kaggle.com/datasets/thejaswinishrinivas/emotions-based-eeg-dataset.

[87] Zheng, W.L., Lu, B.L. (2015). Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3): 162-175. https://doi.org/10.1109/TAMD.2015.2431497

[88] Li, M., Xu, H., Liu, X., Lu, S. (2018). Emotion recognition from multichannel EEG signals using K-nearest neighbor classification. Technology and Health Care, 26(1_suppl): 509-519. https://doi.org/10.3233/THC-174836

[89] Liu, Y.S., Sourina, O., Nguyen, M.K. (2010). Real-time EEG-based human emotion recognition and visualization. In 2010 International Conference on Cyberworlds, Singapore, pp. 262-269. https://doi.org/10.1109/CW.2010.37