Performance Analysis of Hybrid – BCI Signals Using CNN for Motor Movement Classification


Shelishiyah R*, Deepa Beeta Thiyam

Department of Biomedical Engineering, Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology, Chennai 600062, India

Corresponding Author Email: thiyamdeepabeeta@veltech.edu.in

Pages: 2143-2152 | DOI: https://doi.org/10.18280/ts.410442

Received: 18 November 2023 | Revised: 8 February 2024 | Accepted: 7 April 2024 | Available online: 31 August 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The design of a hybrid brain-computer interface (BCI) system is an upgrade over existing single-modality BCI systems. Contemporary studies that combine two modalities for an effective BCI show that electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are a particularly convenient pairing, and such hybrid systems have made various multi-class classification problems easier to solve. The motor imagery and motor execution tasks performed with the right/left arm and hand were taken from the CORE dataset, which consists of 15 male subjects. In most prior studies, feature extraction was performed after extensive pre-processing and channel selection to obtain good results. In this work, deep learning methods were used: a Convolutional Neural Network (CNN) together with Thin-ICA for feature extraction and classification of EEG signals, and a CNN alone for feature extraction and classification of fNIRS with minimal pre-processing and data augmentation. Performance was compared between the CNN model and a combined LSTM-CNN classifier. The proposed CNN model achieved 98.3% accuracy with minimal pre-processing and no channel selection algorithms. Evaluation metrics such as accuracy, precision, recall, F1 score and the confusion matrix were used to evaluate classification performance. We conclude that the proposed CNN model can classify contralateral and ipsilateral data with a lower computational load and good accuracy.

Keywords: 

time series CNN, hybrid BCI, Thin-ICA, EEG, fNIRS, motor movement, arm-hand classification

1. Introduction

A brain-computer interface (BCI) integrates the brain with different actuators through a computer, and making such systems practically feasible has captured the attention of many researchers. Non-invasive brain recording techniques such as EEG, fMRI and fNIRS are commonly used for this application [1, 2]. However, each of these modalities has its own merits and demerits. This led to hybrid BCIs, which combine two modalities so that each can complement the other's limitations [3]. Hybrid BCIs have been applied to many problems, such as Alzheimer's disease detection, mental arithmetic, motor imagery and motor execution, whose outputs are converted into command signals for clinical and non-clinical applications [4-7]. Many of the limitations of earlier hybrid BCI set-ups have now been overcome by improved acquisition devices [8].

Recent works describe methods of decoding motor imagery and motor execution tasks acquired from EEG and fNIRS modalities [9-11]. EEG has good temporal resolution, while fNIRS has good spatial resolution despite its slow response, which stems from the underlying hemodynamic activity. fNIRS is also better than fMRI in terms of cost and acquisition complexity [12, 13].

Hybrid EEG-fNIRS BCIs have often been applied to classifying motor imagery and motor execution tasks, with different feature extraction techniques and classifiers used to achieve good classification accuracy. Machine learning classifiers such as Linear Discriminant Analysis (LDA), tree classifiers, K-Nearest Neighbors (KNN) and Support Vector Machines (SVM) are most commonly employed for the classification of motor imagery or motor tasks, especially for EEG or fNIRS alone [14-26]. The same classifiers have been applied to hybrid datasets, either as a single model, as a combination of two models, or with feature selection [27, 28]; however, good accuracy was not achieved. Deep neural networks built from fully connected layers have also been used, but achieved lower accuracy [29-31]. With the aim of increasing the accuracy of machine learning algorithms on hybrid datasets, some works applied channel selection before extracting features [32-35], which was indeed found to raise model accuracy from approximately 92% to 98%. The performance of such models also varies with the choice of feature extraction algorithm, such as Common Spatial Patterns (CSP), Regularized Common Spatial Patterns (RCSP) and Principal Component Analysis (PCA) [36, 37]. Both linear and non-linear features have been considered [38-40].

In the current literature there are limited data for ipsilateral limb movements, whose signals are more spatially localized and can therefore lead to over-generalization and misclassification [41, 42]. Moreover, because both hands perform movements, the number of trials and the duration of task performance are also limited, and increases in accuracy have usually come at the cost of more complex methods. Channel selection can raise accuracy but sacrifices spatial information, and machine learning models without channel selection yield less than 98% accuracy. The objectives of the current study are to provide a clear classification of ipsilateral (spatially localized) and contralateral (spatially distinct) hand movements, and to apply deep learning methods that improve classification accuracy with fewer pre-processing steps, thereby reducing the computational load. The dataset used for this study consists of motor imagery (MI) and motor execution for the right arm, left arm, right hand and left hand. This makes classification difficult because the cortical areas for these movements lie adjacent to one another, which makes misclassification more likely. The proposed model harnesses the advantages of a CNN with a larger number of filters to address the classification problem in a simpler manner. MATLAB R2019b was used for data preparation, pre-processing and feature extraction (Thin-ICA, for EEG only), while the CNN was implemented in Python 3.11. The remainder of the paper is organized as follows: the Methodology covers dataset description, data augmentation, pre-processing, feature extraction and classification; this is followed by Results and Discussion, with performance comparisons against a second proposed model and against previous works, and finally the Conclusion.

2. Methodology

The EEG-fNIRS dataset underwent augmentation, pre-processing and feature extraction before being presented to the classifier. The motor execution data from EEG and fNIRS were processed and classified as shown in Figure 1: Figure 1(a) shows the processing flow for training signals and Figure 1(b) the flow for testing signals. Besides improving accuracy with simpler modules, the proposed methodology also takes into account model architectural complexity and generalization.

Figure 1. Block diagrams of methodology

2.1 Dataset

The dataset provided by Buccino et al. [10] contains simultaneous recordings of EEG and fNIRS taken during motor execution. Fifteen healthy male subjects aged between 22 and 54 participated in the experiment and were given four different upper-limb tasks: flexion of the left/right arm/hand. The acquisition system had 21 channels for EEG and 34 channels for fNIRS, sampled at 250 Hz and 10.42 Hz respectively. Each trial lasts 12 seconds, comprising 6 seconds of rest and 6 seconds of movement, with 25 such trials per class. The 6 seconds of movement are augmented for further processing.

2.2 Data augmentation

The literature suggests that the choice of time intervals affects classification quality, and various data augmentation methods are available depending on the data [43-51]. Two types of augmentation, time slicing and overlap, were applied to both EEG and fNIRS in the current dataset. The stepwise algorithm for time slicing was adapted from the study of Naseer and Hong [52]. Since each trial contains 6 seconds of task performance, this window was sliced, and an overlap of 2 seconds was used to increase the data size.

Figure 2. Data augmentation process

For each trial, the task was performed for only 6 seconds. In time slicing, this 6-second task window was sliced into three intervals: [2 5], [3 6] and [4 7] seconds. The first interval begins one second after the task cue, since fNIRS has a hemodynamic delay. For each three-second interval, the number of samples is 750 for EEG and 30 for fNIRS (for a single trial), calculated from their sampling frequencies. The dataset contains events and time points for the rest and task cues: the time point one second after the task cue was fetched, and 750 EEG samples and 30 fNIRS samples were retrieved from the sample data. This was repeated for the [3 6] and [4 7] intervals by starting 2 and 3 seconds after the task cue respectively, so that the chosen intervals overlap by 2 seconds. This was continued for 25 trials per class for all 15 subjects; the process is shown in Figure 2. Hence, the total number of samples obtained for a single subject was 221,400 for EEG and 9,000 for fNIRS.
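A minimal sketch of this slicing-with-overlap step is given below. It is illustrative only: the variable names, the position of the cue in the example trial and the exact window arithmetic are our assumptions, not the authors' code.

```python
import numpy as np

def slice_trial(signal, fs, cue_time, offsets=(1, 2, 3), win_sec=3):
    """Cut overlapping windows from one trial.

    signal   : (n_channels, n_samples) array for the whole trial
    fs       : sampling frequency in Hz (250 for EEG, 10.42 for fNIRS)
    cue_time : time of the task cue in seconds
    offsets  : window start, in seconds after the cue (1, 2, 3 s reproduce
               the paper's overlapping [2 5], [3 6], [4 7] s intervals)
    win_sec  : window length in seconds (3 s -> 750 EEG / ~30 fNIRS samples)
    """
    win = int(round(win_sec * fs))
    slices = []
    for off in offsets:
        start = int(round((cue_time + off) * fs))
        slices.append(signal[:, start:start + win])
    return np.stack(slices)              # shape: (n_intervals, n_channels, win)

# Hypothetical example: one 12-s, 21-channel EEG trial at 250 Hz, cue at t = 1 s
eeg_trial = np.random.randn(21, 12 * 250)
augmented = slice_trial(eeg_trial, fs=250, cue_time=1.0)
print(augmented.shape)                   # (3, 21, 750)
```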

2.3 Pre-processing

2.3.1 Preprocessing for fNIRS

EEG and fNIRS signals were pre-processed in parallel. The fNIRS signals are provided as light-intensity (wavelength) information at W1 = 760 nm (red) and W2 = 850 nm (infra-red), which must be converted to optical density information [53-56]. This conversion uses the modified Beer-Lambert law, which maps the wavelength data to changes in optical density and hence to changes in oxy- and deoxy-hemoglobin concentration across a cortical region in response to an activity [57, 58]. Note that the fNIRS responses of interest are generated during activity (the beta band of EEG) and at motor imagery frequencies (the mu band). The Differential Pathlength Factor (DPF) was set to 5.5 as suggested by Duncan et al. [59]. The concentration changes were then band-pass filtered with a fourth-order Infinite Impulse Response (IIR) filter with a passband of [0.01 0.1] Hz [60].
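A minimal sketch of the modified Beer-Lambert conversion is shown below. The extinction coefficient matrix `EXT` contains placeholder values that must be replaced with tabulated coefficients for 760 nm and 850 nm, and the source-detector distance `d_cm` is an assumed illustrative value; only the DPF of 5.5 follows the paper.

```python
import numpy as np

# Placeholder molar extinction coefficients [[HbO, HbR] at 760 nm; [HbO, HbR] at 850 nm].
# Replace with tabulated values from the literature before use.
EXT = np.array([[0.55, 1.67],
                [1.06, 0.79]])

def mbll(intensity_760, intensity_850, d_cm=3.0, dpf=5.5):
    """Convert raw intensities (channels x samples) to HbO/HbR concentration changes."""
    # Optical density change relative to the mean intensity of each channel
    od_760 = -np.log10(intensity_760 / intensity_760.mean(axis=-1, keepdims=True))
    od_850 = -np.log10(intensity_850 / intensity_850.mean(axis=-1, keepdims=True))
    od = np.stack([od_760, od_850])                     # (2, channels, samples)
    # Solve the 2x2 system per sample: OD = EXT @ [dHbO, dHbR] * d * DPF
    conc = np.linalg.solve(EXT, od.reshape(2, -1)) / (d_cm * dpf)
    d_hbo, d_hbr = conc.reshape(od.shape)
    return d_hbo, d_hbr
```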

2.3.2 Pre-processing for EEG

The EEG was pre-processed with a fifth-order IIR band-pass filter, since this gave a better signal-to-noise ratio [57]. The mu (µ) and beta (β) EEG bands were extracted using cut-off frequencies of 8 Hz and 30 Hz, which together carry both the motor imagery and the motor execution information.

A Gaussian (z-score) transformation was used to normalize the signals for both EEG and fNIRS, by subtracting the mean (μ) and dividing by the standard deviation (σ), as given in Eq. (1).

$x_{n o r m}(t)=\frac{x(t)-\mu}{\sigma}$     (1)
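The filtering and normalization steps can be sketched with SciPy as follows. This is a minimal illustration: a Butterworth design and zero-phase filtering are assumed, and the function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, low, high, fs, order):
    """Zero-phase IIR (Butterworth) band-pass filter along the last axis."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def zscore(x):
    """Eq. (1): subtract the mean and divide by the standard deviation."""
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

# EEG: 8-30 Hz, 5th order; fNIRS concentration changes: 0.01-0.1 Hz, 4th order
eeg_filtered   = zscore(bandpass(np.random.randn(21, 750), 8, 30, fs=250, order=5))
fnirs_filtered = zscore(bandpass(np.random.randn(34, 300), 0.01, 0.1, fs=10.42, order=4))
```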

2.4 Multi-class CSP

The four classes were treated as two two-class problems: right hand vs. left hand and right arm vs. left arm. The CSP filters for these two-class problems were computed by solving the Rayleigh quotient maximization in Eq. (2).

$J(b)=\frac{b^T \Sigma_1 b}{b^T \Sigma_2 b}$      (2)

where $\Sigma_1$ and $\Sigma_2$ denote the covariance matrices of classes 1 and 2, and b is the spatial filter obtained by solving the generalized eigenvalue decomposition. In this work, 2 filters were used per class for the 4 classes, hence N = 8.
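A compact sketch of the CSP computation via the generalized eigenvalue problem is given below (our illustrative code; the covariance estimation and the way filters are retained follow common CSP practice, not necessarily the authors' exact implementation).

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, n_filters=2):
    """Return 2*n_filters CSP spatial filters for a two-class problem.

    trials_k : (n_trials, n_channels, n_samples) array for class k.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)   # channel covariance

    s1, s2 = mean_cov(trials_1), mean_cov(trials_2)
    # Generalized eigenvalue problem: s1 w = lambda (s1 + s2) w
    vals, vecs = eigh(s1, s1 + s2)
    order = np.argsort(vals)
    # Keep the filters at both ends of the spectrum (most discriminative per class)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks].T                                    # (2*n_filters, n_channels)
```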

2.5 Thin-ICA CSP

The Thin-ICA method first computes second- and fourth-order statistics and then estimates the independent components, as done by Thiyam et al. [39]. The number of independent components can be chosen as required. The EEG data were first treated as binary problems, one set being right hand/left hand and the other right arm/left arm. The EEG signal x(t) is represented by the linear model in Eq. (3).

$x(t)=A s(t)+w(t)$      (3)

where $A \in R^{m \times n}$ is the mixing matrix, $w(t) \in R^m$ is the zero-mean Gaussian noise and $s(t) \in R^n$ is the independent source vector. The output is estimated as in Eq. (4).

$y(t)=U^T z(t)$     (4)

where $U$ ($U=W A$) is the orthogonal matrix obtained after pre-whitening and $z(t)$ represents the pre-whitened observations, given as $z(t)=U s(t)+W n(t) \in R^N$.

The contrast function which estimates the second and higher-order statistics is given in Eq. (5).

$\varphi_{\Theta}(U)=\gamma_4 \sum_{n=1}^N \sum_{i=1}^P\left|\operatorname{Cum}\left(y_i\left(t_n\right), \ldots, y_i\left(t_n\right)\right)\right|^2+\gamma_2 \sum_{n=1}^N \sum_{i=1}^P \sum_{\tau \in T}\left|\operatorname{Cum}\left(y_i\left(t_n+\tau\right), y_i\left(t_n\right)\right)\right|^2$      (5)

where N denotes the number of splits and P denotes the number of independent components to extract. Here N = 3 (three splits of the data), P = 20 and T = {1, 2, …, 6} (delays).

It is important that the extracted independent components belong to the MI and motor execution tasks. To ensure this, the CSP matrix is used to initialize the unmixing matrix of the Thin-ICA algorithm, and the EEG signals are filtered with the resulting spatial filters. The Thin-ICA parameters can be tuned; here 2 and 5 features per class were taken (8 and 20 independent components) with 3 splits. The classifier features were obtained from the log-variances of the filtered signals, as given in Eq. (6).

Feature $_i=\log \left(\operatorname{var}\left(u_i^T z\right)\right)$      (6)

The extracted EEG features were fed as input to the CNN for classification, whereas the pre-processed fNIRS data were given directly to the CNN for feature extraction and classification.
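Eq. (6) can be sketched as follows (illustrative only; `filters` stands for the ThinICA-CSP spatial filters, which this snippet does not compute):

```python
import numpy as np

def log_variance_features(z, filters):
    """Eq. (6): log-variance of each spatially filtered signal.

    z       : (n_channels, n_samples) pre-whitened EEG segment
    filters : (n_filters, n_channels) spatial filters (from ThinICA-CSP)
    """
    projected = filters @ z                    # u_i^T z for every filter i
    return np.log(projected.var(axis=-1))      # one feature per filter
```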

2.6 Convolutional neural network (CNN)

Figure 3. Percentage of positive and negative values in the dataset

Pre-processed fNIRS data (34 channels) were merged with the EEG features (21 channels) by zero-padding the EEG data to match the dimensions of the fNIRS data. The dataset was checked for the proportion of negative and positive values in order to choose an appropriate activation function; negative values were found to outnumber positive values, as shown in Figure 3.

The architecture of this model is shown in Figure 4. The one-dimensional CNN (1D-CNN) uses 5 convolutional layers with a kernel size of 3, followed by 5 dense layers. The first three convolutional layers have 128 filters and the last two have 64 filters.

In the literature, the ELU activation function has been reported to perform better when the data contain a higher proportion of negative values, especially for speech signals [31, 61-63]. ELU was therefore tested on this dataset and compared with ReLU; the generalization between training and testing data and the speed of convergence were better with ELU. Hence, owing to the high proportion of negative values, ELU was used throughout, together with the Adam optimizer [64].

Pooling layers down-sample the input with a window size of 1. Dropout layers are added after each of the first three dense layers to avoid overfitting. The softmax activation function is used in the last dense layer to separate the four classes of motor execution.
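A minimal Keras sketch of this architecture is given below. The convolutional layer sizes follow the description above; the pooling placement, dense-layer widths, dropout rate and input length are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape, n_classes=4):
    """1D-CNN: 5 conv layers (128/128/128/64/64 filters, kernel 3) + 5 dense layers."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for n_filters in (128, 128, 128, 64, 64):
        model.add(layers.Conv1D(n_filters, kernel_size=3, padding="same", activation="elu"))
        model.add(layers.MaxPooling1D(pool_size=1))      # window size of 1, as described
    model.add(layers.Flatten())
    for units in (256, 128, 64):                         # assumed dense sizes
        model.add(layers.Dense(units, activation="elu"))
        model.add(layers.Dropout(0.3))                   # assumed dropout rate
    model.add(layers.Dense(32, activation="elu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn(input_shape=(30, 34))                  # e.g., 30 time steps x 34 channels
```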

2.7 Long Short-Term Memory (LSTM)

LSTM is a type of RNN that uses memory cells and gates to discard information from previous states that will not be used later. Five LSTM layers were used, with 256 units in the first layer, 128 in the second, 64 in the third and fourth, and 32 in the fifth. ELU and softmax activation functions were used with the Adam optimizer, and two dense layers produce the final class outputs.

2.8 CNN + LSTM

The architecture of this model is shown in Figure 5. The model was built with two CNN layers and two LSTM layers: the CNN layers have 128 and 64 filters, while both LSTM layers have 64 units. Four dense layers follow the LSTM layers, with ELU and softmax activations.

Figure 4. Architecture of CNN model

Figure 5. Architecture of CNN + LSTM model

Batch normalization and dropout layers were added to stabilize learning and avoid overfitting, and the model was compiled with the Adam optimizer.
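A corresponding Keras sketch of the CNN + LSTM model follows (the dense-layer sizes, dropout rate and batch-normalization placement are assumed for illustration):

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape, n_classes=4):
    """Two Conv1D layers (128, 64 filters) followed by two LSTM layers (64 units each)."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for n_filters in (128, 64):
        model.add(layers.Conv1D(n_filters, kernel_size=3, padding="same", activation="elu"))
        model.add(layers.BatchNormalization())
    model.add(layers.LSTM(64, return_sequences=True))
    model.add(layers.LSTM(64))
    for units in (128, 64, 32):                          # assumed dense sizes
        model.add(layers.Dense(units, activation="elu"))
        model.add(layers.Dropout(0.3))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```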

3. Results and Discussion

The Hybrid-BCI dataset was taken from https://core.ac.uk/display/150054285 and consists of simultaneous EEG and fNIRS recordings during upper-limb motor execution tasks. MATLAB R2019b and Python 3.11 (Jupyter Notebook and TensorFlow) were used for data processing and classification.

During pre-processing, the noise was found to be stronger at higher frequencies, distorting the signals, as shown in Figure 6 for the HbO (oxy-hemoglobin) data; the HbR (deoxy-hemoglobin) data behave similarly.

Figure 6. Comparison of signal distortion between two band-pass filters for fNIRS

However, the frequency responses of the different filter bands in Figure 7 show that the [0.01 Hz–0.1 Hz] band remains in phase, so the band-pass filter for fNIRS was set to this frequency band.

While pre-processing the EEG signals, it was observed that the fifth-order IIR Butterworth filter has an almost constant group delay, as seen in Figure 8.

The training and testing data were split as per Table 1. Fifteen subjects were recruited for motor execution of the right/left hand and right/left arm. Initial experiments used the time slicing method to determine which time interval carried the most information; the [2 4] interval was chosen and presented to the CNN.

However, accuracy suffered because the amount of input data was insufficient. Hence the data were augmented over three intervals with overlap, as described in the Methodology, and the augmented dataset was split into training and testing sets in a 60:40 ratio. The total EEG data after augmentation over the three intervals was 1,126,500 samples and the fNIRS data 46,500 samples. The data were mixed and fed to the CNN model.

Table 1. Dataset split-up

Total No. of Subjects: 15
Training Data Size: 122,700 (60%)
Testing Data Size: 81,800 (40%)

Figure 7. Comparison of frequency response of different band-pass filters for fNIRS: (a) [0.01 Hz–0.5 Hz], (b) [0.01 Hz–0.2 Hz], (c) [0.01 Hz–0.1 Hz], (d) [0.01 Hz–0.09 Hz]

Figure 8. Group delay of the fifth-order IIR Butterworth band-pass filter

Figure 9. Confusion matrix: (a) fNIRS-CNN model, (b) EEG-CNN model

EEG signals have good temporal resolution while fNIRS has good spatial resolution [63]. Time-series CNN models exploit spatial structure better than temporal structure, so in a hybrid BCI the CNN can be applied more effectively to fNIRS than to EEG [63]. Accordingly, after pre-processing, the fNIRS data were fed directly into the CNN model and achieved 99.9% accuracy. However, when the pre-processed EEG was fed to the CNN, the performance was very poor, since the CNN could not extract the temporal features, as seen in Figure 9.

To rectify this, the EEG was given to Thin-ICA for feature extraction and these features were fed to the CNN for classification with 5-fold cross-validation, which gave better results with 98.3% accuracy. The depth of the CNN was tested with 2, 3 and 5 layers; lower depths compromised the generalization between training and testing data, as seen in Figure 10, while a 6-layer CNN reached only 72% accuracy, far below the 5-layer model. Hence a depth of 5 layers was retained for the proposed model. This was also compared with LSTM and LSTM + CNN models: the LSTM alone gave a poor accuracy of 26%, while the combination of LSTM and CNN gave 98%.

Figure 10. Training and testing accuracy: (a) 2-layer CNN, (b) 3-layer CNN, (c) 5-layer CNN

Figure 11. Confusion matrix for 5-layer CNN model

Figure 12. Confusion matrix for hybrid CNN model

Figure 13. Comparison of current work with existing literature (for Hybrid BCI-EEG+fNIRS)

Figure 11 shows the confusion matrix for 5-layer CNN and Figure 12 shows the confusion matrix of Hybrid CNN.

The difference in accuracy between proposed models 1 and 2 was 0.3%. In terms of architectural complexity, the CNN model has 5 convolutional layers, while the CNN + LSTM model has 2 CNN and 2 LSTM layers. Figure 13 compares the accuracies of previous and current work, and the current model shows the better performance.

The proposed models were also evaluated with precision, recall and F1 score. As shown in Figure 14, the two proposed models performed similarly.

Figure 14. Comparison of performance metrics
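These metrics can be obtained, for example, with scikit-learn (an illustrative snippet; `y_true` and `y_pred` stand for the test labels and the model predictions):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def evaluate(y_true, y_pred):
    """Accuracy, macro-averaged precision/recall/F1 and the 4x4 confusion matrix."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
        "confusion": confusion_matrix(y_true, y_pred),
    }
```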

4. Conclusion

Hybrid BCIs built from EEG and fNIRS have become a common way to obtain good classification. However, the computational complexity remains high when good channels must be selected before feature extraction and good features after it. In the proposed system, a CNN was used to bypass these steps for fNIRS, and Thin-ICA was used for feature extraction from EEG. The classification accuracy of the CNN model was 98.3% and that of the CNN + LSTM model 98%. Although the performance metrics of both proposed models were similar, the generalization and accuracy were better for the CNN. We therefore conclude that the CNN model outperforms previous works in terms of accuracy, complexity and generalization. The model also carries a lower computational load for both EEG and fNIRS signals and performs better than earlier models that relied on additional pre-processing such as channel selection, noise removal and feature selection. Other feature extraction methods can be implemented in the future, on the same or a different dataset, to verify the consistency of the model's performance. Thus, we propose a less complex, well-generalized CNN model that gives good classification accuracy for multi-class, contralateral and ipsilateral hand movements.

Nomenclature

b: Spatial filter
J(b): CSP objective function (Rayleigh quotient)
A: Mixing matrix
s: Independent source
w: Zero-mean Gaussian noise
U: Orthogonal matrix
z: Pre-whitened observations
N: Number of splits
P: Number of independent components
T: Delays
Cum: Cumulant
var: Variance
log: Logarithm
$u_i^T z$: Filtered signal

Greek symbols

µ: Mean
σ: Standard deviation
Σ: Covariance matrix
φ: Contrast function
γ: Order of the statistic
τ: Delay

Subscripts

i and n: Iteration indices
norm: Normalization

Superscripts

T: Transpose

  References

[1]  Müller-Putz, G., Leeb, R., Tangermann, M., Höhne, J., Kübler, A., Cincotti, F., Mattia, D., Rupp, R., Müller, K.R., Millan, J.D.R. (2015). Towards noninvasive hybrid brain–computer interfaces: Framework, practice, clinical application, and beyond. Proceedings of the IEEE, 103(6): 926-943. https://doi.org/10.1109/JPROC.2015.2411333

[2]  Visani, E., Canafoglia, L., Gilioli, I., et al. (2015). Hemodynamic and EEG time-courses during unilateral hand movement in patients with cortical myoclonus. An EEG-fMRI and EEG-TD-fNIRS study. Brain Topography, 28: 915-925. https://doi.org/10.1007/s10548-014-0402-6

[3]  Amiri, S., Fazel-Rezai, R., Asadpour, V. (2013). A review of hybrid brain-computer interface systems. Advances in Human-Computer Interaction, 2013: 187024. https://doi.org/10.1155/2013/187024

[4]  Cicalese, P.A., Li, R., Ahmadi, M.B., Wang, C., Francis, J.T., Selvaraj, S., Paul, E.S., Zhang, Y. (2020). An EEG-fNIRS hybridization technique in the four-class classification of Alzheimer’s disease. Journal of Neuroscience Methods, 336: 108618. https://doi.org/10.1016/j.jneumeth.2020.108618

[5]  Khan, M.J., Hong, K.S. (2017). Hybrid EEG–fNIRS-based eight-command decoding for BCI: Application to quadcopter control. Frontiers in Neurorobotics, 11: 6. https://doi.org/10.3389/fnbot.2017.00006

[6]  Cao, L., Li, J., Ji, H., Jiang, C. (2014). A hybrid brain computer interface system based on the neurophysiological protocol and brain-actuated switch for wheelchair control. Journal of Neuroscience Methods, 229: 33-43. https://doi.org/10.1016/j.jneumeth.2014.03.011

[7]  Yin, X., Xu, B., Jiang, C., Fu, Y., Wang, Z., Li, H., Shi, G. (2015). A hybrid BCI based on EEG and fNIRS signals improves the performance of decoding motor imagery of both force and speed of hand clenching. Journal of Neural Engineering, 12(3): 036004. https://doi.org/10.1088/1741-2560/12/3/036004

[8]  Ahn, S., Jun, S.C. (2017). Multi-modal integration of EEG-fNIRS for brain-computer interfaces–current limitations and future directions. Frontiers in Human Neuroscience, 11: 503. https://doi.org/10.3389/fnhum.2017.00503

[9]  Lachert, P., Janusek, D., Pulawski, P., Liebert, A., Milej, D., Blinowska, K.J. (2017). Coupling of Oxy-and Deoxyhemoglobin concentrations with EEG rhythms during motor task. Scientific Reports, 7(1): 15414. https://doi.org/10.1038/s41598-017-15770-2

[10]  Buccino, A.P., Keles, H.O., Omurtag, A. (2016). Hybrid EEG-fNIRS asynchronous brain-computer interface for multiple motor tasks. PloS ONE, 11(1): e0146610. https://doi.org/10.1371/journal.pone.0146610

[11]  Liu, Z., Shore, J., Wang, M., Yuan, F., Buss, A., Zhao, X. (2021). A systematic review on hybrid EEG/fNIRS in brain-computer interface. Biomedical Signal Processing and Control, 68: 102595. https://doi.org/10.1016/j.bspc.2021.102595

[12]  Khan, M.J., Ghafoor, U., Hong, K.S. (2018). Early detection of hemodynamic responses using EEG: A hybrid EEG-fNIRS study. Frontiers in Human Neuroscience, 12: 479. https://doi.org/10.3389/fnhum.2018.00479

[13]  Isa, N.E.M., Amir, A., Ilyas, M.Z., Razalli, M.S. (2017). The performance analysis of K-nearest neighbors (K-NN) algorithm for motor imagery classification based on EEG signal. MATEC Web of Conferences, 140: 01024. https://doi.org/10.1051/matecconf/201714001024

[14]  Afrakhteh, S., Amirkhani, A., Mosavi, M.R., Ayatollahi, A. (2016). Classification of two motor imagery based on EEG signals in brain computer interface systems using LDA, SVM and GMM methods. In 1st International Conference on Applications of Research in Science and Engineering, Tehran, Iran, pp. 1-12.

[15]  Ma, Y., Ding, X., She, Q., Luo, Z., Potter, T., Zhang, Y. (2016). Classification of motor imagery EEG signals with support vector machines and particle swarm optimization. Computational and Mathematical Methods in Medicine, 2016: 4941235. http://doi.org/10.1155/2016/4941235

[16]  Páez-Amaro, R.T., Moreno-Barbosa, E., Hernández-López, J.M., Zepeda-Fernández, C.H., Rebolledo-Herrera, L.F., Celis-Alonso, B.D. (2022). EEG motor imagery classification using machine learning techniques. Revista Mexicana de Física, 68(4): 1102. https://doi.org/10.31349/revmexfis.68.041102

[17]  Bhattacharyya, S., Khasnobish, A., Konar, A., Tibarewala, D.N., Nagar, A.K. (2011). Performance analysis of left/right hand movement classification from EEG signal by intelligent algorithms. In 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Paris, France, pp. 1-8. https://doi.org/10.1109/CCMB.2011.5952111

[18]  Khan, G.H., Hashmi, M.A., Awais, M.M., Khan, N.A., Ahmad, R.B. (2020). High Performance Multi-class Motor Imagery EEG Classification. In Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies, Biosignals, pp. 149-155. https://doi.org/10.5220/0008864501490155

[19]  Isa, N.M., Amir, A., Ilyas, M.Z., Razalli, M.S. (2019). Motor imagery classification in brain computer interface (BCI) based on EEG signal by using machine learning technique. Bulletin of Electrical Engineering and Informatics, 8(1): 269-275. https://doi.org/10.11591/eei.v8i1.1402

[20]  Narayan, Y. (2021). Motor-imagery EEG signals classification using SVM, MLP and LDA classifiers. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(2): 3339-3344.

[21]  Ahn, M., Jun, S.C. (2015). Performance variation in motor imagery brain–computer interface: A brief review. Journal of Neuroscience Methods, 243: 103-110. https://doi.org/10.1016/j.jneumeth.2015.01.033

[22]  Rahman, M.A., Uddin, M.S., Ahmad, M. (2019). Modeling and classification of voluntary and imagery movements for brain–computer interface from fNIR and EEG signals through convolutional neural network. Health Information Science and Systems, 7(1): 22. https://doi.org/10.1007/s13755-019-0081-5

[23]  Khan, M.J., Hong, K.S., Naseer, N., Bhutta, M.R. (2014). A hybrid EEG-fNIRS BCI: Motor imagery for EEG and mental arithmetic for fNIRS. In 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), Gyeonggi-do, Korea (South), pp. 275-278. https://doi.org/10.1109/ICCAS.2014.6988001

[24]  Hirsch, G., Dirodi, M., Xu, R., Reitner, P., Guger, C. (2020). Online classification of motor imagery using EEG and fNIRS: A hybrid approach with real time human-computer interaction. In HCI International 2020-Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, pp. 231-238. https://doi.org/10.1007/978-3-030-50726-8_30

[25]  Wang, H., Zhang, Y., Waytowich, N.R., et al. (2016). Discriminative feature extraction via multivariate linear regression for SSVEP-based BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 24(5): 532-541. https://doi.org/10.1109/TNSRE.2016.2519350

[26]  Alhudhaif, A. (2021). An effective classification framework for brain-computer interface system design based on combining of fNIRS and EEG signals. PeerJ Computer Science, 7: e537. https://doi.org/10.7717/peerj-cs.537

[27]  Hong, K.S., Khan, M.J. (2017). Hybrid brain–computer interface techniques for improved classification accuracy and increased number of commands: A review. Frontiers in Neurorobotics, 11: 275683. https://doi.org/10.3389/fnbot.2017.00035

[28]  Chiarelli, A.M., Croce, P., Merla, A., Zappasodi, F. (2018). Deep learning for hybrid EEG-fNIRS brain-computer interface: Application to motor imagery classification. Journal of Neural Engineering, 15(3): 036028. https://doi.org/10.1088/1741-2552/aaaf82

[29]  Zhang, Z., Duan, F., Sole-Casals, J., Dinares-Ferran, J., Cichocki, A., Yang, Z., Sun, Z. (2019). A novel deep learning approach with data augmentation to classify motor imagery signals. IEEE Access, 7: 15945-15954. https://doi.org/10.1109/ACCESS.2019.2895133

[30]  Saadati, M., Nelson, J., Ayaz, H. (2020). Multimodal fNIRS-EEG classification using deep learning algorithms for brain-computer interfaces purposes. In Advances in Neuroergonomics and Cognitive Engineering: Proceedings of the AHFE 2019 International Conference on Neuroergonomics and Cognitive Engineering, and the AHFE International Conference on Industrial Cognitive Ergonomics and Engineering Psychology, Washington DC, USA, pp. 209-220. https://doi.org/10.1007/978-3-030-20473-0_21

[31]  Hasan, M.A., Khan, M.U., Mishra, D. (2020). A computationally efficient method for hybrid EEG-fNIRS BCI based on the Pearson correlation. BioMed Research International, 2020: 1838140. https://doi.org/10.1155/2020/1838140

[32]  Kwon, J., Shin, J., Im, C.H. (2020). Toward a compact hybrid brain-computer interface (BCI): Performance evaluation of multi-class hybrid EEG-fNIRS BCIs with limited number of channels. PloS ONE, 15(3): e0230491. https://doi.org/10.1371/journal.pone.0230491

[33]  Li, R., Potter, T., Huang, W., Zhang, Y. (2017). Enhancing performance of a hybrid EEG-fNIRS system using channel selection and early temporal features. Frontiers in Human Neuroscience, 11: 462. https://doi.org/10.3389/fnhum.2017.00462

[34]  Ge, S., Yang, Q., Wang, R., et al. (2017). A brain-computer interface based on a few-channel EEG-fNIRS bimodal system. IEEE Access, 5: 208-218. https://doi.org/10.1109/ACCESS.2016.2637409

[35]  Hong, K.S., Khan, M.J., Hong, M.J. (2018). Feature extraction and classification methods for hybrid fNIRS-EEG brain-computer interfaces. Frontiers in Human Neuroscience, 12: 246. https://doi.org/10.3389/fnhum.2018.00246

[36]  Wahid, M.F., Tafreshi, R. (2021). Improved motor imagery classification using regularized common spatial pattern with majority voting strategy. IFAC-PapersOnLine, 54(20): 226-231. https://doi.org/10.1016/j.ifacol.2021.11.179

[37]  Maher, A., Qaisar, S.M., Salankar, N., et al. (2023). Hybrid EEG-fNIRS brain-computer interface based on the non-linear features extraction and stacking ensemble learning. Biocybernetics and Biomedical Engineering, 43(2): 463-475. https://doi.org/10.1016/j.bbe.2023.05.001

[38]  Xu, T., Zhou, Z., Yang, Y., Li, Y., Li, J., Bezerianos, A., Wang, H. (2023). Motor Imagery Decoding Enhancement Based on Hybrid EEG-fNIRS Signals. IEEE Access, 11: 65277-65288. https://doi.org/10.1109/ACCESS.2023.3289709

[39]  Thiyam, D.B., Cruces, S., Rajkumar, E.R. (2016). ThinICA-CSP algorithm for discrimination of multiclass motor imagery movements. In 2016 IEEE Region 10 Conference (TENCON), Singapore, pp. 2483-2486. https://doi.org/10.1109/TENCON.2016.7848480

[40]  Robinson, N., Vinod, A.P. (2016). Noninvasive brain-computer interface: Decoding arm movement kinematics and motor control. IEEE Systems, Man, and Cybernetics Magazine, 2(4): 4-16. https://doi.org/10.1109/MSMC.2016.2576638

[41]  Kowalski, M., Gramfort, A. (2010). A priori par normes mixtes pour les problèmes inverses: Application à la localisation de sources en M/EEG. Traitement du Signal, 27(1): 51-76. https://doi.org/10.3166/TraitementduSignal.vol(27)51-76

[42]  Wickramaratne, S.D., Mahmud, M.S. (2021). Conditional-GAN based data augmentation for deep learning task classifier improvement using fNIRS data. Frontiers in Big Data, 4: 659146. https://doi.org/10.3389/fdata.2021.659146

[43]  Lee, H.K., Lee, J.H., Park, J.O., Choi, Y.S. (2021). Data-driven data augmentation for motor imagery brain-computer interface. In 2021 International Conference on Information Networking (ICOIN), Jeju Island, Korea (South), pp. 683-686. https://doi.org/10.1109/ICOIN50884.2021.9333908

[44]  Lashgari, E., Liang, D., Maoz, U. (2020). Data augmentation for deep-learning-based electroencephalography. Journal of Neuroscience Methods, 346: 108885. https://doi.org/10.1016/j.jneumeth.2020.108885

[45]  Hur, J., Yang, J., Doh, H., Ahn, W.Y. (2022). Mapping fNIRS to fMRI with Neural Data Augmentation and Machine Learning Models. arXiv preprint arXiv:2206.06486. https://doi.org/10.48550/arXiv.2206.06486

[46]  Freer, D., Yang, G.Z. (2020). Data augmentation for self-paced motor imagery classification with C-LSTM. Journal of Neural Engineering, 17(1): 016041. https://doi.org/10.1088/1741-2552/ab57c0

[47]  George, O., Smith, R., Madiraju, P., Yahyasoltani, N., Ahamed, S.I. (2022). Data augmentation strategies for EEG-based motor imagery decoding. Heliyon, 8(8): e10240. https://doi.org/10.1016/j.heliyon.2022.e10240

[48]  Zhang, K., Xu, G., Han, Z., et al. (2020). Data augmentation for motor imagery signal classification based on a hybrid neural network. Sensors, 20(16): 4485. https://doi.org/10.3390/s20164485

[49]  Shelishiyah, R., Beeta, T.D. (2023). A Comparative Performance Study on the Time Intervals of Hybrid Brain–Computer Interface Signals. SN Computer Science, 4(6): 771. https://doi.org/10.1007/s42979-023-02255-5

[50]  Scholkmann, F., Kleiser, S., Metz, A.J., Zimmermann, R., Pavia, J.M., Wolf, U., Wolf, M. (2014). A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. Neuroimage, 85: 6-27. https://doi.org/10.1016/j.neuroimage.2013.05.004

[51]  Moro, S.B., Bisconti, S., Muthalib, M., et al. (2014). A semi-immersive virtual reality incremental swing balance task activates prefrontal cortex: A functional near-infrared spectroscopy study. Neuroimage, 85: 451-460. https://doi.org/10.1016/j.neuroimage.2013.05.031

[52]  Naseer, N., Hong, K.S. (2015). Decoding answers to four-choice questions using functional near infrared spectroscopy. Journal of Near Infrared Spectroscopy, 23(1): 23-31. 

[53]  Naseer, N., Hong, M.J., Hong, K.S. (2014). Online binary decision decoding using functional near-infrared spectroscopy for the development of brain–computer interface. Experimental Brain Research, 232: 555-564. https://doi.org/10.1007/s00221-013-3764-1

[54]  Gemignani, J., Gervain, J. (2021). Comparing different pre-processing routines for infant fNIRS data. Developmental Cognitive Neuroscience, 48: 100943. https://doi.org/10.1016/j.dcn.2021.100943

[55]  Holper, L., Wolf, M. (2011). Single-trial classification of motor imagery differing in task complexity: A functional near-infrared spectroscopy study. Journal of Neuroengineering and Rehabilitation, 8: 34. https://doi.org/10.1186/1743-0003-8-34

[56]  Mirbagheri, M., Jodeiri, A., Hakimi, N., Zakeri, V., Setarehdan, S.K. (2019). Accurate stress assessment based on functional near infrared spectroscopy using deep learning approach. In 2019 26th National and 4th International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, pp. 4-10. https://doi.org/10.1109/ICBME49163.2019.9030394 

[57]  Shelishiyah, R., Dharan, M.B., Kumar, T.K., Musaraf, R., Beeta, T.D. (2022). Signal Processing for Hybrid BCI Signals. In Journal of Physics: Conference Series, 2318(1): 012007. https://doi.org/10.1088/1742-6596/2318/1/012007

[58]  Eastmond, C., Subedi, A., De, S., Intes, X. (2022). Deep learning in fNIRS: A review. Neurophotonics, 9(4): 041411. https://doi.org/10.1117/1.NPh.9.4.041411

[59]  Duncan, A., Meek, J.H., Clemence, M., Elwell, C.E., Tyszczuk, L., Cope, M., Delpy, D. (1995). Optical pathlength measurements on adult head, calf and forearm and the head of the newborn infant using phase resolved optical spectroscopy. Physics in Medicine & Biology, 40(2): 295. https://doi.org/10.1088/0031-9155/40/2/007

[60]  Ortega, P., Faisal, A. (2021). HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks. In 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Italy, pp. 718-721. https://doi.org/10.1109/NER49283.2021.9441323

[61]  Klambauer, G., Unterthiner, T., Mayr, A., Hochreiter, S. (2017). Self-normalizing neural networks. Part of Advances in Neural Information Processing Systems 30 (NIPS 2017).

[62]  Kingma, D.P., Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://doi.org/10.48550/arXiv.1412.6980

[63]  Jehosheba Margaret, M., Masoodhu Banu, N.M. (2023). Performance analysis of EEG based emotion recognition using deep learning models. Brain-Computer Interfaces, 10(2-4): 79-98. https://doi.org/10.1080/2326263X.2023.2206292

[64]  Ali, M., Kim, K., Kallu, K., Zafar, A., Lee, S. (2023). OptEF-BCI: An optimization-based hybrid EEG and fNIRS–Brain computer interface. Bioengineering, 10(5): 608. https://doi.org/10.3390/bioengineering10050608