Jaw-Operated Human Computer Interface Based on EEG Signals via Artificial Neural Networks

Muhammet Serdar Bascil 

Department of Electrical and Electronics Engineering, Yozgat Bozok University, Yozgat 66200, Turkey

Corresponding Author Email: serdar.bascil@bozok.edu.tr

Page: 21-27 | DOI: https://doi.org/10.18280/ria.340103

Received: 18 September 2019 | Revised: 28 November 2019 | Accepted: 5 December 2019 | Available online: 29 February 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Man-machine interfaces help paralyzed people communicate and control their environment. This work introduces a novel kind of machine interface that uses the signatures of conscious horizontal jaw motions in brain signals recorded by electroencephalography (EEG). The electrical activity of the brain is extracted and transformed into control commands. The Jaw-Machine Interface (JMI) offers people with tetraparesis a new way to operate peripheral devices through a computer using only horizontal jaw motions. In this study, mean absolute deviation (MAD) and entropy (S) values are derived from the EEG, and the hemispherical patterns are evaluated and examined in an offline analysis. Principal component analysis (PCA) is used to remove redundant information from the data, and two types of artificial neural networks, a Multilayer Neural Network with the Levenberg-Marquardt training algorithm (MLNN+LM) and a Probabilistic Neural Network (PNN), are run with the k-fold method to detect horizontal jaw patterns in the brain waves.

Keywords: 

EEG, Jaw-Machine interface (JMI), PCA, MLNN+LM, PNN

1. Introduction

Man-machine interactions enable handicapped persons to control and communicate with environmental devices such as a computer, a cursor, a wheelchair, or a TV [1]. Patients suffering from tetraparesis cannot use their arms or legs [2]. The jaw-machine interface (JMI) is a jaw-operated system that lets people with tetraparesis manage assistive devices through voluntary jaw motions, making their lives easier.

In the literature, some research utilizes EEG based on jaw motions, but most of it focuses on extracting and recognizing the dominant changes that jaw motions produce in EEG and electromyogram (EMG) signals. A wearable headband was designed by Wei et al. to detect facial movements from EMG and electrooculogram (EOG) signals; they controlled a wheelchair with five different movement patterns used as stimuli [3]. In a follow-up work, they introduced a new control scheme that detects facial movements by combining facial image processing with EMG signals, and evaluated the performance of an electric wheelchair on a route-following experimental setup [4]. Wei and Hu then extended their previous studies on EMG and image processing by creating a software simulation environment to extract the user's intentions, and reported high performance [5]. An Emotiv EPOC headset, which measures EEG activity and records head movements and facial expressions, was used by Rechy-Ramirez and Hu to operate a hands-free electric wheelchair through a graphical user interface [6]. Jeong et al. utilized surface electromyography (sEMG) to obtain high-quality EMG signals for the proper control of a quadrotor, recording signals from body motions in several regions such as the forearm, forehead, back of the neck, index finger, jaw clenching and mouth motions [7]. A wearable headband carrying a network of twenty passive electrodes was produced by Paul et al.; they recorded horizontal eye movements, gestures and jaw-clenching signals to extract user intentions from EMG and EOG, and showed that cursor or keyboard functions can be controlled [8]. Costa et al. controlled a robotic arm in two dimensions by extracting EMG signals embedded in EEG recordings produced by five different jaw clenches [9]. Zeilfelder et al. placed a BMP280 digital pressure sensor in the outer ear canal to recognize tongue and jaw motions; they tested the system on six participants and reported that tongue and jaw movements are suitable input mechanisms for man-machine interfaces [10]. According to the literature, jaw-operated systems require some equipment around the jaw. These devices may disturb disabled users, be uncomfortable, and interfere with speech [11, 12].

This paper aims to extend the jaw-machine interface (JMI), which was first introduced to the literature in a previous study [13]. The signals were recorded as differential multichannel EEG from the scalp, and their features were extracted as MAD and S values in a time-domain analysis. The hemispherical patterns were then recognized using MLNN+LM and PNN.

2. Methodology

2.1 Study protocol

EEG recordings of 18 differential channels were measured over the scalp according to the 10-20 electrode placement system [14], with the A1-A2 ear electrodes used as the reference. Each channel was sampled at 1024 Hz, filtered with a 0.3 Hz high-pass and a 70 Hz low-pass filter, and digitized at 16 bits. A 50 Hz notch filter was also applied to eliminate power-line noise.
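For illustration, this preprocessing chain can be sketched in a few lines of Python with SciPy; the filter orders and the notch quality factor below are not specified in the study and are chosen here only as reasonable defaults.

```python
from scipy.signal import butter, filtfilt, iirnotch

FS = 1024  # sampling rate (Hz), as in the recordings

def preprocess_channel(x, fs=FS):
    """Band-limit one EEG channel: 0.3 Hz high-pass, 70 Hz low-pass,
    and a 50 Hz notch against power-line noise (orders are assumptions)."""
    b_hp, a_hp = butter(4, 0.3, btype="highpass", fs=fs)
    b_lp, a_lp = butter(4, 70.0, btype="lowpass", fs=fs)
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
    for b, a in ((b_hp, a_hp), (b_lp, a_lp), (b_n, a_n)):
        x = filtfilt(b, a, x)  # zero-phase filtering
    return x
```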

EEG signal features of the voluntary jaw movements were computed from the raw dataset as MAD and S values. The PCA method was then used to reduce the feature dimension by selecting the most significant components. The best eight PCA vectors were conveyed to the MLNN+LM and PNN algorithms to recognize the hemispherical changes. Finally, the results are discussed and conclusions drawn. The workflow of the JMI is presented in Figure 1.

Figure 1. The workflow of the JMI

Figure 2. Experimental paradigm

2.2 Data collection

In this study, an experimental paradigm was applied to ten healthy volunteers (7 male, 3 female), aged 25-35 years, without any nervous system impairment. All subjects were seated in front of an LCD screen showing only the direction stimulus and were instructed not to move any part of the body except the jaw, as shown in Figure 2. The JMI system has two output states: voluntary control (VC) and no control (NC). The system output is activated by the user in the VC state and produces no output in the NC state. As seen in Figure 1, subjects move their jaws distinctly and serially (with closed lips and no teeth grinding) from position 2 to position 1 to perform the left control task and from position 2 to position 3 to perform the right control task.

The experimental paradigm shown in Figure 2 has two trials. Each subject first performs the 1st trial, moving from left to right, and then the 2nd trial, moving in the opposite order. Each 120 s recording sequence starts with subject relaxation and consists of 8 VC parts (4 right and 4 left) of 10 s each, separated by 5 s NC periods. Subjects performed the VC task roughly 20 times within each 10 s part, so each subject performed 160 left and 160 right jaw movements (20 × 8 = 160) over the experimental paradigm. Every control task therefore consists of 512 samples (0.5 s × 1024 Hz). In the end, the EEG recordings formed 320 datasets per subject (160 right VC and 160 left VC), each with 18 channels of 512 samples.
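A minimal sketch of how such epochs could be cut from a continuous recording is given below; the `onsets` marker stream is hypothetical, since the paper does not describe its trigger format.

```python
import numpy as np

def extract_vc_epochs(recording, onsets, n_samples=512):
    """Cut 512-sample (0.5 s at 1024 Hz) VC epochs from a continuous
    (18, T) recording; `onsets` lists the start sample of each jaw motion."""
    return np.stack([recording[:, s:s + n_samples] for s in onsets])

# 320 onsets per subject -> epochs of shape (320, 18, 512)
```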

3. Feature Extraction

An informative and non-redundant feature vector was formed from the raw signal data. In this study, the mean absolute deviation (MAD) and entropy (S) values were used. The MAD is a measure of variability [15], with the mathematical definition given in Eq. (1).

$\mathrm{MAD}=\frac{1}{N} \sum_{i=1}^{N}\left|x_{i}-\bar{x}\right|$       (1)

where, $x_i$, $i=1,2,3 \ldots N$ is a time sequence, $\bar{x}$ is its mean, and N stands for the number of samples in the data array.
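Eq. (1) translates directly into code; a short NumPy version is shown below.

```python
import numpy as np

def mad(x):
    """Mean absolute deviation of a 1-D sample array, Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - x.mean()))
```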

The term entropy was introduced by Claude Shannon in his 1948 paper [16]. It quantifies the information content of a signal and how its behaviour can be distinguished from that of other signals [17], as given by the formula below.

$\mathrm{S}(\mathrm{X})=-\sum_{i=1}^{N} p\left(x_{i}\right) \log p\left(x_{i}\right)$     (2)

where, $x_i$ is the time series of the signal, $p(x_i)$ is the probability of each value, and N is the length of the signal.

With the aid of the MAD and S concepts, the high-dimensional features of the EEG dataset (320×512 per channel) are effectively reduced to a 320×18 matrix.
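The following sketch shows one way to obtain this 320×18 matrix. The histogram estimate of $p(x_i)$ and the base-2 logarithm are our assumptions, since the paper does not state how the probabilities were computed.

```python
import numpy as np

def shannon_entropy(x, bins=64):
    """Shannon entropy, Eq. (2), with p estimated from an amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def feature_matrix(epochs, fn):
    """Reduce (320, 18, 512) epochs to a (320, 18) matrix: one feature
    value (MAD or S) per channel, as in Section 3."""
    return np.apply_along_axis(fn, 2, epochs)

# F_mad = feature_matrix(epochs, mad)             # mad() from the sketch above
# F_s   = feature_matrix(epochs, shannon_entropy)
```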

4. Principal Component Analysis

Principal component analysis (PCA) is a statistical method that removes redundant information from data and reduces its dimensionality [18]. Converting the higher-dimensional data ($X_i$) into a lower-dimensional representation ($S_t$) is carried out by determining the eigenvalues and eigenvectors of the covariance matrix (C). The PCA equations are shown in (3)-(5).

$C=\frac{1}{N} \sum_{i=1}^{N} X_{i} X_{i}^{T}$    (3)

$C u_{i}=\lambda_{i} u_{i}, \quad i=1,2,3 \ldots m$   (4)

where, $\lambda_i$ stands for the eigenvalue of the covariance matrix (C) and $u_i$ is the corresponding eigenvector.

$S_{t}(i)=u_{i}^{T} X_{t}, \quad i=1,2,3 \ldots m$    (5)

where, $S_t(i)$ describes the i-th principal component of the data point ($X_t$); more detail on PCA can be found in the study [19]. In this work, the components with the highest variance were selected, creating a new dataset of only 8 dimensions (320×8) that retains 98.54% of the information, as shown in Figure 3. This makes the computation easier for the machine learning algorithms.
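In practice this projection is a few lines with scikit-learn; the sketch below assumes the (320, 18) feature matrix from the previous step as input.

```python
from sklearn.decomposition import PCA

def reduce_features(F, n_components=8):
    """Keep the 8 highest-variance principal components, Eqs. (3)-(5)."""
    pca = PCA(n_components=n_components)
    F8 = pca.fit_transform(F)                         # shape (320, 8)
    kept = 100 * pca.explained_variance_ratio_.sum()  # ~98.54% here
    print(f"retained information: {kept:.2f}%")
    return F8
```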

Figure 3. Feature selection by PCA

5. Machine Learning Methods

The Multilayer Neural Network with the Levenberg-Marquardt training algorithm (MLNN+LM) and the Probabilistic Neural Network (PNN) are two kinds of neural networks used in classification and pattern recognition. In this study, both structures are combined with the k-fold cross-validation technique: the whole dataset is divided randomly into k pieces (k = 10), and the neural networks are trained and tested k times on independently selected samples [20]. The classification accuracy is defined in Eqs. (6), (7) and (8):

$\operatorname{Accuracy}(T S)=\frac{\sum_{i=1}^{|T S|} \text { estimate }\left(n_{i}\right)}{|T S|}, n_{i} \in T S$    (6)

estimate$(n)=\left\{\begin{array}{ll}1, & \text { if classify}(n)=c_{n} \\ 0, & \text { otherwise }\end{array}\right.$    (7)

Classification accuracy $(M L)=\frac{\sum_{i=1}^{k} \operatorname{accuracy}\left(T S_{i}\right)}{k}$    (8)

where, TS refers to the test dataset to be classified; for $n \in TS$, $c_n$ is the true class of n and classify(n) is the class of n predicted by the neural network.
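Eqs. (6)-(8) amount to averaging the per-fold test accuracy; a sketch with scikit-learn follows. Stratified shuffling is our own choice here, as the paper only states that the k = 10 folds are random.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def kfold_accuracy(clf, X, y, k=10, seed=0):
    """Average accuracy over k folds, Eqs. (6)-(8)."""
    folds = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    accs = []
    for train, test in folds.split(X, y):
        clf.fit(X[train], y[train])
        accs.append(np.mean(clf.predict(X[test]) == y[test]))
    return float(np.mean(accs))
```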

5.1 Multilayer neural network

A multilayer neural network is formed of an input layer, hidden layers and an output layer. The logarithmic sigmoid activation function is applied to the weighted input vector to generate the output [21, 22].

An MLNN with the Levenberg-Marquardt (LM) training algorithm was used to recognize the left and right jaw patterns. Levenberg-Marquardt is a very fast and efficient training method [23] that allows smaller hidden layers and improves generalization. In this work, a two-hidden-layer architecture (20-50 neurons) was chosen because it converges better than a single hidden layer; the numbers of hidden neurons were varied randomly until the best results were obtained [23, 24].
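The topology can be sketched as follows. scikit-learn provides no Levenberg-Marquardt trainer (LM is available, e.g., as MATLAB's trainlm), so the quasi-Newton 'lbfgs' solver stands in here, and the 35-neuron layer widths are one arbitrary pick from the stated 20-50 range.

```python
from sklearn.neural_network import MLPClassifier

# Two hidden layers with logistic-sigmoid activations, per Section 5.1.
mlnn = MLPClassifier(hidden_layer_sizes=(35, 35),  # within the 20-50 range
                     activation="logistic",
                     solver="lbfgs",               # LM stand-in (assumption)
                     max_iter=2000)
# acc = kfold_accuracy(mlnn, F8, labels)  # F8: (320, 8) PCA features
```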

5.2 Probabilistic neural network

The PNN, developed by Donald Specht [25], is known as a distance-based neural network and offers fast classification. The first layer of the PNN is the input layer; the second, the radial-basis layer, computes the distances between the input vector and the rows of the weight matrix; and the last, the competitive layer, assigns the class with the maximum probability of correctness. The smoothing parameter is crucial in a PNN classifier, since its appropriate value is often data dependent [26]. In this work, the spread of the activation functions was searched between 0.1 and 1 in steps of 0.01 to find the value giving the best PNN performance.
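A minimal PNN of this kind can be written directly; the sketch below is our own Parzen-window reading of Specht's design, not the packaged implementation used in the study, and `F8`/`labels` are the hypothetical arrays from the earlier sketches.

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network with Gaussian kernels [25]."""

    def __init__(self, spread=0.3):
        self.spread = spread  # smoothing parameter of the radial-basis layer

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.classes = np.unique(y)
        return self

    def predict(self, Xq):
        preds = []
        for q in np.asarray(Xq):
            d2 = np.sum((self.X - q) ** 2, axis=1)    # distances to patterns
            g = np.exp(-d2 / (2 * self.spread ** 2))  # radial-basis layer
            scores = [g[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[np.argmax(scores)])  # competitive layer
        return np.array(preds)

# Spread scan as in Section 5.2: 0.1 to 1.0 in 0.01 steps.
# best = max(np.arange(0.1, 1.01, 0.01),
#            key=lambda s: kfold_accuracy(PNN(s), F8, labels))
```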

6. Results and Discussion

In this study, one-dimensional (horizontal) jaw movements have been examined to improve and extend the previous study [13] on the JMI. Subject-8 (Sub-8) achieved the best and Subject-3 (Sub-3) the worst classification results; the results of these two participants are compared and explained in detail to avoid confusion.

The raw EEG dataset (320×512) of each participant was converted to a 320×18 matrix with the help of the MAD and S characteristics. This new feature vector was fed into PCA to eliminate redundant data and select the most significant information; PCA reduced the dimensionality to 320×8 while retaining 98.54% of the information, as given in Figure 3. Finally, the MLNN+LM and PNN structures were applied to the 320×8 MAD and S features to obtain the classification results. In addition to the neural network results, true positive rates (TPRs) and false positive rates (FPRs) were calculated to check the classifications, as given in (9) and (10) respectively.

$\mathrm{TPR}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$    (9)

$\mathrm{FPR}=\frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}$   (10)

The TPR is the rate of correctly classified VC states and the FPR is the rate of misclassified NC states. All classifier performance results are given in Table 1.
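Both rates follow directly from the confusion counts; a small helper is sketched below, with VC taken as the positive class.

```python
import numpy as np

def tpr_fpr(y_true, y_pred, positive=1):
    """TPR, Eq. (9), and FPR, Eq. (10)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos, neg = y_true == positive, y_true != positive
    tpr = np.mean(y_pred[pos] == positive)  # TP / (TP + FN)
    fpr = np.mean(y_pred[neg] == positive)  # FP / (FP + TN)
    return tpr, fpr
```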

As seen in Table 1, Sub-8 has the best classification performance, the highest TPRs and the lowest FPRs; for that reason, Sub-8 is designated the best participant. The same cannot be said for Sub-3, who has the lowest performance and is marked as the least successful participant of all.

Sub-8 reached a classification accuracy (Acc.) of 90.03% on the horizontal jaw movement task, the highest TPR of 94.12% and the lowest FPR of 5.74%, with the PNN on MAD features, whereas Sub-3 obtained 81.69% Acc., 85.05% TPR and 13.77% FPR. The lowest result for Sub-8 was 84.33%, and Sub-3 produced the worst performance of all, 76.58%, with the MLNN+LM on S features.

It is clear that MAD enables better neural network performance than S for all subjects, and that the PNN reaches higher accuracies than the MLNN+LM. Comparing the average results confirms the advantage of the PNN over the MLNN+LM on MAD (1.4%), and the TPRs and FPRs show a similar tendency. The last column of Table 1 gives the performance of the globally trained classifier (Sub-1+2+...+10); it again favors the PNN on MAD, and its best accuracy, 80.28%, is the lowest of all. It can therefore be deduced that personal use of the system is more suitable for recognizing voluntary jaw patterns. Figure 4 shows the best classification results of Sub-8, Sub-3, the average, and the global data.

The neural network results of the JMI are supported by the brain mapping results given below for Sub-8 and Sub-3. It is known that the right hemisphere of the brain controls left motor activities and vice versa. Delta frequencies appear on the frontal and occipital lobes with anti-symmetrically changing power across the brain hemispheres. In the literature, jaw clenching, teeth grinding and jaw movements appear dominantly on the EEG and are treated as artifacts [27-29], but recent studies have shown that they are not merely artifacts of physical force: they also carry significant information in the delta frequency band [13, 30-32].

Figure 4. The best results of Sub-8, Sub-3, average and global data

Figures 5 and 6 show the brain signal intensities in the EEG bands during conscious jaw motions to the right and left, respectively, for Sub-8. The left hemisphere of the brain is active during right motions and vice versa, in line with the literature, and the delta frequencies appear dominantly on the frontal and occipital lobes. The delta intensities on the occipital lobes indicate that Sub-8 successfully followed the experimental paradigm on the LCD. The varying delta intensities over the scalp can be explained by jaw fatigue during the trials. It can also be conjectured that Sub-8 is right-handed, given the stronger delta intensities on the left hemisphere than on the right.

Figures 7 and 8 show the corresponding brain signal intensities for Sub-3. The left hemisphere of the occipital lobe is active as expected, so Sub-3 followed the experimental paradigm on the LCD sufficiently well, but the same cannot be said of the left hemisphere of the frontal lobe; this may indicate that Sub-3 tired during the right jaw motions of the trials. Delta waves also appear on the temporal lobes as well as the frontal lobes, which can be interpreted as Sub-3 being more susceptible to external stimuli or prone to distraction.

It is obvious that the brain signal intensities of Sub-8 during voluntary jaw motions on the right and left hemispheres are clearer and more orderly than those of Sub-3. On this basis, and from the neural network results given in Table 1, Sub-8 is named the most successful participant and Sub-3 the least.

Table 1. Performance results for all subjects

| Method | Feature | Metric | Sub-1 | Sub-2 | Sub-3 | Sub-4 | Sub-5 | Sub-6 | Sub-7 | Sub-8 | Sub-9 | Sub-10 | Average | Global Data |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLNN+LM (%) | MAD | Acc. | 83.65 | 88.27 | 81.05 | 85.66 | 87.33 | 82.56 | 84.12 | 89.05 | 81.55 | 86.94 | 85.01 | 78.88 |
| | | TPR | 88.28 | 92.86 | 84.94 | 90.81 | 92.56 | 87.38 | 89.12 | 92.94 | 86.11 | 91.33 | 89.63 | 83.55 |
| | | FPR | 8.11 | 5.68 | 13.48 | 6.94 | 6.13 | 8.76 | 7.55 | 5.11 | 9.14 | 6.12 | 7.70 | 15.64 |
| | S | Acc. | 81.55 | 84.15 | 76.58 | 83.03 | 83.22 | 80.88 | 82.44 | 84.33 | 80.05 | 83.83 | 81.99 | 73.71 |
| | | TPR | 86.48 | 89.76 | 81.77 | 87.95 | 88.38 | 85.05 | 87.05 | 89.04 | 84.28 | 88.55 | 86.83 | 80.33 |
| | | FPR | 9.87 | 7.05 | 14.44 | 8.12 | 7.28 | 10.16 | 9.16 | 6.45 | 10.94 | 7.86 | 9.13 | 17.88 |
| PNN (%) | MAD | Acc. | 84.92 | 89.10 | 81.69 | 87.76 | 88.88 | 84.33 | 86.05 | 90.03 | 83.08 | 88.33 | 86.41 | 80.28 |
| | | TPR | 89.03 | 93.52 | 85.05 | 91.64 | 93.01 | 88.16 | 90.79 | 94.12 | 86.94 | 92.68 | 90.49 | 84.94 |
| | | FPR | 8.59 | 6.02 | 13.77 | 7.77 | 6.94 | 9.22 | 8.15 | 5.74 | 10.20 | 7.21 | 8.36 | 13.28 |
| | S | Acc. | 81.55 | 84.68 | 77.05 | 83.18 | 84.02 | 81.66 | 82.12 | 85.05 | 80.96 | 83.56 | 82.38 | 75.38 |
| | | TPR | 86.72 | 89.05 | 82.14 | 87.65 | 89.38 | 85.55 | 86.55 | 90.67 | 84.33 | 88.44 | 87.04 | 81.96 |
| | | FPR | 10.96 | 8.12 | 13.56 | 9.15 | 8.22 | 11.18 | 10.14 | 7.78 | 11.42 | 8.87 | 9.94 | 14.14 |

Figure 5. Brain signal intensities of Sub-8 on conscious jaw motion to the right

Figure 6. Brain signal intensities of Sub-8 on conscious jaw motion to the left

Figure 7. Brain signal intensities of Sub-3 on conscious jaw motion to the right

Figure 8. Brain signal intensities of Sub-3 on conscious jaw motion to the left

7. Conclusion

This study aims to extend and improve the previous work on the one-dimensional JMI. Its main goal is to upgrade the quality of life of people with tetraparesis and to offer them a new communication pathway that does not require obtrusive equipment, such as electrodes, around the jaw.

The brain waves stored in the EEG were acquired from ten participants generating conscious jaw motions distinctly and serially under the experimental paradigm. The MAD and S characteristics of these signals were extracted and the most important features selected by PCA. Then the MLNN+LM and PNN classifiers were run on these features to find horizontal jaw patterns in the EEG and achieve 1-D control. The results show that the PNN on MAD features is the best choice for this aim, as given in Figure 4. Moreover, personal use of the system is more convenient for implementing 1-D control by voluntary jaw motions, given that the global data yield the lowest classification results in Table 1.

Finally, the PNN on MAD structure produces results closely parallel to those of the SVM on RSM structure studied in the previous work [13]. It offers a different point of view, contributes to the literature, and encourages further JMI studies and future developments.

Acknowledgment

The author would like to thank the volunteers from Bozok University for their participation in this research.

The study was approved by the Ethical Committee of Bozok University. All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee. Informed consent was obtained from all individual participants included in the study.

  References

[1] Struijk, L.N.S.A. (2006). An inductive tongue computer interface for control of computers and assistive devices. IEEE Transactions on Biomedical Engineering, 53(12): 2594-2597. https://doi.org/10.1109/TBME.2006.880871

[2] Kornegay, J.N. (1991). Paraparesis (paraplegia), tetraparesis (tetraplegia), urinary/fecal incontinence. Spinal cord diseases. Problems in Veterinary Medicine, 3(3): 363-377. 

[3] Wei, L., Hu, H., Yuan, K. (2009). Use of forehead bio-signals for controlling an intelligent wheelchair. IEEE International Conference on Robotics and Biomimetics, Thailand, pp. 108-113. https://doi.org/10.1109/ROBIO.2009.4912988

[4] Wei, L., Hu, H., Lu, T., Yuan, K. (2010). Evaluating the performance of a face movement based wheelchair control interface in an indoor environment. IEEE International Conference on Robotics and Biomimetics, Tianjin, pp. 387-392. 

[5] Wei, L., Hu, H. (2011). A hybrid brain-machine interface for hands-free control of an intelligent wheelchair. International Journal of Mechatronics and Automation, 1(2): 97-111. https://doi.org/10.1504/IJMA.2011.04004

[6] Rechy-Ramirez, E.J., Hu, H. (2013). Bi-modal human machine interface for controlling an intelligent wheelchair. IEEE Fourth International Conference on Emerging Security Technologies, Cambridge. https://doi.org/10.1109/EST.2013.19

[7] Jeong, J.W., Yeo, W.H., Akhtar, A., Norton, J.J., Kwack, Y.J., Li, S., Cheng, H. (2013). Materials and optimized designs for brain-machine interfaces via epidermal electronics. Advanced Materials, 25(47): 6839-6846. https://doi.org/10.1002/adma.201301921

[8] Paul, G.M., Cao, F., Torah, R., Yang, K., Beeby, S., Tudor, J. (2014). A smart textile based facial EMG and EOG computer interface. IEEE Sensors Journal, 14(2): 393-400. https://doi.org/10.1109/JSEN.2013.2283424

[9] Costa, A., Hortal, E., Ianez, E., Azorin, J.M. (2014). A supplementary system for a brain–machine interface based on jaw artifacts for the bidimensional control of a robotic arm. PLoS One, 10(2): e0112352. https://doi.org/10.1371/journal.pone.0112352

[10] Zeilfelder, J., Busch, T., Zimmermann, C., Stork, W. (2018). A human-machine interface based on tongue and jaw movements. IEEE Sensors Applications Symposium (SAS), Seoul, pp. 1-6. https://doi.org/10.1109/SAS.2018.8336751

[11] Nam, Y., Koo, B., Cichocki, A., Choi, S. (2014). GOM-Face: GKP, EOG, and EMG-Based multimodal interface with application to humanoid robot control. IEEE Transactions on Biomedical Engineering, 61(2): 453-462. https://doi.org/10.1109/TBME.2013.2280900

[12] Tang, H., Beebe, D.J. (2006). An oral tactile interface for blind navigation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(1): 116-123. https://doi.org/10.1109/TNSRE.2005.862696

[13] Bascil, M.S. (2018). A new approach on HCI extracting conscious jaw movements based on EEG signals using machine learnings. Journal of Medical Systems, 42(9). https://doi.org/10.1007/s10916-018-1027-1

[14] Jasper, H. (1958). The ten twenty electrode system of the international federation. Electroencephalogr Clin Neurophysiol Suppl., 10(2): 370-375. 

[15] Gorard, S. (2015). Introducing the mean absolute deviation ‘effect’ size. International Journal of Research & Method in Education, 38(2): 105-114. https://doi.org/10.1080/1743727X.2014.920810

[16] Shannon, C.E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3): 379-423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x

[17] Sleigh, J.W., Olofsen, E., Dahan, A., De Goede, J., Steyn-Ross, D.A. (2001). Entropies of the EEG: The effects of general anaesthesia. 5th International Conference on Memory, Awareness and Consciousness, New York.

[18] Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Phil. Mag. and Journ. Of Sci., 2(11): 559-572. https://doi.org/10.1080/14786440109462720

[19] Cao, L.J., Chua, K.S., Chong, W.K., Lee, H.P., Gu, O.M. (2003). A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine. Neurocomputing, 55(1): 321-336. https://doi.org/10.1016/S0925-2312(03)00433-8

[20] Şen, B., Peker, M. (2013). Novel approaches for automated epileptic diagnosis using fcbf selection and classification algorithms. Turkish J. of Elec. Eng. & Comp. Sci., 21(1): 2092-2109. https://doi.org/10.3906/elk-1203-9

[21] Bascil, M.S., Tesneli, A.Y., Temurtas, F. (2015). Multi-channel EEG signal feature extraction and pattern recognition on horizontal mental imagination task of 1-D cursor movement for brain computer interface. Australasian Physical & Engineering Sciences in Medicine, 38(2): 229-239. https://doi.org/10.1007/s13246-015-0345-6

[22] Challita, N., Khalil, M., Beauseroy, P. (2016). New feature selection method based on neural network and machine learning. IEEE International Multidisciplinary Conference on Engineering Technology, Lebanon, pp. 81-85. https://doi.org/10.1109/IMCET.2016.7777431

[23] Hunter, D., Yu, H., Pukish, M.S., Kolbusz, J., Wilamowski, B.M. (2012). Selection of proper neural network sizes and architectures a comparative study. IEEE Transactions on Industrial Informatics, 8(2): 228-240. https://doi.org/10.1109/TII.2012.2187914

[24] Sheela, K.G., Deepa, S.N. (2013). Review on methods to fix number of hidden neurons in neural networks. Mathematical Problems in Engineering. https://doi.org/10.1155/2013/425740

[25] Specht, D.F. (1990). Probabilistic neural networks. Neural Networks, 3(1): 109-118. https://doi.org/10.1016/0893-6080(90)90049-Q

[26] Ward Systems Group. (2008). NeuroShell 2 User’s Manual Help (Apply PNN Network). Ward Systems Group Inc., USA.

[27] Estepp, J.R., Christensen, J.C., Monnin, J.W., Davis, I.M., Wilson, G.F. (2009). Validation of a dry electrode system for EEG. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 53(18): 1171-1175. https://doi.org/10.1177/154193120905301802

[28] Kappel, S.L., Looney, D., Mandic, D.P., Kidmose, P. (2017). Physiological artifacts in scalp EEG and ear-EEG. Biomedical Engineering Online, 16. https://doi.org/10.1186/s12938-017-0391-2

[29] Yong, X., Ward, R.K., Birch G.E. (2008). Facial EMG contamination of EEG signals: Characteristics and effects of spatial filtering. IEEE 3rd International Symposium on Communications, Control and Signal Processing, Malta, pp. 729-734. https://doi.org/10.1109/ISCCSP.2008.4537319

[30] Huo, X., Park, H., Kim, J., Ghovanloo, M. (2013). A dual-mode machine computer interface combining speech and tongue motion for people with severe disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21(6): 979-991. https://doi.org/10.1109/TNSRE.2013.2248748

[31] Nam, Y., Koo, B., Cichocki, A., Choi, S. (2016). Glossokinetic potentials for a tongue–machine interface. IEEE Systems, Man, & Cybernetics Magazine, 2(1): 6-13. https://doi.org/10.1109/MSMC.2015.2490674

[32] Gorur, K., Bozkurt, M.R., Bascil, M.S., Temurtas, F. (2018). Glossokinetic potential based tongue-machine interface for 1-D extraction using neural networks. Biocybernetics and Biomedical Engineering, 38(3): 745-759. https://doi.org/10.1016/j.bbe.2018.06.004