Predicting Depression Risk from Facial Video-Derived Heart Rate Estimates

Purude Vaishali Narayanrao, P. Lalitha Surya Kumari*

Department of CSE, Koneru Lakshmaiah Education Foundation, Hyderabad 500075, Telangana, India

Department of CSE, Neil Gogte Institute of Technology, Uppal, Hyderabad 500039, Telangana, India

Corresponding Author Email: vlalithanagesh@klh.edu.in

Page: 997-1004 | DOI: https://doi.org/10.18280/ria.370421

Received: 27 May 2023 | Revised: 26 July 2023 | Accepted: 1 August 2023 | Available online: 31 August 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Depression is a common mental disorder. Medical studies show that heart rate is linked to depression, so heart rate can give an early warning of potential depression and help predict its risk; this link supports the diagnosis and treatment of mental health issues. The resting heart rate of a healthy person is 60-85 beats per minute, whereas the heart rate of a depressed person tends to rise above this range, exceeding 85 beats per minute. Studies report that depression can be predicted with about 90% accuracy by analyzing a person's heart rate. Using the Eulerian Video Magnification (EVM) algorithm, heart rate can be calculated from facial video, which removes the need for physical contact with the person. In the proposed research, heart rate is calculated from face videos. A questionnaire of 32 questions useful for depression assessment is formed, and a real-time video dataset is collected while the questionnaire is administered to people of all age groups. Using the EVM algorithm, the facial video is amplified and heart rate is estimated; based on the heart-rate range, the dataset is labeled as depressed or not depressed. Machine learning algorithms, namely Decision Tree (DT), Support Vector Machine (SVM) and Random Forest (RF), classify the dataset with accuracy ranging from 96% to 100%. When this work is compared with related work carried out for the same purpose, it is observed that previously reported accuracies range from 51% to 85.7% to date. This research therefore provides a more accurate model and a new approach to predicting the risk of depression using heart rate estimated from facial videos.

Keywords: 

depression, heart rate, mental health, depression assessment, Eulerian video magnification, video-based heart rate estimation, random forest, support vector machine, decision tree

1. Introduction

Predictive modeling means creating models that predict behavior or future events using statistics and machine learning algorithms. A crucial application area is healthcare, where it can predict the potential risk of a certain condition or disease. Because the prediction comes at an early stage, accurate treatment can be provided and patient health improved [1-3]. In healthcare, predictive modeling can identify patients at risk of developing certain diseases or conditions and predict the outcomes of medical interventions. Predictive modeling is applicable to mental healthcare as well. Depression is one of the psychological disorders that takes control over the brain.

Depression is a persistent feeling of sadness, emptiness, or inability, often without a clear reason. The person may feel under continuous pressure. It is distinct from grief and other emotions, and it can affect children, adolescents and adults, that is, people of all age groups.

Worldwide, depression is the leading cause of disability, according to the World Health Organization (WHO). A WHO survey considers depression a common disorder affecting 3.8% of the population throughout the world, and the economic loss attributed to it is estimated at $1 trillion.

As per medical reports, depression risk can be identified by observing a person's heart rate: in depression, heart rate increases by 10 to 15 beats per minute (bpm). In clinics, the electrocardiogram (ECG) is used to estimate heart rate; an alternative is a smart health band. Both require the patient's cooperation. Heart rate can, however, be calculated without physical contact using facial videos [4]. Hence, in this research paper an effort is made to estimate heart rate from facial videos using the Eulerian Video Magnification (EVM) algorithm, and the estimated heart rate is used to predict the risk of depression [4]. Machine learning (ML) algorithms, namely Decision Tree, Support Vector Machine and Random Forest, are used to predict the risk more accurately, with accuracy varying from 96.42% to 100%.

1.1 Motivation

Depression affects not only the person but also the people around them: family members, relatives, neighbours, colleagues and many others in society. Day-to-day life is disrupted, and above all the risk of depression can bring suicidal thoughts to the person's mind. Depression can undermine a person's relationships, make working and maintaining good health very difficult, and in severe cases may lead to suicide.

As per WHO, suicide rates per 100,000 population are shown in Figure 1. The graph indicates rates of 21.6 to 72.4 per 100,000 among the most affected countries, which is a serious issue.

Figure 1. Top 10 suicidal countries in the world per 100k

If such depression is predicted at an early stage, then steps towards diagnosis can be taken. Hence prediction plays a vital role before diagnosis.

At present, ECG is used to estimate the heart rate of an affected person. ECG needs physical contact and complete cooperation from the person; if the person does not cooperate, ECG may fail and give a wrong estimate. The current study therefore does not need physical contact with the affected person: using the EVM algorithm, heart rate is estimated from facial videos.

1.2 EVM

Eulerian Video Magnification is an algorithm that takes a video as input and visualizes its subtle variations, such as small color changes, by magnifying (enlarging) them. A video is time-series data, which EVM processes. EVM estimates heart rate from the variation in facial skin color caused by blood circulation. The main function of the heart is to push blood to every part of the body, including the brain; on its way to the brain, the blood passes through facial vessels. As the heart rate changes, blood circulation changes, and this changes the color of the facial skin. The change cannot be observed with the naked eye, hence the need for a magnified video, which is the output of the EVM algorithm. After magnification, the color changes caused by the heartbeat become clearly visible.

A person's pulse creates a visual pattern that is difficult to catch with the naked eye. If an ordinary video is magnified, the color change caused by the pulse can be observed. This magnification is achieved using the Eulerian Video Magnification algorithm, which proceeds in the following steps:

(1) Rigid translation:

$\mathrm{I}(\mathrm{x}, \mathrm{t})=\mathrm{f}(\mathrm{x}+\delta(\mathrm{t}))$                    (1)

Consider a translating 1D image with intensity denoted by I(x, t) at position x and time t. Because it is translating, we can express the image's intensities with a displacement function δ(t).

(2) The translation is assumed to be small relative to the image structures.

Under the assumption that the displacement δ(t) is small, we can approximate the first term with a first-order Taylor series expansion about x, as

$\mathrm{I}(\mathrm{x}, \mathrm{t})=\mathrm{f}(\mathrm{x})+\delta(\mathrm{t}) \frac{\partial \mathrm{f}(\mathrm{x})}{\partial \mathrm{x}}$        (2)

(3) The temporally bandpassed signal is amplified.

When the first-order Taylor expansion is valid, we can relate the previous equation to motion magnification:

$\tilde{\mathrm{I}}(\mathrm{x}, \mathrm{t})=\mathrm{I}(\mathrm{x}, \mathrm{t})+(\alpha-1) \mathrm{B}_{\mathrm{t}}[\mathrm{I}(\mathrm{x}, \mathrm{t})]$          (3)

where $\mathrm{B}_{\mathrm{t}}[\cdot]$ denotes temporal bandpass filtering and $\alpha$ is the amplification factor.

The application of this is to extract heart rate from an input video. As the heart beats, blood circulates through the body, including the face, which changes its color slightly, by roughly half a gray level. The change lies in a narrow band of temporal frequencies and is spatially smooth. If we can perceive this change, we can effectively find the heart rate of an individual.
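To make the amplification idea concrete, the following is a minimal sketch of Eqs. (1)-(3) applied to a synthetic 1-D signal, assuming NumPy and SciPy are available; the frame rate, cut-off frequencies and amplification factor are illustrative choices, not the exact values used in this work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Minimal 1-D illustration of Eqs. (1)-(3): a subtle periodic intensity
# change is temporally bandpass-filtered and added back to the signal,
# scaled by (alpha - 1), so the pulse becomes visible.
fs = 30.0                             # video frame rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)          # 10 seconds of samples
pulse_hz = 1.2                        # ~72 bpm pulse, illustrative value
signal = 100 + 0.5 * np.sin(2 * np.pi * pulse_hz * t)   # tiny color variation

# Temporal bandpass around the expected heart-rate band (0.7-3 Hz ~ 42-180 bpm)
b, a = butter(2, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
bandpassed = filtfilt(b, a, signal)   # B_t[I(x, t)] in Eq. (3)

alpha = 50                            # amplification factor, illustrative
magnified = signal + (alpha - 1) * bandpassed            # Eq. (3)
```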

According to medical experts [5, 6], the heart rate of patients suffering from depression is about 10-15 beats per minute higher than that of healthy people [7]. Heart rate variability is a predictive biomarker for depression as well as major depressive disorder [8-10]. There is also an association between heart rate and blood pressure that can be used for the prediction of mental disorders [11]. After calculating heart rate, if it goes beyond the normal range (60-85 beats per minute), the record is labeled as depressed (1); otherwise it is labeled as not depressed (0). This labeled dataset is passed to machine learning algorithms and their performance is observed. A comparative analysis is then performed for the proposed research work.

The rest of the paper is organized as follows: Section 2 covers related work, Section 3 describes the experiments, Section 4 compares this work with previous work, and Section 5 concludes the paper.

2. Related Work

The technological accomplishments of Machine Learning (ML) have paved the way in different application areas. Tasks that were impossible for human beings a few years ago are now possible using ML with a single click or voice command. Using ML algorithms, healthcare can be managed better: the predictions output by ML algorithms make patients' lives easier, and they are equally applicable to mental healthcare. One such application is predicting depression. For decades, research has been carried out to predict the mental state of a person and to predict depression, using multimedia inputs such as image, video, audio and text.

However, most of the existing studies focus on textual data from social media; few consider both text and image data. Depression, as a state of mental health, has been identified using different machine learning algorithms [12], and the application of machine learning to predict depression has been reviewed. The previous literature shows that facial videos of a person are mostly used for emotion recognition, that is, for displaying emotional states [13]. The face is considered the index of the mind, and emotions are read from it [14]. Emotions estimated from facial expressions play a role in non-verbal communication [7, 15]. These authors do not predict any disease using emotions.

Human behavior can also be studied from captured videos to predict diseases such as mental disorders [16, 17], although these authors do not apply the recognized emotions in real-time applications. Face recognition is used for authentication in smart home, smart door unlock and healthcare applications [18]. As facial expressions change, the number of features to be extracted for emotion identification increases, and hence the space and time requirements increase; researchers have therefore reduced dimensionality, which saves space and time.

In addition to the face, text classification can also be used to detect the risk of depression. The text may be written text, language used during speech, or posts on social media. Depression has been predicted using tweets posted during the pandemic period [19]: while the whole world was facing lockdown, people experienced depression for reasons such as loss of health, loss of jobs and relocation, and social media such as Twitter was the medium for expressing their opinions. The text may also contain answers given to questionnaires asked of patients [20].

In further research, deep learning is used for predicting depression from the facial expressions in a video dataset [21], where videos were captured while affected people were watching movies. In some research, facial videos are captured while participants watch different films [22]; heart rate is estimated from these videos and used to classify the person as depressed or healthy [22]. The Facial Action Coding System can be used for predicting depression, anxiety and stress levels [23]. Depression is associated with heart rate variability [24]. In previous work, the BAUM dataset and the EVM algorithm were used to estimate heart rate, and the predicted heart rate was used to assess the risk of depression [24]; the author uses a secondary dataset, estimates heart rate, and uses it as input to predict mental state.

To date, the dynamics of partial or full facial videos have been used for emotion recognition, authentication, and healthcare, with accuracies from 51% to 85.7%. In this research, a more accurate machine learning based model is obtained, reaching 100% accuracy using facial videos and the estimated heart rate.

3. Experiments

3.1 Data

The facial video dataset is a primary, real-time video dataset collected during interviews. For depression assessment, a questionnaire of 32 questions was formed and asked to people of all age groups. Some of them cooperated and answered while the video was captured, but owing to feelings of insecurity some declined video capture. Nearly 400 people were interviewed to form the dataset.

3.2 Methodology

The input to the model is the videos from the primary dataset. The following three modules perform the calculations and obtain the heart rate, which is then used as the feature for classification by the machine learning algorithms.

In the proposed methodology, the modules used are:

(1) Preprocessing module: This module reads the video and uses the Haar cascade function to select the Region of Interest (ROI) from the video. The magnification process is carried out on the selected ROI.

(2) EVM module: This module takes input from the pyramid module, which generates Gaussian/Laplacian pyramids, and then uses the Fast Fourier Transform (FFT) for the temporal bandpass filter.

(3) Heart rate module: This module calculates heart rate from the FFT results. A minimal sketch of the three modules is given after this list.
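The sketch below shows how the three modules could be wired together, assuming OpenCV and NumPy; the function names, pyramid depth and frequency band are illustrative assumptions rather than the exact implementation used in this work.

```python
import cv2
import numpy as np

# Illustrative sketch of the three modules; parameter choices are assumptions.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def select_roi(frame):
    """Preprocessing module: detect the face and return it as the ROI."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]

def gaussian_downsample(roi, levels=3):
    """Pyramid module: build a Gaussian pyramid and keep the coarsest level."""
    layer = roi.astype(np.float32)
    for _ in range(levels):
        layer = cv2.pyrDown(layer)
    return layer

def estimate_heart_rate(mean_green_per_frame, fps):
    """Heart rate module: dominant frequency of the green signal via FFT."""
    signal = np.asarray(mean_green_per_frame, dtype=np.float64)
    signal -= signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 3.0)     # 42-180 bpm band (assumed)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                          # beats per minute
```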

3.3 Architecture

The architecture diagram of proposed model is as shown in Figure 2.

Figure 2. The workflow of the proposed research

The captured video undergoes a three-step process. In the preprocessing step, after removal of unwanted noise, the ROI is selected using the Haar cascade algorithm. Haar cascade algorithms are widely used in computer vision and image processing applications, typically with the OpenCV library, and one of their benefits is speed; here the algorithm is used in preprocessing to select the Region of Interest (ROI). The selected ROI is magnified using EVM by applying spatial filtering, temporal filtering and amplification. The change in skin color caused by the affected blood circulation is used to calculate heart rate [24]; specifically, the green color component of the skin is used, and the result is stored in the database. Out of all videos, 4 videos produced a heart rate above 85 bpm. Based on the heart-rate range, the dataset is labeled as 0 (not depressed) or 1 (depressed), meaning these 4 videos receive label 1 while the remaining receive label 0. This labeled dataset is passed to the ML algorithms Decision Tree, Support Vector Machine (SVM) and Random Forest. The accuracy of SVM is 96% while the accuracy of Decision Tree and Random Forest is 100%. The proposed model is thus trained to classify the dataset into depressed and non-depressed persons.
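As a hedged illustration of this workflow, the sketch below processes one video end to end, reusing the module functions from the previous sketch; the file handling and frame rate are assumptions, and the explicit EVM amplification step is omitted because, for heart-rate estimation alone, the temporal bandpass inside the heart rate module isolates the same pulse band.

```python
# End-to-end sketch for one video, reusing select_roi, gaussian_downsample
# and estimate_heart_rate from the module sketch above. The 85 bpm cut-off
# follows the labeling rule described in the text.
def process_video(path, fps=30.0):
    cap = cv2.VideoCapture(path)
    green_means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = select_roi(frame)
        if roi is None:
            continue
        small = gaussian_downsample(roi)
        green_means.append(small[:, :, 1].mean())   # green channel (BGR index 1)
    cap.release()
    hr = estimate_heart_rate(green_means, fps)
    label = 1 if hr > 85 else 0                      # 1 = depressed, 0 = not
    return hr, label
```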

3.4 Algorithm

Table 1. Algorithm for the proposed model

Input:

Dataset containing videos

Steps:

  1. For each video Vi, i = 1 to n, in the dataset, remove noise.
  2. Select the region of interest (ROI) for each Vi.
  3. Apply spatial filtering, temporal filtering and amplification to each video Vi.
  4. By extracting the green color component of the magnified video Vi, calculate the heart rate (HRi).
  5. If HRi > 85 then label as 1 (depressed); otherwise label as 0 (non-depressed).
  6. Repeat steps 2 to 5 until Vn.
  7. Split the dataset into 70% training and 30% testing.
  8. Apply Decision Tree, SVM and Random Forest to the above dataset (a scikit-learn sketch follows this table).
  9. Evaluate each algorithm using performance metrics such as accuracy, precision and recall, and errors such as Mean Absolute Error (MAE), Mean Squared Error (MSE) and Root Mean Squared Error (RMSE).
  10. Plot the confusion matrix and classification report.

Output:

Classification as depressed or non-depressed
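A minimal scikit-learn sketch of steps 7-10 is given below; the heart-rate values and labels are placeholders standing in for the output of the video pipeline, and the model settings are illustrative defaults, not necessarily those used in the reported experiments.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, mean_absolute_error,
                             mean_squared_error)

# X: estimated heart rate per video (one feature); y: labels from the 85 bpm
# rule. The values below are placeholders for the output of the video pipeline.
X = np.array([[72.0], [80.5], [91.2], [78.3], [88.7], [69.4], [95.1], [74.8]])
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)   # 70% / 30% split

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(name, "accuracy:", accuracy_score(y_test, pred))
    print(confusion_matrix(y_test, pred))
    print(classification_report(y_test, pred, zero_division=0))
    print("MAE:", mean_absolute_error(y_test, pred),
          "MSE:", mse, "RMSE:", np.sqrt(mse))
```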

3.5 Experimental results

Figure 3. Sample heart rate for some videos from dataset

Using the algorithm stated in Table 1, the heart rate is calculated for each video; the resulting plot is shown in Figure 3.

Green indicates that the heart rate is in the normal range, while red indicates a heart rate crossing 85 bpm. Figure 4 shows a dendrogram plotted from the obtained heart rates.

Figure 4. Dendrogram based on heart rate

Figure 5. Depression label based on heart rate

The obtained heart-rate dataset is labeled 0 or 1 based on range, as shown in Figure 5: label 1 is given if the heart rate is above 85 bpm, indicating depressed; otherwise label 0 is given, indicating not depressed.

The labeled dataset is split into 70% for training and 30% for testing. The ML algorithms used for classification are Decision Tree, SVM and Random Forest. The performance of these algorithms is evaluated using the metrics shown in Table 2.

Using the metrics given in Table 2, the performance of all three algorithms is shown in Figure 6, which also includes the confusion matrix and classification report. Part 1 of the figure shows the results of the Decision Tree, Part 2 the results of SVM, and Part 3 the results of Random Forest.

Table 2. Performance metrics

Sr. No. | Metric | Formula
1 | Accuracy | (TP + TN) / (TP + TN + FP + FN)
2 | Precision | TP / (TP + FP)
3 | Recall | TP / (TP + FN)
4 | F1-score | (2 × Precision × Recall) / (Precision + Recall)

TP: True Positive, TN: True Negative, FP: False Positive, FN: False Negative. (A small helper computing these formulas from raw counts is given after the table.)
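For reference, the helper below computes the Table 2 formulas directly from raw confusion-matrix counts; the example counts are illustrative only, not the experimental confusion matrix.

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the Table 2 metrics from raw confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Illustrative counts only.
print(classification_metrics(tp=3, tn=24, fp=0, fn=1))
```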

Figure 6. Performance of machine learning algorithms

4. Comparative Study

When the achieved results are compared with related work carried out previously for the same problem, it is observed that prior accuracies range from 51% to 85.7%, while the accuracy obtained by the proposed model ranges from 96.42% to 100%. The comparison can be observed in Table 3. The features used for classification differ: some studies use the full face while others use partial areas such as the eyes, and facial dynamics are also observed for emotion recognition. The machine learning algorithms used include SVM, Logistic Regression, K-Means, K-Nearest Neighbor and Neural Networks.

Each of these authors used a single algorithm for predicting depression, and they used available secondary datasets.

Alghowinem et al. [25] used two secondary datasets, Pittsburgh and AVEC'14, with the eyes as well as the full face as ROI; they applied the SVM algorithm and obtained an accuracy of 85.7%. Zhou et al. [26] applied Logistic Regression on the Rochester dataset with the eyes as ROI and achieved an accuracy of 75.6%. Maddage et al. [27] and Ooi et al. [28, 29] used the ORI and ORYGEN datasets with accuracies of 75.6% and 51%, respectively. Yang et al. [30] used the CHI-MEI audio dataset and obtained accuracies ranging from 53.85% to 65.38%. Hasani et al. [31], Joshi et al. [32], Saad et al. [33] and Shahar and Hel-Or [34] applied machine learning and deep learning algorithms and achieved accuracies of 68.5% to 80%. The proposed model is instead built on a primary dataset, uses three machine learning algorithms, and obtains accuracies from 96.42% to 100%.

Table 3. Comparative study

Dataset | Author [Year] | Inputs Used | Classification Algorithm | Reported Accuracy (Precision)
Pittsburgh + AVEC'14 | Alghowinem et al. [25] | Eyes, full face | SVM | 85.7%
Rochester | Zhou et al. [26] | Full face, eyes | LR | 82%
ORI | Maddage et al. [27] | Full face | GMM | 75.6%
ORYGEN | Ooi et al. [28, 29] | Full face | KNN | 51%
CHI-MEI | Yang et al. [30] | AUs, audio | CHMM | 65.38%
CHI-MEI | Yang et al. [30] | AUs | HMM | 53.85%
AffectNet | Hasani et al. [31] | Facial expression recognition | Deep learning | 68.5%
FER2013 | Hasani et al. [31] | Facial expression recognition | Deep learning | 71.53%
Black Dog | Joshi et al. [32] | Facial dynamics | K-Means | 76.7%
NSRR and MASS | Saad et al. [33] | Polysomnograms (a type of sleep study) | Heart rate profiling algorithm based on machine learning | 79.9%
CASME | Shahar and Hel-Or [34] | Color change during micro emotion expression | LSTM NN | 80.0%
This research (real-time dataset) | 2022 | HR: color change in facial videos | Decision Tree | 100%
This research (real-time dataset) | 2022 | HR: color change in facial videos | SVM | 96.42%
This research (real-time dataset) | 2022 | HR: color change in facial videos | Random Forest | 100%

The accuracies obtained using the different datasets in the comparative study are plotted in Figure 7.

Figure 7. Comparative analysis of different datasets and proposed system

5. Conclusion and Future Work

Medical studies have identified a link between depression and heart rate, and this link helps in the diagnosis of mental health issues. Clinically, heart rate is measured using an ECG, which needs complete cooperation from the patient; hence in this study heart rate is calculated without physical contact. In the proposed research, a primary dataset is developed and used: a questionnaire required for depression assessment was formed and used during face-to-face interviews with persons suspected to be at risk of depression, and the video captured during the interview is used as input for prediction. The captured video is magnified using the EVM algorithm and the heart rate is calculated. The heart rate acts as a useful biomarker to predict the risk of depression. The prediction accuracy ranges from 96.42% to 100%. Hence the proposed model is the most accurate among the compared approaches and can be used in healthcare applications for monitoring an individual's mental health.

In future, the work can be extended to a larger dataset covering a larger population, such as college students, and to more diverse populations in order to assess its generalizability. Recent algorithms such as deep learning can be applied to the larger dataset; to apply deep learning to a larger video dataset, the model needs to be implemented on a GPU.

Overall, the proposed method has the potential to contribute to the development of more accurate and efficient tools for predicting the risk of depression using heart rate as a biomarker.

6. Declarations

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors. The authors used a primary dataset collected by themselves.

Competing interests

Author A declares that she has no conflict of interest. Author B declares that she has no conflict of interest.

Authors' contributions

The first author is a research scholar in the CSE department of Koneru Lakshmaiah Education Foundation, pursuing research under the guidance of Dr. Lalitha Surya Kumari.

The manuscript was written by the first author after performing the experiments.

The guide suggested changes to improve the quality of the research.

Funding

The authors did not receive support from any organization for the submitted work. No funding was received to assist with the preparation of this manuscript. No funding was received for conducting this study.

Availability of data and materials

Dataset will be made available upon request.

References

[1] Liang, L., Liu, M., Martin, C., Sun, W. (2018). A deep learning approach to estimate stress distribution: A fast and accurate surrogate of finite-element analysis. Journal of The Royal Society Interface, 15(138): 20170844. https://doi.org/10.1098/rsif.2017.0844

[2] Jaques, N., Taylor, S., Sano, A., Picard, R. (2017). Predicting tomorrow’s mood, health, and stress level using personalized multitask learning and domain adaptation. In IJCAI 2017 Workshop on Artificial Intelligence in Affective Computing. PMLR, pp. 17-33.

[3] Sumathi, V., Velmurugan, R., Sudarvel, J., Sathiyabama, P. (2021). Intelligent classification of women working in ict based education. In 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS). IEEE, 1: 1711-1715. https://doi.org/10.1109/ICACCS51430.2021.9441803

[4] Vaishali, P., Lalitha Surya Kumari, P. (2022). Visual heart rate-A key biomarker to diagnose depressive disorder. In Communication, Software and Networks: Proceedings of INDIA 2022. Singapore: Springer Nature Singapore, pp. 555-563. https://doi.org/10.1007/978-981-19-4990-6_51

[5] https://www.medicalnewstoday.com/articles/heart-rate-could-predict-depression-risk#A-link-confirmed

[6] Kircanski, K., Williams, L.M., Gotlib, I.H. (2019). Heart rate variability as a biomarker of anxious depression response to antidepressant medication. Depression and Anxiety, 36(1): 63-71. https://doi.org/10.1002/da.22843

[7] Gavrilescu, M., Vizireanu, N. (2019). Predicting depression, anxiety, and stress levels from videos using the facial action coding system. Sensors, 19(17): 3693. https://doi.org/10.3390/s19173693

[8] Hartmann, R., Schmidt, F.M., Sander, C., Hegerl, U. (2019). Heart rate variability as indicator of clinical state in depression. Frontiers in Psychiatry, 9: 735. https://doi.org/10.3389/fpsyt.2018.00735

[9] Choi, K.W., Jeon, H.J. (2020). Heart rate variability for the prediction of treatment response in major depressive disorder. Frontiers in Psychiatry, 11: 607. https://doi.org/10.3389/fpsyt.2020.00607

[10] Sun, G., Shinba, T., Kirimoto, T., Matsui, T. (2016). An objective screening method for major depressive disorder using logistic regression analysis of heart rate variability data obtained in a mental task paradigm. Frontiers in Psychiatry, 7: 180. https://doi.org/10.3389/fpsyt.2016.00180

[11] Latvala, A., Kuja-Halkola, R., Rück, C., D’Onofrio, B.M., Jernberg, T., Almqvist, C., Mataix-Cols, D., Larsson, H., Lichtenstein, P. (2016). Association of resting heart rate and blood pressure in late adolescence with subsequent mental disorders: A longitudinal population study of more than 1 million men in Sweden. Jama Psychiatry, 73(12): 1268-1275. https://doi.org/10.1001/jamapsychiatry.2016.2717

[12] Narayanrao, P.V., Kumari, P.L.S. (2020). Analysis of machine learning algorithms for predicting depression. In 2020 International Conference on Computer Science, Engineering and Applications (Iccsea). IEEE, pp. 1-4. https://doi.org/10.1109/ICCSEA49143.2020.9132963

[13] Gaikwad Kiran, P., Sheela Rani, C.M. (2019). Comparative analysis of emotion states based on facial expression modality. Journal of Advanced Research in Dynamical and Control Systems, 11(2): 1365-1372.

[14] Durga, B.K., Rajesh, V. (2018). Review of facial emotion recognition system. International Journal of Pharmaceutical Research, 10(4): 260-266.

[15] Satyanarayana, P., Madhar Khan, P., Junez Riyaz, S. (2019). Human emotion detection based on facial expression using convolution neural network. International Journal of Research in Technology, 7(6).

[16] Durga, B.K., Rajesh, V. (2019). In illumination variations facial emotion recognisation by using local tenary patterns. International Journal of Advanced Science and Technology, 28(8): 197-205.

[17] Videla, L.S., Rao, M.N., Anand, D., Vankayalapati, H.D., Razia, S. (2019). Deformable facial fitting using active appearance model for emotion recognition. In Smart Intelligent Computing and Applications: Proceedings of the Second International Conference on SCI 2018. Springer Singapore, 1: 135-144. https://doi.org/10.1007/978-981-13-1921-1_13

[18] Gaikwad, K.P., Sheela Rani, C.M., Mahajan, S.B., Sanjeevikumar, P. (2018). Dimensionality reduction of facial features to recognize emotion state. Advances in Systems, Control and Automation: ETAEERE-2016, 719-725. https://doi.org/10.1007/978-981-10-4762-6_69

[19] Vaishali, P., Kumari, P.L.S. (2021). Ensemble learning based classifier to predict depression caused due to pandemic. In Journal of Physics: Conference Series. IOP Publishing, 2089(1): 012026. https://doi.org/10.1088/1742-6596/2089/1/012026

[20] Narayanrao, P.V., Kumari, P. (2023). Regularized CNN based model for analyzing, predicting depression and handling overfitting. Ingénierie des Systèmes d'Information, 28(1): 247-254. https://doi.org/10.18280/isi.280129

[21] De Melo, W.C., Granger, E., Hadid, A. (2019). Depression detection based on deep distribution learning. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 4544-4548. https://doi.org/10.1109/ICIP.2019.8803467

[22] Mustafa, A., Bhatia, S., Hayat, M., Goecke, R. (2017). Heart rate estimation from facial videos for depression analysis. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, pp. 498-503. https://doi.org/10.1109/ACII.2017.8273645

[23] Carney, R.M., Blumenthal, J.A., Stein, P.K., Watkins, L., Catellier, D., Berkman, L.F., Czajkowski, S.M., O’Connor, C., Stone, P.H., Freedland, K.E. (2001). Depression, heart rate variability, and acute myocardial infarction. Circulation, 104(17): 2024-2028. https://doi.org/10.1161/hc4201.097834

[24] Narayanrao, M.P.V. (2021). Estimation of visual heart rate to predict depression. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(11): 6050-6055.

[25] Alghowinem, S., Goecke, R., Cohn, J.F., Wagner, M., Parker, G., Breakspear, M. (2015). Cross-cultural detection of depression from nonverbal behaviour. In 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 1: 1-8. https://doi.org/10.1109/FG.2015.7163113

[26] Zhou, D., Luo, J., Silenzio, V., Zhou, Y., Hu, J., Currier, G., Kautz, H. (2015). Tackling mental health by integrating unobtrusive multimodal sensing. In Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9381

[27] Maddage, N.C., Senaratne, R., Low, L.S.A., Lech, M., Allen, N. (2009). Video-based detection of the clinical depression in adolescents. In 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3723-3726. https://doi.org/10.1109/IEMBS.2009.5334815

[28] Ooi, K.E.B., Low, L.S.A., Lech, M., Allen, N. (2011). Prediction of clinical depression in adolescents using facial image analysis. In WIAMIS 2011: 12th International Workshop on Image Analysis for Multimedia Interactive Services, Delft, The Netherlands, April 13-15, 2011. TU Delft; EWI; MM; PRB.

[29] Ooi, K.E.B. (2014). Early prediction of clinical depression in adolescents using single-channel and multi-channel classification approach (Doctoral dissertation, RMIT University).

[30] Yang, T.H., Wu, C.H., Huang, K.Y., Su, M.H. (2017). Coupled HMM-based multimodal fusion for mood disorder detection through elicited audio-visual signals. Journal of Ambient Intelligence and Humanized Computing, 8: 895-906. https://doi.org/10.1007/s12652-016-0395-y

[31] Hasani, B., Negi, P.S., Mahoor, M.H. (2020). BReG-NeXt: Facial affect computing using adaptive residual networks with bounded gradient. IEEE Transactions on Affective Computing, 13(2): 1023-1036. https://doi.org/10.1109/TAFFC.2020.2986440

[32] Joshi, J., Goecke, R., Parker, G., Breakspear, M. (2013). Can body expressions contribute to automatic depression analysis?. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1-7. https://doi.org/10.1109/FG.2013.6553796

[33] Saad, M., Ray, L.B., Bujaki, B., Parvaresh, A., Palamarchuk, I., De Koninck, J., Douglass, A., Lee, E.K., Soucy, L.J., Fogel, S., Morin, C.M., Bastien, C., Merali, Z., Robillard, R. (2019). Using heart rate profiles during sleep as a biomarker of depression. BMC Psychiatry, 19: 1-11. https://doi.org/10.1186/s12888-019-2152-1

[34] Shahar, H., Hel-Or, H. (2019). Micro expression classification using facial color and deep learning methods. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. https://doi.org/10.1109/ICCVW.2019.00207