Advancing Cephalometric Soft-Tissue Landmark Detection: An Integrated AdaBoost Learning Approach Incorporating Haar-Like and Spatial Features

Said Elaiwat* Mohammad Azad Mohammad Khursheed Alam Marwan Abo-zanona Bassam Elzaghmouri Hani Omar

Faculty of Information Technology, Applied Science Private University, Amman 11937, Jordan

Department of Computer Science, Jouf University, Sakaka 72388, Saudi Arabia

Department of Preventive Dental Science, Jouf University, Sakaka 72388, Saudi Arabia

Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia

Department of Computer Science, Faculty of Computer Science and Information Technology, Jerash University, Jerash 26150, Jordan

Department of Computer Science, Zarqa University, Zarqa 13110, Jordan

Corresponding Author Email: S_elaiwat@asu.edu.jo

Page: 2879-2886 | DOI: https://doi.org/10.18280/ts.400649

Received: 6 April 2023 | Revised: 20 July 2023 | Accepted: 19 September 2023 | Available online: 30 December 2023
© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The detection of cephalometric landmarks in radiographic imagery is pivotal to an extensive array of medical applications, notably within orthodontics and maxillofacial surgery. Manual annotation of these landmarks, however, is not only labour-intensive but also subject to potential inaccuracies. To address these challenges, we propose a robust, fully automated method for detecting soft-tissue landmarks. This method effectively integrates two disparate types of descriptors: Haar-like features, which are primarily employed to capture local edges and lines, and spatial features, designed to encapsulate the spatial information of landmarks. The integration of these descriptors facilitates the construction of a potent classifier using the AdaBoost technique. To validate the efficacy of the proposed method, a novel dataset for the task of soft-tissue landmark detection is introduced, accompanied by two distinct evaluation protocols. The first protocol quantifies the detection rate within a fixed radial distance of the reference location, reported alongside the Mean Radial Error (MRE), while the second protocol measures the detection rate within a predefined confidence region R. The conducted experiments demonstrated the proposed method's superiority over existing state-of-the-art techniques, yielding average detection rates of 76.7% and 94% within a 2mm radial distance and within the confidence region R, respectively. This study's findings underscore the potential of this approach in enhancing the accuracy and efficiency of cephalometric landmark detection.

Keywords: 

medical image analysis, landmark detection, Haar-like features, AdaBoost feature selection, cascade classifier

1. Introduction

Cephalometric analysis has attracted considerable attention from dentists, orthodontists, and oral and maxillofacial surgeons in recent decades, providing crucial insights into patients' bony, dental, and soft tissue structures. This analysis is routinely used as a diagnostic tool for an array of conditions, such as obstructive sleep apnea [1] and mandible/lower jaw diagnosis [2].

Clinically, cephalometric analysis typically necessitates the manual marking of all anatomical landmarks on a 2D cephalometric X-ray image, followed by the calculation of pertinent linear and angular measurements using instruments such as protractors. However, this manual process can be laborious, time-consuming, and susceptible to both random and systematic errors. The advancements in imaging technology and computer vision techniques provide an opportunity to circumvent these limitations by replacing traditional marking practices with automated ones.

Existing approaches to cephalometric landmark detection can be broadly classified into two categories: Knowledge-based and AI-based approaches [3]. The former leverages pre-existing models to construct a knowledge base, thereby facilitating the resolution of complex problems. The simplest technique in this category is edge-based detection, where pre-defined edges are utilized to ascertain the location of the landmarks. For instance, Liu et al. [4] unveiled a method to detect 13 cephalometric landmarks using an edge-based technique, which was tested on a modest set of 10 cephalograms and compared to manual identification techniques.

Several alternative methods have been proposed in the same vein, including studies [5, 6]. However, these edge detection-based approaches generally assume that all cephalometric landmarks can be located on, or around, the edges, an assumption that does not hold for all types of cephalometric landmarks. Conversely, other studies have applied active model-based approaches, such as the active shape model (ASM) [7] and the active appearance model (AAM) [8].

In the AI-based category, various techniques have been employed to detect cephalometric landmarks, including Random Forest (RF) [9] and Support Vector Machine (SVM) [6]. In light of the remarkable success of machine learning in computer vision [10-12] and medical image analysis [13-15], researchers have begun to apply deep learning models for cephalometric landmark detection. For instance, Arık et al. [16] introduced a deep learning model for cephalometric landmark detection using convolutional neural networks, which was capable of detecting 19 landmarks with an average detection rate of 76%. However, due to the small size of the training data (only 150 images for training and 250 images for testing), the accuracy was somewhat constrained.

Despite significant strides in cephalometric landmark detection, the majority of efforts are concentrated on hard-tissue cephalometric analysis. In contrast, effective orthognathic surgery requires the examination of both hard and soft tissue cephalometric data. Furthermore, no public dataset currently exists for the analysis of soft-tissue landmarks. To bridge this gap, we introduce an efficient AdaBoost-based method for soft-tissue landmark detection in this paper. The proposed method amalgamates several desirable properties, such as simplicity (by using the AdaBoost model to construct a potent classifier), efficiency (through the use of two different types of features, namely, Haar-like and spatial features), and low computational cost (by developing a cascade classifier and confining the search to candidate regions). Additionally, we introduce a novel dataset dedicated to the analysis of soft-tissue cephalometric landmarks.

The remainder of this paper is structured as follows: Section 2 offers an overview of our proposed dataset. Section 3 elaborates on the proposed method for soft-tissue landmark detection. Experimental results are presented in Section 4. Finally, the paper concludes with Section 5.

Figure 1. Samples from the dataset along with annotated landmarks

2. Soft Tissue Cephalometric Dataset

In this section, we present our proposed soft tissue cephalometric dataset. Unlike existing datasets, which mainly focus on hard-tissue landmarks, our dataset is dedicated to analyzing the landmarks of soft tissue.

The dataset contains 252 cephalometric X-ray images collected from male and female patients aged 18 to 28 years. These images were captured by a Cranex D digital X-ray unit, version 3 (Soredex Co., Tuusula, Finland), at a resolution of 2880 × 2304 pixels, and stored in PNG format. The annotation process covers 11 soft-tissue landmarks manually marked in each X-ray image and reviewed by medical experts in the field of dentistry. Initially, the dataset was annotated by four examiners with different degrees of expertise, two juniors and two experts. The annotated dataset was then revised by a medical expert (orthodontist) with 20 years' experience in the field of dentistry. Figure 1 shows samples from the dataset along with annotated landmarks.

3. Proposed Method

3.1 Overview

In this section, we describe our proposed method for soft-tissue landmark detection in 2D lateral cephalograms. Figure 2 shows the pipeline of our proposed method, which consists of two phases.

Figure 2. Pipeline of our proposed method

Training phase: an AdaBoost classifier is trained to build a strong classifier from the weak classifiers generated from the landmark local descriptor. The resulting classifier is then restructured to form a cascade classifier, which reduces the computational time dramatically. Note that the AdaBoost model is a binary classifier, which means that a separate classifier is needed for each landmark. To achieve this, training images are first enhanced and then patched, according to the examined landmark j (1≤j≤11), into two groups. The first group (+ve patches) contains patches centered on landmark j, while the second group (-ve patches) contains randomly defined patches that do not include landmark j. After that, landmark local descriptors are computed from both groups, resulting in +ve descriptors and -ve descriptors. Finally, these descriptors are fed to the AdaBoost model for (cascade) training, generating a trained model for landmark j.

Testing phase: in this phase, each trained model is used to detect the location of a certain landmark. To detect all landmarks, image enhancement is first applied to the input image, followed by image patching. For each patch p in the input image, the landmark local descriptor dp is computed and classified by the N trained classifiers. Each classifier j decides whether the examined descriptor dp represents landmark j or not.
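The following is a minimal sketch of this detection loop. The helpers passed in (enhance, iter_patches, local_descriptor) and the per-landmark cascade models are illustrative assumptions, not the paper's exact implementation:

```python
def detect_landmarks(image, models, enhance, iter_patches, local_descriptor):
    """Return a dict mapping landmark index j -> (x, y) of an accepted patch.

    models: one binary classifier (callable on a descriptor) per landmark.
    enhance / iter_patches / local_descriptor: hypothetical helpers for the
    AGC pre-processing (Sec. 3.2), patch generation, and descriptor (Sec. 3.3).
    """
    enhanced = enhance(image)                        # image enhancement first
    detections = {}
    for (x, y), patch in iter_patches(enhanced):     # candidate patches
        d = local_descriptor(patch, x, y)            # Haar-like + spatial features
        for j, model in enumerate(models, start=1):  # one classifier per landmark
            if model(d):                             # classifier j accepts d?
                detections.setdefault(j, (x, y))
    return detections
```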

The next subsections explain the proposed method in detail.

3.2 X-ray image pre-processing

In general, X-ray images are characterized by low intensity, high noise, poor contrast and weak representation of boundaries, especially for soft tissues, which can dramatically affect the information in the image. In this work, we apply the Adaptive Gamma Correction method [17] to enhance the quality of X-ray images. This method modifies the value of the V component of the HSV color model by applying Adaptive Gamma Correction (AGC) with a Weighting Distribution (WD). This is done by first defining a weighted cumulative distribution function from the cumulative distribution function using the weighted distribution model, and then modifying the gamma parameter based on the weighted cumulative distribution function.

Mathematically, the weighted distribution model (wd) is defined as follows:

$wd(i)=pdf_{\max}\left(\frac{pdf(i)-pdf_{\min}}{pdf_{\max}-pdf_{\min}}\right)^\alpha$      (1)

where, pdf(i) represents the probability density function at intensity i, while pdfmin and pdfmax represent the minimum and maximum pdf of the statistical histogram, respectively. The parameter α is used to adjust the distribution of wd. pdf(i) can be formulated as pdf(i)=ni/N, where ni and N are the number of pixels having intensity value i and the total number of pixels in the image, respectively.

Next, the weighted cumulative distribution function (cdfw) is defined as follows:

$cdf_w(i)=\sum_{k=0}^{i}\frac{wd(k)}{\sum wd}$      (2)

where, $\sum wd$ represents the sum of the weighted distribution over all intensity levels (from i=0 to imax). Finally, the value of the adaptive gamma γ is derived from cdfw and used to transform pixel intensities via T(i). Both γ and T(i) are expressed by:

$\begin{gathered}\gamma=1-cdf_w(i) \\ T(i)=i_{\max}\left(\frac{i}{i_{\max}}\right)^\gamma\end{gathered}$      (3)
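As a concrete illustration, below is a minimal NumPy sketch of Eqs. (1)-(3) applied to an 8-bit channel. The choice α=0.5 is an assumption, as the paper does not state the value used:

```python
import numpy as np

def agc_wd(v, alpha=0.5):
    """Adaptive Gamma Correction with Weighting Distribution (Eqs. 1-3).

    v: 2D uint8 array (e.g., the V channel of HSV, or a grayscale X-ray).
    alpha: weighting exponent; 0.5 is an illustrative assumption.
    """
    i_max = 255
    hist = np.bincount(v.ravel(), minlength=i_max + 1)
    pdf = hist / v.size                                    # pdf(i) = n_i / N
    wd = pdf.max() * ((pdf - pdf.min()) /
                      (pdf.max() - pdf.min())) ** alpha    # Eq. (1)
    cdf_w = np.cumsum(wd) / wd.sum()                       # Eq. (2)
    gamma = 1.0 - cdf_w                                    # Eq. (3), per intensity
    lut = (i_max * (np.arange(i_max + 1) / i_max) ** gamma).astype(np.uint8)
    return lut[v]                                          # apply T(i) as a LUT
```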

3.3 Landmark local descriptor

The local descriptors of each landmark include two different types of features. The first is the Haar-like feature, dedicated to capturing local edges and lines within a patch. The second (the spatial feature) is dedicated to representing the location of the landmark.

3.3.1 Haar-like features

Haar-like features have been widely applied in many object detection problems [18]. These features are characterized by simplicity and the ability to represent edges and lines effectively. Note that all soft-tissue landmarks are mainly located on the edges of soft tissues, making this type of feature ideal for representing the landmarks.

Figure 3. Five Haar-like templates. The background of each template is shown in gray, while the white and black rectangles are used to calculate the corresponding feature in the patch

In our method, five different templates are used to build Haar-like features, as illustrated in Figure 3. The value of a Haar-like feature within a template is defined by calculating the difference between the sum of the pixels within the white rectangles and the sum of the pixels within the black rectangles.

In practice, computing Haar-like features directly from the image may lead to a high computational cost. However, using the integral image instead of the standard image allows these features to be computed at a low computational cost. The integral image II at location (x, y) is defined as the sum of the pixels above and to the left of position (x, y), as follows:

$I I(x, y)=\sum_{x^{\prime}=1}^x \sum_{y^{\prime}=1}^y I\left(x^{\prime}, y^{\prime}\right)$      (4)
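A minimal sketch of the integral image (Eq. 4) and the resulting O(1) rectangle sum, applied here to a hypothetical two-rectangle (edge) template; the 24 × 24 patch size is illustrative:

```python
import numpy as np

def integral_image(img):
    """II(x, y): sum of all pixels above and to the left of (x, y), Eq. (4)."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using four lookups on the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

# Example: a two-rectangle (left/right) Haar-like feature on a patch:
# value = sum(white half) - sum(black half).
patch = np.random.randint(0, 256, (24, 24))
ii = integral_image(patch)
left = rect_sum(ii, 0, 0, 24, 12)
right = rect_sum(ii, 0, 12, 24, 24)
haar_edge_feature = left - right
```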

3.3.2 Spatial features

Spatial features are dedicated to representing the location of landmarks with respect to the image size. Despite their simplicity, these features provide valuable information that helps to discriminate landmarks from each other. For example, the Tri landmark is located in the upper part of the face (forehead) while the Me landmark is located in the lower part of the face (chin). The spatial feature is formulated as the (x, y) location of the landmark, normalized according to the size of the image as follows:

$\begin{aligned} x^{\prime} &= \frac{x}{w} \times 100 \\ y^{\prime} &= \frac{y}{h} \times 100\end{aligned}$      (5)

where, w and h are image width and height, respectively.
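A minimal sketch of Eq. (5); the example coordinates are illustrative:

```python
def spatial_feature(x, y, w, h):
    """Normalized landmark location (Eq. 5), as a percentage of image size."""
    return (x / w * 100.0, y / h * 100.0)

# e.g., a landmark at (1440, 600) in a 2880 x 2304 image -> (50.0, ~26.0)
print(spatial_feature(1440, 600, 2880, 2304))
```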

3.4 Feature selection and classification

To build a strong classifier, the AdaBoost model [19] is used to select effective weak classifiers and combine them to form our strong classifier. Each weak classifier is constructed from a single feature with trained threshold values. Given a training set, the threshold values of a weak classifier are trained to minimize the number of misclassified samples. More specifically, two threshold values are used in each classifier, representing the boundaries between the positive and negative classes, as shown in Figure 4. Mathematically, the output of each classifier can be represented as:

$h_i(x)=\left\{\begin{array}{l}1 \text { if } \theta_{i l} \leq f_i(x) \leq \theta_{i u} \\ -1 \quad \text { otherwise }\end{array}\right.$      (6)

where, hi(x) represents the output of classifier hi at sample x and can be either 1 for the positive class or −1 for the negative class. fi(x) denotes the feature value of sample x used by classifier hi, while θil and θiu denote the lower and upper bound thresholds, respectively.
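To make Eq. (6) concrete, here is a minimal NumPy sketch of the two-threshold weak classifier, together with a brute-force threshold search; the search strategy is an assumption, as the paper does not detail its exact training routine:

```python
import numpy as np

def weak_classify(f, theta_l, theta_u):
    """Eq. (6): output +1 if the feature value lies in [theta_l, theta_u], else -1."""
    return np.where((theta_l <= f) & (f <= theta_u), 1, -1)

def fit_thresholds(f, labels):
    """Brute-force search for the threshold pair minimizing misclassifications.

    f: 1D array of one feature's values over the training set.
    labels: corresponding +1 / -1 ground-truth labels.
    """
    candidates = np.unique(f)
    best, best_err = (candidates[0], candidates[-1]), np.inf
    for lo in candidates:
        for hi in candidates[candidates >= lo]:
            err = np.count_nonzero(weak_classify(f, lo, hi) != labels)
            if err < best_err:
                best, best_err = (lo, hi), err
    return best
```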

Starting with a weak classifier (one that is only better than a random guess), the AdaBoost model boosts the current classifier by selecting and adding further weak classifiers so as to decrease the error rate. Although the boosting process selects only the effective features, many features are still used to build a strong classifier, and defining and processing all of these features during the testing phase may lead to a high computational cost. To overcome this issue, all selected classifiers are restructured to form a cascade classifier, a decision tree that allows unpromising (non-landmark) regions to be rejected at early stages. Thus, only promising regions receive further processing through the later stages. Figure 5 gives a schematic illustration of the cascade classifier.
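A minimal sketch of such cascade evaluation; the stage structure (per-stage weak classifiers, weights and a rejection threshold) follows the usual boosted-cascade scheme and is an assumption about implementation details:

```python
def cascade_predict(descriptor, stages):
    """Evaluate a cascade classifier on one descriptor.

    stages: list of (classifiers, alphas, threshold) tuples, where each
    classifier is a callable returning +1 / -1 and alphas are its AdaBoost
    weights. A patch is rejected as soon as any stage's weighted vote falls
    below its threshold, so only promising patches reach the later stages.
    """
    for classifiers, alphas, threshold in stages:
        score = sum(a * h(descriptor) for h, a in zip(classifiers, alphas))
        if score < threshold:
            return False   # rejected early: non-landmark region
    return True            # accepted by all stages
```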

Figure 4. Boundaries between positive and negative classes through two thresholds

Figure 5. Schematic illustration of cascade classifier

3.5 Optimizing landmarks detection

In this section, we propose a procedure to improve both the speed of landmark detection and the detection rate. Unlike hard-tissue landmarks, all soft-tissue landmarks are located on the outer boundaries (edges) of the soft tissues, as shown in Figure 1. This fact allows us to limit the search to the area around those soft-tissue edges and exclude other regions. In practice, precisely defining soft-tissue edges is not an easy task. For simplicity, we instead search for candidate regions that may include all possible soft-tissue edges. Even if other regions, such as parts of hard-tissue edges, are included, this is still much better than considering the whole image. To define the candidate regions, three main steps are applied to the input image (Figure 6(a)):

1. Build a mask for soft-tissue regions (Figure 6 (b) to (d)).

2. Define Canny edges [20] (Figure 6 (e)).

3. Apply the mask on the Canny edges to get the candidate regions (Figure 6 (f) and (g)).

The mask of candidate regions is defined by first applying hard binarization to detect hard-tissue regions (Figure 6(b)). The resulting mask is then inverted to exclude hard-tissue regions (Figure 6(c)). Finally, unwanted regions are excluded by resetting any region horizontally located before the hard-tissue region (Figure 6(d)). A sketch of this schema is given below.
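The following OpenCV sketch follows the three steps above; the binarization and Canny threshold values are illustrative assumptions, as is the reading of "before" as "to the left of the first hard-tissue pixel in each row":

```python
import cv2
import numpy as np

def candidate_regions(gray):
    """Sketch of the candidate-region schema (Figure 6); thresholds are assumed."""
    # Step 1: mask for soft-tissue regions. Hard binarization picks up the
    # bright hard-tissue areas (6(b)), which are then inverted (6(c)).
    _, hard = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_not(hard)
    # Reset, in each row, everything before the first hard-tissue pixel (6(d)).
    for r in range(mask.shape[0]):
        cols = np.flatnonzero(hard[r])
        if cols.size:
            mask[r, :cols[0]] = 0
    # Step 2: Canny edges (6(e)).
    edges = cv2.Canny(gray, 50, 150)
    # Step 3: apply the mask on the edges to get candidate regions (6(f)-(g)).
    return cv2.bitwise_and(edges, mask)
```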

Figure 6. Proposed schema to detect candidate regions. (a): the input image. (b): the hard binarization. (c): the inverse of (b). (d): excluding regions horizontally located before the hard tissue. (e): Canny edges. (f): applying the mask (d) on the Canny edges (e). (g): candidate regions presented on the image (green)

4. Experimental Results

4.1 Evaluation protocol

Our landmark detection method has been evaluated through 10-fold cross-validation, which is widely used in machine-learning-based methods [10, 21]. The dataset is divided into 10 folds, F1 to F10, where one fold (the outer fold) is used for the testing phase while the rest are used for the training phase. The whole dataset is evaluated by repeating this process with a different outer-fold selection each time, covering all possible options.
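A minimal scikit-learn sketch of this protocol; the shuffling seed is an assumption:

```python
import numpy as np
from sklearn.model_selection import KFold

# Each fold serves once as the (outer) test set while the remaining nine
# folds are used for training.
images = np.arange(252)  # indices of the 252 dataset images
for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=10, shuffle=True, random_state=0).split(images), start=1):
    # train_models(images[train_idx]); evaluate(images[test_idx])
    print(f"fold F{fold}: train={len(train_idx)}, test={len(test_idx)}")
```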

To evaluate the performance of our proposed method, two approaches have been applied here. The first approach assumes that a landmark is correctly detected if it is located (by the system) within a circular region with radial distance r around the reference location (the actual location of the landmark).

In the second approach, a confidence region is defined for each landmark by analyzing its locations across different anatomical structures [22]. To achieve this, 27 cephalometric images (including males and females) were selected to be annotated by four examiners, two of whom have high clinical orthodontic experience. The annotated images (108 annotated cephalograms in total) were then revised by a medical expert (orthodontist) with long experience to ensure that the landmarks were located within the correct anatomical structures. For each landmark, the confidence region is represented as an ellipse [22] that captures the variation in landmark localization and is formulated by a confidence limit α as follows:

$\alpha=\mathrm{CHI}_2\left(\frac{\left(\frac{x}{\sigma_x}\right)^2-2 \rho\left(\frac{x}{\sigma_x}\right)\left(\frac{y}{\sigma_y}\right)+\left(\frac{y}{\sigma_y}\right)^2}{1-\rho^2}\right)$      (7)

where, CHI2 represents a function that returns the probability α for the chi-square distribution with two degrees of freedom (the x and y coordinates). The variable ρ denotes the correlation between x and y, while the remaining variables, σx and σy, denote the standard deviations of x and y, respectively.
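As an illustration, the sketch below inverts Eq. (7) to test whether a deviation from the mean landmark position falls inside the confidence ellipse at a given α, using SciPy's chi-square quantile function; this membership test is an assumption about how the region is applied:

```python
from scipy.stats import chi2

def inside_confidence_ellipse(dx, dy, sx, sy, rho, alpha=0.01):
    """True if deviation (dx, dy) from the mean landmark position lies inside
    the Eq. (7) confidence ellipse at confidence limit alpha.

    sx, sy: standard deviations of x and y; rho: correlation between x and y.
    """
    q = ((dx / sx) ** 2 - 2 * rho * (dx / sx) * (dy / sy)
         + (dy / sy) ** 2) / (1 - rho ** 2)
    return q <= chi2.ppf(1 - alpha, df=2)  # 99% region when alpha = 0.01
```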

Figure 7 illustrates the confidence ellipse for each landmark at α=0.01 (equivalent to 99% confidence level). The red points in each ellipse represent the locations of the examined landmarks, identified on 27 images by three examiners.

4.2 Landmark detection results

Table 1. Mean radial error (MRE), standard deviation (SD) and detection rates (within 2mm, 3mm and 4mm) of each landmark

| Landmark | MRE (mm) | SD (mm) | 2mm | 3mm | 4mm |
|----------|----------|---------|-------|-------|-------|
| 1:Tri | 0.86 | 0.64 | 97.6% | 99.6% | 100% |
| 2:G | 3.83 | 5.95 | 47.3% | 63.1% | 75.0% |
| 3:N | 5.38 | 7.50 | 69.8% | 69.8% | 80.5% |
| 4:Prn | 0.88 | 0.70 | 95.6% | 99.6% | 100% |
| 5:Cm | 1.35 | 0.99 | 84.6% | 90.9% | 98.0% |
| 6:Sn | 1.27 | 0.97 | 88.1% | 94.4% | 98.4% |
| 7:Ls | 1.14 | 0.98 | 91.3% | 97.6% | 99.2% |
| 8:Li | 1.44 | 2.36 | 93.7% | 96.4% | 97.6% |
| 9:Sm | 3.72 | 8.14 | 75.4% | 86.1% | 88.1% |
| 10:Pg | 2.98 | 2.07 | 45.6% | 57.8% | 70.2% |
| 11:Me | 4.33 | 6.78 | 55.1% | 65.5% | 72.6% |
| Avg | 2.47 | 3.37 | 76.7% | 83.7% | 89.1% |

We have assessed the performance of the proposed method using both approaches defined in the previous section: the circular region with radius r and the confidence ellipse. Table 1 reports the mean radial error (MRE) and standard deviation (SD) for each landmark along with the detection rates at r=2mm, 3mm and 4mm. The average MRE and SD over all landmarks are 2.47mm and 3.37mm, respectively. The highest MREs are reported for the N and Me landmarks, as these landmarks are located in different anatomical structures with high variation. We can see from Table 1 that our method, with a 2mm error margin, achieved a 76.7% average detection rate over all landmarks (averaged over the 10 folds/runs). The highest detection rate (97.6%) was achieved for the Tri landmark while the lowest (45.6%) was for the Pg landmark. By increasing the error margin to 3mm and 4mm, the average detection rates improve to 83.7% and 89.1%, respectively.

Table 2 shows the parameters of the confidence ellipses along with the reliability for each landmark. The semiminor axis values range from 0.77mm to 3.91mm with an average of 1.76mm, while the semimajor axis values range from 1.92mm to 13.46mm with an average of 4.54mm. The Sm landmark has the smallest confidence ellipse while the G and Pg landmarks have the largest. The average detection rate (reliability) over all landmarks within the confidence ellipse was 94%; Tri, G, Prn, Cm and Me achieved the highest reliability (100%) while Sm achieved the lowest (77.8%).

Table 2. Parameters of the confidence ellipse (angle, semiminor axis and semimajor axis) along with the reliability at each landmark

| Landmark | Angle (deg)* | Semiminor axis (mm) | Semimajor axis (mm) | Reliability |
|----------|--------------|---------------------|---------------------|-------------|
| 1:Tri | 67.9 | 1.09 | 2.25 | 100% |
| 2:G | 87.2 | 1.73 | 13.46 | 100% |
| 3:N | 87.3 | 1.06 | 4.63 | 81.5% |
| 4:Prn | -15.6 | 0.87 | 2.63 | 100% |
| 5:Cm | -59.6 | 1.58 | 2.55 | 100% |
| 6:Sn | -43.5 | 2.40 | 2.51 | 96.3% |
| 7:Ls | 75.4 | 0.82 | 2.21 | 96.3% |
| 8:Li | -38.6 | 3.08 | 3.61 | 92.6% |
| 9:Sm | -16.5 | 0.77 | 1.92 | 77.8% |
| 10:Pg | -23.5 | 3.91 | 8.36 | 92.6% |
| 11:Me | -75.5 | 2.00 | 5.80 | 100% |

* The angle in degrees between the x-axis and the semimajor axis

4.3 Comparative results

Our method was compared with recent state-of-the-art methods for the task of soft-tissue landmark detection. Figure 8 reports the performance of our method alongside other methods in terms of the detection rate (within 2mm) and the mean radial error for four common landmarks (Sn, Ls, Li and Pg). In terms of the detection rate, the average results of our method and study [23] are relatively close on the common landmarks. In terms of the mean radial error, our method achieved lower error than study [24] on all common landmarks. It is worth noting that the evaluations were conducted under different settings, e.g., different training datasets. Nevertheless, the comparison still gives a reasonable indication of how our method performs relative to other methods.

Table 3 reports the performance of our method alongside study [25] based on the reliability of detection within predefined confidence regions. In our method, these regions were defined for each landmark using 27 cephalometric images annotated by four examiners (more details in Sec. 4.1). In contrast, study [25] defined these regions using 10 cephalometric images annotated by 10 examiners. Our method achieved a 94% average reliability in detecting 11 landmarks, while study [25] achieved a 72% average reliability in detecting 9 landmarks.

Figure 7. Confidence ellipses for soft-tissue landmarks

Figure 8. Performance comparison based on the detection rate (within 2mm) and the mean radial error for selected landmarks

Table 3. Performance comparison based on the reliability (detection rate within the confidence region)

| Landmark | This Work | Study [25] |
|----------|-----------|------------|
| Evaluation protocol | 27 cephalometric images annotated by four examiners | 10 cephalometric images annotated by 10 examiners |
| 1:Tri | 100% | - |
| 2:G | 100% | 57% |
| 3:N | 81.5% | 83% |
| 4:Prn | 100% | 52% |
| 5:Cm | 100% | - |
| 6:Sn | 96.3% | 97% |
| 7:Ls | 96.3% | 51% |
| 8:Li | 92.6% | 78% |
| 9:Sm | 77.8% | 86% |
| 10:Pg | 92.6% | 82% |
| 11:Me | 100% | 61% |
| Avg | 94% | 72% |

5. Conclusions

In this work, we presented an automated soft-tissue landmark detection system for 2D cephalometric images. Unlike other landmark detection methods, our method focuses on detecting soft-tissue landmarks by integrating two different types of features (Haar-like and spatial features) along with a cascade classifier. Due to the high similarity between soft-tissue landmarks, the spatial descriptor plays an important role in improving the discrimination between different landmarks. To evaluate the performance of our method, we also presented a new dataset for the task of soft-tissue landmark detection.

The experimental results demonstrate the effectiveness of the proposed method, with a mean radial error (MRE) of 2.47mm and a detection rate of 76.7% for a 2mm error margin. By increasing the error margin to 3mm and 4mm, the detection rates improve to 83.7% and 89.1%, respectively. Additionally, the reliability analysis based on confidence ellipses shows promising results, with an average detection rate of 94% within the predefined confidence regions.

Future avenues of work include extending our method by applying a more sophisticated descriptor, such as the Curvelet descriptor, instead of the Haar-like descriptor, and expanding the size of our dataset by involving additional samples. In addition, we intend to extend our experiments to cover the detection of both soft- and hard-tissue landmarks.

Declarations

Ethical Approval and Consent to participate

This study has been ethically cleared and approved by the Local Committee of Bioethics (LCBE) at Jouf University, with approval number 9-02-43.

References

[1] Bilici, S., Yigit, O., Celebi, O.O., Yasak, A.G., Yardimci, A.H. (2018). Relations between hyoid-related cephalometric measurements and severity of obstructive sleep apnea. Journal of Craniofacial Surgery, 29(5): 1276-1281. https://doi.org/10.1097/SCS.0000000000004483

[2] Nanda, R.S., Merill, R.M. (1994). Cephalometric assessment of sagittal relationship between maxilla and mandible. American Journal of Orthodontics and Dentofacial Orthopedics, 105(4): 328-344. https://doi.org/10.1016/S0889-5406(94)70127-X

[3] Juneja, M., Garg, P., Kaur, R., Manocha, P., Batra, S., Singh, P., Jindal, P. (2021). A review on cephalometric landmark detection techniques. Biomedical Signal Processing and Control, 66: 102486. https://doi.org/10.1016/j.bspc.2021.102486

[4] Liu, J.K., Chen, Y.T., Cheng, K.S. (2000). Accuracy of computerized automatic identification of cephalometric landmarks. American Journal of Orthodontics and Dentofacial Orthopedics, 118(5): 535-540. https://doi.org/10.1067/mod.2000.110168

[5] Mosleh, M.A., Baba, M.S., Himazian, N., AL-Makramani, B.M. (2008). An image processing system for cephalometric analysis and measurements. In 2008 International Symposium on Information Technology, 4: 1-8. https://doi.org/10.1109/ITSIM.2008.4631953

[6] Pouyan, A.A., Farshbaf, M. (2010). Cephalometric landmarks localization based on histograms of oriented gradients. In 2010 International Conference on Signal and Image Processing, Chennai, India, pp. 1-6. https://doi.org/10.1109/ICSIP.2010.5697431

[7] Montúfar, J., Romero, M., Scougall-Vilchis, R.J. (2018). Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. American Journal of Orthodontics and Dentofacial Orthopedics, 153(3): 449-458. https://doi.org/10.1016/j.ajodo.2017.06.028

[8] Vučinić, P., Trpovski, Ž., Šćepan, I. (2010). Automatic landmarking of cephalograms using active appearance models. The European Journal of Orthodontics, 32(3): 233-241. https://doi.org/10.1093/ejo/cjp099

[9] Lindner, C., Wang, C.W., Huang, C.T., Li, C.H., Chang, S.W., Cootes, T.F. (2016). Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms. Scientific Reports, 6(1): 33581. https://doi.org/10.1038/srep33581

[10] Elaiwat, S., Bennamoun, M., Boussaïd, F. (2016). A spatio-temporal RBM-based model for facial expression recognition. Pattern Recognition, 49: 152-161. https://doi.org/10.1016/j.patcog.2015.07.006

[11] Elaiwat, S., Bennamoun, M., Boussaid, F. (2016). A semantic RBM-based model for image set classification. Neurocomputing, 205: 507-518. https://doi.org/10.1016/j.neucom.2016.05.013

[12] Tripathy, S.K., Srivastava, R. (2021). AMS-CNN: Attentive multi-stream CNN for video-based crowd counting. International Journal of Multimedia Information Retrieval, 10: 239-254. https://doi.org/10.1007/s13735-021-00220-7

[13] Bier, B., Goldmann, F., Zaech, J.N., Fotouhi, J., Hegeman, R., Grupp, R., Unberath, M. (2019). Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views. International Journal of Computer Assisted Radiology and Surgery, 14: 1463-1473. https://doi.org/10.1007/s11548-019-01975-5

[14] Danks, R.P., Bano, S., Orishko, A., Tan, H.J., Moreno Sancho, F., D’Aiuto, F., Stoyanov, D. (2021). Automating Periodontal bone loss measurement via dental landmark localisation. International Journal of Computer Assisted Radiology and Surgery, 16(7): 1189-1199. https://doi.org/10.1007/s11548-021-02431-z

[15] Pirhadi, A., Salari, S., Ahmad, M.O., Rivaz, H., Xiao, Y. (2023). Robust landmark-based brain shift correction with a Siamese neural network in ultrasound-guided brain tumor resection. International Journal of Computer Assisted Radiology and Surgery, 18(3): 501-508. https://doi.org/10.1007/s11548-022-02770-5

[16] Arık, S.Ö., Ibragimov, B., Xing, L. (2017). Fully automated quantitative cephalometry using convolutional neural networks. Journal of Medical Imaging, 4(1): 014501-014501. https://doi.org/10.1117/1.JMI.4.1.014501

[17] Huang, S.C., Cheng, F.C., Chiu, Y.S. (2012). Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Transactions on Image Processing, 22(3): 1032-1041. https://doi.org/10.1109/TIP.2012.2226047

[18] Viola, P., Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001, Kauai, HI, USA, pp. I-I. https://doi.org/10.1109/CVPR.2001.990517

[19] Yang, M., Crenshaw, J., Augustine, B., Mareachen, R., Wu, Y. (2010). AdaBoost-based face detection for embedded systems. Computer Vision and Image Understanding, 114(11): 1116-1125. https://doi.org/10.1016/j.cviu.2010.03.010

[20] Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6): 679-698. https://doi.org/10.1109/TPAMI.1986.4767851

[21] Elaiwat, S. (2021). Holistic word descriptor for lexicon reduction in handwritten Arabic documents. Pattern Recognition, 119: 108072. https://doi.org/10.1016/j.patcog.2021.108072

[22] Tanikawa, C., Yagi, M., Takada, K. (2009). Automated cephalometry: System performance reliability using landmark-dependent criteria. The Angle Orthodontist, 79(6): 1037-1046. https://doi.org/10.2319/092908-508R.1

[23] Wang, S., Li, H., Li, J., Zhang, Y., Zou, B. (2018). Automatic analysis of lateral cephalograms based on multiresolution decision tree regression voting. Journal of Healthcare Engineering, 2018: 1797502. https://doi.org/10.1155/2018/1797502

[24] Dai, X., Zhao, H., Liu, T., Cao, D., Xie, L. (2019). Locating anatomical landmarks on 2D lateral cephalograms through adversarial encoder-decoder networks. IEEE Access, 7: 132738-132747. https://doi.org/10.1109/ACCESS.2019.2940623

[25] Lee, C., Tanikawa, C., Lim, J.Y., Yamashiro, T. (2019). Deep learning based cephalometric landmark identification using landmark dependent multi-scale patches. arXiv:1906.02961. https://doi.org/10.48550/arXiv.1906.02961