Biometric User Authentication System via Fingerprints Using Novel Hybrid Optimization Tuned Deep Learning Strategy

Senthil Kumar Natarajan*, Ramadevi Rathinasabapathy, Jaisankar Narayanasamy, Arikrishnaperumal Ramaswamy Aravind

Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, Tamil Nadu, India

Department of Biomedical Engineering, SIMATS School of Engineering, SIMATS, Chennai 602105, India

Department of Electronics and Communication Engineering, Misrimal Navajee Munoth Jain Engineering College, Chennai 600097, India

Department of Electronics and Communication Engineering, Prince Shri Venkateshwara Padmavathy Engineering College, Chennai 600127, India

Corresponding Author Email: sentilme@gmail.com

Page: 375-381 | DOI: https://doi.org/10.18280/ts.400138

Received: 10 October 2022 | Revised: 22 January 2023 | Accepted: 7 February 2023 | Available online: 28 February 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Due to security concerns, the need for authentication and identification techniques has increased in the modern world. A novel Accurate and Automated Fingerprint Biometric Authentication Model (AAFBAM) is introduced. The operation of the suggested AAFBAM is divided into two parts: (a) enrollment and (b) verification. During the enrollment phase, the database is prepared, and during the verification phase, the input fingerprint is authenticated. The enrollment phase includes the data acquisition, preprocessing, feature extraction, and minutiae point detection stages. The minutiae point detection is performed using the MISHO-based Optimized Deep Neural Network (MISHO-DNN) classifier. The weight function of the DNN is tuned optimally using the proposed Memory Integrated Spotted Hyena Optimization (MISHO) algorithm to enhance its detection accuracy. The verification phase includes preprocessing, feature extraction, minutiae point detection with MISHO-DNN, minutiae matching, and minutiae score evaluation. Here, the minutiae score is obtained by matching the minutiae from both phases and is compared with a threshold value. When the minutiae score exceeds the threshold, the user is identified as a genuine user, and the request is accepted; otherwise, the user is recognized as an unauthenticated user, and the request is rejected. Finally, a comparative evaluation is conducted to validate the efficiency of the projected model.

Keywords: 

biometric authentication, spotted hyena optimization, minutiae matching, minutiae score evaluation

1. Introduction

The rapid advancement of information technology has aided the expansion of businesses, but it has also been exploited to harm those same enterprises through information leakage. Organizations must therefore also ensure the integrity of their data. A variety of authentication methods are available, each with its own set of benefits. Fingerprints are biometric traits that do not alter with aging or other factors, whereas other biometric characteristics are prone to change with age and other influences. As a result, fingerprint authentication can be performed quickly and efficiently [1, 2]. A fingerprint recognition system is made up of preprocessing, feature extraction, and classification procedures. The system should be built to avoid limitations on fingerprint location, matching rate, and accuracy, all of which can be crucial for identifying people. Preprocessing is a technique for enhancing images to obtain high-quality images [3, 4]. Images have been enhanced using techniques such as normalization, image orientation, and Gabor filtering, and the noise in the image is filtered using histogram equalization [5]. Prasad et al. [6] suggest the wavelet transform as a suitable enhancement approach without providing any proof or comparable outcomes. Galton points, also known as minutiae, have been utilized to identify and classify fingerprints [7]. Despite using Euclidean distance classification, normalization, and Gabor filtering, the approach in [8] delivered only 89.6% efficiency due to a poor preprocessing strategy. Euclidean distance is one of the most widely recommended classification methodologies, and minutiae points combined with Euclidean distance classification produce a high level of accuracy; hence, the extraction of minutiae is critical for Euclidean distance classification. In the chain coding approach, image quality is important for binary images [9]. When compared to pattern-based matching, minutiae-based matching is found to be superior because only the relative location is required, whereas pattern-based matching relies on the fingerprint's overall features [10]. As a result, minutiae-based matching [11] is associated with high precision. Furthermore, compared to deterministic and probabilistic fusion, the use of evolutionary-based fusion in multi-biometric systems is a promising state-of-the-art strategy that has demonstrated its capacity to improve performance accuracy [12]. A Swarm Intelligence (SI) based hybrid meta-heuristic algorithm [13] has been used to resolve the issue of low accuracy by optimizing the weights associated with hand-based modalities.

1.1 Challenges

  • To achieve higher recognition performance, a multimodal biometrics recognition strategy was proposed by Basha et al. [14] using local fusion visual features and a variational Bayesian extreme learning machine. However, the recognition rate was insufficient.
  • A joint sparsity-based feature-level fusion approach was developed to improve multimodal biometric identification accuracy [15]. The fusion method was stable and improved overall recognition accuracy substantially. However, the time complexity of recognition was higher.
  • With the help of an SVM classifier and an adaptive neuro-fuzzy inference system (ANFIS), an efficient multimodal biometric recognition strategy employing fingerprint and iris was described by Bailey et al. [16]. The recognition rate of this approach was higher; however, the true positive rate of person recognition was insufficient.
  • An ideal weight score was determined to fuse the derived feature sets of finger knuckle and finger vein images, resulting in a multimodal biometric recognition system employing finger knuckle and finger vein images [17]. As a result, recognition accuracy improved.
  • User identification and authentication utilizing multi-modal behavioral biometrics performed well in terms of false acceptance and rejection rates. However, the system's recognition accuracy was insufficient.

The key contribution of this research work is manifested below:

  • To develop an accurate and automated fingerprint biometric authentication model with the assistance of the new MISHO-based Optimized Deep Neural Network (MISHO-DNN) classifier.
  • The weight function of the DNN is optimized via the Memory Integrated Spotted Hyena Optimization (MISHO) algorithm to enhance the detection accuracy. MISHO is developed by amalgamating the concepts of SHO and CSA.

The rest of this paper is arranged as follows: Section 2 presents an overview of the proposed biometric user authentication system; Section 3 and Section 4 describe the enrollment phase and the verification phase, respectively; the results acquired with the projected model are discussed comprehensively in Section 5; and the paper is concluded in Section 6.

2. Design of Biometric User Authentication System

2.1 Architectural description

The proposed AAFBAM includes two major phases: (a) the enrollment phase and (b) the verification phase. Figure 1 manifests the architecture of the projected model. The steps followed in each of the phases are manifested below.

Figure 1. The architecture of the projected AAFBAM model

(a) Enrollment phase: This phase includes the “data acquisition stage, pre-processing, feature extraction stage, and minutiae point detection phase”. Initially, the fingerprint image $H_i^{inp}$ from both the left and right hands is considered as the input. The collected data $H_i^{inp}$ is pre-processed via the image thinning approach. Then, from the pre-processed images $H_i^{pre}$, the features, such as ridge ending fE, ridge bifurcation fB, and Galton points fG, are extracted. All the features are integrated and represented using the notation F. Then, the minutiae point detection is performed using the MISHO-based Optimized Deep Neural Network (MISHO-DNN) classifier [18]. To enhance the detection accuracy of the DNN (the ultimate decision-maker), its weight functions W are tuned optimally using the proposed Memory Integrated Spotted Hyena Optimization (MISHO) algorithm. The proposed algorithm is developed by integrating the concepts of the SHO algorithm [19] and the CSA algorithm [20]. The prepared dataset is denoted as Di.

(b) Verification phase: The verification phase includes the “pre-processing, feature extraction stage, minutiae point detection with MISHO-DNN, minutiae matching, and minutiae score evaluation”. The minutiae points are detected precisely via MISHO-DNN. The minutiae matching is accomplished between the prepared dataset Di and the extracted features of a user's input fingerprint image. Further, the minutiae score M is computed and contrasted with the pre-defined threshold value T. When the minutiae score is greater than the threshold, the user is identified as a genuine user and the request is accepted; otherwise, the user is identified as an unauthenticated user and the request is rejected.
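For illustration, the following minimal Python sketch mirrors the two-phase flow of Figure 1. It is not the authors' MATLAB implementation; the placeholder functions (preprocess, extract_minutiae, minutiae_score) and the threshold value T = 0.6 are assumptions standing in for the stages detailed in Sections 3 and 4.

```python
# Minimal runnable sketch of the two-phase AAFBAM flow (illustration only; the
# authors' implementation is in MATLAB). The stage functions are placeholders
# standing in for Sections 3 and 4, and T = 0.6 is an assumed threshold.

def preprocess(image):
    # Placeholder for image thinning (Section 3.2).
    return image

def extract_minutiae(image):
    # Placeholder for feature extraction + MISHO-DNN detection; here the "image"
    # is already a list of toy minutiae tuples.
    return set(image)

def minutiae_score(probe, template):
    # Toy score: fraction of probe minutiae also present in the template.
    return len(probe & template) / max(len(probe), 1)

def enroll(database, user_id, image):
    database[user_id] = extract_minutiae(preprocess(image))   # prepared dataset D_i

def verify(database, user_id, image, T=0.6):
    M = minutiae_score(extract_minutiae(preprocess(image)), database[user_id])
    return M > T   # genuine user only if the minutiae score exceeds the threshold

D = {}
enroll(D, "user_1", [(10, 12), (40, 55), (73, 20)])
print(verify(D, "user_1", [(10, 12), (40, 55)]))   # True  -> request accepted
print(verify(D, "user_1", [(5, 5)]))               # False -> request rejected
```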

3. Enrollment Phase

3.1 Architectural description

This phase includes the “data acquisition stage, pre-processing, feature extraction stage, and minutiae point detection phase”.

Data acquisition: The data is collected from the left- and right-hand fingers (“index finger, little finger, middle finger, ring finger, and thumb finger”). These images are together represented as $H_i^{inp} \in \{left\ hand, right\ hand\}$.

Step 1- Pre-processing: The quality of $H_i^{inp}$ is enhanced via the image thinning approach. The pre-processed data acquired after image thinning is denoted as $H_i^{pre}$.

Step 2- Feature extraction: Subsequently, from $H_i^{pre}$, the features like ridge ending fE, ridge bifurcation fB, and Galton points fG are extracted.

Step 3- Feature fusion: All these extracted features are fused as F=fE + fB + fG.

Minutiae point detection: Next, the minutiae point detection takes place via MISHO-DNN. To enhance the detection accuracy of the DNN, its weight function W is fine-tuned using a new hybrid optimization model named the MISHO algorithm. As a result, the database is prepared effectively. The prepared dataset is denoted as Di.

3.2 Pre-processing-image thinning

The collected input image $H_i^{inp} \in \{left\ hand, right\ hand\}$ is pre-processed via image thinning to enhance the quality of $H_i^{inp}$. The pre-processed image acquired after image thinning is denoted as $H_i^{pre}$.
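As a hedged illustration of this step, the sketch below binarizes a grayscale fingerprint image and thins the ridges to single-pixel width using scikit-image's skeletonize; the global threshold and the assumption that ridges are the darker pixels are simplifications, not details taken from the paper.

```python
# Hedged sketch of ridge thinning: binarize a grayscale fingerprint (assuming
# ridges are the darker pixels) and reduce the ridges to one-pixel-wide skeletons.
import numpy as np
from skimage.morphology import skeletonize

def thin_fingerprint(gray, threshold=0.5):
    """Return a boolean image H_i^pre whose ridges are single-pixel wide."""
    ridges = gray < threshold      # simple global threshold (assumption)
    return skeletonize(ridges)     # morphological thinning of the ridge map

# Toy usage with a random image in [0, 1]; replace with a real fingerprint scan.
H_inp = np.random.rand(128, 128)
H_pre = thin_fingerprint(H_inp)
print(H_pre.shape, H_pre.dtype)    # (128, 128) bool
```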

3.3 Feature extraction

The features like ridge ending, ridge bifurcation, and Galton points are extracted from $H_i^{pre}$. These extracted features are integrated, and they are denoted as F=fE + fB + fG.
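One common way to realize this extraction on a thinned image is the crossing-number rule, under which a ridge pixel with crossing number 1 is a ridge ending and one with crossing number 3 is a bifurcation (such minutiae are classically known as Galton points). The sketch below follows that generic rule and then fuses the detected points into a single feature list F; it is an illustrative stand-in for the paper's feature extractor, not the authors' code.

```python
# Hedged crossing-number sketch: on a thinned binary image, a ridge pixel whose
# crossing number is 1 is a ridge ending and 3 is a bifurcation.
import numpy as np

def crossing_number_minutiae(skel):
    """Return lists of (row, col) ridge endings f_E and bifurcations f_B."""
    skel = (np.asarray(skel) > 0).astype(int)
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c] != 1:
                continue
            # 8-neighbours taken in circular order around the pixel
            p = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
                 skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
            cn = sum(abs(p[k] - p[(k + 1) % 8]) for k in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations

def fuse_features(endings, bifurcations):
    """Feature fusion F = f_E + f_B, kept here as one labelled list."""
    return ([(r, c, "ending") for r, c in endings]
            + [(r, c, "bifurcation") for r, c in bifurcations])

# Toy usage: a short horizontal ridge segment has endings at both of its tips.
toy = np.zeros((5, 5), dtype=int)
toy[2, 1:4] = 1
print(crossing_number_minutiae(toy))   # ([(2, 1), (2, 3)], [])
```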

3.4 Minutiae point detection

A new MISHO-DNN is used for precise minutiae point detection. To this end, the weight function of the DNN is fine-tuned with the new MISHO model.

  1. All the inputs Fi are multiplied by their weights, as given in Eq. (1). The weight Wi provides information regarding the strength of the input Fi. After the weighted summation, the bias function is added. The major objective of this research work is to maximize the recognition accuracy Racc; mathematically, the objective function Obj is given in Eq. (2). To achieve this objective, the weight function Wi is fine-tuned via MISHO.

$Z_i=F_i * W_i$     (1)

$O b j=\max \left(R_{a c c}\right)$      (2)

The solution fed as input to MISHO is manifested in Figure 2.

Figure 2. Solution encoding

In Figure 2, N denotes the count of weight functions.

  2. To the acquired linear expression Zi, the activation function is applied. The activation function helps inculcate non-linearity in the model.
  3. All the computations are performed in the hidden layer. After the completion of these operations, the data moves to the output layer, from where the outcome is acquired.
  4. On acquiring the final prediction values from the output layer, the error is computed. The error function is the difference between the actual and the predicted outcome. A minimal sketch of this forward pass is given after this list.
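The NumPy sketch below illustrates the forward pass described in items 1-4: inputs weighted as in Eq. (1), a bias term, a sigmoid activation in the hidden layer, an output layer, and the error term. The layer sizes, the sigmoid choice, and the single output unit are assumptions made for illustration; in the MISHO context, the flattened set [W1, b1, W2, b2] is one reasonable reading of the solution encoding in Figure 2.

```python
# Hedged NumPy sketch of the DNN forward pass in items 1-4: weighted inputs plus a
# bias (Eq. (1)), a sigmoid activation, a hidden layer, an output layer, and the
# error term. Layer sizes and the sigmoid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(F, W1, b1, W2, b2):
    Z1 = F @ W1 + b1        # Eq. (1): Z_i = F_i * W_i, plus the bias term
    A1 = sigmoid(Z1)        # activation introduces non-linearity
    Z2 = A1 @ W2 + b2       # hidden-layer output fed to the output layer
    return sigmoid(Z2)

# Toy sizes: 6 fused features, 8 hidden units, 1 output (minutia / non-minutia).
F = rng.random((1, 6))
W1, b1 = rng.random((6, 8)), np.zeros(8)
W2, b2 = rng.random((8, 1)), np.zeros(1)

predicted = forward(F, W1, b1, W2, b2)
actual = np.array([[1.0]])
error = actual - predicted   # error = actual outcome - predicted outcome
print(float(error[0, 0]))
```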

MISHO model: The stages used in the MISHO model are as follows (a compact end-to-end sketch is provided after Step 12):

Step 1: A population P of n search agents is initialized. The position of the i-th search agent is denoted as Qi, i=1, 2, …, n. The current iteration is denoted as t and the maximal iteration count as maxt.

Step 2: Validate the termination criterion: check whether t<maxt. If the condition is satisfied, continue with the following steps; otherwise, terminate the process.

Step 3: Compute the fitness of the search agents using Eq. (2).

Step 4: Based on the computed fitness, explore the best search agent Qbest.

Step 5: The targeted solution (optimal weight) is encircled by the search agent in the Encircling prey phase. This phase can be modeled mathematically as per Eq. (3).

$D=\left|B \cdot Q_{prey}(t)-Q_{agent}(t)\right|$      (3)

Here, D, Qprey, and Qagent denote the distance between the prey and the search agent, the position of the prey, and the position of the search agent, respectively. The next position Qagent(t+1) is computed as per Eq. (4). The notations B and E denote the coefficient vectors, and they are computed as per Eq. (5) and Eq. (6), respectively.

$Q_{agent}(t+1)=Q_{prey}(t)-E \cdot D$      (4)

$B=2 \cdot rand_1$      (5)

$E=2 \cdot H \cdot rand_2-H$      (6)

$H=5-t\left(\frac{5}{max_t}\right)$      (7)

In Eq. (7), H decreases linearly from 5 to 0 as the iteration count increases.

Step 6: The search agents then search for the target solution after it has been encircled. This phase can be mathematically expressed as per Eq. (8) and Eq. (9), respectively; a brief sketch of Steps 5 and 6 follows Eq. (9).

$D=\left|B \cdot Q_{best}-Q_{others}\right|$      (8)

$Q_{others}=Q_{best}-E \cdot D$     (9)

Here, Qbest denotes the position of the first best search agent and Qothers is the position of other search agents.
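As a concrete reading of Steps 5 and 6, the NumPy sketch below applies the encircling update of Eqs. (3)-(7) and the searching update of Eqs. (8)-(9) to a candidate weight vector. The random vectors rand1 and rand2, the dimensionality, and the example iteration counter are illustrative assumptions, not values taken from the paper.

```python
# Hedged NumPy reading of Steps 5-6: encircling (Eqs. (3)-(7)) and searching
# (Eqs. (8)-(9)) updates for an agent whose position is a candidate weight vector.
import numpy as np

rng = np.random.default_rng(1)

def encircle(Q_prey, Q_agent, t, max_t):
    H = 5 - t * (5.0 / max_t)                  # Eq. (7): H falls linearly from 5 to 0
    B = 2 * rng.random(Q_agent.shape)          # Eq. (5)
    E = 2 * H * rng.random(Q_agent.shape) - H  # Eq. (6)
    D = np.abs(B * Q_prey - Q_agent)           # Eq. (3): distance to the prey
    return Q_prey - E * D                      # Eq. (4): next position of the agent

def search(Q_best, Q_other, H):
    B = 2 * rng.random(Q_other.shape)
    E = 2 * H * rng.random(Q_other.shape) - H
    D = np.abs(B * Q_best - Q_other)           # Eq. (8)
    return Q_best - E * D                      # Eq. (9)

# Example: move one 10-dimensional weight vector at iteration t = 3 of 50.
Q_best, Q_agent = rng.standard_normal(10), rng.standard_normal(10)
print(encircle(Q_best, Q_agent, t=3, max_t=50))
```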

Step 7: In the new memory-integrated attacking prey phase, the detected optimal weight (target prey) is attacked. The proposed memory-integrated prey attack phase can be analytically represented using Eq. (10) and Eq. (11), respectively; a brief sketch follows Eq. (11).

$Q(t+1)=\frac{C}{m} \cdot Memory$     (10)

$Memory(t+1)=\begin{cases}Q_{agent}(t+1) & \text{if } fit(t+1)>fit(Memory(t)) \\ Memory(t) & \text{otherwise}\end{cases}$     (11)

Here, Q(t+1) saves the best solution and updates the positions of the other search agents according to the position of the best search agent. In addition, Memory(t+1) is the memory of the next iteration, while fit(t+1) and fit(Memory(t)) denote the fitness of the current position and of the memorised position, respectively.
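The memory rule of Eq. (11) can be sketched as a small helper that keeps whichever of the current and memorised positions has the higher fitness (the recognition accuracy of Eq. (2), which is maximised); this is an illustrative reading, not the authors' code.

```python
# Hedged sketch of the memory rule in Eq. (11): keep whichever of the current and
# memorised positions has the higher fitness (the recognition accuracy of Eq. (2)).
def update_memory(memory, memory_fit, Q_new, fit_new):
    if fit_new > memory_fit:          # fit(t+1) > fit(Memory(t))
        return Q_new, fit_new         # Memory(t+1) = Q_agent(t+1)
    return memory, memory_fit         # Memory(t+1) = Memory(t)

# Example: a fitter new position replaces the memorised one.
print(update_memory([0.2, 0.4], 0.81, [0.3, 0.1], 0.92))   # ([0.3, 0.1], 0.92)
```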

Step 8: In a particular search space, verify if any search agents wander outside the border and adapt accordingly.

Step 9: If a solution better than the prior optimal solution exists, update the optimal solution and the vector Qbest accordingly.

Step 10: Update the fitness values of the search agents in the group of spotted hyenas C.

Step 11: The algorithm will be terminated if the stopping requirement is met.

Step 12: After the stopping requirements have been met, return the most optimum solution (optimal weight) that has been achieved thus far. As a result, the database is prepared effectively. The prepared dataset is denoted as Di.
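Putting Steps 1-12 together, the following self-contained sketch shows how MISHO could tune a flattened DNN weight vector. The population size, iteration budget, the toy fitness function standing in for Racc of Eq. (2), and the way the memorised position is blended back into each agent (loosely following the C/m * Memory term of Eq. (10)) are all assumptions made for illustration.

```python
# Hedged end-to-end sketch of the MISHO loop (Steps 1-12). Population size,
# iteration budget, the toy fitness, and the memory blend are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def fitness(weights):
    # Placeholder for R_acc obtained by evaluating the DNN with these weights;
    # here a simple surrogate that is maximised at the zero vector.
    return -float(np.sum(weights ** 2))

def misho(dim=10, n_agents=20, max_t=100):
    Q = rng.standard_normal((n_agents, dim))        # Step 1: initialise the agents
    memory = Q.copy()                               # MISHO addition: per-agent memory
    mem_fit = np.array([fitness(q) for q in Q])
    for t in range(max_t):                          # Step 2: termination criterion
        fits = np.array([fitness(q) for q in Q])    # Step 3: fitness (Eq. (2))
        Q_best = Q[np.argmax(fits)].copy()          # Step 4: best search agent
        H = 5 - t * (5.0 / max_t)                   # Eq. (7)
        for i in range(n_agents):
            B = 2 * rng.random(dim)                 # Eq. (5)
            E = 2 * H * rng.random(dim) - H         # Eq. (6)
            D = np.abs(B * Q_best - Q[i])           # Eqs. (3)/(8)
            Q[i] = Q_best - E * D                   # Eqs. (4)/(9): encircle / search
            f_new = fitness(Q[i])
            if f_new > mem_fit[i]:                  # Eq. (11): memory update
                memory[i], mem_fit[i] = Q[i], f_new
            Q[i] = 0.5 * (Q[i] + memory[i])         # memory-guided attack (assumed blend)
    return memory[np.argmax(mem_fit)]               # Step 12: best weights found so far

best_weights = misho()
print(fitness(best_weights))
```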

4. Verification Phase

4.1 Step-by-step verification process

When an input fingerprint image I enters the verification phase, the corresponding user is validated in this phase (i.e., whether he/she is an authorized person or not).

Step 1: Initially, I is pre-processed via the image thinning approach, and this pre-processed image is denoted as Ipre. From the pre-processed data Ipre, the features like ridge ending, ridge bifurcation, and Galton points are extracted. These extracted features are together denoted as G.

Step 2: Subsequently, the minutiae point is detected via MISHO-DNN. To enhance the detection accuracy, the weight function W of DNN is fine-tuned using the new MISHO model.

Step 3: The minutiae matching is accomplished between the prepared dataset Di and the extracted features G of I.

Step 4: Further, the minutiae score M is computed, and it is contrasted with the pre-defined threshold value T. If M>T, the user is identified as a genuine user; otherwise, the user is treated as an unauthenticated user. A minimal sketch of this matching and decision rule is given below.
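In the sketch, probe minutiae G are paired with template minutiae from Di within a spatial tolerance, the minutiae score M is the matched fraction, and the decision is taken by comparing M with T. The tolerance and threshold values are assumptions, not values reported in the paper.

```python
# Hedged sketch of Steps 3-4: pair probe minutiae G with template minutiae from D_i
# within a spatial tolerance, take the matched fraction as the minutiae score M, and
# compare it with the threshold T. The tolerance and threshold are assumed values.
import math

def match_minutiae(probe, template, tol=10.0):
    """Fraction of probe minutiae with a same-type template minutia within tol pixels."""
    matched, used = 0, set()
    for (x, y, kind) in probe:
        for j, (u, v, t_kind) in enumerate(template):
            if j not in used and kind == t_kind and math.hypot(x - u, y - v) <= tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(probe), 1)

template = [(10, 12, "ending"), (40, 55, "bifurcation"), (73, 20, "ending")]   # from D_i
probe    = [(12, 13, "ending"), (39, 57, "bifurcation"), (90, 90, "ending")]   # features G

M, T = match_minutiae(probe, template), 0.5
print("genuine user" if M > T else "unauthenticated user")   # M = 2/3 > 0.5 -> genuine
```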

5. Result and Discussion

5.1 Simulation procedure

The projected AAFBAM with the MISHO-DNN model has been implemented in MATLAB. The sample images, their pre-processed versions, and the minutiae points identified for the image sample sets are shown in Figure 3.

5.2 Analysis on accuracy

The projected model is validated against the existing models, namely KNN, SVM, DNN, SHO-DNN, and CSA-DNN. The results acquired are shown in Table 1.

5.3 Analysis on F1- measure

The projected model has recorded the highest detection performance in terms of F1-measure. The results acquired are shown in Table 2.

5.4 Analysis on precision

Achieving high precision is a major challenge faced by the existing models [12, 15]. This challenge has been overcome in this research work, as is evident from the results shown in Table 3.

5.5 Analysis on recall

The results acquired with the proposed as well as the existing models in terms of recall are depicted in Table 4.

5.6 Convergence analysis

The projected model has been formulated as an optimization problem and solved with the new MISHO model (a conceptual blend of CSA and SHO). Convergence speed is a major challenge faced by most of the existing models. The results acquired in terms of convergence speed are shown in Figure 4.

Figure 3. Sample set: Input raw image, pre-processed image, and its identified minutiae points

Table 1. Analysis of the performance of the MISHO-DNN model in terms of accuracy (%)

Approaches | 25 samples | 50 samples | 75 samples | 100 samples | 200 samples
KNN | 85.2642005 | 86.14909163 | 84.64311055 | 87.1965866 | 90.05221628
SVM | 84.78626212 | 82.82412277 | 83.75117711 | 86.80361492 | 89.33610572
DNN | 85.60416026 | 87.83344952 | 84.84964197 | 87.9031447 | 91.48937378
SHO-DNN | 89.33740869 | 88.73110533 | 91.94471044 | 89.71215596 | 93.19260083
CSA-DNN | 86.75357192 | 88.05887216 | 91.03561308 | 88.6138214 | 92.12448169
MISHO-DNN | 90.16178938 | 92.22771523 | 92.86154985 | 94.23333272 | 95.32666594

Table 2. Analysis of the performance of the MISHO-DNN model in terms of F1-measure (%)

Approaches | 25 samples | 50 samples | 75 samples | 100 samples | 200 samples
KNN | 84.34809 | 83.88146 | 85.01988 | 90.33565 | 87.2165
SVM | 81.98961 | 82.16496 | 83.31485 | 89.46556 | 87.10648
DNN | 86.07264 | 86.50961 | 85.14287 | 92.63749 | 92.39413
SHO-DNN | 87.35168 | 90.14116 | 92.35454 | 94.34025 | 95.42417
CSA-DNN | 87.00285 | 86.58702 | 91.42392 | 93.47351 | 92.5767
MISHO-DNN | 90.22633 | 91.53329 | 93.627 | 94.93897 | 95.56679

Table 3. Analysis of the performance of the MISHO-DNN model in terms of precision (%)

Approaches | 25 samples | 50 samples | 75 samples | 100 samples | 200 samples
KNN | 83.4568 | 84.2453 | 85.13595 | 85.22577 | 88.98395
SVM | 82.60569 | 83.04662 | 83.22253 | 84.49102 | 85.63596
DNN | 85.27343 | 85.80875 | 86.83014 | 86.85674 | 89.16552
SHO-DNN | 87.94544 | 89.03041 | 89.18225 | 89.68425 | 91.45732
CSA-DNN | 86.99387 | 87.06939 | 87.19151 | 88.15056 | 90.74305
MISHO-DNN | 90.20692 | 90.22795 | 92.43605 | 92.85763 | 93.29297

Table 4. Analysis of the performance of the MISHO-DNN model in terms of recall (%)

Approaches | 25 samples | 50 samples | 75 samples | 100 samples | 200 samples
KNN | 84.09872 | 84.45164 | 85.9493 | 87.02201 | 87.8676
SVM | 83.21897 | 83.80097 | 84.51808 | 84.79386 | 87.19874
DNN | 85.8204 | 86.79228 | 86.86541 | 88.22545 | 90.43205
SHO-DNN | 89.30965 | 89.64757 | 89.81876 | 91.9121 | 93.25075
CSA-DNN | 86.02746 | 87.9349 | 88.7807 | 91.43693 | 91.90141
MISHO-DNN | 90.41223 | 90.42335 | 92.04691 | 93.66687 | 95.06325

Figure 4. Convergence analysis of the projected model

6. Conclusion

In this research work, a novel AAFBAM has been proposed, comprising two major phases: (a) enrollment and (b) verification. During the enrollment phase, the database is prepared, and during the verification phase, the input fingerprint is authenticated. In the enrollment phase, the fingerprint data from both the left and right hands have been pre-processed via the image thinning approach. Then, from the pre-processed data, the features, such as ridge ending, ridge bifurcation, and Galton points, have been extracted. The MISHO-DNN classifier has then been used to detect the minutiae points. The suggested MISHO method was used to optimize the weight function of the DNN to improve its detection accuracy. The minutiae matching and minutiae score evaluation, on the other hand, took place during the verification phase. The minutiae score is calculated by matching the minutiae from both stages and comparing the result with the threshold value. If the minutiae score is above the threshold, the user is recognized as a genuine user, and his or her request is approved; otherwise, the person is recognized as an unauthenticated user, and his or her request is refused. Finally, a comparative evaluation has been conducted to validate the efficiency of the projected model. The projected model has recorded the highest accuracy of 95.33% while training the model with 200 samples. Thus, the projected model is significant for user authentication. In the future, it is planned to test the model on a much larger database. Moreover, the evaluation will also be made with individuals belonging to different age groups, and hands with strains will also be taken into consideration.

  References

[1] Aleem, S., Yang, P., Masood, S., Li, P., Sheng, B. (2020). An accurate multi-modal biometric identification system for person identification via fusion of face and finger print. World Wide Web, 23(2): 1299-1317. https://doi.org/10.1007/s11280-019-00698-6

[2] Baskar, M., Renuka Devi, R., Ramkumar, J., Kalyanasundaram, P., Suchithra, M., Amutha, B. (2021). Region centric minutiae propagation measure orient forgery detection with finger print analysis in health care systems. Neural Processing Letters, 1-13. https://doi.org/10.1007/s11063-020-10407-4

[3] Kapoor, K., Rani, S., Kumar, M., Chopra, V., Brar, G.S. (2021). Hybrid local phase quantization and grey wolf optimization based SVM for finger vein recognition. Multimedia Tools and Applications, 80(10): 15233-15271. https://doi.org/10.1007/s11042-021-10548-1

[4] Purohit, H., Ajmera, P.K. (2022). Multi-modal biometric fusion based continuous user authentication for E-proctoring using hybrid LCNN-salp swarm optimization. Cluster Computing, 25(2): 827-846. https://doi.org/10.1007/s10586-021-03450-w

[5] Nagaty, K.A. (2001). Fingerprints classification using artificial neural networks: A combined structural and statistical approach. Neural Networks, 14(9): 1293-1305. https://doi.org/10.1016/S0893-6080(01)00086-7

[6] Prasad, P.S., Sunitha Devi, B., Janga Reddy, M., Gunjan, V.K. (2018). A survey of fingerprint recognition systems and their applications. In International Conference on Communications and Cyber Physical Engineering. Springer, Singapore, pp. 513-520.

[7] Aravind, A.R., Chakravarthi, R. (2021). Fractional rider optimization algorithm for the optimal placement of the mobile sinks in wireless sensor networks. International Journal of Communication Systems, 34(4): e4692. https://doi.org/10.1002/dac.4692

[8] Jain, A.K., Nandakumar, K., Ross, A. (2016). 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognition Letters, 79: 80-105. https://doi.org/10.1016/j.patrec.2015.12.013

[9] Raja, J., Gunasekaran, K., Pitchai, R. (2019). Prognostic evaluation of multimodal biometric traits recognition based human face, finger print and iris images using ensembled SVM classifier. Cluster Computing, 22(Suppl 1): 215-228. https://doi.org/10.1007/s10586-018-2649-2 

[10] Sowa, J.F. (Ed.). (2014). Principles of Semantic Networks: Explorations in the Representation of Knowledge. Morgan Kaufmann. pp. 1-91. 

[11] Sagayam, K.M., Ponraj, D.N., Winston, J., Yaspy, J.C., Jeba, D.E., Clara, A. (2019). Authentication of biometric system using fingerprint recognition with Euclidean distance and neural network classifier. International Journal of Innovative Technology and Exploring Engineering, 8(4): 766-771.

[12] Chen, Y., Yang, J., Wang, C., Liu, N. (2016). Multimodal biometrics recognition based on local fusion visual features and variational Bayesian extreme learning machine. Expert Systems with Applications, 64: 93-103. https://doi.org/10.1016/j.eswa.2016.07.009

[13] Shekhar, S., Patel, V.M., Nasrabadi, N.M., Chellappa, R. (2013). Joint sparse representation for robust multimodal biometrics recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1): 113-126. https://doi.org/10.1109/TPAMI.2013.109

[14] Basha, A.J., Palanisamy, V., Purusothaman, T. (2010). Fast multimodal biometric approach using dynamic fingerprint authentication and enhanced iris features. In 2010 IEEE International Conference on Computational Intelligence and Computing Research, pp. 1-8. https://doi.org/10.1109/ICCIC.2010.5705857

[15] Veluchamy, S., Karlmarx, L.R. (2017). System for multimodal biometric recognition based on finger knuckle and finger vein using feature-level fusion and k-support vector machine classifier. IET Biometrics, 6(3): 232-242. https://doi.org/10.1049/iet-bmt.2016.0112

[16] Bailey, K.O., Okolica, J.S., Peterson, G.L. (2014). User identification and authentication using multi-modal behavioral biometrics. Computers & Security, 43: 77-89. https://doi.org/10.1016/j.cose.2014.03.005

[17] Pakutharivu, P., Srinath, M.V. (2015). A comprehensive survey on fingerprint recognition systems. Indian Journal of Science and Technology, 8(35): 1-12. https://doi.org/10.17485/ijst/2015/v8i35/80504.

[18] Sengar, S.S., Hariharan, U., Rajkumar, K. (2020). Multimodal biometric authentication system using deep learning method. In 2020 International Conference on Emerging Smart Computing and Informatics (ESCI), pp. 309-312. https://doi.org/10.1109/ESCI48226.2020.9167512

[19] Dhiman, G., Kumar, V. (2017). Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Advances in Engineering Software, 1(114): 48-70. https://doi.org/10.1016/j.advengsoft.2017.05.014

[20] Askarzadeh, A. (2016). A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Computers & Structures, 169: 1-12. https://doi.org/10.1016/j.compstruc.2016.03.001