Age-Dependent Palm Print Recognition Using Convolutional Neural Network

Muhamad Azhar Abdilatef Alobaidy*, Zead Mohammed Yosif, Ahmed Mamoon Alkababchi

Mechatronics Engineering Department, College of Engineering, University of Mosul, Mosul 41002, Iraq

Computer Engineering Department, College of Engineering, University of Mosul, Mosul 41002, Iraq

Corresponding Author Email: 
muhamad.azhar@uomosul.edu.iq
Page: 795-800 | DOI: https://doi.org/10.18280/ria.370328

Received: 15 April 2023 | Revised: 23 May 2023 | Accepted: 31 May 2023 | Available online: 30 June 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Biometric engineering is one of the most important modern fields that directly affects human life. It is a relatively new technology used for identity verification and/or identification of persons based on their physiological features, including morphological, biological, and behavioral characteristics. Many types of biometric recognition are used, based on features of the eyes, face, hands (palm and/or fingerprints), voice, and many others. Previous work has focused on recognizing persons only, not their ages; age remains one of the unsolved problems in the field of detection. In this paper, the palm recognition model consists of several steps. The first step is palm detection; further techniques are then used to remove noisy portions from the extracted image. After preparing the images for training, a deep neural network, represented by a convolutional neural network (CNN), is selected. A new mechanism is proposed: a palm print recognition algorithm based on a CNN is presented for recognizing individuals across different age classes. The dataset is first collected from many known persons of different ages; for each person, many palm images are trained and tested using deep learning techniques. As mentioned, the CNN is used for training, so recognition is performed by the CNN deep learning algorithm. The FAR and GAR factors are used to measure recognition performance. The results show that selecting the palm instead of other feature types makes recognition easier: more than 96% of the results were accurate. The algorithm also showed competitive performance and succeeded in separating the features according to the persons' ages. The overall process is completed within 0.01×10⁻⁶ seconds, which can be considered fast and suitable for real-time use.

Keywords: 

biometric, palm, CNN, deep learning, age

1. Introduction

Figure 1. Palm used as a part of hand

Most biometric system designs are based on gathering information from one of these parts of the body in order to identify a person; this information can take the form of voice, fingerprint, hand geometry, palm print (Figure 1), iris, and so on. The palm print represents the main inner surface of the hand; thus, each person has two palms, one on each hand. The palm print has been studied and used in the field of biometric engineering [1, 2]. Any biometric study is usually based on five main factors [3, 4]. These factors (Figure 2) are accuracy, security, environmental constraints and user acceptance, computation speed, and cost. There are many types of palm print feature extraction methods [5, 6].

Much of the current palm print identification research relies on deep learning approaches, which learn discriminative features from palm print photographs and generally require a large number of labeled samples to reach a good level of recognition [7, 8].

The five factors listed above are generally taken into account during a system evaluation. The evaluation is based on the results produced by the application or system and is important for reliability: whenever these factors show that the designed system is not as good as needed, it must be redesigned or enhanced until good results are obtained. The most important of these factors is accuracy, which is related to the error-free ratio used for performance measurement and security. The False Acceptance Rate (FAR), the False Rejection Rate (FRR), and the Genuine Acceptance Rate (GAR) can be used for this purpose. Speed is also an important factor, since applications in this field are mostly used in real time. Security is likewise essential, because any error might lead to serious problems for the systems or organizations in which the application is deployed [9, 10]. Finally, user acceptance and cost are also used to evaluate the application or the biometric system. These factors are considered in deep learning work and are relied upon today in most fields that depend on recognition, detection, and classification, and highly accurate results have been obtained in many papers when deep learning is used [11-14].

Figure 2. Biometrics factors
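In terms of verification attempts, these rates are conventionally defined as:

FAR = (number of impostor attempts accepted) / (total number of impostor attempts)
FRR = (number of genuine attempts rejected) / (total number of genuine attempts)
GAR = 1 - FRR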

In recent works, ANNs, PCA, and other techniques have been used for palm print recognition. The key difference between conventional machine learning and deep learning methods such as CNNs is that deep learning performs feature extraction internally, while in traditional machine learning feature extraction is performed separately. In this paper, because of the advantages mentioned above, the CNN technique is used: the palm prints of many classes are detected and recognized (classified) using a CNN.

2. Literature Review

Biometric engineering has been used in most related works to build highly successful authentication systems, and there are many hundreds of studies in this area. Connie et al. proposed an automated palm print recognition method based on PCA [15]. Patprara Tunkpien used a compact extraction of principal lines from palm print images using consecutive filtering operations [16]: the image is smoothed first, and then a variety of filters is applied to the palm print photographs. Connie et al. [17] also suggested palm print identification using the PCA and the use of ICA [15]. Mistani et al. [31] proposed a new method for improving the efficiency of palm print recognition systems using multispectral analysis of hybrid features. A novel preprocessing method for DCT-domain palm print identification was presented by Imtiaz et al. [19]; in this technique, feature extraction is carried out locally using a two-dimensional discrete cosine transform (2D-DCT). In addition to these works, strong surveys covering most palm print recognition systems have been published [18, 20].

The results of an intelligent palm print recognition method are assessed in terms of verification and recognition rate. The correct recognition rate is the number of individuals correctly identified by the biometric device. The verification rate is calculated from the FAR and, on occasion, the FRR: the FAR is the percentage of non-genuine claims that are accepted out of the total number of non-genuine accesses, while the FRR is the proportion of legitimate claims that are rejected out of all legitimate accesses. Any biometric device must have the lowest feasible FAR and FRR in order to work properly.

Jaspreet Kour et al. studied palm print recognition; many existing palm print recognition algorithms were discussed, used, and analyzed, and a simple approach to preprocessing and region of interest (ROI) extraction was applied [2, 21, 22]. The available databases were also examined and analyzed. Research in this field has not come to a halt and is still improving and expanding at a rapid rate. However, among the previous works on detection and recognition, especially those based on the palm print, no classification and detection study has been carried out to detect and classify persons according to their ages. In this paper, this idea is considered throughout the detection, recognition, and classification steps. In the studies mentioned above, and in the hundreds not mentioned, ages are not considered in the separation and classification processes.

Two novel components make up the mechanism proposed in this study. The first is a pre-processing module that automatically aligns palm print photos captured with a peg-free sensor; in this module, the hand picture is separated from the background and the middle of the palm is isolated for identification. The second is a thorough comparison and analysis of three different subspace projection approaches, principal component analysis, Fisher discriminant analysis, and independent component analysis, performed on a typical palm print database.

3. CNN Deep Learning

Deep learning methods are modern techniques used in practice for detection and classification [8, 13, 23]. They are applied in both academic and practical recognition tasks, such as image recognition and voice recognition. The primary distinction between deep learning and standard machine learning is that feature extraction is already included internally in deep learning, whereas it is not performed automatically in classic machine learning.

One of the most common machine learning models of recent years relies on a deep learning approach known as the Convolutional Neural Network (CNN). It has been widely adopted since its breakthrough in the visual computing area; its exploitation for facial processing and facial recognition is particularly noteworthy. The CNN is a supervised machine learning method that can extract "deep" information from a data set through strict example-based training; in this sense, the method resembles how the human brain operates during any kind of learning process. CNNs have been applied successfully to image feature extraction, person and object recognition, and image segmentation. The recent explosion of CNN use is due to their ability to recognize complex features using a non-linear multi-layer architecture. The origin of the CNN dates back to the beginning of the 1990s. The earlier skepticism about CNNs was usually based on the question of whether gradient-based feature extraction would always be appropriate.

A typical CNN, a variant of the multi-layer perceptron (MLP), has numerous convolution layers followed by subsampling (pooling) layers, with fully connected (FC) layers at the very end. Figure 3 displays an example CNN architecture for image classification.

In the feature extraction procedure, a convolution tool is used to find and separate the distinctive features of a picture for examination. The feature extraction network consists of several pairs of convolutional and pooling layers. A fully connected layer then uses the previously recovered features, the output of the convolutional process, to classify the images. This CNN feature extraction approach seeks to extract as few features from a dataset as is practical, developing new features by combining the features of an initial collection into a single new representation [24].

Figure 3. An example of CNN architecture for image classification

The main argument concerned gradient-based optimization methods, which can lead to falling into a local minimum. In recent years, these assumptions have largely been abandoned because of the promising results CNNs have produced across many areas of research. For this reason, state-of-the-art deep models based on CNN structures are now used mostly in visual computing fields [25, 26].
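To make the generic architecture of Figure 3 concrete, a layer stack of this kind can be written with MATLAB's Deep Learning Toolbox as follows. This is an illustrative sketch only: the input size, the number of filters, and the number of layers are assumptions, not the exact network trained in this paper.

% Illustrative CNN layer stack (sizes and filter counts are assumed)
layers = [
    imageInputLayer([128 128 1])                   % grayscale palm ROI, size assumed
    convolution2dLayer(3, 16, 'Padding', 'same')   % convolution extracts local features
    reluLayer                                      % non-linear activation
    maxPooling2dLayer(2, 'Stride', 2)              % subsampling (pooling) layer
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(21)                        % one output per class (21 classes in this work)
    softmaxLayer                                   % class probabilities
    classificationLayer];                          % cross-entropy classification output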

4. Morphological Operations

Morphological operations are among the most important techniques used in image processing and other fields. They include removing unwanted parts of images and restructuring parts that have been eroded. Two basic operations are therefore involved: erosion and dilation [11]. Erosion is used to thin an object, while dilation is used to increase its size (Figure 4).

Figure 4. Dilation and erosion

To do this, a suitable structuring element must be used. This element is represented as a rectangular matrix of ones and zeros, where the ones are active points in the matrix and the zeros are not [5, 27, 28] (Figure 5).

Figure 5. Representation of elements
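As a minimal sketch, the two operations and the structuring element described above can be expressed with MATLAB's Image Processing Toolbox; the element size and the input file name are assumptions for illustration.

% Illustrative erosion, dilation, and opening (element size and file name are assumed)
BW = imbinarize(rgb2gray(imread('palm_sample.jpg')));  % any binary image will do
se = strel('rectangle', [3 3]);       % matrix of ones acting as the structuring element
eroded  = imerode(BW, se);            % erosion thins the object
dilated = imdilate(BW, se);           % dilation increases the object size
opened  = imopen(BW, se);             % opening = erosion followed by dilation (noise removal)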

5. Methods and Results

In this work, the Matlab 2019 environment is used, installed on an Acer laptop with an Intel(R) Core(TM) i3-4030U CPU @ 1.90 GHz, 4 GB RAM, a 500 GB hard disk, and 64-bit Windows 10 Pro (x64-based processor). Biometric systems are based on one or more of the sensing elements of the human body [13, 29-31]; because it is less commonly exploited and less prone to rigging than other modalities, the palm print is used here for recognition and authentication.

The palm recognition model consists of several steps. The first step is palm detection: the palm should be extracted from the whole image, which includes part of the hand and its fingers in addition to the palm region, and image processing techniques are used for this purpose. Other techniques are then used to remove noisy portions from the extracted image. Together these steps perform ROI selection on the images used to train the neural network. After preparing the images for training, a suitable neural network model has to be prepared; a deep neural network, represented by a convolutional neural network, is selected. The following paragraphs and sections describe these steps in detail.

In this work many classes are used, containing the palms of many persons of different ages. The palms are first captured using a camera, under different environments and lighting conditions. The palm recognition system used here has two phases: the training phase and the testing phase (Figure 6). The training phase includes constructing a dataset of several images (containing palms) with twenty-one classes; twenty of these classes are assigned to the users, separated according to age (20-60 years), while the remaining class is for strangers (in the same age range).

Figure 6. Steps of palm’s detection work

Each class contains 100 images from ten persons, ten images per person. Once the dataset is selected, each image must be passed to the detection phase to capture the hand and crop it from the whole image. In this work, a white background is used for the images. The first step of the detection phase is to convert the image into grayscale format and then into binary format; after that, a morphological opening is applied to remove noise and small noisy pixels from the image. Region-properties tools are used to obtain a bounding box of the area of interest and to create a new image containing the region of interest (ROI) of the original image, as shown in Figure 7. After detection is completed, the selected items have to be trained and classified. The CNN is used to train the system and classify the data according to the input dataset [11]. As mentioned before, the dataset is divided into 21 classes, as shown in Figure 8; each class covers a range of ages, as shown in Table 1.

Table 1. Classes information relating to the ages

Class No. | Range of Age (Years) | No. of People | No. of Images / Person | Total No. of Images
1  | 20-21     | 10 | 10 | 100
2  | 22-23     | 10 | 10 | 100
3  | 24-25     | 10 | 10 | 100
4  | 26-27     | 10 | 10 | 100
5  | 28-29     | 10 | 10 | 100
6  | 30-31     | 10 | 10 | 100
7  | 32-33     | 10 | 10 | 100
8  | 34-35     | 10 | 10 | 100
9  | 36-37     | 10 | 10 | 100
10 | 38-39     | 10 | 10 | 100
11 | 40-41     | 10 | 10 | 100
12 | 42-43     | 10 | 10 | 100
13 | 44-45     | 10 | 10 | 100
14 | 46-47     | 10 | 10 | 100
15 | 48-49     | 10 | 10 | 100
16 | 50-51     | 10 | 10 | 100
17 | 52-53     | 10 | 10 | 100
18 | 54-55     | 10 | 10 | 100
19 | 56-57     | 10 | 10 | 100
20 | 58-59     | 10 | 10 | 100
21 | Strangers | 10 | 10 | 100

Figure 7. ROI (region of interest)
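A minimal MATLAB sketch of the detection step described above (grayscale conversion, binarization, morphological opening, and bounding-box cropping) is given below; the file name, structuring-element radius, and area threshold are illustrative assumptions rather than the exact settings used.

% Illustrative ROI extraction (file name, element radius, and area threshold are assumed)
I  = imread('palm_sample.jpg');            % palm photographed on a white background
G  = rgb2gray(I);                          % colour to grayscale
BW = ~imbinarize(G);                       % binarize; invert so the hand is the foreground
BW = imopen(BW, strel('disk', 5));         % opening removes small noisy pixels
BW = bwareaopen(BW, 500);                  % discard any remaining small blobs
stats = regionprops(BW, 'BoundingBox', 'Area');
[~, k] = max([stats.Area]);                % keep the largest connected region (the hand)
roi = imcrop(I, stats(k).BoundingBox);     % crop the region of interest (Figure 7)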

The CNN trains the network to classify any input image into one of the 21 classes. In the training phase, the data are separated into two parts: the first, 70% of the images, is used for training, while the second, 30% of the images, is used for validation of the system. The accuracy obtained for the training images reached 96%, which represents the validation of the system shown in Figure 9; the average time required to obtain a result was 0.01×10⁻⁶ seconds. This processing time is very small and gives a good indication that the system can be used in real-time applications.
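A hedged sketch of this training phase, assuming the cropped palm ROIs are stored in one folder per class, could look as follows in MATLAB; the folder name, image size, and training options are assumptions, and 'layers' refers to the illustrative stack sketched in Section 3.

% Illustrative training sketch (folder layout, image size, and options are assumed)
imds = imageDatastore('palm_rois', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');               % 21 class sub-folders
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.7, 'randomized');     % 70% training / 30% validation
augTrain = augmentedImageDatastore([128 128], imdsTrain, 'ColorPreprocessing', 'rgb2gray');
augVal   = augmentedImageDatastore([128 128], imdsVal,   'ColorPreprocessing', 'rgb2gray');
opts = trainingOptions('sgdm', 'MaxEpochs', 10, 'InitialLearnRate', 1e-3, ...
                       'ValidationData', augVal, 'Plots', 'training-progress');
net  = trainNetwork(augTrain, layers, opts);   % 'layers' as in the Section 3 sketch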

Figure 8. The twenty-one classes

Figure 9. System validation

If any given palm image has to be tested, the sequence shown in Figure 10 must be followed: reading the image, palm detection, the classification stage, and then selecting the related class.

Figure 10. Testing system steps
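As a sketch of this testing sequence, a single image could be passed through detection and classification as follows; extractPalmROI is a hypothetical wrapper around the detection sketch above, and the file name and input size are assumptions.

% Illustrative test of a single palm image (extractPalmROI is a hypothetical helper)
testImg = imread('test_palm.jpg');                % read the image
roi     = extractPalmROI(testImg);                % palm detection (see the ROI sketch above)
roi     = imresize(rgb2gray(roi), [128 128]);     % match the assumed network input size
label   = classify(net, roi);                     % CNN assigns one of the 21 classes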

As mentioned before, the classifier is based on the CNN; according to this classifier, the system decides which class the image belongs to. Finally, the results obtained for each user during the testing process are depicted in Figure 11.

Figure 11. Testing system results

The diagram in Figure 12 shows the statistical results for all of the twenty classes used in this paper.

Figure 12. System statistical results

6. Conclusions

The CNN method is used in this paper for training and classification; it can be considered one of the most modern and effective methods in this field. The classes of the data are separated according to the ages of the selected persons (data set). Each class includes ten persons within a known age range, with ten images each. According to the obtained results, more than 96% of the processed images are classified accurately using the proposed method, which gives better results compared with other intelligent methods. Referring to the reviewed and presented papers, this is the first time the data set (persons) has been classified according to age and separated into classes, especially using the palm print recognition technique. It is suggested that this method and mechanism be applied to more than the present number of classes (21), while also increasing the number of people per class, without reducing the accuracy. The average time needed to complete the work, 0.01×10⁻⁶ seconds, makes the method a high-speed system that can be suggested for real-time use. From these results, it can be concluded that the system can be used as a biometric model to recognize persons of different ages in many applications.

Acknowledgment

The authors wish to thank the University of Mosul and the staff of the College of Engineering, especially the staff of the Mechatronics Engineering and Computer Engineering departments, for their support.

References

[1] Aghdam, O.A., Ekenel, H.K., (2018). Robust deep learning features for face recognition under mismatched conditions. 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, pp. 1-4. https://doi.org/10.1109/SIU.2018.8404319

[2] Aizi, K., Ouslim, M., (2019). Score level fusion in multi-biometric identification based on zones of interest. Journal of King Saud University - Computer and Information Sciences. https://doi.org/10.1016/j.jksuci.2019.09.003

[3] Foudil, B. (2017). Biometric system for identification and authentication. https://hal.science/tel-01456829/document.

[4] Joardar, S., Chatterjee, A. (2019). Palm dorsa vein pattern based biometric verification system using anisotropic generalized procrustes analysis on weighted training dictionary. Applied Soft Computing 85: 105562. https://doi.org/10.1016/j.asoc.2019.105562

[5] Benzidane, R., Sereir, Z., Bennegadi, M.L., Doumalin, P., Poilâne, C. (2018). Morphology, static and fatigue behavior of a natural UD composite: The date palm petiole ‘wood.’ Composite Structures, 203: 110-123. https://doi.org/10.1016/j.compstruct.2018.06.122

[6] Çalik, N., Kurban, O.C., Yilmaz, A.R., Ata, L.D., Yildirim, T. (2017). Signature recognition application based on deep learning. 2017 25th Signal Processing and Communications Applications Conference (SIU), Antalya, Turkey, pp. 1-4. https://doi.org/10.1109/SIU.2017.7960454

[7] Chai, T., Prasad, S., Wang, S. (2019). Boosting palmprint identification with gender information using DeepNet. Future Generation Computer Systems, 99: 41-53. https://doi.org/10.1016/j.future.2019.04.013

[8] Chen, Y., Yang, J., Wang, C., Liu, N. (2016). Multimodal biometrics recognition based on local fusion visual features and variational Bayesian extreme learning machine. Expert Systems with Applications, 64: 93-103. https://doi.org/10.1016/j.eswa.2016.07.009

[9] Chen, Y., Wo, Y., Xie, R., Wu, C., Han, G. (2019). Deep Secure Quantization: On secure biometric hashing against similarity-based attacks. Signal Processing 154: 314-323. https://doi.org/10.1016/j.sigpro.2018.09.013

[10] Elmahmudi, A., Ugail, H. (2018). Experiments on deep face recognition using partial faces. 2018 International Conference on Cyberworlds (CW), Singapore, pp. 357-362. https://doi.org/10.1109/CW.2018.00071

[11] Kumari, P., Seeja, K.R. (2020). Periocular Biometrics for non-ideal images: With off-the-shelf Deep CNN & Transfer Learning approach. Procedia Computer Science, International Conference on Computational Intelligence and Data Science, 167: 344-352. https://doi.org/10.1016/j.procs.2020.03.234

[12] Zhang, Q. (2018). Deep learning of electrocardiography dynamics for biometric human identification in era of IoT. 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON), USA, pp. 885-888. https://doi.org/10.1109/UEMCON.2018.8796676

[13] Wu, L., Xu, Y., Cui, Z., Zuo, Y., Zhao, S., Fei, L. (2021). Triple-type feature extraction for palmprint recognition. Sensors, 21: 4896. https://doi.org/10.3390/s21144896

[14] Zhao, S., Zhang, B. (2020). Deep discriminative representation for generic palmprint recognition. Pattern Recognition 98: 107071. https://doi.org/10.1016/j.patcog.2019.107071

[15] Connie, T., Jin, A.T.B., Ong, M.G.K., Ling, D.N.C. (2005). An automated palmprint recognition system. Image and Vision Computing 23: 501-515. https://doi.org/10.1016/j.imavis.2005.01.002

[16] Uddin, Z., Jan, Z., Ahmad, J., Abbasi, A. (2014). A novel technique for principal lines extraction in palmprint using morphological TOP-HAT filtering. World Applied Sciences Journal, 31: 2010-2014. https://doi.org/10.5829/idosi.wasj.2014.31.12.229

[17] Connie, T., Teoh, A., Goh, M., Ngo, D. (2003). Palmprint Recognition with PCA and ICA. https://www.ist.massey.ac.nz/dbailey/sprg/IVCNZ/Proceedings/IVCNZ_41.pdf.

[18] Rane, M., Somvanshi, P. (2012). Survey of palmprint recognition. International Journal of Scientific & Engineering Research, 3: 1-7. https://www.ijser.org/paper/Survey-Of-Palmprint-Recognition.html. 

[19] Imtiaz, H., Aich, S., Fattah, S.A. (2012). A novel pre-processing technique for frequency domain palm-print recognition. Journal of Electrical Systems, 8: 185-197. https://www.ijstr.org/final-print/april2012/A-Novel-Pre-processing-Technique-for-DCT-domain-Palm-print-Recognition.pdf.

[20] Kong, A., Zhang, D., Kamel, M. (2009). A survey of palmprint recognition. Pattern Recognition, 42: 1408-1418. https://doi.org/10.1016/j.patcog.2009.01.018

[21] Azhar, M. (2017). Orientation effectiveness in the objects detection areas using types of edges detection techniques. International Journal of Computer Science & Engineering Survey, 8(2): 13-26. http://dx.doi.org/10.5121/ijcses.2017.8202

[22] Kubota, T., Ushijima, Y., Nishimura, T. (2006). A region-of-interest (ROI) template for three-dimensional stereotactic surface projection (3D-SSP) images: Initial application to analysis of Alzheimer disease and mild cognitive impairment. International Congress Series, 1290: 128-134. https://doi.org/10.1016/j.ics.2005.11.104

[23] Qian, J., Yang, J., Tai, Y., Zheng, H. (2016). Exploring deep gradient information for biometric image feature representation. Neurocomputing, Binary Representation Learning in Computer Vision, 213: 162-171. https://doi.org/10.1016/j.neucom.2015.11.135

[24] Comer, M.L., Delp, E.J. (1999). Morphological operations for color image processing. Computer Vision and Image Processing Laboratory, School of Electrical Engineering, Purdue University, West Lafayette, Indiana. https://doi.org/10.1117/1.482677

[25] Alzubaidi, L., Zhang, J., Humaidi, A.J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M.A., Al-Amidie, M., Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(53). https://doi.org/10.1186/s40537-021-00444-8

[26] Bengs, T. (2018). Putting authentication in the palm of your hand. Biometric Technology Today, 2018: 8-11. https://doi.org/10.1016/S0969-4765(18)30095-X

[27] Fabiani, C., Pisello, A.L., Barbanera, M., Cabeza, L.F., (2020). Palm oil-based bio-PCM for energy efficient building applications: Multipurpose thermal investigation and life cycle assessment. Journal of Energy Storage, 28: 101129. https://doi.org/10.1016/j.est.2019.101129

[28] Azhar Abdilatef, M. (2014). Moving object detection in industrial line application. http://earsiv.cankaya.edu.tr:8080/browse?type=author&value=Abdilatef%2C+Muhamad+Azhar.

[29] El-Abed, M., Charrier, C. (2014). Evaluation of Biometric Systems. https://hal.science/hal-00990617/file/InTech.

[30] Azhar Abdilatef, M. (2016). A Comparison between moving object detection methods including a novel algorithm used for industrial line application. International Journal of Computer Applications, 152. https://www.ijcaonline.org/archives/volume152/number1/26282-26282-2016911753?format=pdf.

[31] Mistani, S.A., Minaee, S., Fatemizadeh, E. (2011). Multispectral Palmprint Recognition Using a Hybrid Feature. https://doi.org/10.48550/ARXIV.1112.5997