Proposed Convolutional Neural Network Model for Finger Vein Image Classification

Ahmed H. Alhadethy* Ikram Smaoui Ahmed Fakhfakh Saad M. Darwish

School of Electronics and Telecommunications, University of Sfax, Sfax 3029, Tunisia

School of Electronics and Telecommunications, LETI Laboratory, University of Sfax, Sfax 3029, Tunisia

School of Electronics and Telecommunications, MaRTS Laboratory CRNS, University of Sfax, Sfax 3029, Tunisia

Department of Information Technology, Institute of Graduate Studies and Research, Alexandria University, Alexandria 21526, Egypt

Corresponding Author Email: ikram.smaoui@enetcom.usf.tn

Pages: 127-137 | DOI: https://doi.org/10.18280/ria.380113

Received: 26 August 2023 | Revised: 10 November 2023 | Accepted: 15 November 2023 | Available online: 29 February 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

The identification of individuals through finger vein patterns has become a prominent biometric technique due to its non-invasiveness and uniqueness. Convolutional neural networks (CNNs) have been at the forefront of this technology, offering impressive recognition rates within large, labeled datasets. Despite their successes, the application of CNNs to finger vein recognition remains a challenging task, largely due to the high dimensionality of input data and the multitude of classification outputs required. This paper presents an optimized CNN model designed to address the intricacies of finger vein image classification. It is posited that increasing the number of feature extraction layers, coupled with a strategic selection of kernel sizes for each layer, significantly enhances model accuracy. Through a series of systematic experiments, the optimal layer configurations were identified, resulting in an architecture that surpasses previous models in classification precision. The proposed CNN architecture demonstrates a classification accuracy exceeding 99%, an improvement over existing methods. It is noteworthy that the development of this model has been constrained by the limited scale of current finger vein databases, which poses risks of overfitting. Hence, the expansion of these databases is suggested as a future avenue to reinforce the robustness of the training process. The results presented in this study underscore the potential of deep learning techniques in biometric security, with the advanced CNN model setting a new benchmark in finger vein recognition.

Keywords: 

CNN algorithm, multiple-functional layers, image classification, finger vein identification, valuable feature extraction

1. Introduction

In today's world, the act of accessing private or confidential information has become an integral part of our daily lives. This rapidly growing trend, which encompasses a larger portion of the population with each passing day, presents significant security risks. One widely embraced solution to this issue is authentication, where an individual is expected to provide certain credentials, such as an ID and password pair or answers to secret questions. These passwords need to strike a balance between being easy to remember and sufficiently strong to prevent unauthorized access [1-3]. However, complex passwords or cryptographic keys are often too difficult to commit to memory.

On the contrary, the utilization of biometric data, which involves behavioral or biological traits, holds the potential to replace password-based authentication. Biometric-based authentication requires the physical presence of the individual, making it a challenging task to circumvent. Moreover, compared to conventional password-based systems, biometrics are notably harder to steal, duplicate, or share [4, 5].

Information security is becoming more and more important as a result of advances in science and technology and ongoing improvements to safety standards. The security of people's information can be ensured through biometric identification technology, which will eventually supplant traditional identity verification methods in daily life [1]. The location of finger veins beneath the skin's surface offers several advantages, including enhanced security, privacy, contactless identification, and cost-effectiveness, in comparison to other biometric identification methods. Consequently, finger vein recognition technology holds significant potential for various applications and has gained prominence as a focal point of research in biometric recognition [2].

Conventional biometric approaches, such as iris recognition, do not inherently guarantee confidentiality, as the features they rely on are visible externally on the human body, rendering them susceptible to potential forgery. To address this issue, researchers proposed a biometric system based on the patterns of veins inside the finger, thus utilizing features located internally within the human body. Figure 1 compares finger vein and traditional biological identifications. In comparison to traditional biological identification technologies, this new method excels in accuracy and is highly reliable in preventing counterfeiting, all while being less restrictive. Firstly, it relies on the uniqueness of individual finger vein patterns, resulting in false alarms of less than 0.01% and false identifications of less than 0.0001%. Secondly, the finger vein patterns are located inside the human body, rendering them immune to theft or replication. Thirdly, the non-contact, short-distance infrared ray imaging is impervious to external factors like dirt, moisture, or damage to the finger's skin.

Figure 1. Comparison between finger vein and traditional biological identifications

These advantages make this technology highly suitable for deployment in commercial establishments, residences, and other private settings, showcasing its substantial potential for practical applications [1, 6, 7].

Figure 2. Two ways of finger vein acquisition: (a) Light reflection; and (b) Light transmission

The extraction of vein patterns is dependent on the presence of blood, because hemoglobin in the blood absorbs infrared light, showing vein patterns as distinct dark outlines. A specialized camera captures this interaction of infrared light, creating an image of the finger vein pattern. This image is then translated into pattern data, which is saved as a template for biometric authentication data for an individual. During the authentication procedure, a specific finger vein image is taken and compared to the person's previously stored template [3].

Figure 2 depicts two approaches for collecting finger vein images: the light reflection method and the light transmission method. The positioning of near-infrared light is the major difference between both approaches. The light reflection approach involves placing near-infrared light on the palm side of the finger and capturing the finger vein pattern through light reflection from the palm's surface. In contrast, near-infrared light is placed on the dorsal side of the finger in the light transmission method, allowing the light to penetrate the finger. The light transmission method, as opposed to the light reflection approach, can capture high-contrast images, which is why most image acquisition systems choose to employ it [4].

The main issue in finger vein detection is the degradation of finger-vein images when the separation between venous and non-venous regions is insufficient, which prevents the accurate use of finger-vein network properties. In practice, finger-vein segmentation results are often inadequate, and they are very vulnerable to distortions due to the low-contrast nature of finger-vein images. Consequently, the reliable segmentation of the finger-vein network is a critical requirement for successful finger-vein recognition [5]. In general, there are two main challenges for finger vein recognition:

1) The quality of infrared finger vein images significantly impacts recognition performance.

2) Limited texture information in finger vein patterns and the potential for variations in the finger's pose can introduce challenges for finger-vein recognition, especially when feature extraction methods lack robust generalization [2].

Another critical challenge is designing a robust classifier that achieves high recognition rates and fast recognition speeds to make the system practical for real-world applications. Even though images are obtained from the same individual's finger, variations like transitions, scaling, or rotations due to factors such as user actions or acquisition conditions can result in non-identical images. These variations increase the distance between images of the same person, reducing matching performance, even when using accurately segmented images [1].

In recent years, a wide range of feature extraction and classification applications have increasingly utilized convolutional neural networks (CNNs). A CNN can identify finger veins by using convolution kernels of varying sizes to extract fine information from images. For vein recognition, conventional CNNs were known to have complicated topologies [3-5], which require a lot of data and computing power for inference and training. Because of this, more effective lightweight CNNs that use less memory and computing power were created. Lightweight CNN architectures have been presented in several recent publications to enhance recognition task performance while decreasing computing costs. One of these systems employed a compact CNN alongside a triplet loss function, which was structured with stage blocks to capture intricate features and stem blocks to capture broader features from the images [6]. Recently, several studies have applied CNN technologies to distinguish finger veins based on their biometric images [7-9]. A different system developed a single unified CNN that performed well in tests requiring both finger vein recognition and anti-spoofing [10].

Figure 3. CNN architecture

CNNs, short for convolutional neural networks, represent a type of artificial neural network. CNNs have demonstrated proficiency in various image-related tasks, including classification, detection, and segmentation, and are predominantly employed for processing visual input. They are frequently used in a variety of computer vision tasks and have attracted interest from a variety of industries. The convolution module accepts an input picture and is coupled with the input layer [5]. It is primarily composed of a number of blocks, which are in turn composed of a series of layers, with convolutional layers serving as the primary structural elements (see Figure 3) [4, 5].

This module functions as a feature extractor, since it transforms a picture into a feature vector [6]. The vector is subsequently linked to the input of the classification module, which comprises interconnected layers. This classification module is responsible for combining the collected characteristics to categorize the input picture; the final layer of this module provides the CNN output (prediction) [6]. Using the softmax function, the output values are typically normalized into the range (0, 1). The fundamental components of convolutional neural networks are discussed below, with particular emphasis on the layers that make up the convolutional and classification modules. Fully connected layers are replaced with convolutional layers for at least one network layer. Convolutional layers are followed by non-linear activation functions, such as ReLU, after which one or two fully connected layers may be added to obtain the final classification output. Each of these layer types, along with its parameters, how they are established, and how the CNN is trained, is explained in the remainder of this section.
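As an illustration of the normalization step mentioned above, the following minimal NumPy sketch (not taken from the paper; the score values are arbitrary examples) shows how softmax maps the final layer's raw scores into the (0, 1) range so that they sum to one:

```python
import numpy as np

def softmax(logits):
    # Subtract the maximum score for numerical stability before exponentiating.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])   # raw outputs of the last layer (example values)
probs = softmax(scores)              # approximately [0.659, 0.242, 0.099]
print(probs, probs.sum())            # the probabilities sum to 1
```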

The scope of this research is the development of a security system using artificial intelligence and vein fingerprinting, since vein fingerprinting is a modern biometric method characterized by high security in identifying people. It was therefore necessary to develop an intelligent system that identifies people with high accuracy through the finger vein, and a new algorithm was developed to determine a person's identity from finger vein images.

Figure 4. Dataset sample classes

Since there are few publicly available datasets for finger vein recognition, the experiments were carried out utilizing the well-known Finger Vein USM (FV-USM) Database [11]. This database offers information about finger veins and finger shape, as well as extracted regions of interest (ROI) for vein recognition. It may be employed to validate both unimodal (finger vein or finger geometry) and bimodal (finger vein and geometry) systems.

The database contains images acquired from 123 volunteers, 83 males and 40 females, who were personnel and students at Universiti Sains Malaysia. The ages of the individuals ranged from 20 to 52 years. Each subject provided data from four fingers: the left index, left middle, right index, and right middle, yielding a total of 492 finger classes.

The captured finger images yielded two crucial features: geometry and vein patterns. Every finger was captured six times within a single session, and each individual participated in two sessions, spaced at least two weeks apart. In the first session, a total of 2952 images (123 individuals × 4 fingers × 6 captures) were obtained. Consequently, across the two sessions, a total of 5904 images from 492 finger classes were collected. These images possessed a spatial resolution of 100×300 and a depth resolution of 256 grey levels. For reference, Figure 4 illustrates a sample of finger vein images [11].

2. Related Work

In 2023, a unique strategy was presented that combines a lightweight, low-complexity convolutional neural network (CNN) with an intellectual property (IP) core to speed up the inference process in finger vein recognition [12]. In client mode, this neural network system functions autonomously. It captures the user's finger vein image with a near-infrared (NIR) camera built into an embedded system, extracts vein features efficiently using specialized algorithms, and quickly completes user identification. Implementing various preprocessing approaches and modifying the CNN improved image quality and recognition accuracy. The practicality and resilience of this proposed finger vein identification system were verified through extensive experimental data collected using finger vein image capture equipment developed in their laboratory, adhering to specifications akin to existing market products.

In 2023, a study presented a novel, cost-effective, end-to-end contactless system for wrist vein biometric detection based on deep learning [13]. The FYO wrist vein dataset was used to train a unique U-Net CNN structure, which successfully extracted and segmented wrist vein patterns; the extracted images exhibited a Dice coefficient of 0.723. The study implemented a CNN and a Siamese neural network to match wrist vein images, achieving a highest F1 score of 84.7%. Remarkably, the average matching time was under 3 seconds on a Raspberry Pi. These subsystems were seamlessly integrated through a designed graphical user interface (GUI) to establish a fully functional, deep learning-based wrist biometric recognition system.

In 2022, a research endeavor [14] proposed a biometric technique predicated on the fusion of bimodal features from finger vein and face data, employing a convolutional neural network (CNN). This fusion operation occurred within the feature layer and employed a self-attention mechanism to derive weights for both biometrics. The self-attention weighted feature was combined with the bimodal fusion feature through channel concatenation (Concat), in conjunction with the ResNet residual structure. To substantiate the efficacy of bimodal feature-layer fusion, experiments were conducted utilizing AlexNet and VGG-19 network models for extracting finger vein and face image features as inputs to the feature fusion module. These comprehensive experiments exhibited recognition accuracies exceeding 98.4%, underscoring the high efficiency of the bimodal feature fusion.

In 2022, a novel finger vein identification network based on a CNN with a hybrid pooling mechanism was introduced. A block-wise feature extraction network was used in the scheme to extract discrete characteristics from inter-class vein image samples, regardless of their visual quality [15]. Images entering FVR-Net underwent preprocessing to segment vein patterns from the background. The feature extraction network comprised blocks consisting of a convolutional layer followed by hybrid pooling, with output activation maps concatenated before passing features to the next block within the network. The hybrid pooling layer incorporated both max pooling and average pooling in parallel, enabling the activation of discrete features while considering the entire input volume for better feature localization. After feature extraction, three fully connected layers (FCLs) were utilized for classification. The model underwent extensive experimentation on publicly available finger vein datasets, achieving outstanding recognition performance with accuracies reaching up to 97.84% and 97.22% for good- and poor-quality images, respectively. Various network hyperparameters were adjusted to optimize the model's settings for the best recognition accuracy in a finger vein biometric system.

In 2019, a novel finger vein detection approach based on convolutional neural networks was introduced [16]. When working with finger-vein images of varying quality, this method displayed outstanding stability and precision. It was rigorously evaluated using four publicly available databases. The major goal of this work was to demonstrate a deep learning strategy for finger vein recognition that consistently achieved a correct identification rate of more than 95% across all four databases studied.

3. Dataset Acquisition and Preparation

3.1 Dataset description

In this section, we provide an overview of the dataset employed to assess the efficacy of the proposed identification model. The dataset, referred to as the Vein Finger Dataset, contains 4428 images divided into 123 classes; each class has 34 images. Figure 4 illustrates samples of classes in the dataset. The images were obtained from the Finger Vein USM (FV-USM) Database [11]. The dataset was split into a training segment and a testing segment, as outlined below.

Training Part: 3100 images from the dataset, or 70% of the dataset, were used to train the CNN algorithm.

Testing Part: 1328 images, or 30% of the dataset, were used to test the CNN algorithm.
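The split above can be reproduced with a short sketch such as the following, which assumes (hypothetically) that the ROI images are organized into one folder per finger class; the folder names and root path are illustrative only:

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

image_paths, labels = [], []
for class_dir in sorted(Path("FV_USM_ROI").iterdir()):   # hypothetical root folder
    if class_dir.is_dir():
        for img_path in class_dir.glob("*.png"):
            image_paths.append(str(img_path))
            labels.append(class_dir.name)

# A stratified split keeps the per-class balance in both subsets.
train_x, test_x, train_y, test_y = train_test_split(
    image_paths, labels, test_size=0.30, stratify=labels, random_state=42)
print(len(train_x), "training images,", len(test_x), "testing images")
```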

3.2 Input data preprocessing

In this step, noise is removed from the images using three types of filters (mean, median, and Gaussian). The goal of noise removal is to increase the accuracy of feature extraction and prevent the features from being affected by noise. Since the images vary widely in size, all images in the dataset are resized to a fixed size (64×64). The pixel values are then normalized by dividing them by 255 to obtain values between 0 and 1, which increases system speed while decreasing storage requirements.
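A minimal sketch of this preprocessing pipeline is given below. Only the 64×64 target size and the division by 255 are stated in the paper; the 3×3 filter windows and the use of OpenCV are assumptions made for illustration:

```python
import cv2
import numpy as np

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.blur(img, (3, 3))              # mean filter (assumed 3x3 window)
    img = cv2.medianBlur(img, 3)             # median filter
    img = cv2.GaussianBlur(img, (3, 3), 0)   # Gaussian filter
    img = cv2.resize(img, (64, 64))          # unify the spatial size
    return img.astype(np.float32) / 255.0    # pixel values in the range (0, 1)
```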

4. Finger Vein Identification Based on CNN Multiple Functional Layers

This paper focuses on proposing a new scenario with high accuracy for finger vein identification. It is based on building a deep CNN architecture through which high prediction accuracy can be reached.

4.1 Proposed identification architecture

To enhance the performance of finger vein identification and effectively extract essential features from an individual's input dataset, an advanced deep network with ten key layers has been designed. This CNN model consists of various components, including an input layer, three convolution layers, three pooling layers, two fully connected layers, and an output layer. The architecture of the proposed CNN model is depicted in Figure 5.

Figure 5. The proposed CNN model

Three convolution layers were utilized in the suggested system to increase the number of features recovered from the test image. To extract a feature map, three layers were created using varying kernel sizes. The characteristics of convolutional layers are shown in Table 1.

Table 1. Convolution layer characteristics

Layer                  Activation Function    Kernel Size
First Convolution      ReLU                   256
Second Convolution     ReLU                   128
Third Convolution      ReLU                   64

Only significant and powerful features are chosen for the pooling layer, and three levels of pooling have been employed to choose the features that have the most influence on the determination of vein class. The characteristics of the pooling layer are shown in Table 2.

Table 2. Pooling layer characteristics

Layer          Kernel Size
First Pool     5
Second Pool    2
Third Pool     2

Dropout layers are applied after the pooling layers. The dropout layer serves as an auxiliary layer by removing features that have little bearing on classification accuracy and vein class prediction. In the proposed model, three dropout layers have been included, as shown in Table 3.

Table 3. Characteristics of dropout layers

Layer             Dropout Rate
First Dropout     0.025
Second Dropout    0.4
Third Dropout     0.5

Table 4. Fully connected layer characteristics

Layer                      Activation Function
First Fully Connected      ReLU
Second Fully Connected     SoftMax

The system flattens the features after the dropout layer chooses the best ones, converting them from a 2D array to a 1D array. Two fully connected layers complete the suggested system; the activation functions of the first and second layers are ReLU and softmax, respectively. Table 4 displays the characteristics of the fully connected layers.

This layer trains the dataset and assigns a weight to each vein finger category based on the attributes extracted from the images in that category.
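To make the architecture concrete, the following Keras sketch assembles the layers listed in Tables 1-4. It is a minimal illustration under several assumptions: the values 256/128/64 from Table 1 are interpreted as the number of filters per convolution (the spatial kernel dimensions are not specified, so 3×3 is assumed), the first dense layer's width of 512 and the Adam optimizer are likewise assumptions, while the pool sizes 5/2/2, dropout rates 0.025/0.4/0.5, 64×64 input, and 123 output classes follow the paper:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 123  # one class per subject, as described in Section 3.1

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),                       # input layer
    layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=5),
    layers.Dropout(0.025),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.4),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.5),
    layers.Flatten(),                                      # 2D feature maps -> 1D vector
    layers.Dense(512, activation="relu"),                  # first fully connected layer (width assumed)
    layers.Dense(NUM_CLASSES, activation="softmax"),       # second fully connected / output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",      # integer class labels assumed
              metrics=["accuracy"])
model.summary()
```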

5. Experiments and Results

The deep learning CNN algorithm's practical training experiments will be discussed in this section, where the algorithm was tested by varying the number of training iterations as follows:

Case 1: Divide the dataset into 90% for training and 10% for testing. The outcomes of training and testing at various epochs are shown in Table 5.

Case 2: Divide the dataset so that 80% was used for training and 20% was used for testing. Table 6 shows the results of testing and training using various epochs.

Case 3: Dividing the dataset into 30% for testing and 70% for training, Table 7 displays the outcomes of training and testing using various epochs.

Several experiments were conducted, each consisting of five stages, to obtain high accuracy with the least possible error: the first stage used 10 epochs, the second 50 epochs, the third 100 epochs, the fourth 150 epochs, and the fifth 200 epochs. In particular, case 3 at epoch 100 achieved a testing accuracy of 99.69%, the maximum testing accuracy achieved by the suggested system. Table 8 summarizes these results.
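A compact sketch of this experimental protocol is shown below; load_dataset() and build_model() are hypothetical helpers standing in for the data pipeline of Section 3 and the model of Section 4:

```python
# Three train/test splits (cases) and five epoch budgets per case.
splits = {"case_1": 0.10, "case_2": 0.20, "case_3": 0.30}   # test fractions
epoch_budgets = [10, 50, 100, 150, 200]

results = {}
for case, test_frac in splits.items():
    (x_train, y_train), (x_test, y_test) = load_dataset(test_size=test_frac)
    for epochs in epoch_budgets:
        model = build_model()                     # fresh model for every run
        model.fit(x_train, y_train, epochs=epochs, verbose=0)
        test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
        results[(case, epochs)] = (test_acc, test_loss)   # collected for Table 8
```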

For the first case, training at 10 epochs (the first stage) yielded a training accuracy (Train ACC) of 0.4914 with a training loss of 0.3074; the corresponding test accuracy (Test ACC) was 0.4444 with a test loss of 0.2876. In the second stage (50 epochs), the training accuracy rose to 0.9118 with a training loss of 0.0745, while the test accuracy was 0.8333 with a test loss of 0.1244. In the third stage (100 epochs), the training accuracy reached 0.9731 with a training loss of 0.0244, while the test accuracy remained 0.8333 and the test loss increased to 0.1900, as shown in Figure 6.

In the fourth stage (150 epochs), the training accuracy was 0.9844 with a training loss of 0.0198, and the test accuracy was 0.8300 with a test loss of 0.2737. In the fifth stage (200 epochs), the training accuracy reached 0.9908 with a training loss of 0.0125, and the test accuracy was 0.8754 with a test loss of 0.2389.

For the second case, training at 10 epochs (the first stage) yielded a training accuracy of 0.4197 with a training loss of 0.3442; the test accuracy was 0.5819 with a test loss of 0.2848. In the second stage (50 epochs), the training accuracy rose to 0.9168 with a training loss of 0.0695, and the test accuracy was 0.8757 with a test loss of 0.1182, as shown in Figure 7.

Table 5. Train-test results of the first case at various epoch numbers (accuracy and loss curves for (a) 10, (b) 50, (c) 100, (d) 150, and (e) 200 epochs)

Table 6. Train-test results of the second case at various epoch numbers (accuracy and loss curves for (a) 10, (b) 50, (c) 100, (d) 150, and (e) 200 epochs)

Table 7. Train-test results of the third case at various epoch numbers (accuracy and loss curves for (a) 10, (b) 50, (c) 100, (d) 150, and (e) 200 epochs)

Table 8. Summary of train-test results

Case      Epoch    Train ACC    Train Loss    Test ACC    Test Loss
Case 1    10       0.4914       0.3074        0.4444      0.2876
          50       0.9118       0.0745        0.8333      0.1244
          100      0.9731       0.0244        0.8333      0.1900
          150      0.9844       0.0198        0.8300      0.2737
          200      0.9908       0.0125        0.8754      0.2389
Case 2    10       0.4197       0.3442        0.5819      0.2848
          50       0.9168       0.0695        0.8757      0.1182
          100      0.9698       0.0305        0.9096      0.1378
          150      0.9748       0.0270        0.9266      0.0712
          200      0.9899       0.0121        0.8870      0.1726
Case 3    10       0.3241       0.3787        0.3717      0.3500
          50       0.8647       0.1447        0.8158      0.1874
          100      0.9951       0.0311        0.9969      0.290
          150      0.9749       0.0262        0.8585      0.1664
          200      0.9724       0.0238        0.8283      0.2280

In the third stage (100 epochs), the training accuracy was 0.9698 with a training loss of 0.0305, and the test accuracy was 0.9096 with a test loss of 0.1378. In the fourth stage (150 epochs), the training accuracy was 0.9748 with a training loss of 0.0270, and the test accuracy was 0.9266 with a test loss of 0.0712.

In the fifth stage (200 epochs), the training accuracy reached 0.9899 with a training loss of 0.0121, and the test accuracy was 0.8870 with a test loss of 0.1726, as shown in Figure 7.

Figure 6. Identification results corresponding to each epoch value for case 1

For the third case, training at 10 epochs (the first stage) yielded a training accuracy of 0.3241 with a training loss of 0.3787; the test accuracy was 0.3717 with a test loss of 0.3500. In the second stage (50 epochs), the training accuracy rose to 0.8647 with a training loss of 0.1447, and the test accuracy was 0.8158 with a test loss of 0.1874. In the third stage (100 epochs), the training accuracy reached 0.9951 with a training loss of 0.0311, as shown in Figure 8.

Figure 7. Identification results corresponding to each epoch value for case 2

Figure 8. Identification results corresponding to each epoch value for case 3

The corresponding test accuracy was 0.9969 with a test loss of 0.029. Table 9 presents the results obtained with the different preprocessing filters.

In the fourth stage (150 epochs), the training accuracy was 0.9749 with a training loss of 0.0262, and the test accuracy was 0.8585 with a test loss of 0.1664.

In the fifth stage (200 epochs), the training accuracy was 0.9724 with a training loss of 0.0238, and the test accuracy was 0.8283 with a test loss of 0.2280.

The following metrics (precision, recall, and F1-score) are used to report the system evaluation findings, which are shown in Table 10.

Table 9. Use of filters

Filter Used                   Train ACC    Train Loss    Test ACC    Test Loss
Mean                          0.9587       0.4897        0.9595      0.4789
Median                        0.9641       0.3657        0.9588      0.4179
Gaussian                      0.9698       0.3361        0.9642      0.3214
Mean, Median and Gaussian     0.9951       0.0311        0.9969      0.290

Table 10. Results of system evaluation

Class        Precision    Recall    F1-Score
Class 1      1.00         1.00      1.00
Class 2      1.00         1.00      1.00
Class 3      1.00         0.97      0.98
Class 4      1.00         1.00      1.00
Class 5      1.00         1.00      1.00
Class 6      0.96         1.00      0.98
Class 7      1.00         1.00      1.00
Class 8      1.00         1.00      1.00
Class 9      1.00         0.94      0.97
...
Class 123    1.00         1.00      1.00
Average      0.9968       0.9961    0.9973
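The per-class scores above could be produced, for example, with scikit-learn's classification_report; in this hedged sketch, x_test and y_test are the held-out images and their integer class labels, and model is the trained network from the earlier sketch:

```python
import numpy as np
from sklearn.metrics import classification_report

y_pred = np.argmax(model.predict(x_test), axis=1)        # predicted class index per test image
print(classification_report(y_test, y_pred, digits=2))   # per-class precision, recall, F1
```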

6. Proposed System Comparison

In this section, the performance of the proposed vein fingerprint recognition system is evaluated by contrasting it with prior studies. To facilitate this comparison, the research papers described in the related work [12-16] were chosen as references. The work in [12] used several preprocessing techniques for image enhancement and a modified CNN for vein fingerprint classification, while [13] used a novel U-Net CNN to classify vein patterns. The study in [14] used two CNN models for classification, AlexNet and VGG-19, while [15] used a CNN with a hybrid pooling mechanism. The last study [16] used a CNN for classification. Table 11 compares the results of the proposed work with these previous works.

Table 11. Comparison with previous works

Research                          Algorithms                                              Accuracy
[12]                              Several preprocessing techniques and a modified CNN     95.82%
[13]                              Novel U-Net CNN                                         84.70%
[14]                              AlexNet and VGG-19                                      98.40%
[15]                              CNN with a hybrid pooling mechanism                     97.84%
[16]                              CNN                                                     95.00%
Proposed Classification System    Proposed CNN model                                      99.69%

7. Conclusions

Distinguishing the vein fingerprints of one person from those of another is challenging because the vein capillaries are very similar across individuals. For this reason, it was necessary to design an effective model for extracting the distinguishing features of vein fingerprints. The preceding tables show consistently high accuracy, enabling the proposed system to recognize vein fingerprints excellently, with a discrimination accuracy exceeding 99 percent. This accuracy is considered excellent, and the use of deep learning proved a successful choice for vein fingerprint classification. The preceding comparison also shows that the proposed system, based on the CNN deep learning algorithm, reached a higher accuracy than previous approaches; the CNN's multiple layers allow the system to extract features from vein fingerprint images, select the most informative characteristics, and perform the categorization.

References

[1] Zhang, J., Lu, Z., Li, M. (2020). Active contour-based method for finger-vein image segmentation. IEEE Transactions on Instrumentation and Measurement, 69(11): 8656-8665. https://doi.org/10.1109/TIM.2020.2995485

[2] Ramachandra, R., Raja, K.B., Venkatesh, S.K., Busch, C. (2019). Design and development of low-cost sensor to capture ventral and dorsal finger vein for biometric authentication. IEEE Sensors Journal, 19(15): 6102-6111. https://doi.org/10.1109/JSEN.2019.2906691

[3] Noh, K.J., Choi, J., Hong, J.S., Park, K.R. (2020). Finger-vein recognition based on densely connected convolutional network using score-level fusion with shape and texture images. IEEE Access, 8: 96748-96766. https://doi.org/10.1109/ACCESS.2020.2996646

[4] Alani, S., Baseel, A., Hamdi, M.M., Rashid, S.A. (2020). A hybrid technique for single-source shortest path-based on A* algorithm and ant colony optimization. IAES International Journal of Artificial Intelligence, 9(2): 356-363. https://doi.org/10.11591/ijai.v9.i2.pp256-263

[5] ben Fredj, H., Sghaier, S., Souani, C. (2021). An efficient face recognition method using CNN. In 2021 International Conference of Women in Data Science at Taif University (WiDSTaif), Taif, Saudi Arabia, pp. 1-5. https://doi.org/10.1109/WiDSTaif52235.2021.9430209

[6] Shen, J., Liu, N., Xu, C., Sun, H., Xiao, Y., Li, D., Zhang, Y. (2021). Finger vein recognition algorithm based on lightweight deep convolutional neural network. IEEE Transactions on Instrumentation and Measurement, 71: 1-13. https://doi.org/10.1109/TIM.2021.3132332

[7] Fairuz, S., Habaebi, M.H., Elsheikh, E.M.A., Chebil, A.J. (2018). Convolutional neural network-based finger vein recognition using near infrared images. In 2018 7th International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, pp. 453-458. https://doi.org/10.1109/ICCCE.2018.8539342

[8] Das, R., Piciucco, E., Maiorana, E., Campisi, P. (2018). Convolutional neural network for finger-vein-based biometric identification. IEEE Transactions on Information Forensics and Security, 14(2): 360-373. https://doi.org/10.1109/TIFS.2018.2850320

[9] Boucherit, I., Zmirli, M.O., Hentabli, H., Rosdi, B.A. (2022). Finger vein identification using deeply-fused convolutional neural network. Journal of King Saud University-Computer and Information Sciences, 34(3): 646-656. https://doi.org/10.1016/j.jksuci.2020.04.002

[10] Yang, W., Luo, W., Kang, W., Huang, Z., Wu, Q. (2020). FVRAS-net: An embedded finger-vein recognition and antispoofing system using a unified CNN. IEEE Transactions on Instrumentation and Measurement, 69(11): 8690-8701. https://doi.org/10.1109/TIM.2020.3001410

[11] Finger Vein USM (FV-USM) Database. http://drfendi.com/fv_usm_database/.

[12] Chang, R.C.H., Wang, C.Y., Li, Y.H., Chiu, C.D. (2023). Design of low-complexity convolutional neural network accelerator for finger vein identification system. Sensors, 23(4): 2184. https://doi.org/10.3390/s23042184

[13] Marattukalam, F., Abdulla, W., Cole, D., Gulati, P. (2023). Deep learning-based wrist vascular biometric recognition. Sensors, 23(6): 3132. https://doi.org/10.3390/s23063132

[14] Wang, Y., Shi, D., Zhou, W. (2022). Convolutional neural network approach based on multimodal biometric system with fusion of face and finger vein features. Sensors, 22(16): 6039. https://doi.org/10.3390/s22166039

[15] Tamang, L.D., Kim, B.W. (2022). FVR-Net: Finger vein recognition with convolutional neural network using hybrid pooling. Applied Sciences, 12(15): 7538. https://doi.org/10.3390/app12157538

[16] Avcı, A., Kocakulak, M., Acır, N. (2019). Convolutional neural network designs for finger-vein-based biometric identification. In 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, pp. 580-584. https://doi.org/10.23919/ELECO47770.2019.8990612