Neurodegenerative diseases such as Parkinson's disease (PD) and mild Alzheimer's disease affect many people and have a serious influence on their lives. With the rapid advancement of computer-aided diagnostic (CAD) methods, early detection is crucial, since effective treatment halts the progression of the disease. Image fusion is useful for medical diagnostics. In this paper we propose a multimodality medical image fusion algorithm in the NSST domain. Shearlets (NSST) are decomposed similarly to contourlets (NSCT), except that instead of applying the Laplacian pyramid followed by directional filtering, shearlets use a shear matrix. In this article the biorthogonal CDF 9/7 filter is applied in the shift-invariant shearlet filter banks; the coefficients of the low-frequency bands are then selected using the maximum rule, and the gradient of each high-frequency subband image is used to motivate the modified pulse coupled neural network (modified PCNN). Finally, the inverse IHS transform yields the fused color image. Together these steps optimize computational performance and improve the characteristics of the fused image for medical diagnosis. Our approach was validated on several brain disease modalities, including Alzheimer's disease. The findings reveal that the proposed image fusion technique produces higher-quality results than existing fusion algorithms.
Keywords: image fusion, image gradient, nonsubsampled shearlet transform (NSST), modified pulse coupled neural network (MPCNN)
Because of the high demand for multimodality medical image fusion, a number of fusion strategies have been developed over the last few decades. In general, these methods can be categorized into three classes: spatial domain, transform domain and decision domain [1-3].
Nowadays, with advanced technology, medical image fusion has a large number of applications, including diagnosis, research and treatment; the localization of malignant sites; and assistance with surgical visualization, by combining two high-resolution images of the same scene of soft tissues from different modalities.
New multiresolution techniques, such as the curvelet transform [4], the contourlet transform [5], the nonsubsampled contourlet transform [6] and the nonsubsampled shearlet transform (NSST) [7], have become the focus of current research. These transforms can accurately portray the smoothness of edges and shapes. Their considerable computational complexity, however, prevents some of them from being used for medical image fusion. Bengana et al. [8, 9] present two works: the first algorithm is based on the CDF 9/7 wavelet lifting scheme, and the second is a hybrid algorithm utilizing the nonsubsampled contourlet transform for contrast enhancement, balancing the need for local and global contrast enhancement of each input image.
A comparison between the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) in the fusion domain shows that NSST offers excellent directional sensitivity with lower computational complexity. These characteristics make NSST well suited for medical image fusion, as it can better characterize line singularities in images and provides a true sparse representation of the image.
Similarly, pulse coupled neural networks (PCNNs) are a novel form of neural network distinct from regular neural networks [10].
In the NSCT domain, as a modification of the basic PCNN, a fuzzy-adaptive reduced PCNN (RPCNN)-based fusion strategy is presented in the study of Yang et al. [11].
Recently, Vanitha et al. proposed a new approach based on the spatial-frequency-motivated PA-PCNN; this model has been experimentally verified to converge faster than existing models in the NSST domain [12]. Wang et al. also combined the scale-invariant feature transform (SIFT) descriptor with a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain [13], to deal with the limited capability of conventional fusion rules.
Several contributions are made in this article. The first is the implementation of the biorthogonal CDF 9/7 filter in the shift-invariant shearlet filter banks (SFBs), which optimizes computing performance in the NSST domain and improves the detailed characteristics of the fused image for medical diagnosis.
The second contribution is inspired by the algorithm proposed by Ding et al. [14], in which the image gradient motivates the PCNN. In this article, we apply this idea to motivate the modified pulse coupled neural network (modified PCNN) in the NSST domain.
The third contribution is that the proposed approach allows neurodegenerative diseases to be detected more reliably.
In this study, the biorthogonal CDF 9/7 filter of a low-pass Laplacian pyramid (LP), the nonsubsampled shearlet transform (NSST) and the MPCNN method are used to fuse CT, MRI and PET images for predicting several neurodegenerative diseases.
This paper is organized as follows: the details of shearlets and the NSST transform are given in Sections 2 and 3; the modified PCNN motivated by the image gradient in Section 4; the results and discussion in Section 5; and the proposed method based on the CDF 9/7 filter in Section 6. Finally, the conclusion appears in Section 7.
1.1 Literature review
Image fusion technology has become popular in recent years for predicting several neurodegenerative diseases, and many algorithms and software tools have been created by researchers in the field of health care. Past works that are directly relevant to the proposed methodology are presented in this section.
Several researchers use the popular Whole Brain Atlas dataset from http://www.med.harvard.edu/AANLIB/. Wang et al. [10] propose a new method for multi-focus image fusion based on PCNN and random walks. Vanitha et al. [15] focused primarily on the adaptive-parameter PA-PCNN in the NSST domain (NSST-PAPCNN); other related work includes the image-gradient-motivated PCNN in the NSCT domain (NSCT-GPCNN) [14] and, in the NSST domain, the bounded measured PCNN technique (BMPCNN-NSST) [16]. Wang et al. [17] propose multimodal medical image fusion based on multichannel coupled neural P systems and max-cloud models in the spectral total variation domain.
The shearlet transform (ST) is a novel multiscale geometric analysis technique that incorporates the benefits of both the contourlet and curvelet transforms. Shearlets have a number of advantages over contourlets: the directional filtering stage of the contourlet is replaced by a shear matrix, which optimizes computing performance [18, 19].
This makes the shearlet ideal for characterizing directional features, since it allows the details of an image to be collected more effectively from different directions and yields a more suitable representation (usually assessed by sparsity) of the target image.
The theory of shearlets is described in the study of Miao et al. [20]. The framework is built on the same wavelet foundations and is comparable to contourlets; whereas the contourlet consists of an application of the Laplacian pyramid followed by directional filtering, in shearlets this directional filtering is replaced by a shear matrix [21], given by: $A_0=\left(\begin{array}{ll}4 & 0 \\ 0 & 2\end{array}\right), S_0=\left(\begin{array}{ll}1 & 1 \\ 0 & 1\end{array}\right)$.
where $A_0$ denotes the parabolic scaling matrix and $S_0$ denotes the shear matrix.
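As a small illustration, the joint action of these two matrices can be sketched in a few lines. The composite matrix $S_0^l A_0^j$ and the example values below are ours, not from the paper; they simply show how $A_0$ performs anisotropic (parabolic) scaling while $S_0$ shears by slope $l$:

```python
import numpy as np

# Parabolic scaling matrix A0 and shear matrix S0 of the shearlet system.
A0 = np.array([[4, 0],
               [0, 2]])
S0 = np.array([[1, 1],
               [0, 1]])

def shearlet_matrix(j, l):
    """Composite matrix S0^l @ A0^j applied to frequency coordinates:
    A0^j scales anisotropically (4^j horizontally, 2^j vertically),
    S0^l shears the result by slope l."""
    return np.linalg.matrix_power(S0, l) @ np.linalg.matrix_power(A0, j)

# Example: one scaling step followed by one shear step.
M = shearlet_matrix(1, 1)
print(M)  # [[4 2], [0 2]]
```

Applying `shearlet_matrix(j, l)` to frequency coordinates produces exactly the trapezoidal supports of Figures 1 and 2.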
The size of the frequency support of the shearlets is illustrated in Figure 1 for some specific values of A and S. The shearlets form a tight frame at different scales and directions, and are optimally sparse in representing images with edges [21]. The frequency support of the function $\psi_{j,l,k}$ is easily observed from the supports of $\widehat{\Psi_1}$ and $\widehat{\Psi_2}$ (Figure 2).
$\operatorname{supp} \widehat{\Psi_{j, l, k}} \subset\left\{\left(\xi_1, \xi_2\right): \xi_1 \in\left[-2^{2 j-1},-2^{2 j-4}\right] \cup\left[2^{2 j-4}, 2^{2 j-1}\right],\left|\frac{\xi_2}{\xi_1}+l\, 2^{-j}\right| \leq 2^{-j}\right\}$ (1)
where $\widehat{\Psi_1}$ and $\widehat{\Psi_2}$ define the shearlet, with $-2^{j} \leq l \leq 2^{j}-1$ and $j \geq 0$.
Image decomposition based on the shearlet transform is composed of two parts: multidirectional decomposition using the shear matrix S0 or S1, and multiscale decomposition of each direction using wavelet packets. This scheme with shearlets is shown in Figure 3.
Figure 1. The Shearlets' frequency support
Figure 2. (a) Shearlets in the frequency domain; (b) The size of the frequency support of a shearlet $\psi_{j,l,k}$
Figure 3. Shearlets decomposition with the CDF 9/7 filter
The most important contribution of this article is the selection of the nonsubsampled shearlet transform (NSST) for color medical images using an optimal shearing filter based on CDF 9/7. Antonini et al. [22] showed the superiority of these wavelets for decorrelating natural images. The Cohen-Daubechies-Feauveau (CDF 9/7) wavelets have a large number of vanishing moments, are symmetric and biorthogonal, and their low-pass filters have nine coefficients in analysis and seven in synthesis (see Table 1 and Figure 4).
Table 1. The analysis and synthesis filter coefficients of the CDF 9/7 wavelet

The analysis filter coefficients

i     Low-pass filter         High-pass filter
0     +0.602949018236358      +1.115087052457000
±1    +0.266864118442875      -0.591271763114250
±2    -0.078223266528990      -0.057543526228500
±3    -0.016864118442875      +0.091271763114250
±4    +0.026748757410810

The synthesis filter coefficients

i     Low-pass filter         High-pass filter
0     +1.115087052457000      +0.602949018236358
±1    +0.591271763114250      -0.266864118442875
±2    -0.057543526228500      -0.078223266528990
±3    -0.091271763114250      +0.016864118442875
±4                            +0.026748757410810
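As a quick sanity check, the biorthogonality of the 9-tap analysis and 7-tap synthesis low-pass filters can be verified numerically. The signs below follow the widely reproduced Antonini et al. tables (the printed table may render signs differently, so treat them as an assumption):

```python
import numpy as np

# CDF 9/7 low-pass coefficients (i = -4..4 analysis, i = -3..3 synthesis),
# signs as in the widely reproduced Antonini et al. tables (assumption).
h_analysis = np.array([
    0.026748757410810, -0.016864118442875, -0.078223266528990,
    0.266864118442875,  0.602949018236358,  0.266864118442875,
   -0.078223266528990, -0.016864118442875,  0.026748757410810])
h_synthesis = np.array([
   -0.091271763114250, -0.057543526228500,  0.591271763114250,
    1.115087052457000,  0.591271763114250, -0.057543526228500,
   -0.091271763114250])

def biorthogonality(k):
    """Inner product <h_analysis[n], h_synthesis[n - 2k]>: equals 1 for
    k = 0 and 0 for k != 0 when the low-pass pair is biorthogonal."""
    pad = np.zeros(4)
    ha = np.concatenate([pad, h_analysis, pad])  # support centered at index 8
    hs = np.zeros_like(ha)
    hs[5:12] = h_synthesis                       # also centered at index 8
    return float(np.dot(ha, np.roll(hs, 2 * k)))
```

With these values, `biorthogonality(0)` is 1 and `biorthogonality(±1)` vanishes (up to the rounding of the printed digits), confirming the perfect-reconstruction property that the lifting implementation relies on.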
Figure 4. (a) CDF9/7 biorthogonal wavelet, (b) Scaling function from CDF9/7
Figure 5. NSST decomposition of twolevel based on CDF 9/7
The NSST decomposition based on CDF 9/7 is shown in Figure 5 with two levels. The nonsubsampled pyramid carries out the multiscale factorization, which results in (k+1) sub-images: one low-frequency image and k high-frequency images of the same size as the original image, where k is the number of decomposition levels. The nonsubsampled version of the ST, known as NSST, is built from nonsubsampled pyramid filters (NSPFs) and shift-invariant shearlet filter banks (SFBs) [23]. For each decomposition level, an SFB is applied to obtain the multidirectional representations of the corresponding band. In this multidirectional decomposition, we applied the biorthogonal filter based on CDF 9/7.
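A minimal sketch of such a nonsubsampled (à-trous) pyramid is given below. The B3-spline kernel is our stand-in for the paper's NSPF filters; the point is only that k levels yield k same-size detail images plus one approximation, with exact reconstruction by simple addition:

```python
import numpy as np

def atrous_smooth(img, step):
    """Convolve with a 5x5 B3-spline kernel whose taps are spaced `step`
    pixels apart ("a trous" = with holes), using symmetric border padding."""
    b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    kernel = np.outer(b3, b3)
    p = 2 * step
    padded = np.pad(img, p, mode='symmetric')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for a in range(5):
        for b in range(5):
            out += kernel[a, b] * padded[a * step:a * step + H,
                                         b * step:b * step + W]
    return out

def nsp_decompose(img, levels=2):
    """Nonsubsampled pyramid sketch: `levels` high-frequency images plus
    one low-frequency image, all the same size as the input (no
    downsampling); the filter is dilated by 2^j at level j."""
    highs, low = [], img.astype(float)
    for j in range(levels):
        smooth = atrous_smooth(low, 2 ** j)
        highs.append(low - smooth)   # detail (high-frequency) subband
        low = smooth                 # approximation (low-frequency)
    return highs, low
```

Because no subsampling occurs, the decomposition is shift-invariant and the source image is recovered exactly as `low + sum(highs)`, which is what makes this pyramid attractive for fusion.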
Compared with conventional image processing methods, pulse coupled neural networks (PCNNs) are neural models derived from modeling the visual cortex, and they are developing very rapidly for high-performance medical image processing, offering robustness against noise, independence from geometric variations in input patterns, the capability of bridging minor intensity variations in input patterns, etc.
An MPCNN model has been presented to simplify operations and improve computing speed; we streamlined the PCNN model by reducing its feeding and linking inputs to Eqns. (2) and (3), and thus obtain the following equations.
$F_{i j}(n)=S_{i j}$ (2)
$L_{i j}(n)=V^L \sum_{k l} W_{i j k l} Y_{k l}(n-1)$ (3)
$U_{i j}(n)=F_{i j}(n)\left(1+\beta L_{i j}(n)\right)$ (4)
$Y_{i j}(n)=\left\{\begin{array}{c}1, U_{i j}(n)>\theta_{i j}(n) \\ 0, \text { otherwise }\end{array}\right.$ (5)
$\theta_{i j}(n)=e^{-\alpha^\theta} \theta_{i j}(n-1)+V^\theta Y_{i j}(n-1)$ (6)
where, $\beta$ = linking strength;
$F_{ij}(n)$ = feeding input of the neuron at the nth iteration;
$L_{ij}(n)$ = linking input of the neuron;
$V^L$ = linking input amplitude;
$U_{ij}(n)$ = internal activity of the neuron;
$Y_{ij}(n)$ = pulse generator output;
$\theta_{ij}(n)$ = dynamic threshold;
$W_{ijkl}$ = synaptic linking weights;
$V^\theta, \alpha^\theta$ = amplitude and exponential decay coefficient of the dynamic threshold.
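Eqns. (2)-(6) iterate until neurons "fire" and the accumulated firing counts serve as an activity map. A minimal sketch follows; the 3×3 inverse-distance linking weights W and the unit initial threshold are our assumptions, since neither is specified above:

```python
import numpy as np

def link_sum(Y, W):
    """Weighted sum of each pixel's 3x3 neighbourhood pulses (zero padding)."""
    P = np.pad(Y, 1, mode='constant')
    out = np.zeros_like(Y, dtype=float)
    H, Wd = Y.shape
    for a in range(3):
        for b in range(3):
            out += W[a, b] * P[a:a + H, b:b + Wd]
    return out

def mpcnn_fire_times(S, beta=0.4, VL=1.0, Vtheta=20.0, alpha_theta=0.1,
                     iterations=200):
    """Simplified PCNN of Eqns. (2)-(6); returns per-pixel firing counts."""
    W = np.array([[0.5, 1.0, 0.5],      # assumed inverse-distance weights
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    F = S.astype(float)                 # Eq. (2): feeding input = stimulus
    Y = np.zeros_like(F)
    theta = np.ones_like(F)             # assumed initial dynamic threshold
    fire = np.zeros_like(F)
    for _ in range(iterations):
        L = VL * link_sum(Y, W)                            # Eq. (3)
        U = F * (1.0 + beta * L)                           # Eq. (4)
        Y = (U > theta).astype(float)                      # Eq. (5)
        theta = np.exp(-alpha_theta) * theta + Vtheta * Y  # Eq. (6)
        fire += Y
    return fire
```

Pixels with stronger stimulus fire earlier and more often, so in fusion the coefficient whose neuron has the larger firing count is kept.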
These parameters influence the quality of image fusion. Most current research uses backward analysis to set these parameter values, which is subjective to some degree. In our paper, the average gradient of the image is instead used to motivate the MPCNN network (Figure 6).
Figure 6. The diagram of the modified PCNN model [14] motivated by the image average gradient 'G'
The average gradient (AG) can be used as an objective index of image sharpness, mainly because it reflects the rate of change of the grey level at image boundaries. The larger the AG, the better the image quality [15]. It is defined by Eq. (7):
$g=\frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\frac{(F(i, j)-F(i+1, j))^2+(F(i, j)-F(i, j+1))^2}{2}}$ (7)
where F(i,j) is the grey value of the image at row i, column j; M is the total number of rows in the image and N the total number of columns.
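Eq. (7) can be sketched directly with array slicing; this minimal NumPy version is ours, but it follows the formula term by term:

```python
import numpy as np

def average_gradient(F):
    """Average gradient of Eq. (7): mean magnitude of the horizontal and
    vertical first differences, used as a sharpness index."""
    F = F.astype(float)
    dx = F[:-1, :-1] - F[1:, :-1]    # F(i,j) - F(i+1,j)
    dy = F[:-1, :-1] - F[:-1, 1:]    # F(i,j) - F(i,j+1)
    return np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean()
```

A flat image gives AG = 0, while a linear ramp with row step 3 and column step 1 gives exactly sqrt((9 + 1)/2) = sqrt(5), matching the hand calculation.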
The proposed strategy can be summarized in the following steps.
Step 1: Pre-treatment by NSCT to enhance and increase contrast:
a. Apply NSCT to each input image using five scales with 4, 8, 8, 16, 16 directions.
b. Use the median operator to estimate the noise standard deviation and the average energy distribution of standard white noise in each subband.
c. Modify the NSCT coefficients of each subband as described in reference [24].
d. Convert the PET RGB image to Intensity-Hue-Saturation (IHS) space [25].
Step 2: Apply the NSST decomposition based on CDF 9/7:
a. Apply NSST to the (CT or MRI) and (PET or SPECT) medical images.
b. After converting the RGB color image to IHS space, obtain the high- and low-frequency subbands of the I component using the NSST transform based on the CDF 9/7 filter.
Step 3: The motivated PCNN fuses the high-frequency subbands, driven by the image gradient measured from the two high-frequency subbands.
Step 4: The maximum rule is used to obtain the fused low-frequency subband.
Step 5: Apply the inverse NSST transform to the output coefficients of Steps 3 and 4.
Step 6: Apply the inverse IHS transform to the fused I component and the original H and S components to recover the final fused medical image.
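As an illustration of Steps 1d and 6, a linear IHS forward/inverse pair can be sketched as follows. This particular triangular linear IHS transform is our assumption for illustration; the exact IHS variant of reference [25] may differ:

```python
import numpy as np

def rgb_to_ihs(rgb):
    """Linear IHS: I carries intensity; v1, v2 carry hue/saturation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (r + g + b) / 3.0
    v1 = (2.0 * b - r - g) * np.sqrt(2.0) / 6.0
    v2 = (r - g) / np.sqrt(2.0)
    return I, v1, v2

def ihs_to_rgb(I, v1, v2):
    """Exact inverse of rgb_to_ihs."""
    s2 = np.sqrt(2.0)
    r = I - v1 / s2 + v2 / s2
    g = I - v1 / s2 - v2 / s2
    b = I + s2 * v1
    return np.stack([r, g, b], axis=-1)

# Step 6 of the scheme: only the fused intensity replaces I, while the
# hue/saturation components (here v1, v2) of the color image are kept.
```

Because the pair is exactly invertible, replacing only I with the fused intensity preserves the functional color information of the PET/SPECT image in the final result.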
The proposed NSST-MPCNN method, following the concrete steps listed above, is shown in Figure 7.
Figure 7. Image fusion with proposed algorithm
In this section, we use more than one hundred pairs of multimodal medical images of size 256 × 256, encoded with 8 bits per pixel (MRI, CT, SPECT and PET scans), available for download from [26]. This database covers different neurodegenerative diseases, namely glioma, mild Alzheimer's disease, and metastatic bronchogenic carcinoma.
To demonstrate the efficacy of our method, we compared it with other recently developed fusion methods [27], including: the pulse coupled neural network (PCNN), the adaptive-parameter PA-PCNN in the NSST domain (NSST-SF-PAPCNN) [15], the image-gradient-motivated PCNN in the NSCT domain (NSCT-GPCNN) [14], and, in the NSST domain, the bounded measured PCNN technique (BMPCNN-NSST) [17]. The parameter settings in our experiments are as follows:
The NSST decomposition level N is set to 4 and the numbers of directions to 16, 16, 8, 8.
We also set the parameters of the modified PCNN to VL = 1, Vθ = 20, αθ = 0.1, β = 0.4 and Iterations = 200.
We chose N = 4 decomposition bands, with 16, 16, 8, 8 directions obtained for each image, to illustrate how the shearlet transform coefficients tend to zero in smooth regions. After various tests, we confirmed that four decomposition levels give the best shearlet coefficients for improving edge detection. For a fair comparison of our proposed method against the four state-of-the-art fusion methods, the MPCNN parameters used in our experiments are the same as in their papers [14, 15, 17].
These images were tested in Matlab 2019b on an Intel Core i3 2.13 GHz PC with 2 GB of RAM.
First, we present six pairs of source images (Figure 8). Subjective visual perception enables direct comparisons, and objective image quality assessments are also used to evaluate the success of the proposed technique.
Figure 8. The six source image pairs: CT & MRI, MRI & PET, MRI & SPECT
Figures 9 and 10 clearly demonstrate the benefit of the complementary information in CT and MRI images. The brightness of the proposed method's result reveals high tissue density, and the bone can be seen through the soft tissues.
The MRI-PET fusion shown in Figure 11 can offer clear soft tissue together with the metabolism of specific tissues, which is useful in medical diagnostics. The NSCT-GPCNN and NSST-SF-PAPCNN fusion results show poor energy preservation, whereas the proposed technique performs well in multimodality energy preservation.
The combination of SPECT and MRI in Figure 12 is commonly used to reflect an organism's soft tissues and metabolism. The BMPCNN-NSST and NSST-SF-PAPCNN techniques perform poorly in preserving the structured information of the MRI modality, while the proposed technique achieves the best results in terms of both structured information preservation and detailed information extraction.
As the necessary reference image cannot be obtained, the fusion results are also evaluated on the basis of evaluation parameters such as entropy and sharpness. The experiments were carried out on several distinct multi-source image pairs.
Figure 9. The results of the fusion of a normal brain, (a) PCNN, (b) NSCT-GPCNN, (c) BMPCNN-NSST, (d) NSST-SF-PAPCNN, (e) Proposed
Figure 10. The results of the fusion of metastatic bronchogenic carcinoma, (a) PCNN, (b) NSCT-GPCNN, (c) BMPCNN-NSST, (d) NSST-SF-PAPCNN, (e) Proposed
Figure 11. The results of the fusion of Alzheimer's disease, (a) PCNN, (b) NSCT-GPCNN, (c) BMPCNN-NSST, (d) NSST-SF-PAPCNN, (e) Proposed
Figure 12. The results of the fusion of glioma disease, (a) PCNN, (b) NSCT-GPCNN, (c) BMPCNN-NSST, (d) NSST-SF-PAPCNN, (e) Proposed
Figure 13. Experiments with image fusion and objective assessments of CT-MRI, MRI-PET, and SPECT-MRI
Table 2. Average metrics of the three sets (CT & MRI, MRI & PET, MRI & SPECT)

Dataset     Methods                 EN      SD      SS      VIF
CT-MRI      PCNN                    6.301   90.101  0.561   0.444
            NSCT-GPCNN [14]         6.610   92.263  0.582   0.453
            BMPCNN-NSST [17]        6.353   93.852  0.683   0.421
            NSST-SF-PAPCNN [15]     6.082   87.351  0.602   0.432
            Proposed                6.801   98.012  0.711   0.498
MRI-PET     PCNN                    3.508   62.020  0.601   0.281
            NSCT-GPCNN [14]         3.981   65.301  0.632   0.250
            BMPCNN-NSST [17]        3.559   68.257  0.627   0.283
            NSST-SF-PAPCNN [15]     4.587   67.423  0.650   0.274
            Proposed                4.701   68.190  0.720   0.281
MRI-SPECT   PCNN                    4.051   60.123  0.462   0.451
            NSCT-GPCNN [14]         4.351   58.260  0.501   0.442
            BMPCNN-NSST [17]        4.122   60.036  0.448   0.297
            NSST-SF-PAPCNN [15]     4.221   59.034  0.518   0.298
            Proposed                4.220   60.014  0.558   0.481
6.1 Evaluation parameters
In this work, to evaluate the visual information fidelity between the source images and fused image, the following picture quality measures are used: Entropy (EN), Standard Deviation (SD), Structure Similarity (SS), and Visual Information Fidelity (VIF).
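For reference, the two simplest of these metrics can be sketched as below, assuming 8-bit grey levels (SS and VIF require full reference implementations and are omitted here):

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy (in bits) of the 8-bit grey-level histogram;
    higher values mean more information content in the fused image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    """SD: spread of grey levels around the mean, a contrast indicator."""
    return float(img.std())
```

A perfectly uniform 8-bit histogram gives the maximum EN of 8 bits, while a constant image gives EN = 0 and SD = 0, which is why larger values of both metrics are read as better fusion results in Table 2.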
The performance of fusion is evaluated using four objective measures in this research. Table 2 shows the results of evaluating the fusion techniques on pairs of images; the values in bold indicate the fusion techniques with the higher scores.
To quantitatively analyze the performance of the suggested algorithm, we compare the various fusion techniques using the four measures. Table 2 lists the paired sets (CT & MRI, MRI & PET, and MRI & SPECT). The superiority of the metrics confirms the robustness of the proposed scheme. The SD and VIF values from Table 2 are visualized in Figure 13.
For multimodality medical image fusion, a fusion approach combining the NSST transform based on the CDF 9/7 filter with a modified pulse coupled neural network is presented in this research. The maximum rule is applied to the low-frequency subband of the NSST decomposition. The high-frequency subbands are then driven by the image gradient as the original information to motivate the modified PCNN. After fusing the low-frequency and high-frequency coefficients, the fused image is produced using the inverse NSST transform. The findings show that this approach can not only preserve the features of the two source images and highlight their subtleties, but also produce a final fused image with a large amount of information, resulting in a pleasing visual impression. We intend to verify the performance of the proposed algorithm on several public datasets to reach the level of the state of the art; one limitation is data access. Given the widespread use of deep learning technology, our future research will concentrate on a deep learning approach combining NSST based on the CDF 9/7 filter with the modified PCNN for multimodal medical image fusion, a novel direction with considerable promise for application.
The authors would like to thank the Directorate-General of Scientific Research and Technological Development (Direction Générale de la Recherche Scientifique et du Développement Technologique, DGRSDT, URL: www.dgrsdt.dz).
[1] Li, S., Kang, X., Fang, L., Hu, J., Yin, H. (2017). Pixel-level image fusion: A survey of the state of the art. Information Fusion, 33: 100-112. https://doi.org/10.1016/j.inffus.2016.05.004
[2] Tiwari, P., Melucci, M. (2019). Towards a quantum-inspired binary classifier. IEEE Access, 7: 42354-42372. https://doi.org/10.1109/ACCESS.2019.2904624
[3] Yin, H. (2018). Tensor sparse representation for 3-D medical image fusion using weighted average rule. IEEE Transactions on Biomedical Engineering, 65(11): 2622-2633. https://doi.org/10.1109/TBME.2018.2811243
[4] Nencini, F., Garzelli, A., Baronti, S., Alparone, L. (2007). Remote sensing image fusion using the curvelet transform. Information Fusion, 8(2): 143-156. https://doi.org/10.1016/j.inffus.2006.02.001
[5] Li, K., Chen, X., Hu, X., Shi, X., Zhang, L. (2010). Image denoising and contrast enhancement based on nonsubsampled contourlet transform. In 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, pp. 131-135. https://doi.org/10.1109/ICCSIT.2010.5563631
[6] Yang, X.H., Jiao, L.C. (2008). Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform. Acta Automatica Sinica, 34(3): 274-281. https://doi.org/10.3724/SP.J.1004.2008.00274
[7] Asha, C.S., Lal, S., Gurupur, V.P., Saxena, P.P. (2019). Multi-modal medical image fusion with adaptive weighted combination of NSST bands using chaotic grey wolf optimization. IEEE Access, 7: 40782-40796. https://doi.org/10.1109/ACCESS.2019.2908076
[8] Bengana, A., Hacene, I.B., Chikh, M.A. (2015). MRI T1 and T2 image fusion for brain image using CDF wavelet based on lifting scheme. Global Journal of Medical Research, 15(6): 28-33.
[9] Bengana, A., Chikh, M.A., Hacene, I.B. (2018). Multimodal medical image fusion using multiresolution transform. International Journal of Biomedical Engineering and Technology, 27(3): 221-232. https://dx.doi.org/10.1504/IJBET.2018.10015309
[10] Wang, Z., Wang, S., Guo, L. (2018). Novel multi-focus image fusion based on PCNN and random walks. Neural Computing and Applications, 29(11): 1101-1114. https://doi.org/10.1007/s00521-016-2633-9
[11] Yang, Y., Que, Y., Huang, S., Lin, P. (2016). Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain. IEEE Sensors Journal, 16(10): 3735-3745. https://doi.org/10.1109/JSEN.2016.2533864
[12] Vanitha, K., Satyanarayana, D., Giri Prasad, M.N. (2022). Medical image fusion using fuzzy adaptive reduced pulse coupled neural networks. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, 43(4): 3933-3946. https://doi.org/10.3233/JIFS-213416
[13] Wang, L., Chang, C., Liu, Z., Huang, J., Liu, C., Liu, C. (2021). A medical image fusion method based on SIFT and deep convolutional neural network in the SIST domain. Journal of Healthcare Engineering, 2021: 9958017. https://doi.org/10.1155/2021/9958017
[14] Ding, S., Zhao, X., Xu, H., Zhu, Q., Xue, Y. (2018). NSCT-PCNN image fusion based on image gradient motivation. IET Computer Vision, 12(4): 377-383. https://doi.org/10.1049/iet-cvi.2017.0285
[15] Vanitha, K., Satyanarayana, D., Prasad, M.G. (2021). Multi-modal medical image fusion algorithm based on spatial frequency motivated PA-PCNN in the NSST domain. Current Medical Imaging, 17(5): 634-643. https://doi.org/10.2174/1573405616666201118123220
[16] Tan, W., Tiwari, P., Pandey, H.M., Moreira, C., Jaiswal, A.K. (2020). Multimodal medical image fusion algorithm in the era of big data. Neural Computing and Applications. https://doi.org/10.1007/s00521-020-05173-2
[17] Wang, G., Li, W., Gao, X., Xiao, B., Du, J. (2022). Multimodal medical image fusion based on multichannel coupled neural P systems and max-cloud models in spectral total variation domain. Neurocomputing, 480: 61-75. https://doi.org/10.1016/j.neucom.2022.01.059
[18] Guo, K., Labate, D. (2007). Optimally sparse multidimensional representation using shearlets. SIAM Journal on Mathematical Analysis, 39(1): 298-318. https://doi.org/10.1137/060649781
[19] Guo, K., Labate, D., Lim, W.Q. (2009). Edge analysis and identification using the continuous shearlet transform. Applied and Computational Harmonic Analysis, 27(1): 24-46. https://doi.org/10.1016/j.acha.2008.10.004
[20] Miao, Q., Shi, C., Li, W. (2013). Image fusion based on shearlets. New Advances in Image Fusion, pp. 113-133. http://dx.doi.org/10.5772/56945
[21] Yin, M., Liu, X., Liu, Y., Chen, X. (2018). Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Transactions on Instrumentation and Measurement, 68(1): 49-64. https://doi.org/10.1109/TIM.2018.2838778
[22] Antonini, M., Barlaud, M., Mathieu, P., Daubechies, I. (1992). Image coding using wavelet transform. IEEE Transactions on Image Processing, 1(2): 205-220. https://doi.org/10.1109/83.136597
[23] Wang, Z.G., Wang, W., Su, B. (2018). Multi-sensor image fusion algorithm based on multiresolution analysis. International Journal of Online Engineering, 14(6): 44-57. https://doi.org/10.3991/ijoe.v14i06.8697
[24] Geusebroek, J.M., Van den Boomgaard, R., Smeulders, A.W.M., Geerts, H. (2001). Color invariance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12): 1338-1350. https://doi.org/10.1109/34.977559
[25] Prashantha, S.J., Prakash, H.N. (2021). A features fusion approach for neonatal and pediatrics brain tumor image analysis using genetic and deep learning techniques. International Journal of Online & Biomedical Engineering, 17(11): 124-140. https://doi.org/10.3991/ijoe.v17i11.25193
[26] Whole Brain Atlas. http://www.med.harvard.edu/AANLIB/, accessed on 10 January 2022.
[27] Abas, A.I., Baykan, N.A. (2021). Multi-focus image fusion with multi-scale transform optimized by metaheuristic algorithms. Traitement du Signal, 38(2): 247-259. https://doi.org/10.18280/ts.380201