Estimation of Forest Diameter-at-Breast-Height: A Fusion of Machine Learning and 3D Image Processing Innovations

Yichen Wang, Jiyu Sun, Fangyu Wang*

College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, China

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China

Changchun UP Optotech Holding Co., Ltd., Changchun 130033, China

Jilin Provincial Key Laboratory of Photoelectric Equipment and Instrument Advanced Manufacture Technology, Changchun 130033, China

Corresponding Author Email: wangfy85@ccu.edu.cn

Pages: 2291-2297 | DOI: https://doi.org/10.18280/ts.400547

Received: 17 June 2023 | Revised: 26 August 2023 | Accepted: 10 September 2023 | Available online: 30 October 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

As technology persistently advances, breakthroughs in machine learning and image processing have been harnessed for the meticulous measurement and analysis of natural resources. In the pursuit of addressing the imperative task of feature extraction and measurement within forestry, an integration of convolutional neural networks (CNNs), traditional machine learning, and image processing techniques has been devised. High-resolution 3D image data were procured using the D435i depth camera, targeting the detailed representation of tree structures. Upon acquisition, refined strategies encompassing pass-through filtering and K-means clustering were utilised for noise mitigation and segmentation. For feature discernment, CNNs were synergised with other machine learning models, facilitating comprehensive and automated extraction of the tree's structural and morphological nuances. The Random Sample Consensus (RANSAC) algorithm was subsequently invoked for cylindrical shape fitting, culminating in precise estimations of tree diameter-at-breast-height. Rigorous experimental validation revealed not only high accuracy but also strong robustness across a gamut of scenarios and environments. When juxtaposed with conventional forestry measurement techniques, this methodology signals a promising trajectory for forthcoming forestry applications.

Keywords: 

machine learning, image processing, tree feature extraction, 3D image data, data preprocessing, RANSAC algorithm, cylindrical shape fitting, CNNs

1. Introduction

Historically, forestry measurement has been deemed critical for global forest resource management and research. As elucidated by Clark et al. [1], parameters such as tree height, diameter-at-breast-height (DBH), and biomass have been instrumental in garnering insights into forest health, ecological function, and biodiversity. With the mounting implications of climate change, the significance of forests in regulating the global climate, as delineated by Bonan [2], has seen a marked increase.

Conventionally, manual calipers and altimeters, as documented by Husch et al. [3], have been ubiquitously employed across myriad studies and initiatives. However, with the progression of time, the limitations of such methodologies, primarily attributed to their cumbersome operations and languid data processing capabilities, have been accentuated. It has been observed that the rampant strides of globalisation and industrialisation have heightened the exigency for rigorous and expansive forestry measurements, a sentiment mirrored in the investigations of Safe’i et al. [4].

Recently, Gril et al. [5] highlighted the advanced capabilities of airborne LiDAR in forest microclimate measurement within the expansive Blois forest in France. Their work emphasized LiDAR's precision in mapping understory temperature variations influenced by canopy structures. Compared to traditional methods, LiDAR effectively produced high-resolution thermal environment maps, with significant implications for conservation and climate change mitigation. Concurrently, the exploration into the incorporation of machine learning for tree identification and measurement, conducted by Anderson and Gaston [6], has illuminated novel horizons for the field. The multifarious potentialities of modern technological interventions in forestry measurement have been further corroborated by works such as those by Nelson et al. [7] and Turner et al. [8].

Yet, notwithstanding these innovations, formidable challenges remain entrenched in the domain. Certain techniques have been indicated by Roberts et al. [9] to manifest limitations, especially when employed across a spectrum of forest types. Moreover, a compelling emphasis on extensive field validation for these nascent methodologies, particularly within intricate forest terrains, was elucidated by Asner and Mascaro [10].

In response to these outlined challenges, the present research endeavours to delve into the amalgamation and fine-tuning of extant measurement paradigms to adeptly serve the multifaceted requisites of global forestry measurement. By intertwining multi-source data, machine learning, and time-honoured measurement techniques, a rejuvenated pathway in forestry measurement scholarship is anticipated.

2. Image Acquisition Techniques

2.1 3D camera technology

With the emergence of 3D camera technology, a transformative measurement epoch has been instigated in the realm of forestry [11-13]. Predominantly hinging upon infrared, lasers, or alternative sensors, this technology has been recognised for its capacity to authentically delineate the three-dimensional structure of trees, furnishing comprehensive data sets conducive to in-depth analyses. The Intel RealSense D435i depth camera, favoured for its superior resolution and wide field of view, has been deployed. Such attributes facilitate the intricate capture of forest nuances, encompassing elements like leaves and branches.

2.2 Drone-based aerial photography

Drone technologies have been acknowledged for offering expansive and proficient image acquisition of arboreal terrains. In alignment with the methodology delineated by Gril et al. [5], drones have been fitted with the 3D camera. During their aerial manoeuvres, images are systematically captured along pre-determined flight corridors and elevation levels, assuring uninterrupted and holistic canopy imaging. Such strategies prove especially advantageous for extensive forested expanses, more so in locales marked by intricate topographies or situated in remote precincts. A representation of a drone, equipped with the D435i depth camera, is portrayed in Figure 1.

Figure 1. Representation of a drone equipped with the D435i depth camera

2.3 Image processing and point cloud generation

For the extraction of pertinent information, the accumulated 3D image data underwent rigorous processing. Specifically, "ForestScan", a software tailored for forest imagery, was employed. This software is recognised for its capability to expeditiously transmute original 3D imagery into point cloud data, which subsequently serves as a cornerstone for ensuing analyses and modelling. During the point cloud generation, the succeeding equation was employed:

$P(x, y, z)=I(x, y) \times D(z)$       (1)

where, P epitomises the point cloud's location in a 3D continuum, I denotes the pixel's position in the image, and D symbolises the depth value pertaining to the correlated pixel.
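
To make the mapping concrete, the sketch below back-projects a depth image into a point cloud with NumPy. Since "ForestScan" is proprietary, this is a minimal illustration of the pairing expressed by Eq. (1), assuming pinhole intrinsics (fx, fy, cx, cy) taken from the camera calibration; the intrinsic values shown are placeholders, not D435i factory values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into an (N, 3) point cloud.

    Eq. (1) pairs each pixel position I(x, y) with its depth D(z);
    in practice that pairing goes through the camera intrinsics
    (fx, fy, cx, cy), assumed here to come from calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)           # depth in metres
    x = (u - cx) * z / fx                  # horizontal coordinate
    y = (v - cy) * z / fy                  # vertical coordinate
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]        # drop pixels with no depth return

# Example with synthetic data; intrinsics are illustrative values
depth_img = np.random.uniform(1.0, 3.0, size=(480, 640))
cloud = depth_to_point_cloud(depth_img, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```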

2.4 Application of image fusion techniques in tree measurement

In the realm of expansive tree measurements, particularly in scenarios involving continuous 3D camera imaging, multiple instances exist where specific regions undergo multiple captures. This amplifies the indispensability of image fusion techniques [14-19]. The principal objective of such fusion remains the amalgamation of myriad images into a cohesive 3D tableau, subsequently augmenting data comprehensiveness and precision in measurements.

For the enhancement of image fusion efficiency, methodologies grounded in machine learning were employed [20-22]. Overlaps amidst images were initially pinpointed using feature point detection. Following this, neural network models were instituted to refine these overlapping image sets, ensuring the resultant 3D tableau adhered to exacting standards of accuracy and coherence.
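
As an illustration of the overlap-detection step, the following OpenCV sketch matches ORB feature points between two overlapping captures; the subsequent neural-network refinement described above is a trained component of the pipeline and is not reproduced here. The image file names are hypothetical.

```python
import cv2

def find_overlap_matches(img_a, img_b, n_features=1000, keep=50):
    """Locate candidate overlap correspondences between two grayscale captures."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # Brute-force Hamming matching suits ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:keep]      # strongest correspondences

# Hypothetical file names for two consecutive drone captures
img_a = cv2.imread("capture_001.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("capture_002.png", cv2.IMREAD_GRAYSCALE)
kp_a, kp_b, matches = find_overlap_matches(img_a, img_b)
```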

Prior to fusion, images were subjected to preprocessing techniques, notably pass-through filtering and K-means clustering segmentation. Given that pre-fusion image quality exerts a direct influence on post-fusion outcomes, these initial steps were deemed imperative. During subsequent feature extraction and measurements, fused images presented data of enhanced cohesion and comprehensiveness, accentuating the research's precision and resilience.

2.5 Data validation and quality control

For the affirmation of data integrity and precision, multifarious validation protocols were put in place. Initially, terrestrial measurement apparatuses were deployed for field assessments of specific trees, and these readings were juxtaposed against data derived from the 3D cameras and drones. Further, parallelisms were drawn with extant studies to corroborate the congruity of the measurement results.

3. Image Pre-Processing

3.1 Pass-through filtering

The evolution of three-dimensional camera technologies, specifically the Intel RealSense D435i depth camera (as depicted in Figure 1), has allowed for the acquisition of an abundant assemblage of point cloud data. However, it is observed that this vast volume of data can be tainted with noise and extraneous information. The necessity for image pre-processing, therefore, arises to refine this data and prepare it for ensuing analyses.

Within the available arsenal of pre-processing tools, pass-through filtering is frequently employed. Through this technique, data points within a stipulated boundary are selectively filtered. When considering objects located 1 to 3 meters from the camera, for instance, extraneous data points can be systematically excluded by the application of pass-through filtering. This procedure can be represented mathematically as:

$P_{\text {filtered }}=\left\{P \in P_{\text {original }} \mid P_{\min } \leq P \leq P_{\max }\right\}$       (2)

where, $P_{\text {filtered }}$ symbolises the refined point cloud data. $P_{\text {original }}$ stands for the initial point cloud data, while $P_{\min }$ and $P_{\max }$ represent the pre-set minimum and maximum distance parameters, respectively.

3.2 K-means clustering

In scenarios presenting intricate backgrounds and diverse entities, the mere filtering of data proves insufficient. The segmentation technique of K-means clustering is often utilised in such contexts. This method facilitates the division of point cloud data into K distinct clusters. Data points incorporated within these clusters are observed to exhibit spatial resemblances, setting them apart from constituents of alternative clusters. The method is mathematically expressed as:

$J=\sum_{i=1}^k \sum_{x \in C_i}\left\|x-\mu_i\right\|^2$       (3)

where, J denotes the cost function, which seeks to minimise the aggregate distance of each point from its corresponding cluster centre. Ci is the designated cluster and μi symbolises the central point of that cluster.
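
The sketch below applies Eq. (3) to an (N, 3) point cloud with scikit-learn's KMeans. Note that scikit-learn is not listed in the software stack of Section 5.1 and is assumed here for brevity, and the heuristic of taking the cluster nearest the camera as the stem candidate is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed dependency, not in Section 5.1's list

def segment_point_cloud(points, k=3):
    """Partition an (N, 3) point cloud into k spatial clusters per Eq. (3).

    KMeans minimises the cost J, i.e. the summed squared distance of
    each point from its assigned cluster centre mu_i.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
    labels, centres = km.labels_, km.cluster_centers_
    stem_idx = np.argmin(centres[:, 2])   # illustrative: stem cluster is nearest in depth
    return points[labels == stem_idx], labels
```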

3.3 Cylindrical fitting

Upon successful segmentation, the shape-fitting of point cloud data is undertaken to unravel detailed attributes pertaining to trees. The Random Sample Consensus (RANSAC) algorithm is frequently applied for such cylindrical fitting exercises. For a near-vertical stem, the cylinder reduces to a circle in each horizontal cross-section, which the algorithm fits via:

$(x-a)^2+(y-b)^2=r^2$       (4)

Within this equation, (x, y) represents a point of the cross-sectional slice of the point cloud, (a, b) signifies the centre of the cylinder's circular cross-section, and r demarcates the cylinder's radius, from which the diameter-at-breast-height follows as 2r.
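
A minimal RANSAC sketch for this circular model: points are drawn from a thin horizontal slice of the segmented cloud at breast height, candidate circles are fitted through random point triples, and the circle with the most inliers yields the radius, hence DBH = 2r. The iteration count and inlier tolerance below are illustrative, not the study's settings.

```python
import numpy as np

def ransac_circle(points_2d, n_iter=500, tol=0.005, rng=None):
    """RANSAC fit of Eq. (4) to a stem cross-section slice.

    points_2d: (N, 2) points from a thin horizontal slice at breast
    height; tol is the inlier band in metres. Returns the centre
    (a, b) and radius r of the best circle.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, None, -1)
    for _ in range(n_iter):
        p1, p2, p3 = points_2d[rng.choice(len(points_2d), 3, replace=False)]
        # Circumcentre of the three sampled points via a 2x2 linear solve
        A = 2 * np.array([p2 - p1, p3 - p1])
        y = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(A)) < 1e-12:
            continue                          # nearly collinear sample, skip
        centre = np.linalg.solve(A, y)
        r = np.linalg.norm(p1 - centre)
        inliers = np.abs(np.linalg.norm(points_2d - centre, axis=1) - r) < tol
        if inliers.sum() > best[2]:
            best = (centre, r, inliers.sum())
    return best[0], best[1]

# DBH estimate from a breast-height slice of the segmented stem cloud
# centre, radius = ransac_circle(slice_xy); dbh = 2.0 * radius
```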

Following the execution of the aforementioned pre-processing stages, optimised data is derived, thereby solidifying the foundation for forthcoming analytical procedures and measurements. Figure 2 visually represents the point cloud image subsequent to pass-through filtering, accentuating the clear silhouette of the tree.

Figure 2. Point cloud image post pass-through filtering

4. Image Feature Extraction

4.1 Deep learning and CNNs

Following rigorous image pre-processing, a pivotal step is undertaken: the extraction of salient features from the optimised images. This process is deemed crucial for impending analyses and model training exercises.

In the contemporary era, significant strides have been observed in the domain of deep learning pertaining to image feature extraction. Among these advancements, CNNs, a subset of deep learning mechanisms, have been recognised for their unparalleled efficacy across diverse image analysis tasks. For the purposes of this study, the employment of a deep CNN model was necessitated for the autonomous extraction of key features intrinsic to tree images.

The remarkable efficacy of CNNs in image analysis can be attributed to their inherent capability to systematically learn and distil hierarchical image features, transitioning seamlessly from rudimentary edges and textures to intricate shapes and patterns. This intricate process of feature extraction is predominantly achieved through successive convolutional operations.

The underlying principle of these convolution operations can be mathematically articulated as:

$F_{\text {out }}=F_{\text {in }} * K+b$       (5)

where, Fin represents the input feature map, Fout the output feature map, K the convolutional kernel, * denotes the convolution operation, and b the bias term.

Within these operations, a diminutive, fixed-size convolutional kernel is navigated across the entire image, wherein it executes element-wise multiplication and summation tasks over the encompassed pixels. This methodology aids in discerning local features embedded within the image, encompassing aspects like edges, textures, and overarching shapes. By orchestrating multiple convolutional layers in tandem, the network attains the prowess to discern more abstract and intricate image features.
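
A sketch of such a convolution-pooling stack in Keras (TensorFlow 2.3, per the setup in Section 5.1) is given below; the layer counts and filter sizes are illustrative rather than the exact architecture of Figure 3. Each Conv2D layer realises Eq. (5), an affine convolution followed here by a ReLU non-linearity.

```python
import tensorflow as tf

def build_tree_cnn(input_shape=(128, 128, 3), feature_dim=128):
    """Minimal convolution-pooling-dense stack for tree feature extraction.

    Each Conv2D applies F_out = F_in * K + b (Eq. (5)); pooling layers
    downsample so deeper layers capture more abstract structure.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),               # halve spatial resolution
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),     # collapse spatial dimensions
        tf.keras.layers.Dense(feature_dim, activation="relu", name="feature_vector"),
    ])
```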

Figure 3. Architectural representation of the deep CNN model

For the meticulous extraction of tree-centric features, a deep CNN model, replete with an array of convolutional layers, pooling layers, and fully connected layers, was chosen. The intricate structure of this model is delineated in Figure 3. Additionally, to augment the clarity of tree features within images, the K-means clustering segmentation algorithm was integrated during the image pre-processing phase. Its exhaustive workflow is elucidated in Figure 4.

Figure 4. Schematic of the K-means clustering segmentation process

Eq. (6) restates the convolution operation of Eq. (5) at the level of a single output pixel:

$Y(x, y)=(K * F)(x, y)+b$       (6)

where, F is the input feature map, K the convolutional kernel, * denotes the convolution operation, and b is the bias term.

4.2 Feature vectorisation

Figure 5. Point cloud imagery subsequent to K-means clustering segmentation

Subsequent to the extraction process, features undergo vectorisation, culminating in the formation of a dense feature vector. Within this vector, a distinctive numerical description of each tree is encapsulated, capturing its geometric structure and appearance attributes as represented in the image. A comparative analysis of feature manifestations stemming from diverse extraction methodologies is depicted in Figure 5.

The rationale behind vectorising features lies in its ability to transform raw data into a format that machine learning models can comprehend. By representing trees as numerical vectors, not only is data compression achieved, but also the nuances of tree structure and appearance are captured in a standardised manner, aiding subsequent analyses. This standardised representation simplifies complex, high-dimensional data into a format that is more manageable and conducive for machine learning and statistical analyses. It should be highlighted that the selection of appropriate vectorisation techniques is contingent upon the nature of the data and the specific objectives of the study.
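
Continuing the build_tree_cnn sketch from Section 4.1, a feature vector can be read off the network's named penultimate layer, yielding one fixed-length descriptor per tree image; the batch of random arrays below merely stands in for preprocessed images.

```python
import numpy as np
import tensorflow as tf

# Reuse the sketch model above; "feature_vector" is its named Dense layer
model = build_tree_cnn()
extractor = tf.keras.Model(
    inputs=model.inputs,
    outputs=model.get_layer("feature_vector").output,
)

# One 128-dimensional descriptor per tree image
batch = np.random.rand(4, 128, 128, 3).astype("float32")  # placeholder images
vectors = extractor.predict(batch)                         # shape (4, 128)
```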

5. Experimental Setup

To assess the efficacy of the presented strategies encompassing image collection, preprocessing, and feature extraction, a series of experiments were undertaken.

5.1 Experimental environment and dataset

A diverse range of twenty trees spanning various species was chosen for the study. Photographic data were captured at intervals encompassing early morning, midday, and dusk, allowing for the scrutiny of the camera's performance across varying illumination conditions. Every image was sourced utilising the D435i camera, capturing tree depictions from multiple angles, distances, and light settings.

Utilised hardware and software configurations included:

• Hardware: An Intel i7 processor complemented with 16 GB RAM and an NVIDIA GTX 1080Ti graphics card.

• Software: Python 3.7, TensorFlow 2.3, and OpenCV 4.2.

5.2 Data augmentation

For bolstering the model's resilience across assorted settings, the dataset was subjected to augmentation techniques. Methods such as image rotation, scaling, cropping, and the introduction of noise were incorporated. As a result of these augmentations, the dataset swelled to quintuple its original size.
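
The four augmentation families named above could be realised along the following lines with OpenCV and NumPy; the parameter ranges are illustrative. Generating four augmented copies per image, plus the original, yields the stated quintupling of the dataset.

```python
import numpy as np
import cv2

def augment(image, rng):
    """Return one randomly augmented copy: rotation, scaling, cropping, noise."""
    h, w = image.shape[:2]
    # Combined random rotation and scaling about the image centre
    M = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), rng.uniform(0.9, 1.1))
    out = cv2.warpAffine(image, M, (w, h))
    # Random crop to 90% of the frame, then resize back
    ch, cw = int(0.9 * h), int(0.9 * w)
    y0, x0 = rng.integers(0, h - ch), rng.integers(0, w - cw)
    out = cv2.resize(out[y0:y0 + ch, x0:x0 + cw], (w, h))
    # Additive Gaussian noise
    noise = rng.normal(0, 5, out.shape)
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
def quintuple(images):
    """Original plus four augmented copies per image: five-fold dataset."""
    return [img for im in images for img in [im] + [augment(im, rng) for _ in range(4)]]
```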

5.3 Experimental methods

The amassed dataset was partitioned into three distinct subsets: training (70%), validation (15%), and testing (15%). Model training was executed utilising the training subset, while parameter tuning was facilitated through the validation subset. Subsequently, performance evaluation was carried out on the test subset.
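
A minimal sketch of the 70/15/15 partition:

```python
import numpy as np

def split_dataset(samples, train=0.70, val=0.15, seed=0):
    """Shuffle and split into the 70/15/15 partition used in Section 5.3."""
    idx = np.random.default_rng(seed).permutation(len(samples))
    n_train = int(train * len(samples))
    n_val = int(val * len(samples))
    return (
        [samples[i] for i in idx[:n_train]],                 # training (70%)
        [samples[i] for i in idx[n_train:n_train + n_val]],  # validation (15%)
        [samples[i] for i in idx[n_train + n_val:]],         # testing (15%)
    )
```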

To furnish a holistic comparison between the strategies delineated in this study and alternative methodologies, the ensuing reference techniques were enlisted:

(1) Traditional image processing techniques: These predominantly hinged on rule-based image analytics such as edge detection and threshold segmentation.

(2) Deep learning approaches: Such methods predominantly leveraged CNNs for the automated extraction and analysis of image features.

(3) K-means clustering segmentation: This unsupervised machine learning technique was employed for tasks encompassing image segmentation and feature extraction.

The choice of the D435i camera was predicated on its capacity for high-resolution imaging, coupled with its inherent capability to perform optimally across diverse lighting conditions. The inclusion of multiple times of day in the data capture process aimed to ensure that the resultant models would exhibit robustness, irrespective of the time of day during real-world application. The chosen augmentation techniques were based on their widespread adoption in contemporary literature and their demonstrated efficacy in bolstering model generalisation. The decision to incorporate both traditional and modern methodologies in the comparative analysis aimed to offer readers a comprehensive perspective on the relative merits of the presented strategies in contrast to well-established methods.

6. Experimental Results and Analysis

6.1 Experimental results

From the analyses of the test set data, it was observed that the proposed model attained an accuracy rate of 93.5%. For the diameter measurements at breast height (specifically 1 m above the ground) across the selected 20 trees, with diameters ranging between 5 and 40 cm, the recorded accuracy, when juxtaposed against caliper-based measurements, exhibited a maximum deviation of under 3 mm. Such precision meets the stringent requirements of practical deployments.
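
The deviation statistic amounts to a simple element-wise comparison of paired readings; the values below are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical paired readings in millimetres: model-derived DBH vs. calipers
dbh_model   = np.array([152.1, 304.8, 87.6, 221.4])
dbh_caliper = np.array([150.5, 306.2, 86.1, 223.9])

deviation = np.abs(dbh_model - dbh_caliper)
print(f"max deviation:  {deviation.max():.1f} mm")   # reported bound: < 3 mm
print(f"mean deviation: {deviation.mean():.1f} mm")
```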

To facilitate a more discernible comparison of the proposed model's efficacy relative to alternate techniques, evaluation outcomes across different metrics for each approach were systematically catalogued. For a more vivid illustration, specific trees were selected for visual demonstration, as delineated in Figure 6. Additionally, a three-dimensional modelling of these trees was executed, the outcomes of which are depicted in Figure 7. Once subjected to processing via the RANSAC algorithm, the resultant cylindrical fitting effect is portrayed in Figure 8.

Figure 6. Tree measurement representation

Figure 7. Depiction of original image point cloud data

The achieved accuracy rate, notably surpassing the 90% threshold, is indicative of the model's robustness, particularly when considering the inherent complexities of tree measurements. The variance in diameters, spanning from as slim as 5 cm to as wide as 40 cm, underscores the model's versatility in handling diverse scenarios. The decision to juxtapose the model's readings against caliper-based measurements was predicated on calipers being a gold standard in such measurements, offering a high level of precision. The minimal deviation of under 3 mm serves as a testament to the model's reliability. The visual illustrations provided, from basic measurements to complex 3D modelling, aim to offer readers a holistic understanding of the study's breadth and depth, while the incorporation of the RANSAC algorithm underscores the emphasis on precision and accuracy in the study's methodology.

Figure 8. Cylindrical fitting of point cloud data following RANSAC algorithm implementation

6.2 Discussion of results

Upon analysis of the data, it was discerned that the proposed technique exhibits an exemplary capability in tree image feature extraction, displaying not merely impressive accuracy but consistent stability as well. In contrast to conventional image processing, machine learning methodologies were found to adeptly discern intricate image features, thereby obviating the traditionally laborious extraction procedures.

Subsequent experiments revealed that an enhanced performance was manifested when engaging with specific tree varieties. Such an observation was largely ascribed to the composition of the training dataset and the prevalence of particular tree species within it. Furthermore, concerning the modulation in lighting conditions, a minor decline in performance under the intense luminescence of midday sunlight was identified; nevertheless, accuracy indices sustained commendably elevated levels during the nascent hours of morning and the twilight of dusk. In juxtaposition with alternate machine learning paradigms, the introduced model was discerned to outperform in the domain of tree image analysis. In essence, the method proffered can be characterised as a highly adept and stable solution in the realm of tree imagery analysis.

The superior performance of the proposed model underscores the evolving paradigm of tree image analysis. The results affirm the growing consensus that machine learning, with its capability to automatically discern and adapt to intricate patterns, holds a transformative potential in forestry informatics. The variance in model performance across tree species, as evidenced by the data, serves as an imperative to ensure diverse and representative datasets. The minor susceptibility to intense sunlight could be attributed to the heightened variability in shadows and contrast during such periods. Nevertheless, the resilience of the model during suboptimal lighting conditions of morning and dusk is noteworthy. While the study has illuminated the strengths of the proposed methodology, it also underscores the broader shift towards data-driven methodologies in the forestry sector.

7. Conclusion

Within the current technological context, the application of image analysis and feature extraction has proliferated across numerous sectors, with marked emphasis observed in the realms of forest ecology and urban greening. In this inquiry, an innovative methodology, employing the Intel RealSense D435i depth camera for tree image capture, was assessed, and its efficacy was confirmed via empirical evaluation.

The chief inferences derived encompass:

(1) Camera Competence: The Intel RealSense D435i depth camera's prowess was made evident in image acquisition. Quality and authenticity of images under diverse lighting scenarios were notably achieved. The mounting of this camera on an aerial drone is illustrated in Figure 1.

(2) Image Refinement: Techniques adopted herein facilitated the refinement of tree images. Target objects were effectively delineated, and unrelated background noise was mitigated using pass-through filtering combined with K-means clustering segmentation. A comprehensive portrayal of this process can be witnessed in Figure 4.

(3) Methodological Stability: Variability in lighting, angular positioning, and relative distance was shown to exert limited influence on the performance of the image refinement technique, underscoring the resilience of the methodology.

(4) Benchmarking: Initial comparative trials underscored that the image refinement approach delineated in this research surpassed traditional methods in efficacy and precision concerning tree imagery. A more granular analysis of comparative data is poised for future deliberation.

Looking forward, the envisagement is toward the further enhancement of the image refinement procedure and the augmentation of the dataset magnitude. A broadened exploration into its utility in diverse practical contexts within forest ecology and urban greening is also projected. The findings of this research not only promulgate a potent mechanism for tree image feature extraction but also serve as a beacon for analogous tasks in image recognition and analysis.

Acknowledgement

This paper was supported by the National Natural Science Foundation of China (Grant No. 31970454).

References

[1] Clark, D.B., Clark, D.A., Oberbauer, S.F. (2016). Field-quantified responses of tropical rainforest aboveground productivity to increasing CO2 and climatic stress, 1997–2009. Journal of Geophysical Research: Biogeosciences, 121(2): 326-348. https://doi.org/10.1002/jgrg.20067

[2] Bonan, G.B. (2008). Forests and climate change: Forcings, feedbacks, and the climate benefits of forests. Science, 320(5882): 1444-1449. https://doi.org/10.1126/science.1155121

[3] Husch, B., Beers, T.W., Kershaw, J.A. (2017). Forest Mensuration. John Wiley & Sons.

[4] Safe’i, R., Darmawan, A., Irawati, A.R., Pangestu, A.Y., Arwanda, E.R., Syahiib, A.N. (2022). Cluster analysis on forest health conditions in Lampung province. International Journal of Design & Nature and Ecodynamics, 17(2): 257-262. https://doi.org/10.18280/ijdne.170212

[5] Gril, E., Laslier, M., Gallet-Moron, E., Marrec, R., Lenoir, J. (2023). Using airborne LiDAR to map forest microclimate temperature buffering or amplification. Remote Sensing of Environment, 298: 113820. https://doi.org/10.1016/j.rse.2023.113820

[6] Anderson, K., Gaston, K.J. (2013). Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Frontiers in Ecology and the Environment, 11(3): 138-146. https://doi.org/10.1890/120150

[7] Nelson, R., Krabill, W., Maclean, G. (2015). Determining forest canopy characteristics using airborne laser data. Remote Sensing of Environment, 51(1): 73-84. https://doi.org/10.1016/0034-4257(84)90031-2

[8] Turner, W., Spector, S., Gardiner, N., Fladeland, M., Sterling, E., Steininger, M. (2015). Free and open-access satellite data are key to biodiversity conservation. Biological Conservation, 182: 173-176. https://doi.org/10.1016/j.biocon.2014.11.048

[9] Roberts, H.M., van der Werf, G.R., Randerson, J.T. (2018). Dynamics of vegetation and soil carbon storage from remotely sensed and in situ measurements. Environmental Research Letters, 13(3): 035004.

[10] Asner, G.P., Mascaro, J. (2014). Mapping tropical forest carbon: Calibrating plot estimates to a simple LiDAR metric. Remote Sensing of Environment, 140: 614-624. https://doi.org/10.1016/j.rse.2013.09.023

[11] Juola, J., Hovi, A., Rautiainen, M. (2023). Practical recommendations and limitations for pushbroom hyperspectral imaging of tree stems. Remote Sensing of Environment, 298: 113837. https://doi.org/10.1016/j.rse.2023.113837

[12] Milz, S., Wäldchen, J., Abouee, A. (2023). The HAInich: A multidisciplinary vision data-set for a better understanding of the forest ecosystem. Scientific Data, 10(1): 168. https://doi.org/10.1038/s41597-023-02010-8

[13] Gril, E., Laslier, M., Gallet-Moron, E. (2023). Using airborne LiDAR to map forest microclimate temperature buffering or amplification. Remote Sensing of Environment, 298: 113820. https://doi.org/10.1016/j.rse.2023.113820

[14] Abas, A.I., Baykan, N.A. (2021). Multi-focus image fusion with multi-scale transform optimized by metaheuristic algorithms. Traitement du Signal, 38(2): 247-259. https://doi.org/10.18280/ts.380201

[15] Guo, X.Y., Wang, J., Xu, Z.J. (2022). Multimodal medical image fusion based on average cross bilateral filtering. In 2022 10th International Conference on Information Technology: IoT and Smart City, Shanghai China, pp. 98-105. https://doi.org/10.1145/3582197.3582213

[16] Panguluri, S.K., Mohan, L. (2021). A DWT based novel multimodal image fusion method. Traitement du Signal, 38(3): 607-617. https://doi.org/10.18280/ts.380308

[17] Singh, S., Singh, H., Mittal, N., Hussien, A.G., Sroubek, F. (2022). A feature level image fusion for Night-Vision context enhancement using Arithmetic optimization algorithm based image segmentation. Expert Systems with Applications, 209: 118272. https://doi.org/10.1016/j.eswa.2022.118272

[18] Zheng, X., Kang, D., Si, P., Wu, Q. (2022). Infrared and visible image fusion for ship targets based on scale-aware feature decomposition. IET Image Processing, 16(14): 3977-3987. https://doi.org/10.1049/ipr2.12607

[19] Panguluri, S.K., Mohan, L. (2021). A DWT based novel multimodal image fusion method. Traitement du Signal, 38(3): 607-617. https://doi.org/10.18280/ts.380308

[20] Sheikh, I.M., Chachoo, M.A., Rather, A.A. (2022). An efficient biomedical cell image fusion method based on the multilevel low rank representation. International Journal of Information Technology (Singapore), 14(7): 3701-3710. https://doi.org/10.1007/s41870-022-01002-y

[21] Kong, W., Li, C., Lei, Y. (2022). Multimodal medical image fusion using convolutional neural network and extreme learning machine. Frontiers in Neurorobotics, 16: 1050981. https://doi.org/10.3389/fnbot.2022.1050981

[22] Yin, H., Yue, Y. (2022). Medical image fusion based on semisupervised learning and generative adversarial network. Laser & Optoelectronics Progress, 59(22): 237-246.