Enhanced Canny Algorithm for Image Edge Detection in Print Quality Assessment


Nana Tao

Institute of Intelligent Manufacturing, Zibo Vocational Institute, Zibo 255000, China

Corresponding Author Email: 10776@zbvc.edu.cn

Page: 1281-1287 | DOI: https://doi.org/10.18280/ts.400347

Received: 27 February 2023 | Revised: 23 April 2023 | Accepted: 5 May 2023 | Available online: 28 June 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The growing demand for high-quality print output in the digital printing era underscores the importance of refining detection algorithms essential for print quality assessment systems. This study focuses on the analysis and optimization of the classical image edge detection algorithm, the Canny algorithm. A novel method is presented, which incorporates an improved adaptive median filter (AMF) for the initial processing of images, resulting in increased efficiency and better handling of noise points. Furthermore, the gradient calculation direction has been expanded, and the threshold has been fine-tuned using an enhanced OTSU algorithm. The optimal threshold selection relies on a preliminary judgement, leading to more comprehensive and accurate image edge information capture. Comparative analysis with the Sobel operator and the traditional Canny edge detection highlights the advantages of the optimized Canny algorithm. This improved approach succeeds in preserving a greater amount of graphical edge information and exhibits a superior ability to identify false edges, significantly increasing detection accuracy. The findings of this study contribute to the development of print quality detection, promoting a more automated, digital, and systematic approach.

Keywords: 

print quality inspection, improved Canny algorithm, adaptive median filter, OTSU algorithm

1. Introduction

Amidst the relentless progress of global economies, a commensurate advancement in the quality of printed material has become evident. Traditional single printed matter no longer suffices to meet consumer demands, thus necessitating an industry shift toward digitalisation and intelligence [1-3]. This transition is manifest in the rise of digital printing, now widely regarded as the mainstream mode in modern printing. Unsurprisingly, the advantages of digital printing over its traditional counterparts are numerous and include features such as personalized printing, variable data, environmental sustainability, and digital functionality, all of which contribute to an overall superior quality.

Digital printing fundamentally represents an entirely digitized and networked production process, encompassing the identification, handling, transmission, and control of all digital data from the moment of input until final print [4-6]. This digital procedure pervades every stage of production.

In response to the trend towards digitalisation, automation, and systematic printing, significant efforts have been invested in the exploration of quality detection technologies based on digital image processing. An automated print quality detection system, for example, captures an image of a standard printed matter devoid of defects via a CCD camera, establishes particular criteria, and stores it within a computer. Subsequently, the image to be examined is captured and compared continuously against the stored standard. Any disparity results in the image being deemed as subpar. Such a system allows for the categorization and statistical measurement of defect images, effectively guiding the printing process and meeting quality detection needs.

The development and application of an image edge detection algorithm within the defect detection system is paramount to the operation of the printed matter quality detection system. As a significant constituent of digital image processing technology [7], edge detection has found extensive application across various image processing sectors such as target recognition, image enhancement, and robot vision. Notably, the image edge of a printed matter harbors copious amounts of vital information. Consequently, any method for edge detection must not only accurately detect the position of the edge but also effectively suppress irrelevant details and noise. The necessity of such measures arises from the fact that image data is often contaminated with noise during practical applications [8].

Historically, traditional edge detection algorithms such as the Sobel operator [9], Roberts operator [10], and Prewitt operator [11] utilized directional derivatives as their mathematical processing method, discerning edges from gray value variations in each pixel neighborhood of the original image. Despite their overall simplicity and relatively high pixel processing speeds, these methods proved highly sensitive to noise during processing. Conversely, the Canny operator, first proposed by John Canny [12] and further detailed in reference [13], incorporated the benefits of the aforementioned operators while also demonstrating superior anti-interference capabilities and advantages in signal-to-noise ratio and accuracy.

2. Literature Review

2.1 Basic principles of Canny operators

In reviewing the existing literature on edge detection, attention is drawn to the principles underlying the Canny operator, as proposed by Canny [14]. This operator is an edge detection method that is multi-step in nature and relies on three key indices: a low error rate, superior positioning accuracy, and effective suppression of false edges.

The operator's first step involves the application of Gaussian filtering to images. The principle of operation is grounded on the fact that convolution operations can be interchanged and combined. Hence, a two-dimensional, zero-mean Gaussian function is applied initially, followed by the convolution operation on the image matrix. This process serves to smooth the image by eliminating noise. The Gaussian function expression is as follows:

$G(x, y)=\frac{1}{2 \pi \sigma^2} \exp \left(-\frac{x^2+y^2}{2 \sigma^2}\right)$     (1)

$I(x, y)=G(x, y)^* f(x, y)$    (2)

The function f(x,y) pertains to the original image, while I(x,y) denotes the image which has been smoothed by the two-dimensional Gaussian filter. The parameter σ represents the standard deviation of the Gaussian filter function. If σ is small, the precision of the Gaussian filter positioning is augmented, but the signal-to-noise ratio is lowered. Conversely, larger values of σ reduce positioning accuracy but enhance the signal-to-noise ratio.
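For illustration, a minimal NumPy/SciPy sketch of Eqs. (1) and (2) is given below; the kernel size and the value of σ are example settings rather than parameters reported in this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized two-dimensional zero-mean Gaussian kernel, Eq. (1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()                      # normalize so overall brightness is preserved

def gaussian_smooth(f, size=5, sigma=1.0):
    """I(x, y) = G(x, y) * f(x, y), Eq. (2): convolve the image with the kernel."""
    return convolve(f.astype(float), gaussian_kernel(size, sigma), mode='nearest')
```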

Following Gaussian filtering, the algorithm proceeds to calculate the amplitude and direction of the image gradient. This step employs a 2×2 template to determine the gradient of the gray image [15]. The gradient amplitude of the pixel is evaluated by obtaining the first derivative of the pixel in the X and Y directions. The equations given below provide the first partial derivatives in these directions:

$\begin{aligned} & G_x(x, y)=[I(x, y+1)-I(x, y)+I(x+1, y+1)-I(x+1, y)] / 2\end{aligned}$     (3)

$\begin{aligned} & G_y(x, y)=[I(x, y)-I(x+1, y)+I(x, y+1)-I(x+1, y+1)] / 2\end{aligned}$     (4)

After these equations have been convolved with the image, the output equations can be deduced:

$E_x=\frac{\partial G}{\partial x} * f(x, y)$     (5)

$E_y=\frac{\partial G}{\partial y} * f(x, y)$    (6)

The gradient amplitude and direction at the point (x,y) are defined by $A(x, y)$  and $\theta(x, y)$ , respectively. These quantities are obtained by the following equations:

$A(x, y)=\sqrt{E_x^2(x, y)+E_y^2(x, y)}$    (7)

$\theta(x, y)=\arctan \left[\frac{E_y(x, y)}{E_x(x, y)}\right]$   (8)
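A short sketch of Eqs. (3), (4), (7) and (8) is given below, applied directly to the smoothed image I and assuming that x indexes rows and y indexes columns; it is an illustrative reading of the 2×2 template, not the reference implementation.

```python
import numpy as np

def gradient_2x2(I):
    """Gradient over a 2x2 neighbourhood, Eqs. (3)-(4), then amplitude and
    direction, Eqs. (7)-(8). The output is one row and one column smaller
    than I because the template spans a 2x2 block."""
    I = I.astype(float)
    Gx = (I[:-1, 1:] - I[:-1, :-1] + I[1:, 1:] - I[1:, :-1]) / 2.0   # Eq. (3)
    Gy = (I[:-1, :-1] - I[1:, :-1] + I[:-1, 1:] - I[1:, 1:]) / 2.0   # Eq. (4)
    A = np.hypot(Gx, Gy)                                             # Eq. (7)
    theta = np.arctan2(Gy, Gx)                                       # Eq. (8)
    return A, theta
```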

Next, the operator carries out non-maximum suppression. This step thins the gradient magnitude map to ensure the precision of the edge. In the complete gradient amplitude graph, a ridge band is observed in the vicinity of each maximum. Within this band, the algorithm compares each pixel with its neighbours along the gradient direction, sets the non-maximum values to zero, and retains only the pixel with the maximum local gradient.

The final step of the operator involves double-threshold edge connection processing. Here, Canny connects edge pixels by setting two thresholds, denoted by $T_H$ and $T_L$ as the high and low thresholds, respectively. Any edge point with a gradient value less than $T_L$ is discarded, while those with a gradient value greater than $T_H$ are retained as strong edges. An edge pixel P satisfying $T_L<P<T_H$ is kept only if it is connected to a strong edge pixel, and discarded otherwise. The selection of the high and low threshold values critically influences the quality of the detected edge.
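A compact sketch of this double-threshold linking step is shown below; using 8-connectivity and connected-component labelling to propagate strong edges are common implementation choices assumed here, not details prescribed by the operator itself.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(A, t_low, t_high):
    """Keep strong edges (A > t_high) and any weak edge pixels
    (t_low < A <= t_high) that are connected to a strong edge."""
    strong = A > t_high
    candidates = A > t_low
    # Label 8-connected regions of candidate pixels.
    labels, n = ndimage.label(candidates, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # regions containing a strong pixel
    keep[0] = False                          # label 0 is background
    return keep[labels]
```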

This literature review seeks to encapsulate the basic principles and operational steps of the Canny operator, which underpins many edge detection algorithms currently in use. A comprehensive understanding of these mechanisms is integral to the continued development and refinement of edge detection methodologies.

2.2 Defect analysis of Canny operator

While the traditional Canny operator effectively eliminates Gaussian noise by employing Gaussian filtering, and discerns between strong and weak edges in the image [16], it exhibits limitations in addressing salt and pepper noise. Such noise can compromise the integrity of image edges through over-smoothing [17]. To address this challenge, Farahanirad et al. [18] suggested a fusion of Fuzzy Neural Networks (FNN) and Adaptive Median Filtering (AMF). This amalgamation aimed to resolve issues arising from image edge detection marred by salt and pepper noise.

Similarly, proposals have been made to enhance image denoising and edge detection. Rafsanjani et al. [19] proposed the utilization of bilateral filters for denoising images and adaptive selection of high and low thresholds for edge detection. A self-adaptive approach to defining high and low double thresholds was introduced by Truong and Kim [20] through the implementation of the OTSU algorithm. Furthermore, the adoption of a guided filter with edge-preserving characteristics to replace the Gaussian filter was put forward by Gan et al. [21]. In concert with this, the OTSU (maximum between-class variance) method was employed to adaptively select high and low thresholds.

Upon comprehensive review of the literature [16-21], the Canny algorithm's limitations have been identified as follows:

(1) The removal of salt and pepper noise during the filtering stage results in the over-smoothing of the image edge, leading to the loss of key edge information.

(2) The algorithm's reliance on a 2×2 neighborhood for calculations renders it vulnerable to noise. Consequently, critical edge information can be lost and significant interference information detected, compromising the accuracy of detection.

(3) Subjective selection of high and low thresholds is typically limited by human experience and the number of images. Experiential judgment plays a substantial role in determining the continuity of the detected edge information. Manual adjustment of thresholds for each image in a dataset is time-consuming and lacks adaptability. In addition, adaptive thresholds often exhibit sensitivity to salt and pepper noise, warranting further treatment of the interference points in the image.

3. Methodology

The methodology adopted in this study integrates an advanced AMF into the Canny operator, as a strategy for minimizing image noise. This departure from the conventional Gaussian filter showcases a noteworthy progression in the technology. The gradient algorithm was further refined, paving the way for improved precision. Alongside these advancements, an OTSU algorithm was harnessed to generate high and low thresholds automatically, an approach contingent on the image's grayscale. By embracing this approach, the improved Canny algorithm ensures an inherent adaptability that successfully circumvents the need for repeated threshold adjustments, typically associated with iterative testing processes. The procedure followed in the implementation of the improved Canny algorithm is depicted in Figure 1.

Figure 1. Flow chart of improved Canny edge detection algorithm

3.1 Improved AMF

The process of edge detection necessitates the initial steps of image smoothing and denoising. These measures aim to inhibit the detection of noisy pixels as false edges, thereby ensuring the extraction of accurate image edges. As the Gaussian filter employed in the conventional Canny edge detection algorithm underperforms in the context of salt and pepper noise processing, an enhanced AMF was embraced in this study. This filter was found to effectively eliminate salt and pepper noise while preserving image details.

A series of steps were undertaken during the application of the improved AMF. The initial step involved initializing the filter window. Here, $W_{xy}$ represented the moving window's size, initially set at 3, with $W_{\max}$ indicating its maximum size. The gray values within the moving window had a minimum, maximum, and median represented as $f_{\min}$, $f_{\max}$ and $f_{mid}$, respectively.

The subsequent step output $f(x, y)$ directly when $f_{\max}>f(x, y)>f_{\min}$, implying that the current pixel was devoid of noise. If this condition was not met, the algorithm progressed to the third step. With the current pixel classified as noise, the median gray value $f_{mid}$ within the moving window was subject to further evaluation. In this case, the output was $f_{mid}$ when $f_{\max}>f_{mid}>f_{\min}$; otherwise, the algorithm proceeded to the fourth step.

In the fourth step, if the current moving window's median value was classified as noise, the moving window $W_{xy}$ was expanded by one unit, with $W_{xy}$ incrementing to $W_{xy}+1$. If $W_{xy}$ was still smaller than $W_{\max}$, the process returned to the second step; otherwise, it advanced to the fifth step.

The fifth step removed all extreme (minimum and maximum) points within the current window and replaced the current pixel with a weighted average of the remaining pixel values, with weights determined by their distance from the center. This process is depicted in Eq. (9):

$f_{\mathrm{ag}}=\frac{f_1 n_1+f_2 n_2+\cdots+f_k n_k}{\sum_{i=1}^{k} n_i}$     (9)

where, $f_i$ is a remaining pixel value, $n_i$ represents the corresponding weight, and $f_{ag}$ is the resulting weighted average that replaces the current pixel. The weight is inversely proportional to the distance from the center, implying that closer pixels bear greater weight.

In comparison to the original AMF, the enhanced AMF first ascertains whether the current pixel is noise and outputs it directly if it is not. Only then does the window expand to obtain the median value for additional scrutiny. This contrasts with the traditional AMF, where the median, rather than the current pixel, drives the window enlargement, resulting in imprecise image information and blurring. The improved design addresses these concerns, thus improving the efficiency of the algorithm. Furthermore, the enhanced algorithm eliminates all noise (maximum and minimum values) within the window, computes a weighted average of the remaining pixels according to their distance from the current pixel, and substitutes this weighted average for the current pixel. This adjustment avoids the situation in the original algorithm where the output median can still be a noise value even after the window has grown to its maximum size.
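The five steps above can be summarised in the following NumPy sketch. The per-side window growth, the inverse-distance weights used for Eq. (9), the clamping of the window at the image border, and the fallback when every remaining pixel is extreme are assumptions made for illustration; this is not the authors' exact implementation.

```python
import numpy as np

def improved_amf(f, w_max=7):
    """Improved adaptive median filter (Section 3.1): noise pixels are replaced
    by the window median or, if the median is also noise at the maximum window
    size, by a distance-weighted average of the non-extreme pixels, Eq. (9)."""
    img = f.astype(float)
    out = img.copy()
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            w = 1                                        # half-width of a 3x3 window
            while True:
                x0, x1 = max(x - w, 0), min(x + w + 1, rows)
                y0, y1 = max(y - w, 0), min(y + w + 1, cols)
                win = img[x0:x1, y0:y1]
                f_min, f_max, f_mid = win.min(), win.max(), np.median(win)
                if f_min < img[x, y] < f_max:            # step 2: current pixel is not noise
                    break
                if f_min < f_mid < f_max:                # step 3: median is not noise
                    out[x, y] = f_mid
                    break
                if 2 * w + 1 < w_max:                    # step 4: enlarge the window
                    w += 1
                    continue
                # Step 5: drop the extremes and take a distance-weighted average.
                xs, ys = np.mgrid[x0:x1, y0:y1]
                dist = np.hypot(xs - x, ys - y)
                mask = (win > f_min) & (win < f_max)
                if mask.any():
                    weights = 1.0 / (dist[mask] + 1.0)   # closer pixels weigh more
                    out[x, y] = np.sum(win[mask] * weights) / weights.sum()
                else:
                    out[x, y] = f_mid                    # fallback (assumption)
                break
    return out
```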

3.2 Improved method for calculating the image gradient amplitude

To enhance the precision of edge detection and effectively suppress noise, an advanced methodology was proposed in this study. This approach extends the traditional procedure by incorporating calculations of the first partial derivative of the pixel in four directions: X, Y, 45°, and 135°. These additional directions, deviating from the traditional X and Y axes, allow for a more refined interpolation process, potentially capturing edge features otherwise overlooked by the original algorithm.

The partial derivatives in the X, Y, 45°, and 135° directions are calculated using the following equations:

$G_x(x, y)=I(x+1, y)-I(x-1, y)$     (10)

$G_y(x, y)=I(x, y+1)-I(x, y-1)$     (11)

$G_{45^{\circ}}(x, y)=I(x-1, y+1)-I(x+1, y-1)$     (12)

$G_{135^{\circ}}(x, y)=I(x+1, y+1)-I(x-1, y-1)$     (13)

The expressions for calculating the gradient amplitude and direction were as follows:

$G(x, y)=\sqrt{G_x(x, y)^2+G_y(x, y)^2+G_{45^{\circ}}(x, y)^2+G_{135^{\circ}}(x, y)^2}$     (14)

$\theta(x, y)=\arctan \left[\frac{G_y(x, y)}{G_x(x, y)}\right]$     (15)

The improved gradient algorithm suppressed the noise well and improved the accuracy of edge point detection.
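A sketch of Eqs. (10)-(15) follows, again assuming that x indexes rows and y indexes columns, and that the image borders are handled by edge padding (an implementation choice, not specified in the paper).

```python
import numpy as np

def gradient_4dir(I):
    """Four-direction gradient of Section 3.2, Eqs. (10)-(15)."""
    P = np.pad(I.astype(float), 1, mode='edge')
    Gx   = P[2:, 1:-1] - P[:-2, 1:-1]     # Eq. (10): I(x+1, y) - I(x-1, y)
    Gy   = P[1:-1, 2:] - P[1:-1, :-2]     # Eq. (11): I(x, y+1) - I(x, y-1)
    G45  = P[:-2, 2:]  - P[2:, :-2]       # Eq. (12): I(x-1, y+1) - I(x+1, y-1)
    G135 = P[2:, 2:]   - P[:-2, :-2]      # Eq. (13): I(x+1, y+1) - I(x-1, y-1)
    G = np.sqrt(Gx**2 + Gy**2 + G45**2 + G135**2)   # Eq. (14)
    theta = np.arctan2(Gy, Gx)                      # Eq. (15)
    return G, theta
```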

3.3 Non-maximum suppression of gradient amplitude

In the Canny algorithm, the gradient magnitude determines the gradient value at each point within the image. However, it is important to note that a large gradient value alone merely marks a candidate point and does not necessarily indicate the presence of an edge at that point [22].

The algorithm involves a process referred to as non-maximum suppression of gradient amplitude. This procedure ensures that only the points with the maximum gradient values in their respective gradient directions are considered as potential edges. To achieve this, the algorithm examines each pixel in the image and compares its gradient value with that of the two neighboring pixels located in the gradient direction of the pixel in question. Should the gradient value of the examined pixel be less than that of either of its neighbors, it is not regarded as an edge. Consequently, the gradient value at these non-maximum points is reset to zero.

This systematic approach to gradient amplitude suppression enables a more precise detection of edge points within an image, mitigating the potential for erroneously identifying non-edge points as edges due to their relatively high gradient values. The use of non-maximum suppression thus significantly enhances the reliability and accuracy of the Canny algorithm in image edge detection tasks.
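A sketch of this suppression step is given below. The four-sector quantization of the gradient direction and the convention that x runs along rows (matching the gradient sketches above) are simplifications assumed for illustration.

```python
import numpy as np

def non_max_suppression(G, theta):
    """Zero out every pixel whose gradient magnitude is not a local maximum
    along its (quantized) gradient direction (Section 3.3)."""
    rows, cols = G.shape
    out = np.zeros_like(G)
    angle = (np.rad2deg(theta) + 180) % 180          # fold direction into [0, 180)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:               # gradient along x (rows)
                n1, n2 = G[i - 1, j], G[i + 1, j]
            elif a < 67.5:                           # gradient along the (+x, +y) diagonal
                n1, n2 = G[i + 1, j + 1], G[i - 1, j - 1]
            elif a < 112.5:                          # gradient along y (columns)
                n1, n2 = G[i, j - 1], G[i, j + 1]
            else:                                    # gradient along the (-x, +y) diagonal
                n1, n2 = G[i - 1, j + 1], G[i + 1, j - 1]
            if G[i, j] >= n1 and G[i, j] >= n2:
                out[i, j] = G[i, j]                  # keep the local maximum
    return out
```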

3.4 Improved OTSU algorithm

The OTSU method, originally designated as the maximal inter-class variance technique, is widely renowned for its autonomous execution of threshold selection predicated on the image's grayscale histogram data [23]. To enhance the edge connectivity within this study, a two-dimensional Otsu algorithm has been implemented. This algorithm ingeniously integrates both the grayscale value distribution of the initial image and the neighborhood image's mean grayscale value distribution. Consequently, a two-dimensional threshold vector is established, facilitating the acquisition of the optimal threshold upon identification of the maximum value under a two-dimensional criterion.

Figure 2. Traditional 2D OTSU method

Traditional practice of the two-dimensional Otsu algorithm involves division of the image into four distinct regions: the target region (A), the background region (B), and two noise regions (C and D), using a two-dimensional threshold vector (S,F) that depends on both the gray value and the neighborhood gray mean. An illustrative depiction of this can be found in Figure 2. Despite its efficacy, the algorithm's accuracy is often compromised because the probability of the noise regions C and D, which lie far from the principal diagonal, is assumed to be negligible during calculation.

To address this shortcoming, this study presents an enhanced two-dimensional Otsu algorithm. An innovative approach of partitioning the two-dimensional histogram into target and background regions via the equation x+y=T has been adopted, as demonstrated in Figure 3. The enhanced algorithm, as compared to its traditional counterpart, utilizes all pixel points within the region, significantly augmenting its accuracy.

Figure 3. Improved 2D OTSU method

Let f(x,y) represent the pixel value at a given pixel point, and g(x,y) denote the average pixel value of the neighborhood surrounding point (x,y). Consider a grayscale image with dimensions M×N and L gray levels. Similarly, the neighborhood mean image g(x,y) has dimensions M×N and L gray levels. For any point within the image, a binary pair (i,j) can be formed, representing the gray value and the average gray value of the neighboring area.

Let T be the threshold; the pixel frequency $p_K$ was then given by Eq. (16):

$p_K=\frac{n_K}{M \times N}, K=0,1,2, \cdots, 2(L-1)$     (16)

where, $n_K$ is the number of pixels satisfying $x+y=K$, and $M \times N$ is the total number of pixels. According to the threshold T, the image pixels were divided into two parts: those greater than and those less than T. Let $w_1$ and $w_2$ be the probabilities of the parts less than and greater than the threshold T, respectively, $m_1$ and $m_2$ be the gray means of the parts less than and greater than T, and $m_T$ be the gray mean of the whole image. According to the probability distribution of T, the maximum inter-class variance criterion was given as follows:

$\sigma_1^2(T)=w_1\left(m_1-m_T\right)^2+w_2\left(m_2-m_T\right)^2$     (17)

$w_1=\sum_{K=0}^{T-1} P_K$     (18)

$w_2=1-w_1$     (19)

$m_1=\sum_{K=0}^{T-1}\left[K \frac{P_K}{w_1}\right]$     (20)

$m_2=\sum_{K=T+1}^{2(L-1)}\left[K \frac{P_K}{w_2}\right]$     (21)

$m_T=w_1 m_1+w_2 m_2$     (22)

The threshold T that maximized $\sigma_1^2$ was then taken as the best threshold $T_b$, as shown in Eq. (23).

$\sigma_1^2\left(T_b\right)=\max \sigma_1^2(T), 0 \leq T \leq 2(L-1)$    (23)

The enhanced OTSU algorithm devised in this study identified the optimal high threshold through a single criterion, ensuring an accurate and efficient process. The optimal low threshold was then determined on the premise that the ratio between the high and low thresholds, as suggested by the Canny algorithm, should range from 2:1 to 3:1.

It should be noted that the optimal threshold defined in this study captured the majority of edge points. Through comprehensive experimental analysis, the high threshold in this improved algorithm was found to be approximately 2.5 times the low threshold. This adaptive approach helps optimize image processing, leading to more accurate and robust outcomes.
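The threshold selection of Eqs. (16)-(23), combined with the 2.5:1 high-to-low ratio, can be sketched as follows. The 3×3 neighbourhood mean, the application of the criterion to whichever gray-level image is supplied (for example a gradient-magnitude map rescaled to 0-255), and the halving of the optimal sum index to map it back to a single gray level are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def improved_otsu_thresholds(img, L=256, ratio=2.5):
    """Improved OTSU of Section 3.4: histogram of K = f(x,y) + g(x,y),
    then the T maximizing the between-class variance (Eqs. (16)-(23))."""
    f = img.astype(float)
    g = uniform_filter(f, size=3)                  # neighbourhood mean image g(x, y)
    K = np.rint(f + g).astype(int)                 # diagonal index x + y = K
    n_K = np.bincount(K.ravel(), minlength=2 * (L - 1) + 1)
    p = n_K / K.size                               # Eq. (16)
    ks = np.arange(p.size)
    m_T = np.sum(ks * p)                           # global mean, Eq. (22)
    best_T, best_var = 0, -1.0
    for T in range(1, p.size):
        w1 = p[:T].sum()                           # Eq. (18)
        w2 = 1.0 - w1                              # Eq. (19)
        if w1 == 0 or w2 == 0:
            continue
        m1 = np.sum(ks[:T] * p[:T]) / w1           # Eq. (20)
        m2 = np.sum(ks[T:] * p[T:]) / w2           # Eq. (21)
        var = w1 * (m1 - m_T) ** 2 + w2 * (m2 - m_T) ** 2   # Eq. (17)
        if var > best_var:
            best_var, best_T = var, T              # Eq. (23)
    t_high = best_T / 2.0        # map the f+g index back to a gray level (assumption)
    t_low = t_high / ratio       # high threshold is about 2.5 times the low one
    return t_high, t_low
```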

Figure 4. Comparison of several image edge detection algorithms

4. Experimental Validation

To verify the effectiveness of the enhanced Canny algorithm, multiple simulations were conducted employing a range of algorithms: the classical Canny edge detection algorithm [18], the Sobel operator [9], and the improved algorithm presented in this research. The various outcomes are illustrated in Figure 4. To achieve this, the improved Adaptive Median Filtering (AMF) was applied to the image, followed by the calculation of the improved gradient. The enhanced OTSU threshold segmentation method was then employed to acquire the adaptive threshold (T) required for the refined Canny edge detection and verification process.
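For reference, and assuming the illustrative sketches given earlier (improved_amf, gradient_4dir, non_max_suppression, improved_otsu_thresholds and hysteresis_threshold), the detection pipeline described here can be approximated as follows; the file name, grayscale conversion, and magnitude rescaling are assumptions for the example.

```python
import imageio.v3 as iio     # any image reader will do; imageio is an assumption

img = iio.imread('lena.png').astype(float)
if img.ndim == 3:
    img = img.mean(axis=2)                               # simple grayscale conversion

denoised = improved_amf(img)                             # Section 3.1: improved AMF
G, theta = gradient_4dir(denoised)                       # Section 3.2: four-direction gradient
G_thin = non_max_suppression(G, theta)                   # Section 3.3: non-maximum suppression
G_scaled = G_thin / (G_thin.max() + 1e-9) * 255          # rescale magnitudes to 0-255
t_high, t_low = improved_otsu_thresholds(G_scaled)       # Section 3.4: adaptive thresholds
edges = hysteresis_threshold(G_scaled, t_low, t_high)    # double-threshold edge linking
```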

The results derived from the classical Canny algorithm for the Lena image primarily captured the broad figure outline, however, several edge discontinuities were observed, leading to instances of detection omissions. The Sobel operator displayed a greater efficacy in comparison to the traditional Canny edge detection, but a number of lines were less pronounced. The proposed algorithm in this research, on the other hand, accurately delineated the character outline, exhibiting robust resistance to interference. It was found to be superior to the previously mentioned algorithms overall, thereby indicating an improvement in accuracy.

As presented in Figure 4(b), the outcomes of Sobel edge detection indicated satisfactory detection performance, with minimal impact of noise. However, the precision of edge positioning was compromised due to the coarseness of the edges obtained and the detection of false edges. In contrast, the traditional Canny algorithm, depicted in Figure 4(c), displayed a greater sensitivity to noise, culminating in the introduction of various interference factors within the image. This, in turn, adversely affected the precision of edge information, resulting in a less than optimal detection performance. The enhanced Canny edge detection algorithm implemented in this research, illustrated in Figure 4(d), exhibited a superior capability in noise reduction while preserving more edge information. This led to an effective mitigation of noise and a more precise acquisition of optimal edge information.

A quantitative analysis was conducted to objectively evaluate the efficacy of image edge contour extraction performed in this research. The quality coefficient of edge detection effect evaluation index [24] was employed. The equation for the quality coefficient is provided as follows:

$P_{F O M}=\frac{1}{\max \left\{I_L, I_S\right\}} \sum_{i=1}^{I_s} \frac{1}{1+\delta d_i^2}$     (24)

where, $I_L$ denotes the count of ideal edge pixels, $I_S$ represents the count of edge pixels actually extracted, δ is an adjustment coefficient with a fixed value of 1/9, and $d_i$ is the distance between the i-th actually extracted edge point and the nearest ideal edge point. The value of $P_{FOM}$ falls within the range (0, 1), and the larger the value, the better the detection outcome. To minimize data errors, the mean value of the edge contour extraction results ($P_{FOM}$) from several distinct edge detection algorithms applied to the Lena image was considered in this study. Figure 5 showcases a comparison of the mean values of $P_{FOM}$ as extracted by the Sobel operator [9], the original Canny algorithm [18], and the enhanced Canny algorithm developed in this research.
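Eq. (24) can be evaluated as in the sketch below, where $d_i$ is taken as the Euclidean distance from each extracted edge pixel to the nearest ideal edge pixel; computing these distances with a distance transform is an implementation choice, not a detail prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, ideal, delta=1.0 / 9.0):
    """Quality coefficient of Eq. (24); detected and ideal are boolean edge maps."""
    I_S = int(detected.sum())                  # number of extracted edge pixels
    I_L = int(ideal.sum())                     # number of ideal edge pixels
    if I_S == 0 or I_L == 0:
        return 0.0
    d = distance_transform_edt(~ideal)         # distance to the nearest ideal edge pixel
    return float(np.sum(1.0 / (1.0 + delta * d[detected] ** 2)) / max(I_L, I_S))
```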

Figure 5. Comparison of quality coefficients of different edge detection algorithms in Lena image

5. Conclusion

This study culminated in the enhancement of the traditional Canny algorithm, particularly addressing its shortcomings in the face of significant noise interference and inadequate threshold adaptability.

(1) An enhanced version of the AMF was implemented in place of Gaussian filtering. As a result, edge information was preserved while noise was reduced and smoothed, demonstrating an improved ability to suppress noise.

(2) The advanced OTSU algorithm was applied to all pixel points within the region, leading to an enhanced degree of accuracy.

(3) The superior high threshold was ascertained for threshold segmentation. This allowed for more precise separation between the target and background, reducing the loss of edge information.

The incorporation of the improved Canny edge detection markedly diminished the time consumed in print image edge detection while improving detection accuracy, thereby fostering automated and digitized quality assessment of printed materials.

Future studies may build on this foundation to explore other avenues for improvement. The success of the enhanced algorithm lays the groundwork for potential applications in other complex imaging scenarios, where accurate edge detection can offer significant benefits. Future research may also examine the scalability of this approach, as well as the potential for integrating the improved Canny algorithm with other image processing techniques. Such integrations could unlock novel solutions and provide even more robust performance in a wide range of applications.

These findings offer a compelling case for the effectiveness of the improved Canny algorithm, contributing to its broader adoption in the field of image edge detection and beyond. This study underscores the value of continually refining and improving existing methodologies, emphasizing that even established techniques such as the Canny edge detection algorithm can be significantly enhanced.

In closing, this research serves as a catalyst for continued advancements in the field of image edge detection, spotlighting the potential of algorithmic improvements to enhance performance and accuracy.

References

[1] Rindfleisch, A., O'Hern, M., Sachdev, V. (2017). The digital revolution, 3D printing, and innovation as data. Journal of Product Innovation Management, 34(5): 681-690. https://doi.org/10.1111/jpim.12402

[2] Zhao, Z., Kumar, J., Hwang, Y., Deng, J., Ibrahim, M. S. B., Huang, C., Suresh, S., Cho, N.J. (2021). Digital printing of shape-morphing natural materials. Proceedings of the National Academy of Sciences, 118(43): e2113715118. https://doi.org/10.1073/pnas.2113715118

[3] Fang, E., Yang, S., Kong, L., Ge, J. (2018). Study on the registration testing of color digital printing machine. In Applied Sciences in Graphic Communication and Packaging: Proceedings of 2017 49th Conference of the International Circle of Educational Institutes for Graphic Arts Technology and Management & 8th China Academic Conference on Printing and Packaging, pp. 401-409. https://doi.org/10.1007/978-981-10-7629-9_49

[4] He, M. (2019). Research on the status quo and development of digital printing technology. Journal of Physics: Conference Series, 1168(2): 022037. https://doi.org/10.1088/1742-6596/1168/2/022037

[5] Labrada-Nueva, Y., Cruz-Rosales, M.H., Rendón-Mancha, J.M., Rivera-López, R., Eraña-Díaz, M.L., Cruz-Chávez, M.A. (2021). Overlap detection in 2D amorphous shapes for paper optimization in digital printing presses. Mathematics, 9(9): 1033. https://doi.org/10.3390/math9091033

[6] Verano, D.A., Husnawati, H., Ermatita, E. (2020). Implementation of autoregressive integrated moving average model to forecast raw material stock in the digital printing industry. Journal of Information Technology and Computer Science, 5(1): 13-22. https://doi.org/10.25126/jitecs.202051117

[7] Su, Z., Yang, J., Li, P., Jing, J., Zhang, H. (2022). A precise method of color space conversion in the digital printing process based on PSO-DBN. Textile Research Journal, 92(9-10): 1673-1681.  https://doi.org/10.1177/00405175211067287

[8] Guo, Y., Şengür, A. (2014). A novel image edge detection algorithm based on neutrosophic set. Computers & Electrical Engineering, 40(8): 3-25. https://doi.org/10.1016/j.compeleceng.2014.04.020

[9] Ravivarma, G., Gavaskar, K., Malathi, D., Asha, K.G., Ashok, B., Aarthi, S. (2021). Implementation of Sobel operator based image edge detection on FPGA. Materials Today: Proceedings, 45: 2401-2407. https://doi.org/10.1016/j.matpr.2020.10.825

[10] Wang, Z.X., Wang, W. (2018). The research on edge detection algorithm of lane. EURASIP Journal on Image and Video Processing, 2018: 98. https://doi.org/10.1186/s13640-018-0326-2

[11] Balochian, S., Baloochian, H. (2022). Edge detection on noisy images using Prewitt operator and fractional order differentiation. Multimedia Tools and Applications, 81(7): 9759-9770. https://doi.org/10.1007/s11042-022-12011-1

[12] Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6): 679-698. https://doi.org/10.1109/TPAMI.1986.4767851

[13] Song, R., Zhang, Z., Liu, H. (2017). Edge connection based Canny edge detection algorithm. Pattern Recognition and Image Analysis, 27: 740-747. https://doi.org/10.1134/S1054661817040162

[14] Wu, F., Zhu, C., Xu, J., Bhatt, M.W., Sharma, A. (2021). Research on image text recognition based on canny edge detection algorithm and k-means algorithm. International Journal of System Assurance Engineering and Management, 13: 72-80. https://doi.org/10.1007/s13198-021-01262-0

[15] Li, M.L., Zhang, G.W. (2013). Advanced Information and Computer Technology in Engineering and Manufacturing, Environmental Engineering. Switzerland: Trans Tech Publications Ltd.

[16] Islam, M.T., Rahman, S.M., Ahmad, M.O., Swamy, M.N.S. (2018). Mixed Gaussian-impulse noise reduction from images using convolutional neural network. Signal Processing: Image Communication, 68: 26-41. https://doi.org/10.1016/j.image.2018.06.016

[17] Chen, Y., Xu, M., Liu, H.L., Huang, W.N., Xing, J. (2014). An improved image mosaic based on Canny edge and an 18-dimensional descriptor. Optik, 125(17): 4745-4750. https://doi.org/10.1016/j.ijleo.2014.04.069

[18] Farahanirad, H., Shanbehzadeh, J., Pedram, M.M., Sarrafzadeh, A. (2011). A hybrid edge detection algorithm for salt and-pepper noise. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol I, Hong Kong.

[19] Rafsanjani, H.K., Sedaaghi, M.H., Saryazdi, S. (2017). An adaptive diffusion coefficient selection for image denoising. Digital Signal Processing, 64: 71-82. https://doi.org/10.1016/j.dsp.2017.02.004

[20] Truong, M.T.N., Kim, S. (2018). Automatic image thresholding using Otsu’s method and entropy weighting scheme for surface defect detection. Soft Computing, 22: 4197-4203. https://doi.org/10.1007/s00500-017-2709-1

[21] Gan, W., Wu, X., Wu, W., Yang, X., Ren, C., He, X., Liu, K. (2015). Infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter. Infrared Physics & Technology, 72: 37-51. https://doi.org/10.1016/j.infrared.2015.07.003 

[22] Gu, B., Li, W., Wong, J., Zhu, M., Wang, M. (2012). Gradient field multi-exposure images fusion for high dynamic range image visualization. Journal of Visual Communication and Image Representation, 23(4): 604-610. https://doi.org/10.1016/j.jvcir.2012.02.009

[23] Satapathy, S.C., Sri Madhava Raja, N., Rajinikanth, V., Ashour, A.S., Dey, N. (2018). Multi-level image thresholding using Otsu and chaotic bat algorithm. Neural Computing and Applications, 29: 1285-1307. https://doi.org/10.1007/s00521-016-2645-5

[24] Abdou, I.E., Pratt, W.K. (1979). Quantitative design and evaluation of enhancement/thresholding edge detectors. Proceedings of the IEEE, 67(5): 753-763. https://doi.org/10.1109/PROC.1979.11325