Data Augmentation by Wavelet Transform for Breast Cancer Based on Deep Learning

Hossena Djouima*, Athmane Zitouni, Ahmed Chaouki Megherbi, Salim Sbaa

VSC Lab, University Mohamed Khider, Biskra 07000, Algeria

LICCC Lab, University Mohamed Khider, Biskra 07000, Algeria

Corresponding Author Email: hossena.djouima@univ-biskra.dz

Pages: 1097-1107 | DOI: https://doi.org/10.18280/ria.380405

Received: 28 November 2023 | Revised: 2 March 2024 | Accepted: 15 April 2024 | Available online: 23 August 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Automated diagnosis and evolving CNN architectures are improving diagnostic quality for digital breast cancer histopathology images. This study focuses on classifying the histopathological images of the BreakHis breast cancer dataset into two categories: benign and malignant. A primary challenge in this task is the uneven class distribution and the limited number of training samples, which introduce bias and compromise the model's accuracy on the non-malignant class. The study applies wavelet decomposition to benign images to address the class imbalance and enhance the model's ability to classify breast cancer histopathological images accurately. The technique filters each image with high-pass and low-pass filters, followed by downsampling; repeating the process along both dimensions generates four images representing different components of the original image, enabling precise localization of essential features and denoising. The DenseNet201 convolutional network is chosen for image classification due to its efficiency and accuracy. Our proposal concatenates features extracted from specific blocks of the pre-trained DenseNet201 model: pool3_pool, pool4_pool, and conv5_block32_concat. The proposed framework maintains high accuracy rates of 99% in both multi-scale and magnification-independent classification of benign and malignant images. These promising results indicate the potential clinical application of this approach in disease diagnosis.

Keywords: 

classification, data augmentation, deep learning, imbalance, transfer learning, wavelet

1. Introduction

Breast cancer develops through the uncontrolled multiplication of cells, forming tumors that spread, invade blood vessels, and damage other tissues [1]. Globally, breast cancer is the most common cancer type, affecting women after puberty, with 2.3 million cases and 685,000 deaths in 2020 according to WHO statistics. Historically, breast cancer mortality rates remained stagnant until the 1980s; survival improved after the introduction of early detection programs and diverse treatments, highlighting the impact of evolving medical approaches in combating the disease [2].

Classifying breast cancer from histopathological images is challenging because histological patterns vary subtly, requiring skilled pathologists to distinguish lesions accurately. Manual histological biopsy processing, often taking two weeks or more, further delays diagnosis. These complexities underscore the need for automated systems that improve accuracy and expedite breast cancer diagnosis [3]. Computer-Aided Diagnosis (CAD) systems play a vital role in breast cancer diagnosis by using advanced algorithms to interpret medical images rapidly and accurately. Their integration has significantly improved the detection of subtle anomalies, positively impacting diagnostic practice in breast oncology [4, 5]. Recent research has demonstrated the effectiveness of CNN-based deep learning algorithms, particularly in automating feature extraction and tumor type classification, within models demanding substantial data and training time [3, 6-8]. Transfer learning complements these advances by enabling models to learn from smaller datasets while maintaining generalization, particularly in medical imaging. It allows limited datasets such as BreakHis, chosen for our study, to benefit from the knowledge gained by models pre-trained on larger datasets such as ImageNet, thus enhancing classification performance [6, 9, 10].

Our study introduces an automated system for classifying breast cancer using pre-trained DenseNet201 CNNs. Within this framework, features are extracted from breast microscopy images and classified as either non-malignant or malignant, either per magnification factor or without regard to it [11]. Owing to an uneven class distribution and a limited number of training samples, the BreakHis dataset exhibits an imbalance that must be addressed: majority classes are classified more accurately than minority classes. A review of the literature shows that numerous researchers have attempted to address the imbalance of the BreakHis dataset using techniques such as oversampling, under-sampling, hybrid sampling, generative adversarial networks (GANs), and Deep Convolutional Generative Adversarial Networks (DCGANs) [3]. Moreover, the number of images in BreakHis is insufficient to obtain promising classification results, as the parameters are underdetermined and the learned networks generalize poorly. Augmenting the data alleviates this problem, for example by applying affine transformations such as rotation, scaling, and translation [6]. Our contribution employs wavelets to balance the distribution of images between minority and majority classes, thereby enhancing the overall accuracy of the classification model. This capability is particularly beneficial in medical image analysis, where spectral features play a role similar to that of spatial features.
A single-level 2D wavelet decomposition is applied to benign-class images, yielding four subband images: an approximation image (LL) and three detail images (LH, HL, and HH) [12]. In this article, we propose a novel classification model for breast cancer histopathology images based on wavelets and DenseNet201. Overall, the present study is structured around the following points:

(1) Classification of breast histopathological images (BreakHis) as benign or malignant.

(2) Use of wavelet augmentation to overcome the shortage of minority-class training samples in breast cancer histopathological images.

(3) Application of transfer learning from the ImageNet dataset to initialize the network with appropriate pre-trained parameters for classifying breast cancer histopathology images, achieving high binary classification accuracy.

(4) To address overfitting and reduce the dimensionality of the feature maps while retaining important spatial information, Global Average Pooling is applied independently to the layers pool3_pool of dense block 2, pool4_pool of dense block 3, and conv5_block32_concat of dense block 4.

The general framework of the proposed model is depicted in Figure 1.

Figure 1. General breast cancer histopathology image classification model structure

2. Materials and Methods

2.1 Breast cancer dataset (BreakHis)

Diagnosis of most tumors relies on histopathological analysis of tissue, involving a biopsy followed by microscopic analysis of the breast tissue [13, 14]. The complete biopsy process is shown in Figure 2. Pathologists evaluate the tissue biopsies through the microscope at different magnifications [15], producing images in three channels: red, green, and blue (RGB) [6]. To optimize and evaluate the proposed method, we selected the BreaKHis database, created in collaboration with the P&D Laboratory (Pathological Anatomy and Cytopathology), Paraná, Brazil (http://www.prevencaoediagnose.com.br). The dataset is open source and available to the community. The BreaKHis database contains 7909 histopathological biopsy microscopy images (700×460 pixels) from 82 patients [11, 15]. It comprises 2480 images of benign tumors and 5429 of malignant tumors, distributed across four magnification levels (40x, 100x, 200x, 400x). Images are categorized as benign (Adenosis [A], Fibroadenoma [F], Phyllodes Tumor [PT], and Tubular Adenoma [TA]) or malignant (Ductal Carcinoma [DC], Lobular Carcinoma [LC], Mucinous Carcinoma [MC], and Papillary Carcinoma [PC]) [11, 16].

Figure 3 shows four magnifications of a single breast tissue section with a benign tumor (BreakHis). The distribution of images between the malignant and benign categories for each magnification is presented in Table 1.

Figure 2. The complete process of a biopsy

Figure 3. Benign breast tumor slides [1]

Table 1. Image categorization by benign/malignant categories across various magnifications [1]

| Magnification | A | F | PT | TA | Total Benign | DC | LC | MC | PC | Total Malignant | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 40X | 114 | 253 | 109 | 149 | 625 | 864 | 156 | 205 | 145 | 1370 | 1995 |
| 100X | 113 | 260 | 121 | 150 | 644 | 903 | 170 | 222 | 142 | 1437 | 2081 |
| 200X | 111 | 264 | 108 | 140 | 623 | 896 | 163 | 196 | 135 | 1390 | 2013 |
| 400X | 106 | 237 | 115 | 130 | 588 | 788 | 137 | 169 | 138 | 1232 | 1820 |

2.2 Data augmentation

Effective deep learning training requires ample data to avoid overfitting and improve performance, which can be achieved through data augmentation and geometric transformations. Figure 4 reveals an imbalance in the BreakHis database's class distribution at every magnification level. To mitigate the bias caused by unequal class distribution, we generated wavelet images from minority-class samples, equalizing the number of benign and malignant images. The data were augmented as follows:

The images were resized from 700×460×3 to 224×224×3 pixels, reducing computational complexity and meeting the DenseNet201 input size requirement [17]. The dataset for each magnification level was split into training (96%) and testing (4%) sets through random shuffling [6]. To balance the 96% training portion, wavelet-generated images were added to the benign class; Table 2 gives the image counts after this adjustment. Data augmentation, including flipping, cropping, rotation, scaling, and zooming, was also applied to enhance accuracy and prevent overfitting [14, 18-21]. Examples of augmented images are presented in Figure 5; a code sketch of such a pipeline follows.
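To make this step concrete, the sketch below shows a minimal Keras augmentation pipeline covering the listed geometric transformations. The directory layout and the exact transformation ranges are our assumptions, since the paper does not report them.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical ranges: the paper lists flipping, cropping, rotation,
# scaling, and zooming but does not state the parameters used.
augmenter = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=30,       # rotation, in degrees
    zoom_range=0.1,          # scaling / zooming
    width_shift_range=0.1,   # crop-like translations
    height_shift_range=0.1,
    fill_mode="nearest",
)

# Yields augmented 224x224 RGB batches of size 35 on the fly;
# "breakhis/train" is a hypothetical directory layout.
train_gen = augmenter.flow_from_directory(
    "breakhis/train",
    target_size=(224, 224),
    batch_size=35,
    class_mode="categorical",
)
```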

Figure 4. Distribution of the BreaKhis classes

Figure 5. Examples of augmented images

Table 2. Image distribution before and after wavelet transform (DWT)

| Magnification Factor | Samples | Original | Training after Wavelet Augmentation | Testing |
|---|---|---|---|---|
| Binary | Benign | 2480 | 5429 | 100 |
| Binary | Malignant | 5429 | 5429 | 217 |
| Binary | Total | 7909 | 10858 | 317 |
| Multi-Scale 40X | Benign | 625 | 1315 | 25 |
| Multi-Scale 40X | Malignant | 1370 | 1315 | 55 |
| Multi-Scale 40X | Total | 1995 | 2630 | 80 |
| Multi-Scale 100X | Benign | 644 | 1379 | 26 |
| Multi-Scale 100X | Malignant | 1437 | 1379 | 58 |
| Multi-Scale 100X | Total | 2081 | 2758 | 84 |
| Multi-Scale 200X | Benign | 623 | 1334 | 25 |
| Multi-Scale 200X | Malignant | 1390 | 1334 | 56 |
| Multi-Scale 200X | Total | 2013 | 2668 | 81 |
| Multi-Scale 400X | Benign | 588 | 1182 | 24 |
| Multi-Scale 400X | Malignant | 1232 | 1182 | 50 |
| Multi-Scale 400X | Total | 1820 | 2364 | 74 |

2.3 Discrete wavelet transform (DWT)

Figure 6. Mallat's algorithm

The DWT is a mathematical analysis tool that describes an image in terms of both its spatial and frequency characteristics. It uses filters with different cutoff frequencies: the image is passed through a low-pass filter, associated with the scaling function, and a high-pass filter, associated with the wavelet function. These two filters are applied successively to the entire image. At the output we obtain four frequency bands. The first, low-frequency part (LL) is a kind of average of the original signal, called the approximation image: a downscaled, smoothed version of the original. The second part is a set of three high-frequency sub-bands characterized by their spatial orientation: HL (horizontal), LH (vertical), and HH (diagonal). The detail images (HH, HL, and LH) are usually referred to as wavelet coefficients and outline the boundaries of image regions. This process can be repeated any number of times [22]. Alfred Haar introduced wavelets in 1909 and applied them to represent one-dimensional signals. Stéphane G. Mallat extended the wavelet transform to images (2D signals) and introduced a fast wavelet decomposition/reconstruction algorithm. The algorithm is recursive and mainly based on two operations [23]:

Filtering: Convolution of a signal with a low-pass filter (h0) or a high-pass filter (g0).

Downsampling: Reduces the number of signal samples. Horizontally subsampling an image by a factor of two (1:2) removes every second column, halving the number of pixels per row. Figure 6 shows the Mallat algorithm, which can be explained as follows:

Let $S_j$ represent the approximation image at resolution level $j$, and let $D^x_j$ represent the subband with orientation $x$ (where $x \in \{H, V, D\}$) extracted at resolution level $j$. In the algorithm, the input image $S_j$ is first passed through both high-pass and low-pass filters. The resulting images are then subsampled along one dimension, and each subsampled image is again filtered by both high-pass and low-pass filters, giving a total of four images. These four images are subsampled along the other dimension, resulting in four images of the same size: an approximation image $S_{j+1}$ and three detail images $D^x_{j+1}$, where $x \in \{H, V, D\}$ [22, 23].

To generate wavelet images from benign breast cancer histopathological images, we followed several steps. First, we loaded the images in PNG format. Next, we converted them to grayscale, because wavelet decomposition is more effective on single-channel grayscale images than on three-channel color images with red, green, and blue channels. In grayscale, the wavelet coefficients better capture intensity variations within the image, making details such as contours, textures, and structures more visible. We chose the 'bior1.3' wavelet filter, from the family of biorthogonal wavelets, because it strikes a balance between efficiency and precision, making it suitable for image decomposition. 'bior1.3' consists of a low-pass filter (h0) and a high-pass filter (g0): the low-pass filter captures the approximation components (low frequencies), while the high-pass filter detects the details (high frequencies). Finally, we obtained the wavelet coefficients (LL, LH, HL, HH). The LL coefficients represent the low-frequency approximation, capturing the overall intensity variations, general structures, and trends of the image, while LH, HL, and HH represent high-frequency details in different directions, revealing fine contours, textures, and structures. Together, these coefficients decompose the image into spatial frequencies and orientations, providing a rich representation for analysis and characterization [12].
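This procedure can be sketched with the PyWavelets library as below. The rescaling of subbands to an 8-bit range for saving is our assumption, as the paper does not specify how the subband images are exported; the file names are hypothetical.

```python
import numpy as np
import pywt
from PIL import Image

def wavelet_subbands(png_path):
    """Single-level 2D DWT of a grayscale image with the 'bior1.3' filters.

    Returns the approximation (LL) and the three detail subbands, named
    after the paper's convention: HL (horizontal), LH (vertical), HH (diagonal).
    """
    # Convert the PNG to a single-channel grayscale array first.
    gray = np.asarray(Image.open(png_path).convert("L"), dtype=np.float32)
    # pywt.dwt2 applies the low-pass (h0) and high-pass (g0) filters along
    # both dimensions with downsampling, yielding four half-size subbands.
    LL, (H, V, D) = pywt.dwt2(gray, "bior1.3")
    return {"LL": LL, "HL": H, "LH": V, "HH": D}

def to_uint8(band):
    """Rescale a subband to [0, 255] for saving as PNG (assumed step)."""
    band = band - band.min()
    return (255.0 * band / max(float(band.max()), 1e-8)).astype(np.uint8)

# Hypothetical usage: each subband becomes an extra benign training image.
for name, band in wavelet_subbands("benign_sample.png").items():
    Image.fromarray(to_uint8(band)).save(f"benign_sample_{name}.png")
```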

Figure 7 illustrates the histology image after performing wavelet transform (DWT).

Figure 7. An example of applying the wavelet transform (DWT) to BreakHis images

2.4 Feature extraction by deep learning

Classification of breast cancer histopathology images requires feature extraction, a necessary step to identify and understand image features. To this end, we use convolutional neural networks (CNNs) to automatically extract relevant information from raw images, pixel by pixel, and employ it in the classification process [10]. However, training CNNs requires a large amount of data, making it difficult to obtain suitable training and testing sets. Transfer learning is a machine learning concept introduced to address this: its main purpose is to improve classification accuracy by reusing knowledge from models pre-trained on huge datasets, allowing neural networks to perform well on tasks with limited data [24, 25]. In classification tasks, transfer learning often involves training on ImageNet, a dataset of some 14 million annotated real-world photographs spanning roughly 1000 classes, widely used in computer vision research. Transfer learning has gained popularity across computer vision tasks such as object detection, image classification, and segmentation, proving valuable for machine learning researchers. Popular transfer learning models for classification include AlexNet, VGG, GoogleNet, ResNet, DenseNet, MobileNet, and Inception [6].

2.5 DenseNet structure

In a neural network, the longer the path between the input layer and the output layer, the more information tends to vanish before reaching its destination. DenseNet, in contrast, adopts a compact design in which each layer receives the feature maps of all preceding layers as input and passes its own feature maps to all subsequent layers. This approach relies on the sequential concatenation of feature maps, as expressed in Eq. (1):

$x_l = H_l([x_0, x_1, x_2, \ldots, x_{l-1}])$                   (1)

where $l$ is the layer index, $H_l$ denotes the nonlinear operation, and $x_l$ is the feature map of the $l$th layer [26].

DenseNet demonstrated superior classification performance on benchmark datasets such as CIFAR-100 and ImageNet. Figure 8 summarizes the top-1 validation errors of single-crop evaluations for various widely used pre-trained models on the ImageNet classification task, sourced from [27].

The comparison clearly shows that DenseNet outperforms other pre-trained models, leading to its selection as the foundational model for this study.

Figure 8. Highest performance achieved through the ImageNet classification task [24]

3. Proposed Ensemble Approach

In this section, we outline our proposed deep learning architecture for classifying breast cancer categories using histopathology images from the BreakHis dataset. Since our dataset comprises images captured at various magnification factors, it is crucial to consider the impact of these factors on the visual characteristics of histopathological features: the magnification level affects the clarity, resolution, and quality of the details observable in histopathological images. Classifying images with magnification-specific criteria accounts for these variations in tissue structure and cellular morphology across magnification levels. We additionally propose a magnification-independent classification, in which images are classified irrespective of their magnification factors. This second approach yields a more robust model that generalizes across magnification levels, ultimately improving diagnostic accuracy and model performance.

We started by reserving 4% of the dataset to validate the trained network's reliability. Next, we performed data augmentation on the remaining 96% of images using wavelet decomposition to address class imbalance, equalizing the minority and majority class populations. The dataset was then randomly shuffled and divided into training and validation sets, with proportions of 80% and 16% respectively. Geometric transformations were applied to create multiple image versions for feature extraction with DenseNet201. Yosinski et al. [28] found that network performance may decline when features are extracted only from higher layers. We therefore extract features from several positions in the ConvNet, applying global average pooling (GAP) at each tapped layer for dimensionality reduction. This avoids the Flatten layer of the original DenseNet201 architecture, which requires numerous parameters, invites overfitting, and makes the model difficult to train; GAP reduces a feature map from w×w×c to 1×1×c [20]. The DenseNet201 architecture comprises four dense blocks. Features are extracted from the pool3_pool layer of dense block 2, the pool4_pool layer of dense block 3, and the conv5_block32_concat layer of dense block 4. The pre-trained DenseNet201 is thus refined with GAP applied to the final output of the last three modules, yielding a more robust model. Concatenating these vectors creates a 3072-dimensional representation of the disease image sample. To prevent overfitting, dropout and batch normalization layers are added for regularization. Finally, the network is adapted for binary classification by appending a dense layer with a Softmax function at the end of the architecture [24]. The histopathology image classification framework is detailed in Algorithm 1, illustrated in Figure 9, and sketched in code after the algorithm.

Algorithm 1 Automated classification algorithm for histopathology of breast cancer (BreaKHis).

1: Input:

2: Breast cancer dataset used for training: df1, Breast cancer dataset used for validation: df2, Breast cancer dataset used for Testing: df3, Ep: Epochs, bch: Batch size, Lr: Learning rate, N: coverage per batch size, X: CNN pre-trained model’s weight.

3: Begin: Framework Training

4: Resize each microscopy image in the dataset to 224×224 pixels.

5: Apply wavelet augmentation to achieve a balanced class distribution.

6: Apply geometric data augmentation to expand the size of the dataset.

7: Retrieve the features from the lower layers of DenseNet201: pool3_pool of dense block 2, pool4_pool of dense block 3, and conv5_block32_concat of dense block 4.

8: Merge the extracted features using the concatenate layer.

9: Apply batch normalization, dropout, and softmax to the fine-tuned layers of the CNN.

10: Set the parameters for the pre-trained CNN model initialization: learning rate (Lr), epochs (Ep), batch size (bch), and total samples (N).

11: Train the framework and determine the initial weights.

12: for ep = 1 to Ep do

13: Select N samples from the df1 training set.

14: Perform forward propagation and calculate the cost.

15: Perform backpropagation and update X.

16: end for
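As a concrete illustration, the feature-fusion head can be sketched in Keras as follows. The tapped layer names (pool3_pool, pool4_pool, conv5_block32_concat) exist in tf.keras.applications.DenseNet201, and their pooled widths (256 + 896 + 1920) add up to the 3072-dimensional descriptor described above; the exact ordering of batch normalization and dropout and the fine-tuning policy are our assumptions, as the paper does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(input_shape=(224, 224, 3), num_classes=2):
    # Pre-trained ImageNet weights, original classification head removed.
    base = tf.keras.applications.DenseNet201(
        weights="imagenet", include_top=False, input_shape=input_shape)

    # Tap the outputs of dense blocks 2-4 and apply GAP to each:
    # w x w x c feature maps collapse to 1 x 1 x c vectors
    # (256 + 896 + 1920 = 3072 features in total).
    taps = ["pool3_pool", "pool4_pool", "conv5_block32_concat"]
    pooled = [layers.GlobalAveragePooling2D()(base.get_layer(n).output)
              for n in taps]

    x = layers.Concatenate()(pooled)       # fused 3072-dim descriptor
    x = layers.BatchNormalization()(x)     # regularization (order assumed)
    x = layers.Dropout(0.5)(x)             # dropout probability P = 0.5
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(base.input, outputs)

model = build_model()
model.summary()
```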

Figure 9. The framework suggested for classifying histopathology images

4. Results and Discussion

4.1 Experimental setting

Figure 10 and Figure 11 represent the simulation results of the learning process. Various hyperparameters are utilized to train the proposed framework.

Weights: The proposed model is initialized with pre-trained ImageNet weights.

Optimizer: RMSprop is chosen as the optimizer.

Loss function: Categorical cross-entropy, commonly applied in classification tasks involving two or more label classes, measures the difference between the predicted and true label distributions.

Activation function: After training, classification is performed with the Softmax activation function, which produces a probability vector over the classes for each sample.

Dropout: A dropout layer with probability P = 0.5 is applied to improve network performance.

Learning rate schedule: The learning rate is gradually lowered toward a floor of 1e-7, approaching zero, to keep decreasing the loss function.

Furthermore, a batch size of 35 is used and training runs for 100 epochs.
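Putting these settings together, a minimal training call might look like the sketch below. Using ReduceLROnPlateau with a 1e-7 floor is our reading of the learning-rate description, and train_gen/val_gen stand for the hypothetical augmented generators of Section 2.2.

```python
import tensorflow as tf

# `model` is the DenseNet201 feature-fusion network from Section 3;
# `train_gen` and `val_gen` are hypothetical generators yielding
# 224x224 RGB batches of size 35.
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Assumed schedule: decay the learning rate toward a 1e-7 floor
# whenever the validation loss plateaus.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=5, min_lr=1e-7)

history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=100,              # training runs for 100 epochs
    callbacks=[reduce_lr],   # batch size 35 is set inside the generators
)
```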

Figure 10. BreakHis dataset training progress independent of magnification factors

Figure 11. Training progress of BreakHis dataset based on its magnification factors (40X, 100X, 200X, and 400X)

4.2 Performance metrics

Common statistical parameters are utilized to evaluate the performance of the proposed method.

These parameters are based on various metrics that are derived from the elements of the confusion matrix. The metrics correspond to four terms, namely true positive (TP), false positive (FP), false negative (FN), and true negative (TN).

One can express the computation and evaluation of these statistical metrics as follows:

$Accuracy = \frac{TP+TN}{TP+TN+FP+FN}$                     (2)

$Precision = \frac{TP}{TP+FP}$                    (3)

$Recall = \frac{TP}{TP+FN}$                  (4)

$F1\text{-}score = 2 \times \frac{Precision \times Recall}{Precision + Recall}$                   (5)
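As a quick sanity check, Eqs. (2)-(5) can be evaluated directly from confusion-matrix counts; the sample values below are read off the magnification-independent confusion matrix in Table 3, taking malignant as the positive class.

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute Eqs. (2)-(5) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Malignant as the positive class, magnification-independent results
# from Table 3: TP=218, FP=5, FN=0, TN=212.
acc, prec, rec, f1 = binary_metrics(tp=218, fp=5, fn=0, tn=212)
print(f"acc={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
# -> acc=0.989 precision=0.978 recall=1.000 f1=0.989
```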

4.3 Discussion

Figure 12 and Table 3 present, respectively, the full report of our classification approach on the breast cancer test set and the ROC curve used for the diagnostic evaluation of breast cancer. The BreakHis dataset comprises 7909 microscopic images of breast tumor tissue across various magnification factors (40x, 100x, 200x, and 400x), with 2480 benign and 5429 malignant samples. Each magnification factor exhibits a class imbalance, with more malignant samples than benign ones. To address this, additional wavelet-generated images were introduced into the benign class. With an imbalanced dataset, the majority classes are consistently classified correctly more often than the minority classes; wavelet balancing mitigated this effect by providing more data for the minority class, improving its chances of accurate classification.

From Table 3, our approach yields excellent results in both magnification-independent and magnification-specific binary classification. In the magnification-independent setting, 212 benign and 218 malignant microscopic images are diagnosed accurately: only five benign images were misclassified, and no malignant image was misclassified by our model. The key concern is to avoid misclassifying malignant cancer as benign, as a false sense of security could cause harm through missed opportunities for early treatment. The magnification-independent approach achieves an overall accuracy of 99%, highlighting the model's robustness in classifying samples regardless of magnification scale. For the benign class, the model exhibits a precision of 100% and a recall of 98%, with an F1-score of 99%, indicating reliable identification of benign samples and effective capture of 98% of actual benign cases. For the malignant class, the model exhibits a precision of approximately 98%, a perfect recall of 100%, and an F1-score of 99%. These results highlight the model's remarkable consistency in detecting real malignant cases. The ROC curve in Figure 12 yields an area under the curve of 99.98%, indicating the model's exceptional stability.

In the magnification-specific setting, Table 3 shows that 0, 1, 0, and 0 benign microscopic images are misclassified for the 40x, 100x, 200x, and 400x categories, respectively, while 1, 0, 1, and 1 malignant microscopic images are misclassified for the same categories. With an outstanding overall accuracy of 99% across all scales, these results confirm the consistent robustness of the model and reinforce its reliability for early cancer detection. For the benign class, high performance is maintained at each scale, with precisions of 96%, 100%, 96%, and 96%, respectively. A recall of 96% at 100x magnification and 100% at the other scales confirms the model's ability to capture nearly all real benign cases, and F1-scores of 98% indicate an optimal balance between precision and recall for this class. Similarly, for the malignant class, the model achieves precisions of 100% at 40x, 200x, and 400x and 98% at 100x, demonstrating reliable identification of malignant samples, while recalls between 98% and 100% underscore the detection of the vast majority of real malignant cases. F1-scores averaging 99% highlight the model's excellent performance across all magnifications. These overall results demonstrate the robustness of the cancer detection model and its promising potential for practical application. Table 4 summarizes a comparison, under similar conditions, between existing works and our approach to the BreakHis classification task.

Table 3. Full classification report of our classification approach applied to the breast cancer test set

 

 

 

Rows of the confusion matrix give the actual class; the "Predicted Benign" and "Predicted Malignant" columns give the predicted counts.

| Setting | Actual | Predicted Benign | Predicted Malignant | Support | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|---|---|---|
| 40X | Benign | 25 | 0 | 25 | 0.96 | 1.00 | 0.98 | 0.99 |
| 40X | Malignant | 1 | 54 | 55 | 1.00 | 0.98 | 0.99 | |
| 100X | Benign | 25 | 1 | 26 | 1.00 | 0.96 | 0.98 | 0.99 |
| 100X | Malignant | 0 | 58 | 58 | 0.98 | 1.00 | 0.99 | |
| 200X | Benign | 25 | 0 | 25 | 0.96 | 1.00 | 0.98 | 0.99 |
| 200X | Malignant | 1 | 55 | 56 | 1.00 | 0.98 | 0.99 | |
| 400X | Benign | 24 | 0 | 24 | 0.96 | 1.00 | 0.98 | 0.99 |
| 400X | Malignant | 1 | 49 | 50 | 1.00 | 0.98 | 0.99 | |
| Magnification independent | Benign | 212 | 5 | 217 | 1.00 | 0.98 | 0.99 | 0.99 |
| Magnification independent | Malignant | 0 | 218 | 218 | 0.98 | 1.00 | 0.99 | |

Table 4. A comparison of the proposed framework with other methods on the BreaKHis dataset for the classification task

| No. | Method | Accuracy (%) | Classification Type | Classification Method |
|---|---|---|---|---|
| 1 | Khan et al. (2021) [2] | 99 | Magnification-independent binary | Data augmentation + MultiNet |
| 2 | Saini and Susan (2020) [3] | 40X: 96.5; 100X: 94.0; 200X: 95.5; 400X: 93.0 | Magnification-specific binary | With batch normalization, DCGAN samples, and hyperparameter tuning |
| 3 | Liew et al. (2021) [6] | 97 | Magnification-independent binary | Data resampling + DenseNet201 and XGBoost |
| 4 | Toğaçar et al. (2020) [29] | 98.80 | Magnification-independent binary | Data augmentation + BreastNet |
| 5 | Han et al. (2017) [30] | 40X: 95.8±3.1; 100X: 96.9±1.9; 200X: 96.7±2.0; 400X: 94.9±1.8 | Magnification-specific binary | Data over-sampling + CSDCNN model |
| 6 | Djouima et al. (2022) [1] | 40X: 96; 100X: 95; 200X: 88; 400X: 92 | Magnification-specific binary | DCGAN augmentation |
| 7 | Proposed | 99 | Magnification-independent binary | Wavelet transform data augmentation + DenseNet201 blocks |
| 8 | Proposed | 40X: 99; 100X: 98; 200X: 99; 400X: 99 | Magnification-specific binary | Wavelet transform data augmentation + DenseNet201 blocks |

Figure 12. Binary classification performance for the BreaKHis dataset with and without magnification factors

5. Conclusion

This paper proposes enhancing the accuracy of classifying H&E-stained breast cancer histology images by employing wavelet decomposition and the pre-trained DenseNet201 model. Wavelet-based augmentation addresses the class imbalance that degrades deep learning performance in image classification, with spectral features playing a role analogous to spatial ones. Previous studies did not use the wavelet transform to tackle class imbalance in deep learning on the BreakHis breast cancer dataset; instead, they focused on wavelet fusion of spectral and spatial features to enhance breast cancer classifier performance. An additional advantage of wavelet-based image decomposition is its efficiency in reducing convolution time and conserving computational resources in CNNs, while maintaining performance and enhancing accuracy.

The effectiveness of an approach often relies on extracting the most pertinent features. We opted for the DenseNet201 transfer-learning model, known for its robust feature extraction and superior accuracy compared to other deep transfer learning models. Our proposal concatenates the features extracted from the layers of the pre-trained DenseNet201 model, working on each of the three tapped blocks separately. The global-average-pooling approach was implemented under two taxonomies for classifying H&E-stained histological images of breast cancer: magnification-specific binary classification and magnification-independent binary classification. The experiments reached a classification accuracy of 99% across all magnification factors (magnification independent), and 99%, 98%, 99%, and 99% respectively at the individual scales.

We conclude that wavelet-transform data augmentation is an effective preliminary measure against class imbalance and led to a significant improvement in the accuracy of our classification model. This demonstrates the usefulness of wavelet transforms in medical image analysis and their potential in healthcare and clinical settings to aid disease diagnosis. Feature concatenation played a crucial role in improving the model's performance, capturing discriminative features at different levels of abstraction more comprehensively. This approach decreased the network's parameter count, prevented overfitting, and enhanced the robustness and generalization of our model; it also outperformed extracting features from the last layer alone, emphasizing the effectiveness of our approach over other methods. These results underscore the significance of feature concatenation in improving deep learning model performance for histopathology image classification. Future work should evaluate the generalization of our model to histopathology image datasets beyond BreakHis, such as the BACH dataset and datasets covering different cancer types.

References

[1] Djouima, H., Zitouni, A., Megherbi, A.C., Sbaa, S. (2022). Classification of breast cancer histopathological images using DensNet201. In 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA), Mostaganem, Algeria, IEEE, pp. 1-6. https://doi.org/10.1109/ISPA54004.2022.9786028

[2] Khan, S.I., Shahrior, A., Karim, R., Hasan, M., Rahman, A. (2022). MultiNet: A deep neural network approach for detecting breast cancer through multi-scale feature fusion. Journal of King Saud University-Computer and Information Sciences, 34(8): 6217-6228. https://doi.org/10.1016/j.jksuci.2021.08.004

[3] Saini, M., Susan, S. (2020). Deep transfer with minority data augmentation for imbalanced breast cancer dataset. Applied Soft Computing, 97: 106759. https://doi.org/10.1016/j.asoc.2020.106759

[4] Yari, Y., Nguyen, H. (2020). A state-of-the-art deep transfer learning-based model for accurate breast cancer recognition in histology images. In 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE), Cincinnati, OH, USA, pp. 900-905. https://doi.org/10.1109/BIBE50027.2020.00153

[5] Vo, D.M., Nguyen, N.Q., Lee, S.W. (2019). Classification of breast cancer histology images using incremental boosting convolution networks. Information Sciences, 482: 123-138. https://doi.org/10.1016/j.ins.2018.12.089

[6] Liew, X.Y., Hameed, N., Clos, J. (2021). An investigation of XGBoost-based algorithm for breast cancer classification. Machine Learning with Applications, 6: 100154. https://doi.org/10.1016/j.mlwa.2021.100154

[7] LeCun, Y., Bengio, Y., Hinton, G. (2015). Deep learning. Nature, 521(7553): 436-444. https://doi.org/10.1038/nature14539

[8] Hameed, Z., Zahia, S., Garcia-Zapirain, B., Javier Aguirre, J., Maria Vanegas, A. (2020). Breast cancer histopathology image classification using an ensemble of deep learning models. Sensors, 20(16): 4373. https://doi.org/10.3390/s20164373

[9] Lu, S., Lu, Z., Zhang, Y.D. (2019). Pathological brain detection based on AlexNet and transfer learning. Journal of Computational Science, 30: 41-47. https://doi.org/10.1016/j.jocs.2018.11.008

[10] Khan, S., Islam, N., Jan, Z., Din, I.U., Rodrigues, J.J.C. (2019). A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognition Letters, 125: 1-6. https://doi.org/10.1016/j.patrec.2019.03.022

[11] Benhammou, Y., Achchab, B., Herrera, F., Tabik, S. (2020). BreakHis based breast cancer automatic diagnosis using deep learning: Taxonomy, survey and insights. Neurocomputing, 375: 9-24. https://doi.org/10.1016/j.neucom.2019.09.044

[12] Mewada, H.K., Patel, A.V., Hassaballah, M., Alkinani, M.H., Mahant, K. (2020). Spectral-spatial features integrated convolution neural network for breast cancer classification. Sensors, 20(17): 4747. https://doi.org/10.3390/s20174747

[13] Aksac, A., Demetrick, D.J., Ozyer, T., Alhajj, R. (2019). BreCaHAD: A dataset for breast cancer histopathological annotation and diagnosis. BMC Research Notes, 12(1): 1-3. https://doi.org/10.1186/s13104-019-4121-7

[14] Bayramoglu, N., Kannala, J., Heikkilä, J. (2016). Deep learning for magnification independent breast cancer histopathology image classification. In 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, IEEE, pp. 2440-2445. https://doi.org/10.1109/ICPR.2016.7900002

[15] Bardou, D., Zhang, K., Ahmad, S.M. (2018). Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access, 6: 24680-24693. https://doi.org/10.1109/ACCESS.2018.2831280

[16] Zhu, C., Song, F., Wang, Y., Dong, H., Guo, Y., Liu, J. (2019). Breast cancer histopathology image classification through assembling multiple compact CNNs. BMC Medical Informatics and Decision Making, 19(1): 1-17. https://doi.org/10.1186/s12911-019-0913-x

[17] Lu, J., Behbood, V., Hao, P., Zuo, H., Xue, S., Zhang, G. (2015). Transfer learning using computational intelligence: A survey. Knowledge-Based Systems, 80: 14-23. https://doi.org/10.1016/j.knosys.2015.01.010

[18] Gandomkar, Z., Brennan, P.C., Mello-Thoms, C. (2018). MuDeRN: Multi-category classification of breast histopathological image using deep residual networks. Artificial Intelligence in Medicine, 88: 14-24. https://doi.org/10.1016/j.artmed.2018.04.005

[19] Alom, M.Z., Taha, T.M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M.S., Hasan, M., Van Essen, B.C., Awwal, A.A.S., Asari, V.K. (2019). A state-of-the-art survey on deep learning theory and architectures. Electronics, 8(3): 292. https://doi.org/10.3390/electronics8030292

[20] Rakhlin, A., Shvets, A., Iglovikov, V., Kalinin, A.A. (2018). Deep convolutional neural networks for breast cancer histology image analysis. In Image Analysis and Recognition: 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 15: 737-744. https://doi.org/10.1007/978-3-319-93000-8_83

[21] Toğaçar, M., Ergen, B., Cömert, Z. (2020). COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Computers in Biology and Medicine, 121: 103805. https://doi.org/10.1016/j.compbiomed.2020.103805

[22] Lipiński, P. (2012). On domain selection for additive, blind image watermarking. Bulletin of the Polish Academy of Sciences. Technical Sciences, 60(2): 317-321. https://doi.org/10.2478/v10175-012-0042-5

[23] Mallat, S.G. (1989). A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7): 674-693. https://doi.org/10.1109/34.192463

[24] Kumar, A., Singh, S.K., Saxena, S., Lakshmanan, K., Sangaiah, A.K., Chauhan, H., Shrivastava, S., Singh, R.K. (2020). Deep feature learning for histopathological image classification of canine mammary tumors and human breast cancer. Information Sciences, 508: 405-421. https://doi.org/10.1016/j.ins.2019.08.072

[25] Yang, L., Hanneke, S., Carbonell, J. (2013). A theory of transfer learning with applications to active learning. Machine Learning, 90: 161-189. https://doi.org/10.1007/s10994-012-5310-y

[26] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708.

[27] Wang, S.H., Zhang, Y.D. (2020). DenseNet-201-based deep neural network with composite learning factor and precomputation for multiple sclerosis classification. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 16(2s): 1-19. https://doi.org/10.1145/3341095

[28] Yosinski, J., Clune, J., Bengio, Y., Lipson, H. (2014). How transferable are features in deep neural networks? Advances in Neural Information Processing Systems, 27.

[29] Toğaçar, M., Özkurt, K.B., Ergen, B., Cömert, Z. (2020). BreastNet: A novel convolutional neural network model through histopathological images for the diagnosis of breast cancer. Physica A: Statistical Mechanics and its Applications, 545: 123592. https://doi.org/10.1016/j.physa.2019.123592

[30] Han, Z., Wei, B., Zheng, Y., Yin, Y., Li, K., Li, S. (2017). Breast cancer multi-classification from histopathological images with structured deep learning model. Scientific Reports, 7(1): 4172. https://doi.org/10.1038/s41598-017-04075-z