Deep Learning Networks for Non-Destructive Detection of Food Irradiation

Heba Nada* | Osama Omer | Hamada Esmaiel | Mahmoud Ashour  | Amany Arafa

Department of Electrical Engineering, Faculty of Engineering, Aswan University, Aswan 81528, Egypt

Radiation Engineering Department, National Center for Research and Radiation Technology, Egyptian Atomic Energy Authority, Cairo 11787, Egypt

Corresponding Author Email: heba_m_nada@yahoo.com
Page: 551-556 | DOI: https://doi.org/10.18280/ria.370303

Received: 21 March 2023 | Revised: 12 May 2023 | Accepted: 22 May 2023 | Available online: 30 June 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Verifying the authenticity of food and guaranteeing its safety have become matters of great importance. Food irradiation is one of the most widely used methods for eliminating food contaminants. This work therefore proposes an effective, non-destructive, deep-learning-based method for identifying foods that have been exposed to gamma radiation, offering a fast alternative to the conventional destructive detection techniques. The proposed method detects changes in the colour spectrum of irradiated samples and is tested on apple images irradiated at 0.5, 1, 1.5, 2 and 2.5 kGy. The findings demonstrate that, using image processing technology, food exposed to radiation can be identified accurately. Deep learning raises the classification score to 94%-100%, improving on the 85% achieved by a machine learning method. The suggested method is attractive because of its usability, fast measurement, and the fact that it requires no special skills or sample preparation.

Keywords: 

deep learning (DL), deep transfer learning (DTL), food quality, irradiated food, non-destructive, linear discriminant analysis (LDA)

1. Introduction

Ongoing research around the world addresses the validity of food and the need to ensure that it is free of viruses and microbes; hence the importance of sterilization methods.

The safety of using irradiation on food has been approved by the FDA, but some consumers still do not fully accept it. In this procedure, ionising radiation (gamma rays, electron beams, or X-rays) is used to kill bacteria. Irradiation remains a recognised method, although some nations approve it only for specific products [1, 2].

Food processing with radiation is the primary technique for eliminating food contaminants and obtaining food free of microbes, fungi, and parasites harmful to human health. Food irradiation is performed by exposing samples of fruits and vegetables to a specific dose of ionizing radiation, such as gamma rays, which prevents budding, delays ripening and mould growth, and kills microbes, insects, worms, and parasites, thereby preserving food safely for as long as possible [3].

It is important to make sure that food samples have been exposed to the appropriate doses before they are handled. Several methods therefore exist to detect whether food has been irradiated, and many of them are internationally recognized physical and chemical methods; however, they are expensive and time-consuming in analysis [1, 2], in addition to being destructive to the food samples.

Food irradiation refers to a method used to avoid spoilage, eliminate foodborne pathogens, and eradicate harmful bacteria, pests, or parasites. The assurance that fresh fruits and vegetables satisfy specified standards is typically the main criterion used to evaluate their quality and safety for ingestion. External characteristics such as size, shape, colour, gloss, and consistency, as well as texture (firmness, crispness, and toughness) and flavour (sweetness and sourness), are included in these standards.

Our challenge is to detect the effects of irradiation on food samples that appear normal to the eye, using image processing and machine learning techniques such as linear discriminant analysis (LDA) and deep transfer learning (DTL) [4].

A non-destructive method for detecting irradiated foods is necessary to ensure adherence to current regulations, enable consumers to make informed choices, and simplify international food trade.

Irradiation detection methods can be classified as analytical methods, such as chemical and biological methods, each at a different stage of development. Automated assessment makes it possible to judge food quality while preserving accuracy, dependability, and consistency and removing the subjectivity of manual inspections [5].

One of the most popular physical methods for detecting the effect of irradiation is electron spin resonance (ESR) [6, 7]. A variety of variables, including fruit type and radiation dose, influence the effectiveness of ESR detection, and a suitable ESR device is required; moreover, if the seeds are ground, the sample is lost. Gas chromatography/mass spectrometry (GC/MS) is a chemical method applied to the detection of volatile hydrocarbons. To apply any of these tests, the customer loses the sample to grinding operations and laboratory testing.

Additionally, the use of image processing software as a non-destructive tool for classifying and identifying fruit damage is growing globally. Much research has focused on apples because they are a strategic fruit in the fruit and vegetable market. A key characteristic of gamma-ray irradiation is that the appearance of the fruit does not change in a way that is apparent to the naked eye. Therefore, by grading and tracking the colour evolution of apples using RGB image vision cameras, this research provides a non-destructive way of identifying apples exposed to radiation. Using deep learning algorithms, the technique recognises differences in the colour intensity of a sample before and after exposure to the radiation dose on the same day.

Deep learning (DL) plays a significant role in statistics and predictive modeling across various domains. It involves gathering vast quantities of data and scrutinizing it to construct several predictive models that help identify trends and patterns present in the data [8, 9].

A machine can extract features from raw information for detection, classification, or regression via representation learning. Deep learning, which uses deep artificial neural networks (ANNs) with several layers, is a type of representation learning that refines representations across multiple layers. The powerful feature-learning capacity of DL enables it to address numerous complicated problems quickly and efficiently [10].

The powerful automatic feature learning capability of deep learning has led to its application in vital fields such as medicine and food science. This includes tasks such as recognizing food categories, detecting fruit and vegetable quality, and estimating food calories. DL models have demonstrated strong abilities in classification and regression tasks, but their effectiveness largely depends on having a sufficient amount of data that accurately represents the specific problem [11, 12]. Classification models based on convolutional neural networks have achieved high success rates for size and appearance classification [13, 14] in applications such as the potato quality grading system built on machine learning models containing 3D potato appearance data [15, 16].

The main goal of this research is to apply a deep learning algorithm to detect the effect of irradiation on food samples and to classify samples that have been exposed to radiation from those that have not.

The article is organized into five sections: Section 1 introduces the problem and recent research in the field; Section 2 reviews related work; Section 3 explains the proposed algorithm; Section 4 presents the acquired results; and Section 5 summarizes the main findings of the study.

2. Related Work

Many research investigations use image processing as a non-destructive way for classifying and identifying fruit deterioration. In order to provide consumers with food products that are free of defects, food quality evaluation is crucial.

Assessment in the food industry continues to rely heavily on manual inspection, which is not only tedious, laborious, and expensive but also prone to subjective and inconsistent evaluation results due to physiological factors. Quality is a critical aspect of the modern food industry because the success of a product in today's highly competitive market largely depends on its high quality [11-13]. The internal and external properties of a product determine its quality [14-16]. Monitoring of the dehydration process involved more than simply measuring weight and moisture content; it also entailed the use of computer vision to visually analyze alterations in the food's appearance [17, 18]. The advantage of these techniques is that they are non-invasive and can be put into practice without requiring costly laboratory equipment.

The crucial visual characteristics identified in the three RGB channels are entropy, skewness, and contrast. Htike et al. [19] evaluated impact bruising in guava using image processing and response surface methodology (RSM); their study helped decrease the occurrence of impact bruising in guava across its supply chain, and they recommend handling the fruit with care and storing it under cool conditions to minimize impact energy.

Rizwan Iqbal and Hakim [20] provided a deep learning-based approach for automated classification and grading of eight cultivars of harvested mangoes based on aesthetic qualities such as colour, size, shape, and texture, in order to reduce the amount of manually harvested mangoes that are discarded.

3. Proposed Method

Our work suggests a method, based on deep learning algorithms, to classify samples that were exposed to radiation from those that were not. The basic step is building a database by capturing images with an RGB camera before and after irradiation.

3.1 Experimental design

In designing our experiment there were three main points. The first is the choice of a specific type of fruit; apples were chosen because they are common around the world. The second is the exposure of the samples to radiation at appropriate, internationally recognized doses. The third is how to handle the images and analyze them with appropriate algorithms.

3.2 Apple data set

The first step is to build an idealized photography setup with a standardized image-capturing system and to prepare the samples so that a database can be built for analysis.

The imaging setup for capturing the apple images must be standardized: the imaging location and camera position are fixed, and the distance between the camera and the samples is set to 30 cm. The lab equipment comprises a Sony Cyber-shot W200 high-performance 3CCD (three charge-coupled device) camera.

Golden apples were purchased from local stores for the experiment and stored at refrigerated temperature (4℃).

A total of three apples without any physical defects (identifiable visually) were randomly selected for each irradiation dose; together they spanned a range of natural commercial colour variation.

Each group contains a sample of apples, and each apple carries a label specifying its radiation dose group. For each apple, three pictures are taken from three distinct angles, covering the apple's label, its right and left sides, and its back. With three apples per group, three views per apple, and five dose groups, the dataset comprises 45 images before irradiation and 45 images after irradiation. Figure 1 shows examples of the labelled apples; they were carefully selected to be free of apparent defects so that the effect of radiation on colour change in the samples could be investigated.

Figure 1. Apple samples group

Three apples make up each sample group, and each apple has a sticker identifying its number and radiation dose group. Images of each apple are captured both before and after radiation treatment and saved in RGB colour at 2048 × 1536 pixels. Since there is no visual distinction between the apple samples, the image processing DL technique is employed to find the alterations in the colouring profile.
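As a concrete illustration of how such a labelled image set could be organised, the following is a minimal Python sketch; the folder name, file-naming scheme, and field names are hypothetical and not the authors' actual layout.

```python
from pathlib import Path

# Hypothetical file-naming scheme for the 90 apple images: dose group, apple
# number, view, and irradiation status encoded in each filename, e.g.
# "dose0.5_apple1_view2_before.jpg". Folder and naming are assumptions.
image_dir = Path("apple_images")
records = []
for path in sorted(image_dir.glob("*.jpg")):
    dose, apple, view, status = path.stem.split("_")
    records.append({
        "file": path.name,
        "dose": dose.replace("dose", ""),
        "apple": apple,
        "view": view,
        "irradiated": status == "after",
    })
print(len(records))  # expected: 90 (45 before + 45 after irradiation)
```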

3.3 Irradiation

The apple food samples were exposed to gamma radiation from a Cobalt-60 source. FDA regulations specify minimum and maximum radiation doses for food samples. The samples were irradiated with doses of 0.5 kGy, 1 kGy, 1.5 kGy, 2 kGy and 2.5 kGy, starting from the minimum dose and remaining below the maximum doses so as not to damage the samples. The experiment was carried out at the Gamma-ray Research Units of Egypt's National Center for Radiation Research and Technology (NCRRT).

3.4 Deep learning algorithm

The technique of creating a deep learning architecture and producing a model through the iterative application of functions in multiple layers is known as a deep neural network (DNN) or deep learning (DL) [20]. Although its interpretability is not as good as that of classical learning, deep learning has had a substantial impact on conventional machine learning techniques. The fundamental layers of deep learning algorithms are illustrated in Figure 2. The first layer is responsible for extracting different features from the input images. This is achieved by a mathematical operation, known as convolution, between the input image and a filter of a specific size, typically M×M. The pooling layer then summarizes the features generated by the convolution layer: in sum pooling, the total of the elements within the predefined window is computed, whereas in max pooling the largest element of each window in the feature map is retained.

The fully connected (FC) layer, which comes before the output layer, is made up of the weights and biases as well as the neurons that link the neurons of two separate layers. Based on the information acquired in the earlier steps, the FC layer uses the output of the convolution process to predict the image's class.
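To make the roles of these layers concrete, the following is a minimal, illustrative PyTorch sketch of a convolution, pooling, and fully connected stack; the layer sizes are arbitrary examples and do not correspond to the networks evaluated in this paper.

```python
import torch
import torch.nn as nn

# Minimal illustrative CNN: convolution -> pooling -> fully connected.
# Channel counts and filter sizes are arbitrary examples, not the
# architectures (AlexNet, VGG, etc.) used in this study.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution of an MxM filter with the RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # max pooling keeps the largest element per window
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer before the output

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = TinyCNN()
out = model(torch.randn(1, 3, 224, 224))  # e.g., one 224x224 RGB apple image
print(out.shape)                          # torch.Size([1, 2])
```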

A model trained on one problem can be reused for another by means of a pre-trained model, which in turn speeds up training and improves learning performance [7].

Figure 2. The basic layers in deep learning

3.5 Deep transfer learning (DTL)

Deep transfer learning is the basis of our suggested algorithm. The input layer is in charge of transferring information from one layer to another. The pre-trained networks we employed allow layers to be transferred and tuned, as well as frozen and retrained selectively [21].

The first stage of the proposed method is preparing the database of non-irradiated and gamma-irradiated samples, as shown in Figure 3. The data preparation includes cropping and data augmentation.

Data augmentation is used to increase the amount of data available for training a model [22]. Deep learning models generally require a lot of training data for reliable predictions, which is not always available. The augmentation techniques used are translation, auto-contrast, rotation, scaling, and shear.
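A possible implementation of these augmentations, assuming torchvision transforms (the specific ranges mirror Table 2; expressing the translation as a fraction of a 224×224 image is an assumption of this sketch):

```python
from torchvision import transforms

# Illustrative augmentation pipeline (assuming torchvision); ranges follow Table 2.
augment = transforms.Compose([
    transforms.RandomAutocontrast(),                  # auto-contrast
    transforms.RandomRotation(degrees=30),            # rotate within [-30, 30] degrees
    transforms.RandomAffine(
        degrees=0,
        translate=(7 / 224, 7 / 224),                 # shift up to ~7 pixels horizontally/vertically
        shear=(-3, 3),                                # shear within [-3, 3] degrees
    ),
    transforms.ToTensor(),
])
```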

Figure 3. The proposed method

The architectures of pre-trained networks can be customized extensively and have different hyperparameters. In a more advanced variant of transfer learning, layers are selectively retrained: a few layers are frozen (i.e., their weights are fixed) while the remaining layers are fine-tuned during training. Accuracy and speed are the two primary considerations, as in most machine learning projects. In our experiment, we chose pre-trained models such as Digits-Net, VGG-19, AlexNet, and DarkNet.
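For comparison, a few of these pre-trained backbones can be loaded directly from torchvision; this is a sketch under the assumption that a PyTorch workflow is acceptable (Digits-Net and DarkNet are not provided by torchvision and are omitted here).

```python
from torchvision import models

# Load some of the pre-trained backbones named above with ImageNet weights.
backbones = {
    "alexnet": models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1),
    "vgg16": models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1),
    "vgg19": models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1),
}
for name, net in backbones.items():
    n_params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```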

3.5.1 Transfer learning

Because deep learning training systems are so large and require so many resources, transfer learning and feature transfer are used in many deep learning applications [23]. Transfer learning can be implemented in six general steps. The first step is selecting the previously learned model to employ with the dataset.

The second step is adjusting the available dataset to match the input layer of the pre-trained network.

Pre-trained weights are an optional extra that can be downloaded. If the weights are not downloaded, only the architecture is used and the model must be trained from scratch.

The base model usually has more output units in its final layer than we need. During creation of the base model, the final output layer must therefore be removed or cut, as shown in Figure 4, and a new output layer compatible with the target dataset is added.

The third step is freezing layers: the pre-trained model's layers are frozen so they do not change during training. This is because we do not want the weights in the feature-extraction layers to be re-initialized; if they were, all the learning already done would be lost and the model would have to be trained from scratch.

The fourth step is to add new trainable layers that turn the old features into predictions on the new dataset.

Because the pre-trained model is loaded without its final output layer, adding these new trainable layers is significant. In the fifth step, the new layers are trained on the dataset; a minimal sketch of these steps is shown below.
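The following is a minimal sketch of the steps above, assuming torchvision's VGG-16 ImageNet weights and a two-class head ("non-irradiated" vs. "irradiated"); it is an illustration, not the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Step 1: load a pre-trained base (step 2 resizes inputs to its expected 224x224x3).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Step 3: freeze the feature-extraction layers so their weights do not change.
for param in model.features.parameters():
    param.requires_grad = False

# Step 4: cut the original 1000-class output layer and add a new trainable
# output layer for the two target classes.
in_features = model.classifier[6].in_features
model.classifier[6] = nn.Linear(in_features, 2)

# Step 5: train only the new (unfrozen) layers on the apple dataset (loop omitted).
```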

3.5.2 Classification layer

The classification layer uses the characteristics found in the image to identify a specific object in the picture. The features produced in the feature-extraction layers represent visual features in a hierarchical fashion before being translated into higher-level features.

After a new classification layer is trained via feature transfer for the relevant domain, using the input and feature-extraction layers already learned on a specific dataset, the features from this layer are integrated to perform the classification.

Figure 4. The basic architecture of transfer learning

Table 1. The basic network architecture

|               | Model 1   | Model 2   | Model 3   | Model 4   | Model 5   |
|---------------|-----------|-----------|-----------|-----------|-----------|
| CNN           | Alex-net  | VGG-19    | Digit-net | VGG-16    | Dark-net  |
| Input size    | 227×227×3 | 224×224×3 | 28×28×1   | 224×224×3 | 256×256×7 |
| Learning rate | 5e-3      | 5e-3      | 5e-3      | 5e-3      | 5e-3      |
| Loss function | cross-entropy (sgdm) | cross-entropy (sgdm) | cross-entropy (sgdm) | cross-entropy (sgdm) | cross-entropy (sgdm) |

4. Result and Discussion

4.1 Experimental setup

The dataset is built from the apple samples before exposure to radiation (Cobalt-60 source) and after irradiation with different doses. The training data contains 80% of the images, with 10% for validation and 10% for testing; the total number of images is 90. Pre-trained CNN models, namely AlexNet, VGG-19, VGG-16, Digit-net, and DarkNet, are used to classify the apple samples before and after irradiation.
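One way to realise this split, assuming the images are arranged in class sub-folders under a hypothetical folder name and a PyTorch pipeline, is sketched below.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical 80% / 10% / 10% split of the 90-image apple set (72 / 9 / 9).
# The folder name "apple_dataset" (one sub-folder per class) and the fixed
# random seed are assumptions of this sketch.
dataset = datasets.ImageFolder(
    "apple_dataset",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),   # match the input size expected by VGG-type networks
        transforms.ToTensor(),
    ]),
)
generator = torch.Generator().manual_seed(0)
train_set, val_set, test_set = random_split(dataset, [72, 9, 9], generator=generator)
print(len(train_set), len(val_set), len(test_set))  # 72 9 9
```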

The AlexNet network is a deep network with five convolutional layers and three fully connected layers, and it is successful in classification problems and image recognition when the dataset is defined correctly [23].

The VGG-19 design has around 24 primary layers and approximately 144 million parameters. Filters are used in the convolutional layers to limit the number of parameters, because the network is deep. Specifically, the VGG-19 architecture consists of 16 convolutional, 5 pooling, and 3 fully connected layers, with a chosen filter size of 3×3 pixels. The deployed CNNs rely on data augmentation methods to increase the volume of input images, and the input size is adjusted to match each network's basic architecture. Table 1 shows the network parameters, including the learning rate, which regulates the degree to which the model is adjusted in response to the estimated error each time the weights are updated.
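A training-configuration sketch following Table 1 is shown below: cross-entropy loss with an SGD-with-momentum ("sgdm") optimizer at a learning rate of 5e-3. The momentum value (0.9) and the two-class head are assumptions of this sketch, not values reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained AlexNet with a new two-class output layer (illustrative).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()                                       # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)  # "sgdm" at lr = 5e-3
```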

Data augmentation was performed using three methods: translation, rotation, and shear. The translation method shifts an image in the vertical and horizontal direction by a fixed magnitude, as shown in Table 2. The rotation method rotates the image by a fixed number of degrees. The shear method shears the image with a given magnitude along an axis, either horizontal or vertical.

The loss function used is the cross-entropy loss, which measures how well the neural network model fits the training data by comparing the target values with the predicted output values.
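For completeness, the standard form of the cross-entropy loss over N training images and C classes (here C = 2) is:

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c}$$

where $y_{i,c}$ equals 1 if image $i$ belongs to class $c$ and 0 otherwise, and $\hat{y}_{i,c}$ is the predicted probability of class $c$.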

4.2 Experimental results

The experimental results obtained with the CNN models (VGG-16, VGG-19, Digit-net, AlexNet, and DarkNet) are shown in Table 3 in terms of classification accuracy. The validation accuracy is 93.3% using Digit-net and 100% using VGG-16 and the other networks. After 50 training iterations, the training accuracy reaches around 100% on the input images.
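A sketch of how the reported test-set accuracy could be computed is given below; it assumes `model` and `test_set` from the earlier hypothetical sketches.

```python
import torch
from torch.utils.data import DataLoader

# Evaluate the trained model on the held-out test images.
test_loader = DataLoader(test_set, batch_size=8)

model.eval()
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)        # predicted class per image
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"accuracy = {100.0 * correct / total:.1f}%")
```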

The basic augmentation techniques listed in Table 2 (translation, rotation, and shearing) were sufficient to deliver good outcomes without resorting to more intricate techniques.

Table 2. The augmentation methods

| Name        | Range     |
|-------------|-----------|
| Rotation    | [-30, 30] |
| Translation | [-7, 7]   |
| Shear       | [-3, 3]   |

Table 3. The proposed method results

| CNN       | Accuracy |
|-----------|----------|
| Alex.net  | 100%     |
| VGG19     | 100%     |
| Digit net | 93.3%    |
| VGG 16    | 100%     |
| Dark net  | 100%     |
| LDA [4]   | 85%      |

The training set can be expanded by utilizing data augmentation techniques that involve applying image processing methods, like rotation and translation. Our experimental findings demonstrate that these augmentation techniques produce favorable outcomes even when dealing with small datasets.

We trained the CNNs using a training set augmented with these simple techniques, and as shown in Table 3, the proposed method outperforms the previous method based on an LDA classifier [4].

5. Conclusion

Deep learning based irradiation detection is proposed in this paper. The simulation results show that pre-trained networks obtain a more reliable result than the earlier static method using LDA [4]: with LDA, irradiated apples can be distinguished from non-irradiated apples with about 85% of samples classified correctly, whereas DTL enhances detection accuracy to up to 100%.

We demonstrate that radiation exposure can be confirmed by non-destructive methods rather than conventional destructive methods. The proposed method is based on RGB images to detect the changes in irradiated samples.

References

[1] Kiani, D., Borzouei, A., Ramezanpour, S., Soltanloo, H., Saadati, S. (2022). Application of gamma irradiation on morphological, biochemical, and molecular aspects of wheat (Triticum aestivum L.) under different seed moisture contents. Scientific Reports, 12(1): 11082. https://doi.org/10.1038/s41598-022-14949-6

[2] Chauhan, S.K., Kumar, R., Nadanasabapathy, S., Bawa, A.S. (2009). Detection methods for irradiated foods. Comprehensive Reviews in Food Science and Food Safety, 8(1): 4-16. https://doi.org/10.1111/j.1541-4337.2008.00063.x

[3] Food and Drug Administration (FDA). Food Facts. http://www.fda.gov/downloads/Food/IngredientsPackagingLabeling/UCM262295.pdf.

[4] Nada, H.M., Arafa, A.A., Tarrad, I.F., Ashour, M. (2021). Non-destructive detection for irradiated apple using image processing. International Journal of Computer Applications, 183(24): 20-24. https://doi.org/10.5120/ijca2021921609

[5] Horak, C.I., Di Giorgio, M., Kairiyama, E. (2009). Identification of irradiated apples for phytosanitary purposes. Radiation Physics and Chemistry, 78(7-8): 707-709. https://doi.org/10.1016/j.radphyschem.2009.03.054

[6] Mounir, A.M., El-Hefny, A.M., Mahmoud, S.H., El-Tanahy, A.M.M. (2022). Effect of low gamma irradiation doses on growth, productivity and chemical constituents of Jerusalem artichoke (Helianthus tuberosus) tubers. Bulletin of the National Research Centre, 46(1): 146. https://doi.org/10.1186/s42269-022-00838-5

[7] Kim, K.H., Shon, J.H., Kang, Y.J., Jo, T.Y., Park, H.Y., Kwak, J.Y., Lee, J.H., Park, J.I., Lee, H.J., Lee, S.J., Han, S.B. (2013). Detection characteristics of gamma-irradiated seeds by using PSL, TL, ESR and GC/MS. Journal of Food Hygiene and Safety (Seoul), 28(2): 130-137. https://doi.org/10.13103/JFHS.2013.28.2.130

[8] Zhu, L., Spachos, P., Pensini, E., Plataniotis, K.N. (2021). Deep learning and machine vision for food processing: A survey. Current Research in Food Science, 4: 233-249. https://doi.org/10.1016/j.crfs.2021.03.009

[9] Pattnayak, S.B., Patra, T.K. (2020). An image processing approach to detect fruit damage. International Research Journal of Engineering and Technology (IRJET), 7(7): 667-671.

[10] Ireri, D., Belal, E., Okinda, C., Makange, N., Ji, C. (2019). A computer vision system for defect discrimination and grading in tomatoes using machine learning and image processing. Artificial Intelligence in Agriculture, 2: 28-37. https://doi.org/10.1016/j.aiia.2019.06.001

[11] Xiao, Z.F., Wang, J.L., Han, L., Guo, S.B., Cui, Q.H. (2022). Application of machine vision system in food detection. Frontiers in Nutrition, 9. https://doi.org/10.3389/fnut.2022.888245

[12] Vesali, F., Gharibkhani, M., Komarizadeh, M.H. (2011). An approach to estimate moisture content of apple with image processing method. Australian Journal of Crop Science, 5(2): 111-115.

[13] Meenu, M., Kurade, C., Neelapu, B.C., Kalra, S., Ramaswamy, H.S., Yu, Y. (2021). A concise review on food quality assessment using digital image processing. Trends in Food Science & Technology, 118: 87-105. 

[14] Jackman, P., Sun, D.W., ElMasry, G. (2012). Robust colour calibration of an imaging system using a colour space transform and advanced regression modelling. Meat Science, 91(4): 402-407. https://doi.org/10.1016/j.meatsci.2012.02.014

[15] Su, Q., Kondo, N., Al Riza, D.F., Habaragamuwa, H. (2020). Potato quality grading based on depth imaging and convolutional neural network. Journal of Food Quality, 2020: 1-9. https://doi.org/10.1155/2020/8815896

[16] Baigts-Allende, D., Ramírez-Rodrígues, M., Rosas-Romero, R. (2022). Monitoring of the dehydration process of apple snacks with visual feature extraction and image processing techniques. Applied Sciences, 12(21): 11269. https://doi.org/10.3390/app122111269

[17] Nguyen, C.N., Vo, V.T., Ha, N.C. (2022). Developing a computer vision system for real-time color measurement–a case study with color characterization of roasted rice. Journal of Food Engineering, 316: 110821. https://doi.org/10.1016/j.jfoodeng.2021.110821

[18] Unal, Y., Taspinar, Y.S., Cinar, I., Kursun, R., Koklu, M. (2022). Application of pre-trained deep convolutional neural networks for coffee beans species detection. Food Analytical Methods, 15(12): 3232-3243. https://doi.org/10.1007/s12161-022-02362-8

[19] Htike, T., Saengrayap, R., Aunsri, N., Tontiwattanakul, K., Chaiwong, S. (2021). Investigation and evaluation of impact bruising in guava using image processing and response surface methodology. Horticulturae, 7(10): 411. https://doi.org/10.3390/horticulturae7100411

[20] Rizwan Iqbal, H.M., Hakim, A. (2022). Classification and grading of harvested mangoes using convolutional neural network. International Journal of Fruit Science, 22(1): 95-109. https://doi.org/10.1080/15538362.2021.2023069

[21] Hu, B., Lei, C., Wang, D., Zhang, S., Chen, Z. (2019). A preliminary study on data augmentation of deep learning for image classification. arXiv preprint arXiv:1906.11887. https://doi.org/10.48550/arXiv.1906.11887

[22] Zhang, R., Zhou, B., Lu, C., Ma, M. (2022). The performance research of the data augmentation method for image classification. Mathematical Problems in Engineering, 2022: 2964829. https://doi.org/10.1155/2022/2964829

[23] Tao, W., Al-Amin, M., Chen, H., Leu, M.C., Yin, Z., Qin, R. (2020). Real-time assembly operation recognition with fog computing and transfer learning for human-centered intelligent manufacturing. Procedia Manufacturing, 48: 926-931. https://doi.org/10.1016/j.promfg.2020.05.131