Apple Binocular Visual Identification and Positioning System

Li Liu, Xin Qiao, Xindong Shi, Yong Wang, Yinggang Shi*

College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China

Corresponding Author Email: syg9696@nwsuaf.edu.cn

Pages: 133-137 | DOI: https://doi.org/10.18280/ria.330208

Received: 2 January 2019 | Revised: 1 March 2019 | Accepted: 6 March 2019 | Available online: 25 August 2019

Abstract:

To realize autonomous recognition and localization of apples, an apple recognition and positioning system based on LabVIEW, the IMAQ Vision toolkit and binocular vision was designed in this work. The system identifies apples on trees through background subtraction based on differences in surface color, object identification based on circularity, and binocular stereo measurement of the apples. The results show that the system accomplishes image acquisition, preprocessing, recognition and depth recovery, realizing the positioning of apples in LabVIEW. The system can be transplanted into a fully automatic picking system to pick apples more accurately and quickly.

Keywords: 

LabVIEW, object identification and positioning, binocular vision

1. Introduction

With the development of the economy and technology, fruit picking robots are expected to gradually replace manual labor, reducing labor intensity and improving work efficiency [1, 2]. However, the low success rate of fruit recognition has become a major issue limiting the efficiency of picking robots. To solve this problem, many experts and scholars have carried out a large number of studies [3-5]. Visual identification is the most common method [6, 7].

Methods of visually identifying fruit on a tree include monocular vision identification, binocular vision identification, static fruit identification, dynamic fruit identification, single fruit identification, identification of overlapped and occluded fruits, and apple identification at night [8-15].

A fast and efficient fruit recognition method, combined with visual programming software, allows the vision system to be integrated into an engineering project quickly. In this work, an image processing system for the visual identification and positioning of apples on trees was designed. Building the system in LabVIEW, which integrates vision algorithms through visual programming, improves the portability of the algorithms and allows the apple recognition and localization algorithms to be merged into the picking robot's system software simply and quickly.

The main research contents include the overall design of the system, image processing, binocular vision, and conclusions and prospects.

2. Overall Design of the System

The binocular vision platform of the automatic apple picking system is composed of an MV-3939 intelligent pan-tilt platform, left and right VS-880HC panoramic cameras, and VS-M1024 continuous zoom lenses. The system sets the shooting angle and sharpness through rotation of the pan-tilt platform and zooming of the lenses. The binocular cameras photograph the fruit trees and send the images to a computer through the MV-8002 PCI image capture card. The picking targets are extracted through appropriate segmentation on the computer to determine whether there are apples in the area and, if so, their possible positions. Then, the binocular stereo cameras are calibrated, and feature points are matched between the left-eye and right-eye images using binocular vision measuring technology. In this way, the coordinates of the target objects can be calculated, providing the basic data necessary for planning a picking path [16-20].

The image processing of the system covers image capture, object identification, camera calibration, and binocular vision measurement. The image processing software consists of LabVIEW 8.2 and the IMAQ Vision toolkit, which runs on LabVIEW to capture, analyze, and process images. The image processing programs include manual and automatic test programs. The former is used for testing and debugging the algorithm parameters. The latter can be used for automatic detection of the apples after the algorithm parameters have been determined separately and combined together.

Figure 1. Framework of image capture and processing system

Figure 2 shows the image processing programs of the system. Images are captured and displayed, then preprocessed and segmented. Next, the system determines whether there is any object in the images. If there is, the camera calibration program is called to perform calibration and determine the external parameters of the binocular cameras. Then, the images from the left and right cameras are matched, and the triangulation method is used to carry out stereoscopic measurement, determining the three-dimensional space coordinates of the objects. The program then loops back to object identification and positioning for continuous detection. If stopped manually, the process returns to the starting point and then halts.

Figure 2. Image processing programs

There are two ways to capture and display images in the LabVIEW environment with the MV-8002 image acquisition card. The first is to call external program interfaces, such as MVLabConvert.dll and mv8lab.ocx, which are programmed in VC++, for image capture and display. The second is to convert the instrument connection to USB with a serial converter and then use the IMAQ USB module in LabVIEW for image capture and display. Compared with the second method, the first is faster and more accurate when processing large image data, which helps improve the accuracy of fruit recognition and positioning. This work used the first method, calling the interface program in the LabVIEW environment for image capture and display [21]. Figure 3 shows the LabVIEW block diagram of the capture program.

3. Image Processing

There are currently two main ways to identify fruits on trees with computer vision: by color or by shape. First, if there is an obvious difference between the colors of the fruits and the background, the fruits can be identified by color. Second, an image capture device equipped with accessories such as filters can be used to capture images, and the fruits can be identified by shape.

The apples selected for this experiment differ markedly in color from the background. Therefore, RGB color images of mature apples are captured, and the red component is extracted to convert them into gray images. The basic principle is as follows: the red, green and blue intensity ratios of each pixel in the RGB image are changed, keeping the fundamental color red and setting the intensity ratios of the other colors to zero, thereby isolating a specific color. Figures 4a-b show the effects before and after extracting the red component, and the transformation formula is as follows:

$C \equiv r R+g G+b B$     (1)

where C is the specific color; $\equiv$ denotes the matching; R, G and B are the three primary color components; and r, g and b are the intensity ratio coefficients, with r + g + b = 1.

Figure 3. LabVIEW program block diagram of the capture program


In LabVIEW, IMAQ Extract Single Color Plane and IMAQ Cast Image VI can be used to apply the r, g and b coefficients to the R, G and B planes of the RGB image and obtain a gray image (Grayscale 8 bits). Because the gray values of the three planes differ only slightly, the r, g and b coefficients must be set manually to highlight the gray level of one color and suppress those of the others. To keep the image processing system general, let r = 1 and g = b = 0, so C = R × r = R. Figure 4b shows the effect after extracting the red component from the images with IMAQ Extract Single Color Plane.
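
For readers working outside LabVIEW, a minimal Python/OpenCV sketch of the same red-plane extraction may clarify formula (1) with r = 1 and g = b = 0; the file names are placeholders, not from the paper:

```python
import cv2

# Load the RGB orchard image (file name is a placeholder).
img = cv2.imread("apple_tree.png")   # OpenCV returns channels in B, G, R order
b, g, r = cv2.split(img)

# Formula (1) with r = 1 and g = b = 0: C = 1*R + 0*G + 0*B = R,
# so the 8-bit gray image is simply the red color plane.
gray = r
cv2.imwrite("red_plane.png", gray)
```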

An enhancement method is chosen according to the practical conditions of the project and the experimental data. Values in a predefined lookup table replace the pixel values of the original images, reducing useless information and highlighting the information needed for image analysis. This changes the dynamic range of the images and enhances them. Specifically, the gray levels are remapped by a power function that increases the contrast among large pixel values and decreases it among small ones. The transformation formula is

$g(x, y)=[f(x, y)]^{X}$           (2)

The exponent X determines the strength of the transformation, and Figure 4c shows the effect after image enhancement. The results show an obvious contrast between the brightness of the objects and that of the background, with the fruit areas brighter and the background darker. This makes the contours of the objects more distinct and increases the contrast between the objects and the background.
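
As a rough illustration, the power-law remapping of formula (2) can be sketched in Python as follows; the exponent value 2.0 is an assumed example, not a parameter reported in the paper:

```python
import cv2
import numpy as np

def power_law(gray: np.ndarray, exponent: float) -> np.ndarray:
    """Formula (2), g(x, y) = f(x, y)^X, applied on a normalized image.

    The image is scaled to [0, 1] before exponentiation so the result
    stays in range, then rescaled back to 8 bits.
    """
    f = gray.astype(np.float64) / 255.0
    return (np.power(f, exponent) * 255.0).astype(np.uint8)

gray = cv2.imread("red_plane.png", cv2.IMREAD_GRAYSCALE)
# X > 1 darkens low-intensity background pixels more than the bright
# fruit pixels, stretching the contrast between them.
enhanced = power_law(gray, 2.0)
cv2.imwrite("enhanced.png", enhanced)
```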

Figure 4. Image preprocessing

The threshold segmentation method is applied to obtain binary images, since there is a large difference between the gray levels of the fruits and the background. The relation between the input image f(x,y) and the output image g(x,y) with threshold T can be expressed as

$g(x, y)=\left\{\begin{array}{ll}{1,} & {f(x, y) \geq T} \\ {0,} & {f(x, y)<T}\end{array}\right.$       (3)

According to the grayscale characteristics of the image, the image is divided into two parts, background P(Z) and foreground Q(Z). The segmentation threshold between foreground and background is denoted as T, which lies in the valley between the two single-peak curves (see Figure 5). This work compared five threshold segmentation methods with NI Vision Assistant: clustering, entropy, metric, moments and maximum inter-class variance [22]. The maximum inter-class variance method is simple to compute and is not affected by the brightness and contrast of the image, so it was used for automatic threshold selection. Figure 4d shows the segmentation results with the maximum inter-class variance method.
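
The same automatic selection is available outside NI Vision Assistant; below is a brief sketch using OpenCV's Otsu implementation of the maximum inter-class variance criterion (file names are placeholders):

```python
import cv2

enhanced = cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method (maximum inter-class variance) selects T automatically
# at the valley between the two histogram peaks, then applies formula (3).
t, binary = cv2.threshold(enhanced, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"automatically selected threshold T = {t}")
cv2.imwrite("binary.png", binary)
```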

Figure 5. Schematic diagram of the threshold value

Most of the background was removed from the images after threshold segmentation, leaving the main body of the fruits unchanged. However, some useless pixels remained that interfered with the identification of the apples. Binary morphology was applied to further highlight the apples and remove these pixels: the erosion algorithm and convolution filtering were used to filter out the image noise. Figure 4e shows the effects after noise filtering.
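
A hedged Python sketch of this cleanup step follows; note that a median filter stands in for the paper's unspecified convolution filtering, and the kernel sizes are assumptions:

```python
import cv2
import numpy as np

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

# Erosion with a small structuring element strips isolated noise pixels
# while leaving the large fruit regions essentially intact.
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(binary, kernel, iterations=1)

# A median filter (standing in for the convolution filtering mentioned
# in the text) suppresses the remaining salt-and-pepper noise.
denoised = cv2.medianBlur(eroded, 5)
cv2.imwrite("denoised.png", denoised)
```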

After erosion and filtering, the images of the target apples were successfully segmented, allowing the contours of the objects to be extracted. The images also need to be sharpened to enhance the contour edges, details and abrupt gray-level changes, forming a complete object boundary around the contour region representing the apples. Based on the shape and external characteristics of the targets, the Roberts edge extraction algorithm was selected for edge extraction [23, 24]. Figure 4f shows the effects after edge extraction.
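
OpenCV has no built-in Roberts operator, but the cross kernels are small enough to sketch directly; the following is an illustrative Python version, not the IMAQ implementation used in the paper:

```python
import cv2
import numpy as np

denoised = cv2.imread("denoised.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Roberts cross kernels: diagonal differences over 2x2 neighborhoods.
kx = np.array([[1, 0], [0, -1]], dtype=np.float64)
ky = np.array([[0, 1], [-1, 0]], dtype=np.float64)

gx = cv2.filter2D(denoised, -1, kx)
gy = cv2.filter2D(denoised, -1, ky)

# Gradient magnitude approximated by |gx| + |gy|, the usual choice
# for the Roberts operator.
edges = np.clip(np.abs(gx) + np.abs(gy), 0, 255).astype(np.uint8)
cv2.imwrite("edges.png", edges)
```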

Figure 6. Front panel of the LabVIEW automatic testing program

The automatic testing program for apple identification was designed by combining several algorithms, including image capture and display, image preprocessing, and image segmentation, together with the algorithm parameters obtained in the manual program. Figure 6 shows the front panel of the LabVIEW automatic testing program.

4. Binocular Vision

To position an object point in the image with machine vision, the relationship between the camera's pixel positions and the scene point positions must be established. This is camera calibration, which obtains the model parameters from the image coordinates and world coordinates of known feature points under the camera model. Usually, a single-camera calibration method is adopted first to obtain the internal and external parameters of the two cameras. Then the position relation between the two cameras is established with a group of scaling points in the same world coordinate system. TOOLBOX_calib, the camera calibration toolkit of MATLAB, can calibrate the binocular vision system with the feature points on the targets to obtain the parameters of the linear model. These are taken as the initial values, and an iteration method is used to compute the accurate solution.
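
As a non-LabVIEW illustration of the same two-stage procedure, the sketch below uses OpenCV instead of TOOLBOX_calib; the point lists and image size are assumed inputs, and the detection of target corners is omitted:

```python
import cv2

def stereo_calibrate(objpoints, imgpoints_l, imgpoints_r, image_size):
    """Two-stage stereo calibration.

    objpoints: list of (N, 3) float32 arrays with world coordinates of
    the target feature points; imgpoints_l / imgpoints_r: matching
    (N, 1, 2) float32 pixel detections from the left and right cameras;
    image_size: (width, height) in pixels.
    """
    # Stage 1: calibrate each camera alone for its internal parameters.
    _, K_l, d_l, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints_l, image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints_r, image_size, None, None)

    # Stage 2: solve for the rotation R and translation T between the
    # cameras; OpenCV refines the linear solution iteratively, mirroring
    # the "initial values + iteration" procedure described above.
    _, K_l, d_l, K_r, d_r, R, T, E, F = cv2.stereoCalibrate(
        objpoints, imgpoints_l, imgpoints_r,
        K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, d_l, K_r, d_r, R, T
```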

Figure 7. Binocular matching results

After feature extraction, the system performs model matching with the binocular vision identification function to determine whether an apple center point appears in the images. The IMAQ Setup Learn Pattern 2 function of the shape matching module is used to design a pattern matching subprogram. According to the shape features of the objects, all the round or semi-round particles in the object images are counted, providing descriptive data on basic geometric features such as the area, central position and rotation angle of each round particle. Moreover, image features can be extracted to obtain the left-eye and right-eye coordinates of the apple center point. Figure 7 shows the matching and feature extraction results for corresponding images from the left and right cameras.
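
Below is a loose Python analogue of the round-particle search, using a Hough circle transform rather than IMAQ pattern matching; all parameter values are assumptions to be tuned to the fruit size in pixels:

```python
import cv2
import numpy as np

gray = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)

# Detect round or semi-round blobs; the radius limits and accumulator
# thresholds below are placeholders, not values from the paper.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                           minDist=40, param1=100, param2=30,
                           minRadius=15, maxRadius=80)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # (x, y) is a candidate apple center point to be matched
        # against the corresponding detection in the right image.
        print(f"center=({x}, {y}), radius={r}")
```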

The binocular stereo vision system consists of cameras L and R, whose optical axes are parallel to each other and perpendicular to the baseline. The focal length is f, and the line connecting the focal centers $O_{L}$ and $O_{R}$ is parallel to the scan lines, with the two image planes lying in the same plane. The coordinate axes of the two cameras are parallel, and their x axes coincide, so the distance between the cameras in the x direction is the baseline length B. Figure 8 shows the principle of binocular vision: the two cameras shoot the same feature point P of a space object at the same time, capturing images of point P in the left and right eyes. The image coordinates are $P_{\text {left}}=\left(X_{\text {left}}, Y_{\text {left}}\right)$ and $P_{\text {right}}=\left(X_{\text {right}}, Y_{\text {right}}\right)$, respectively.

Figure 8. Principle of binocular vision

If the images of the two cameras lie in the same plane, then the Y coordinates of point P are the same, that is, $Y_{\text{left}}=Y_{\text{right}}=Y$, so the disparity is $D=X_{\text{left}}-X_{\text{right}}$. Based on the triangular geometric relationship, the plane coordinates of feature point P in the camera coordinate system can be calculated, and the three-dimensional depth information of the scene can be recovered by disparity calculation (the depth is inversely proportional to the disparity). The larger the disparity, the closer the scene point is to the lens; as P tends to infinity, the disparity tends to zero. For any point on the right image, its three-dimensional coordinates can be determined once the corresponding matching point is identified on the left image. The three-dimensional coordinates of feature point P in the camera coordinate system can then be expressed as follows:

$x_{C}=\frac{B \cdot X_{\text{left}}}{D}, \quad y_{C}=\frac{B \cdot Y}{D}, \quad z_{C}=\frac{B \cdot f}{D}$     (4)
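
Formula (4) can be checked with a few lines of Python; the baseline and focal length values below are illustrative only:

```python
def triangulate(x_left, y_left, x_right, B, f):
    """Camera-frame coordinates of point P from formula (4).

    B: baseline between the cameras; f: focal length (in the same pixel
    units as the image coordinates); (x_left, y_left) and x_right:
    matched pixel coordinates of P in the rectified left/right images.
    """
    D = x_left - x_right   # disparity; depth is inversely proportional to D
    if D == 0:
        raise ValueError("zero disparity: point is effectively at infinity")
    return (B * x_left / D, B * y_left / D, B * f / D)

# Illustrative values only: baseline 120 mm, focal length 1500 px.
print(triangulate(640.0, 360.0, 610.0, B=120.0, f=1500.0))
```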

5. Conclusions and Prospect

This work used LabVIEW, the IMAQ Vision toolkit, and TOOLBOX_calib, the camera calibration toolkit of MATLAB, to implement the working processes of a picking robot: image acquisition, background subtraction, picking target identification, and binocular stereo measurement of apples on trees. An automatic object identification and positioning system was established, providing an automatic program processing method for object identification. The system obtains the coordinates required for automatic picking, so the current program algorithms can be transplanted into a fully automatic picking system.

Acknowledgment

This research was supported by the Fundamental Research Funds for the Central Universities (2452016077) and the Shaanxi Key R&D Program (2019NY-171). The authors are also grateful to the reviewers for their helpful comments and recommendations, which improved the presentation.

References

[1] Zhang, Z., Heinemann, P.H., Liu, J., Baugher, T.A., Schupp, J.R. (2016). The development of mechanical apple harvesting technology: A review. Transactions of the ASABE, 59(5): 1165-1180. https://doi.org/10.13031/trans.59.11737

[2] Li, J., Karkee, M., Zhang, Q., Xiao, K.H., Feng, T. (2016). Characterizing apple picking patterns for robotic harvesting. Computers and Electronics in Agriculture, 127: 633-640. https://doi.org/10.1016/j.compag.2016.07.024

[3] Kelman, E.E., Linker, R. (2014). Vision-based localisation of mature apples in tree images using convexity. Biosystems Engineering, 118(1): 174-185. https://doi.org/10.1016/j.biosystemseng.2013.11.007

[4] Musale, S.S., Patil, P.M. (2014). Identification of defective mangoes using Gabor wavelets: A non-destructive technique based on texture analysis. International Journal of Agriculture Innovations & Research, 2(6): 992-996.

[5] Wang, D.D., Song, H.B., He, D.J. (2017). Research advance on vision system of apple picking robot. Transactions of the Chinese Society of Agricultural Engineering, 33(10): 59-69. https://doi.org/10.11975/j.issn.1002-6819.2017.10.008

[6] Lin, H.H., Cai, K., Chen, H.Z., Zeng, Z.F. (2015). Optimization design of fruit picking end-effector based on its grasping model. INMATEH - Agricultural Engineering, 47(3): 81-81.

[7] Zhao, Y., Gong, L., Huang, Y., Liu, C.L. (2016). A review of key techniques of vision-based control for harvesting robot. Computers and Electronics in Agriculture, 127: 311-323. https://doi.org/10.1016/j.compag.2016.06.022

[8] Wang, D., Song, H., Tie, Z., Zhang, W., He, D. (2016). Recognition and localization of occluded apples using K-means clustering algorithm and convex hull theory: A comparison. Multimedia Tools & Applications, 75(6): 3177-3198. https://doi.org/10.1007/s11042-014-2429-9

[9] Wang, D.D., Song, H.B., Yu, X.L., Zhang, W.Y., Qu, W.F., Xu, Y. (2015). An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples. Spanish Journal of Agricultural Research, 13(1): e0205. https://doi.org/10.5424/sjar/2015131-6181

[10] Xiang, R., Jiang, H., Ying, Y. (2014). Recognition of clustered tomatoes based on binocular stereo vision. Computers and Electronics in Agriculture, 106: 75-90. https://doi.org/10.1016/j.compag.2014.05.006

[11] Ji, W., Qian, Z., Xu, B., Tao, Y., Zhao, D., Ding, S.H. (2016). Apple tree branch segmentation from images with small gray-level difference for agricultural harvesting robot. Optik. International Journal for Light and Electron Optics, 127(23): 11173-11182. https://doi.org/10.1016/j.ijleo.2016.09.044

[12] Zhang, B., Li, J., Fan, S., Wang, W.Q., Zhao, C.J., Liu, C.L., Huang, D.F. (2015). Hyperspectral imaging combined with multivariate analysis and band math for detection of common defects on peaches (Prunus persica). Computers and Electronics in Agriculture, 114: 14-24. https://doi.org/10.1016/j.compag.2015.03.015

[13] Qureshi, W.S., Payne, A., Walsh, K., Linker, R., Cohen, O., Dailey, M.N. (2016). Machine vision for counting fruit on mango tree canopies. Precision Agriculture, 17(3): 58-62. https://doi.org/10.1007/s11119-016-9458-5

[14] Li, H., Zhang, M., Gao, Y., Li, M.Z., Ji, Y.H. (2017). Green ripe tomato detection method based on machine vision in greenhouse. Transactions of the Chinese Society of Agricultural Engineering, 33(s1): 328-334.

[15] Singla, A., Patra, S. (2016). A fast automatic optimal threshold selection technique for image segmentation. Signal Image & Video Processing, 11: 1-8. https://doi.org/10.1007/s11760-016-0927-0

[16] Mendoza, F., Dejmek, P., Aguilera, J.M. (2006). Calibrated color measurements of agricultural foods using image analysis. Postharvest Biology & Technology, 41(3): 285-295. https://doi.org/10.1016/j.postharvbio.2006.04.004

[17] Bac, C.W., van Henten, E.J., Hemming, J., Edan, Y. (2014). Harvesting robots for high-value crops: State-of-the-art review and challenges ahead. Journal of Field Robotics, 31(6): 888-911. https://doi.org/10.1002/rob.21525

[18] Zhao, D.A., Lü, J.D., Ji, W., Zhang, Y., Chen, Y. (2011). Design and control of an apple harvesting robot. Biosystems Engineering, 110(2): 112-122. https://doi.org/10.1016/j.biosystemseng.2011.07.005

[19] Kurtulmus, F., Lee, W.S., Vardar, A. (2011). Green citrus detection using 'eigenfruit', color and circular Gabor texture features under natural outdoor conditions. Computers and Electronics in Agriculture, 78(1): 140-149. https://doi.org/10.1016/j.compag.2011.07.001

[20] Bulanon, D.M., Burks, T.F., Alchanatis, V. (2009). Image fusion of visible and thermal images for fruit detection. Biosystems Engineering, 103(1): 12-22. https://doi.org/10.1016/j.biosystemseng.2009.02.009

[21] Pal, A.K., Nashar, L. (2013). Design of self-tuning fuzzy PI controller in LabVIEW for control of a real time process. International Journal of Electronics and Computer Science Engineering, 2(2): 538-545.

[22] Xu, Q.Y., Hu, F., Wang, C.T. (2018). Segmentation of fabric defect images based on improved frequency-tuned salient algorithm. Journal of Textile Research, (5): 172-179.

[23] Shi, T., Kong, J.Y., Wang, X.D., Liu, Z., Xiong, J.L. (2016). Improved Roberts operator for detecting surface defects of heavy rails with superior precision and efficiency. High Technology Letters, 22(2): 207-214. https://doi.org/10.3772/j.issn.1006-6748.2016.02.013

[24] Wang, Z., Shang, Y., Liu, J., Wu, X.D. (2013). A LabVIEW based automatic test system for sieving chips. Measurement, 46(1): 402-410. https://doi.org/10.1016/j.measurement.2012.07.015