3D Convolutional Neural Networks for Solving Complex Digital Agriculture and Medical Imaging Problems
Metadata
Author
Hamila, Oumaima
Date
2022-06
Citation
Hamila, Oumaima. 3D Convolutional Neural Networks for Solving Complex Digital Agriculture and Medical Imaging Problems; A thesis submitted to the Faculty of Graduate Studies of the University of Winnipeg in partial fulfillment of the requirements for the degree of Master of Science, Department of Applied Computer Science, TerraByte Research Group, University of Winnipeg. Winnipeg, Manitoba, Canada: University of Winnipeg, 2022. DOI: 10.36939/ir.202206021141.
Abstract
3D signals have become widely popular because they extend 2D signals with a third spatial or temporal dimension, providing richer representations of data. In particular, 3D signals contain details absent from their 2D counterparts, such as the depth of a scene, which is inherent to point clouds (PCs), or the temporal evolution of an image, which is inherent to time-series data such as videos. Despite this advantage, 3D signals are still underexploited in machine learning (ML) compared to 2D signals, mainly due to data scarcity. In this thesis, we investigate the efficiency and influence of using both multispectral PCs and time-series data with 3D convolutional neural networks (CNNs). We evaluate the performance and utility of these networks and data in the context of two applications from the areas of digital agriculture and medical imaging. In particular, multispectral PCs are investigated for the problems of Fusarium head blight (FHB) detection and estimation of the total number of spikelets, while time-series echocardiography videos are investigated for the problem of myocardial infarction (MI) detection.

In the context of the digital agriculture application, two state-of-the-art datasets were created: the UW-MRDC WHEAT-PLANT PC dataset, consisting of 216 multispectral PCs of wheat plants, and the UW-MRDC WHEAT-HEAD PC dataset, consisting of 80 multispectral PCs of wheat heads. The samples in both datasets were acquired using a multispectral 3D scanner. Moreover, a real-time, parallel, GPU-enabled preprocessing method, which runs 1065 times faster than its CPU counterpart, was proposed to convert multispectral PCs into multispectral 3D images compatible with CNNs. The UW-MRDC WHEAT-PLANT PC dataset was then used to develop novel and efficient 3D CNNs that automatically identify wheat infected with FHB from multispectral 3D images of wheat plants. In addition, the influence of the multispectral information on detection performance was evaluated, and our results showed the dominance of the red, green, and blue (RGB) colour channels over both the near-infrared (NIR) channel and the combined RGB and NIR channels. Our best model for FHB detection in wheat plants achieved 100% accuracy. Furthermore, the UW-MRDC WHEAT-HEAD PC dataset was used to develop unique and efficient 3D CNNs that estimate the total number of spikelets in multispectral 3D images of wheat heads, in addition to adapting three benchmark 2D CNN architectures to 3D images for the same purpose. Our best model for spikelet estimation in wheat heads achieved a mean absolute error of 1.13, meaning that, on average, the estimated number of spikelets differs from the actual count by 1.13. Our 3D CNN for FHB detection in wheat achieved the highest accuracy amongst existing FHB detection models, and our 3D CNN for estimating the total number of spikelets in wheat is a unique and pioneering application. These results suggest that replacing arduous tasks that require the input of field experts and significant time with automated ML models is feasible and promising in the context of digital agriculture.
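To make the PC-to-volume conversion and the 3D CNN classification concrete, the following is a minimal sketch in PyTorch. It does not reproduce the thesis's GPU-parallel preprocessing method or its actual architectures; the 64x64x64 grid, the four channels (R, G, B, NIR), the voxel-averaging scheme, and the layer widths are illustrative assumptions only.

```python
# Illustrative sketch only: voxelize a multispectral point cloud into a dense
# 3D image, then classify it with a small 3D CNN. All shapes and layer sizes
# are assumptions, not the thesis's actual preprocessing or models.
import torch
import torch.nn as nn


def voxelize(points: torch.Tensor, features: torch.Tensor, grid: int = 64) -> torch.Tensor:
    """Average per-point features (N, C) into a dense (C, grid, grid, grid) volume."""
    mins = points.min(dim=0).values
    maxs = points.max(dim=0).values
    # Map XYZ coordinates to integer voxel indices in [0, grid - 1].
    idx = ((points - mins) / (maxs - mins + 1e-8) * (grid - 1)).long()
    flat = idx[:, 0] * grid * grid + idx[:, 1] * grid + idx[:, 2]
    c = features.shape[1]
    volume = torch.zeros(c, grid * grid * grid)
    counts = torch.zeros(grid * grid * grid)
    volume.index_add_(1, flat, features.T)                      # sum features per voxel
    counts.index_add_(0, flat, torch.ones(points.shape[0]))     # count points per voxel
    volume = volume / counts.clamp(min=1)                       # average occupied voxels
    return volume.view(c, grid, grid, grid)


# A minimal binary classifier over the voxelized multispectral volume.
fhb_classifier = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 1),  # logit for "infected with FHB" vs. "healthy"
)

pts = torch.rand(10_000, 3)            # synthetic XYZ coordinates
spec = torch.rand(10_000, 4)           # synthetic R, G, B, NIR reflectance
x = voxelize(pts, spec).unsqueeze(0)   # (1, 4, 64, 64, 64)
logit = fhb_classifier(x)
```

A regression variant for spikelet counting could reuse the same kind of backbone and replace the binary logit with a single continuous output trained with an L1 loss, which directly corresponds to the mean-absolute-error metric reported above.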
In the context of the medical imaging application, an innovative, real-time, and fully automated pipeline based on 2D and 3D CNNs was proposed for early detection of MI, a deadly cardiac disorder, from a patient's echocardiography. The developed pipeline consists of a 2D CNN that performs data preprocessing by segmenting the left ventricle (LV) chamber from the apical 4-chamber (A4C) view of an echocardiography, followed by a 3D CNN that performs MI detection in real time. The pipeline was trained and tested on the HMC-QU dataset, consisting of 162 echocardiography videos. The 2D CNN achieved 97.18% accuracy on LV segmentation, and the 3D CNN achieved 90.9% accuracy, 100% precision, 95% recall, and a 97.2% F1 score. Our detection results outperformed existing state-of-the-art models tested on the HMC-QU dataset for MI detection. Moreover, our results demonstrate that developing a fully automated system for LV segmentation and MI detection is efficient and promising, and could enable the creation of a tool that reliably suggests the presence of MI in a given echocardiography video on the fly.

All the empirical results achieved in this thesis indicate the efficiency and reliability of 3D signals, namely multispectral PCs and videos, for developing 3D CNN detection and regression models that achieve accurate and reliable results.
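For illustration only, the two-stage idea behind the MI pipeline described above, frame-wise LV segmentation followed by 3D classification of the masked clip, can be sketched as follows. The placeholder segmentation network, the clip size, and the layer widths are assumptions and do not reproduce the thesis's actual models.

```python
# Hedged sketch of the two-stage pipeline: a 2D model masks the LV in each A4C
# frame, then a 3D CNN classifies the masked frame sequence for MI.
# All shapes and layers are illustrative assumptions.
import torch
import torch.nn as nn

# Placeholder per-frame LV segmentation network (the thesis uses a dedicated 2D CNN).
segmenter = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1), nn.Sigmoid(),   # per-pixel LV probability
)

# 3D CNN that classifies the masked echo clip as MI vs. non-MI.
mi_detector = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 1),                                # MI logit
)

clip = torch.rand(1, 1, 32, 128, 128)           # (batch, channel, frames, H, W), assumed size
frames = clip.squeeze(0).permute(1, 0, 2, 3)    # (frames, 1, H, W) for the 2D segmenter
masks = (segmenter(frames) > 0.5).float()       # binary LV masks per frame
masked = (frames * masks).permute(1, 0, 2, 3).unsqueeze(0)  # back to (1, 1, frames, H, W)
mi_logit = mi_detector(masked)
```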