Volume 8 - Issue 3

Mini Review | Biomedical Science and Research | Creative Commons CC-BY

The Applications of Artificial Intelligence in Biomedical Imaging

*Corresponding author: Ping Wang, Precision Health Program, Department of Radiology, Michigan State University, 766 Service Road, Rm. 2020, East Lansing, MI 48823, USA

Received: March 27, 2020; Published: April 07, 2020

DOI: 10.34297/AJBSR.2020.08.001279

Abstract

Artificial Intelligence (AI) is at the forefront of research in the biomedical sciences and related disciplines. This computationally advanced tool evolves every day, with new algorithms and methods surfacing to aid in complex, deep analysis of datasets in the lab and clinic. Here, we explore the current state of AI, specifically the canonical and novel machine learning algorithms developed thus far, and explain how these methods are implemented in biomedical imaging for scientific advantage.

Introduction

The paradigm of Artificial Intelligence (AI) is ever-growing, with novel advancements and algorithms developed every day to serve the many sectors of society that harness its power to propel findings and studies that were previously impossible or difficult to achieve [1]. In biomedical research, AI has granted further insight into problems that require ingenious engineering to answer complex questions through advanced solutions. The ability of computers to provide computational power at an unprecedented magnitude and scale invites evaluation of the current state of AI and of how this intelligent tool is helping scientists in their respective fields. From biomedical image analysis and radiology to treatment response and outcome prediction, and an abundance of other biomedical disciplines that require high-throughput processing beyond the capacity of the human mind (although, ironically, AI is built from the very principles of the human cortex), the potential of AI to unlock some of the greatest secrets and avenues in biomedical science is a force to be reckoned with; it is a journey upon which mankind has only begun to embark. Here, we explore the current state of AI in biomedical research, specifically biomedical imaging, and evaluate the canonical algorithms and novel methods that have become critical to engineering the computational advancements on which the biomedical sciences can build, as well as the applications and implications of AI within this realm [2].

Current Methods and Algorithms

To gain a better understanding of the current methods employed in AI and the computational algorithms that lead novel advancements within the field, it is crucial to understand the meaning of the term and to appreciate the evolution of machine learning, the core foundation of AI that instigated its adoption in the biomedical sciences. Although machine learning traces its roots to World War II, when Alan Turing and his colleagues applied early statistical computing methods, built on the very principles that operate within machine learning today, to break the Enigma cipher and predict the locations of U-boats, it is the more advanced form of machine learning, termed deep learning, that has allowed AI algorithms to function with the computational complexity and human-like intelligence that unlocks their greatest potential [3]. AI began as machine learning: algorithms that use either memory-based, context-dependent learning, in which memory from previous rounds of training is loaded into weights and biases to make predictions on a dataset (a top-down approach), or feature extraction and subsequent analysis from pre-established metrics (a bottom-up approach) [4,5]. The former constitutes supervised machine learning, in which training on a dataset is used to establish a "ground truth," a reference point for the algorithm when estimating and making predictions during analysis of new (related) data; the latter comprises unsupervised machine learning, in which analysis and predictions are made according to predetermined metrics and formulas that can be reapplied to virtually any novel dataset in each new iteration.
A blend of various facets of these two machine learning subsets yields the latest, most prevalent paradigm in AI, deep learning, as this approach uses previously stored memory of the labeled dataset on which it was trained to make predictions and perform analysis on new datasets [6]. Below, we explore some of the algorithms that comprise these various paradigms of AI.
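The supervised notion of a learned "ground truth" described above can be sketched in a few lines. This is a deliberately minimal illustration in Python with NumPy; the intensity values and the nearest-centroid decision rule are hypothetical, chosen only to show how stored training memory is reused on new data:

```python
import numpy as np

# Toy "ground truth": labeled 1-D intensity values (e.g., background=0 vs. lesion=1)
X_train = np.array([0.1, 0.2, 0.15, 0.8, 0.9, 0.85])
y_train = np.array([0, 0, 0, 1, 1, 1])

# Supervised learning: store per-class mean intensities learned from labeled data
centroids = np.array([X_train[y_train == c].mean() for c in (0, 1)])

def predict(x):
    """Assign each new value to the class whose learned centroid is nearest."""
    x = np.asarray(x, dtype=float)
    return np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)

print(predict([0.05, 0.95]))  # -> [0 1]
```

An unsupervised method, by contrast, would group the same values by mutual similarity alone, with no stored labels to refer back to.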

Deep Learning

Modeled after the human brain, with multiple interacting cortical-like layers that allow intelligent decision making and inference, deep learning is commonly employed in the biomedical sciences through an algorithm known as the Convolutional Neural Network (CNN), which performs analysis and makes predictions on new datasets similar to those on which the network was trained [7]. Commonly used in image analysis tasks, the neural network consists of several layers of "neurons": hidden layers whose intermediate computations are not exposed as output, and output layers that present results of importance to the user [8]. To train a neural network on a specific dataset, so that it can later perform computation on data of a similar nature, back propagation paired with gradient descent is employed to minimize a loss function [9]. Like CNNs generally, gradient descent training is common in image analysis tasks, where masks of the object or region of interest (ROI) are input into the algorithm along with the original image in order to train the network to recognize the ROI when it is encountered in future, previously unseen datasets [8,10]. Through iterations of training, the loss is driven toward a local minimum on the loss surface; this loss, often computed as the Dice loss in segmentation tasks, is calculated by comparing the current iteration's prediction to the previously established ground truth [11,12].
In this regard, loss is analogous to cost, asking in effect: "how different is this ROI from the ROI that I am trained to detect and analyze?" From the calculated loss, the network's parameters (the connection weights and the biases associated with those weights that produced the current iteration's prediction) are adjusted, and once these weights and biases are updated, the network has completed one round of training [13,14]. Although this process may seem complicated, once trained on the available dataset the algorithm performs in a highly robust and intelligent manner, often outperforming imaging specialists, radiologists, and scientists in their respective fields at detection, segmentation, and analysis of an ROI in a given dataset [2,15,16]. Although the applications and implications are discussed in a later section, it is important to recognize that the key principles behind the CNN give scientists and researchers a tool for high-throughput, rapid monitoring, segmentation, and analysis of their data, one that can be fine-tuned or completely reconstructed to address any problem for which an abundant, labeled dataset is available. This very requirement is also one of the CNN's drawbacks: the large amount of labeled training data needed to establish ground truth means one must either have a great deal of data readily available or generate it, and the time-intensive task of labeling the data and generating masks of the labeled regions can be daunting. Nonetheless, the time and engineering required to build such an algorithm is rewarding and can yield novel insight into problems previously considered un-navigable.
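The Dice loss described above has a simple closed form: one minus twice the overlap between prediction and ground truth, divided by the total size of both masks. The sketch below is a generic NumPy illustration of that formula, not the implementation from any cited study, and the toy masks are invented for demonstration:

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-7):
    """Dice loss between a predicted binary mask and the ground-truth mask.
    Dice = 2*|A ∩ B| / (|A| + |B|); the loss is 1 - Dice, so perfect
    overlap gives 0 and no overlap gives 1. eps guards against empty masks."""
    pred = np.asarray(pred, dtype=float).ravel()
    truth = np.asarray(truth, dtype=float).ravel()
    intersection = (pred * truth).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
    return 1.0 - dice

# Toy 2x3 masks: the prediction misses one of the three ground-truth pixels
truth = np.array([[0, 1, 1],
                  [0, 1, 0]])
pred  = np.array([[0, 1, 0],
                  [0, 1, 0]])
print(round(dice_loss(pred, truth), 3))  # -> 0.2
```

During training, the gradient of this quantity with respect to the network's weights and biases is what back propagation carries backward through the layers.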

Unsupervised Machine Learning

Data clustering is a common approach to finding trends and similarities among data points and grouping them to perform tasks such as normalization, segmentation, and denoising/thresholding [17-19]. Although this type of analysis has commonly been applied to numerical tasks for quantifying trends in large volumes of data, often for signal processing, clustering has increasingly been adapted to image analysis, mainly segmentation and quantification, and is a powerful alternative to the neural network for medical image analysis [20,21]. This form of AI is considered unsupervised machine learning because of its capacity for parameter optimization to fit a certain type of dataset, such as MRI or CT scans in which signal is generally uniform across images, with distortions in signal or pixel intensity indicating an abnormal lesion, disease state, or other issue that is often the basis for quantification and analysis [22]. A commonly employed algorithm, known as k-means++, is a clustering-based approach in which the algorithm is fitted to a specific type of dataset; once fitted, it can perform on similar data even though it has never been trained to recognize that kind of data [23]. The k-means++ algorithm begins by assigning a certain number of centroids during an initialization sequence; each centroid constitutes the central point of a potential cluster in the image, to which all other data points, such as pixels, can be assigned [24,25]. Once all data points have been assigned to clusters based on their similarity in value to, and distance from, the centroids chosen in the initialization phase, a specific ROI or object of interest can be extracted from the image by selecting the corresponding centroid (cluster) from the set of clusters produced by the algorithm.
The benefit of such an algorithm is that it requires virtually no training on a large, labeled dataset, owing to the parameter optimization, or "fitting," of the algorithm to the dataset at hand during the initial phases of engineering. Furthermore, it can function on a wide range of data within the domain toward which it was engineered, and can therefore at times be considered more robust than a CNN. However, unlike the CNN, which performs with high accuracy and throughput thanks to deep, memory-based processing, the k-means++ algorithm does not use weight- and bias-encoded memory from training on previous data and is consequently prone to error and variance in its computations. Nonetheless, clustering-based algorithms such as this can be combined with neural networks in earlier or later steps of a pipeline to generate a powerful AI system that not only operates on a wide variety of data but also performs with a high degree of accuracy.
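The initialization-then-assignment mechanism described above can be sketched directly. The following is a minimal NumPy illustration of k-means++ seeding followed by standard k-means iterations on pixel intensities; the two-row "scan" is a toy array invented for demonstration, not real imaging data:

```python
import numpy as np

def kmeanspp_init(x, k, rng):
    """k-means++ seeding: pick the first centroid uniformly at random, then
    pick each later centroid with probability proportional to its squared
    distance from the nearest centroid chosen so far."""
    centroids = [x[rng.integers(len(x))]]
    for _ in range(k - 1):
        d2 = np.min((x[:, None] - np.array(centroids)[None, :]) ** 2, axis=1)
        centroids.append(x[rng.choice(len(x), p=d2 / d2.sum())])
    return np.array(centroids)

def kmeans_segment(pixels, k=2, iters=20, seed=0):
    """Cluster 1-D pixel intensities; return a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    x = pixels.ravel().astype(float)
    centroids = kmeanspp_init(x, k, rng)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid, then recompute centroids
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = x[labels == c].mean()
    return labels.reshape(pixels.shape), centroids

# Toy "scan": dark background with a bright lesion region
img = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.8, 0.9]])
labels, cents = kmeans_segment(img, k=2)
lesion_cluster = int(np.argmax(cents))   # the ROI is the brightest cluster
print(labels == lesion_cluster)
```

Selecting the brightest cluster at the end corresponds to the ROI extraction step described above: the "lesion" mask is simply the set of pixels assigned to that centroid.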

Applications and Implications

Although these AI algorithms function with a great degree of computational and mathematical complexity and rigor, their applications within biomedical imaging have provided a great deal of insight into previously difficult quantification tasks, especially for data that are noisy and would otherwise require extensive time and scrutiny with the naked eye to yield results. Furthermore, the absence of a standardized method for evaluating and analyzing biomedical images, since each image is often subject to selection bias by the radiologist, image specialist, or rater in question, calls into question the validity and accuracy of human-selected ROIs and of manual quantification of biomedical images. Within deep learning, CNNs have previously been employed in various segmentation and quantification tasks, including segmentation of brain and abdominal tumor lesions from patient scans, prediction of ventricular heart disease, and histological analysis of cancerous tissue to detect markers of neoplasia in aid of predictive theranostics [26-30]. This has permitted further use of AI to diagnose, render prognoses, and predict treatment response for various diseases across a range of imaging modalities spanning MRI, CT, Positron Emission Tomography (PET), optical imaging, and microscopy, providing a powerful platform for propelling the future of radiomics and other domains within the emerging field of precision medicine [31]. Similarly, unsupervised machine learning approaches such as the k-means++ clustering algorithm have been implemented in breast cancer diagnosis tasks as well as in large-scale gene expression data clustering to identify molecular signatures of cancer in patient cohorts [32,33].
As an evaluation metric for these algorithms, the Intraclass Correlation Coefficient (ICC) reported in various studies has shown that, when compared with board-certified radiologists and other imaging specialists in the field, deep learning algorithms have performed with unprecedented accuracy, often analyzing on a par with or better than their human counterparts [34,35]. This allows a standardized, high-throughput, and rapid approach to segmentation of ROIs from multiple imaging modalities, which not only saves time in the lab and clinic but can also shed light on previously unforeseen problems and areas of analysis. For instance, in stem cell tracking or genomic analysis, artificial intelligence can be employed not only to track the longitudinal development and differentiation of pluripotent stem cells or transplanted islets through visible molecular patterns and biomarkers, but also to recognize genetic patterns among large amounts of sequence data in high-throughput fashion [36,37]. Furthermore, across the imaging modalities in which deep learning and clustering algorithms perform segmentation tasks, the increase in speed and accuracy in separating the ROI from background noise enables quantification pipelines that can predict metrics previously incalculable because of limitations such as selection bias and low-contrast scans in which the ROI is difficult to observe. It is important to realize that AI in this regard is not meant to replace radiologists and specialists in the field; instead, it acts as a powerful tool that will unlock great potential in biomedical research, aid scientists and doctors in their workflows, and standardize the current approach to biomedical image quantification to increase throughput, reliability, and accuracy in the field.
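The ICC used to compare algorithmic and human raters comes in several standard forms; one of the simplest, the one-way random-effects ICC(1,1), can be computed from between-target and within-target mean squares. The formula below is standard, but the rater scores are hypothetical values invented for illustration:

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC(1,1).
    ratings: (n_targets, k_raters) array of scores, e.g., ROI volumes
    measured for each of n subjects by an algorithm and by human raters.
    ICC(1) = (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n, k = ratings.shape
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    msb = k * ((target_means - grand) ** 2).sum() / (n - 1)               # between targets
    msw = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy example: two raters (say, a CNN and a radiologist) scoring five ROIs
scores = np.array([[10.0, 10.5],
                   [20.0, 19.5],
                   [30.0, 30.5],
                   [40.0, 40.0],
                   [50.0, 49.5]])
print(round(icc1(scores), 3))
```

Values near 1 indicate that the two raters agree almost perfectly across targets, which is the pattern the studies cited above report for well-trained deep learning models.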

References
