Review Article

Artificial intelligence, radiomics and other horizons in body composition assessment

Simona Attanasio1#, Sara Maria Forte1#, Giuliana Restante1, Michela Gabelloni1, Giuseppe Guglielmi2, Emanuele Neri1

1Department of Translational Research, University of Pisa, Pisa, Italy;2Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy

#These authors contributed equally to this work.

Correspondence to: Simona Attanasio. Department of Translational Research, University of Pisa, Pisa, Italy. Email: simona.attanasio@med.unipi.it.

Abstract: This paper offers a brief overview of common non-invasive techniques for body composition assessment, and of the ways the images produced by these techniques can be processed with artificial intelligence (AI) and radiomic analysis. These approaches are becoming more and more appealing in health care thanks to their ability to handle and process huge amounts of data, to suggest new correlations between extracted imaging biomarkers and traits of several diseases, and to open the way to increasingly personalized medicine. The idea is to suggest the use of AI applications and radiomic analysis to search for features that may be extracted from medical images [computed tomography (CT) and magnetic resonance imaging (MRI)] and that may turn out to be good predictors of metabolic disorders and cancer. This could lead to patient-specific treatment and management of several diseases linked with excessive body fat.

Keywords: Artificial intelligence (AI); radiomics; body composition assessment


Submitted Dec 11, 2019. Accepted for publication Feb 04, 2020.

doi: 10.21037/qims.2020.03.10


Introduction

With the escalating incidence of obesity, a better understanding of methods to assess body composition and fat metabolism is required, along with the development of advanced techniques to quantify and characterise adiposity. It has recently been recognized that not only the total amount of fat, but also its distribution, plays an important role in metabolism. In fact, body composition measurements are increasingly important for the diagnosis and monitoring of metabolic disease and of the metabolic components of other diseases. Traditionally, measurements such as skinfold thickness, bioelectric impedance, body mass index (BMI) and waist circumference have been used to estimate body composition and its influence on health outcomes. The availability in health care of new techniques of investigation, such as radiomics and AI, opens wide scenarios and offers unexpected possibilities to integrate these "classical measurements", with the aim of establishing correlations between estimates of adiposity and their consequences on the onset of several ailments or diseases. For these purposes, the most accurate methods for measuring adipose tissue (AT) are computed tomography (CT) and magnetic resonance imaging (MRI). These images can undergo the data extraction needed for radiomic analysis and for the development of AI tools, in order to support physicians' decision-making and to establish new correlations between radiomic features and disease outcomes. Since the images produced by these modalities are routinely collected for diagnosis, classification and treatment evaluation in various diseases, they are readily available for research purposes.


Artificial intelligence (AI)

In the past few decades, interest in the field of computer science known as AI has been increasing in the physician and research communities (1). AI denotes the ability of a machine to "make decisions" based on past experience. It can be defined as making a computer perform functions that would be considered intelligent had they been performed by a human being (2,3). The term "artificial intelligence" was first used at Dartmouth College in 1956 (4); since the 1970s it has been applied in several research fields, such as pattern recognition, intelligent control, machine translation and robotics, and the field has experienced three major setbacks that slowed down its development. Such setbacks revealed the incompatibility of AI with the information environment of the time. In 1991, with the arrival of the World Wide Web, everything changed. Over the following years, tools with increasing computational power were developed, and this new capability made it possible to manage and structure Big Data. A new golden age for AI started. As a result, industry and the research community started to pay attention to Big Data and, consequently, to AI as a tool to boost their analyses. Indeed, AI allows rapid processing of large amounts of data by computer-based approaches (e.g., pattern recognition and learning) (5).

AI is a multidisciplinary field of study that includes several methods, theories and technologies. A subset of AI is machine learning, which has been applied to Big Data analysis because of the ability of its algorithms to find hidden patterns in data without being explicitly instructed about where to look or what to conclude. It was Arthur Samuel who coined the term "machine learning" in 1959, while he was working at IBM (6). He was a pioneer in the field of AI and computer gaming, and he wrote one of the first successful self-learning programs, known as the Samuel Checkers-playing Program. Today machine learning is not a new science, but it is gaining fresh momentum. Indeed, it has recently become possible to apply machine learning algorithms to perform complex mathematical calculations on Big Data ever faster, owing to the simultaneous growth of computing power and of the amount of available data. The field is fast-growing even in the health care industry, where this technology can help physicians analyse data to point out trends that may improve diagnosis and treatment. The learning methods can be divided into supervised, unsupervised, semi-supervised and reinforcement learning. In supervised learning, the algorithm takes in a training dataset made up of input-output pairs (labelled data) and learns how to map the input onto the output. Supervised learning then uses the acquired pattern to predict the values of the label in additional unlabelled data. Unsupervised learning, by contrast, models the data distribution taking in only input data (unlabelled data), in order to learn more about them. The machine is not told the "right answer", but has to figure it out and find some pattern in the data. The most widely adopted unsupervised techniques include k-means clustering, nearest-neighbour mapping, self-organizing maps and singular value decomposition. Semi-supervised learning typically makes use of unlabelled data together with a small amount of labelled data. It is used for the same applications as supervised learning when the cost of labelling is too high to allow for a fully labelled training process.
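
As a minimal illustration of one of these unsupervised techniques, the sketch below applies scikit-learn's k-means clustering to synthetic two-dimensional data; the data, the cluster count and the library choice are illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of 2D points standing in for two hypothetical classes.
data = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                  rng.normal(3.0, 0.5, (100, 2))])

# No labels are provided: the algorithm must discover the grouping on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # the two learned cluster centres
print(model.labels_[:5])       # cluster assignments of the first five points
```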

The concept behind reinforcement learning is slightly different. It is formulated as a Markov decision process (MDP), but it does not assume knowledge of an exact mathematical model of that process. Reinforcement learning focuses on the balance between exploration and exploitation of data (7). It has three main components: the agent, the environment and the actions. The model estimates the reward over a given amount of time for every possible action and chooses the one that maximizes this reward; by following a good policy, the agent reaches its goal much faster. Reinforcement learning is often used in navigation, gaming and robotics.
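
The sketch below makes the agent-environment-action loop concrete with a minimal tabular Q-learning agent on an invented five-state chain; the epsilon-greedy rule shows the exploration-exploitation balance explicitly. The environment, rates and rewards are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 5, 2            # toy chain: action 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # the agent's value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.3

rng = np.random.default_rng(0)
for episode in range(200):
    s = 0
    while s != n_states - 1:          # the rightmost state is the goal
        # epsilon-greedy: explore with probability epsilon, otherwise exploit
        a = int(rng.integers(n_actions)) if rng.random() < epsilon \
            else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # move Q(s, a) toward the reward plus the discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: "right" (1) in states 0-3
```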

The applications of AI in medicine are numerous and have boosted discoveries in genetics and molecular medicine. Examples include an unsupervised protein-protein interaction algorithm that allows the discovery of novel therapeutic targets using an adaptive evolutionary clustering method (8), and a computational methodology that identifies DNA variants (SNPs) to predict diseases or specific traits (9). However, it is only very recently that one specific family of algorithms has emerged as the leading AI method: artificial neural networks (ANNs) (10). This field was established before the advent of computers. In 1943, McCulloch and Pitts (11) created a computational model for neural networks based on mathematics and algorithms, but at that time technology prevented them from doing much with it. Still, McCulloch and Pitts' computational model was the first artificial neuron, and it was called threshold logic.

Neural networks are a set of algorithms designed to recognize patterns. They are inspired by the way the brain processes information. These systems learn to perform tasks by considering examples, much as we learn from experience in everyday life, generally without being programmed with any task-specific rules. This means that they require data to learn.

Neural networks consist of input and output layers, as well as one or more hidden layers made up of units (neurons) that transform the input into something the output layer can use. The connections of the biological neuron are modelled as weights: a weight increases or decreases the strength of the signal at a connection. All inputs are weighted and summed, as in a linear combination.

Different layers may perform different kinds of transformations on their inputs. Signals travel from the input layer to the output layer, and the output amplitude is controlled by an activation function. Like the functions that compute the activation, the weights can be modified by a process called learning, which is governed by a rule: it adjusts the parameters of the neural network so that a given input produces the favoured output.
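
The computations just described fit in a few lines: a weighted sum of the inputs, an activation function, and a learning rule that nudges the weights toward a favoured output. The numbers below are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # connection weights
b = 0.05                         # bias term
target, lr = 1.0, 0.1            # favoured output and learning rate

y = sigmoid(w @ x + b)           # weighted sum, then activation
# learning rule: one gradient-descent step on the squared error 0.5*(y - t)^2
w -= lr * (y - target) * y * (1.0 - y) * x
b -= lr * (y - target) * y * (1.0 - y)
print(y, sigmoid(w @ x + b))     # output before and after the update
```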

The best known of the traditional neural networks is the multi-layer perceptron (MLP), which stacks several layers of these activation functions.

Deep neural networks (DNNs) are multi-layer neural networks, and for a long time they were considered hard to train efficiently. They gained popularity only in the past few decades, when it was shown that good performance could be obtained by training them layer by layer in an unsupervised manner, followed by supervised fine-tuning. Currently, the most popular models are trained end-to-end in a supervised fashion, and the architecture most widely used in medical image analysis is the convolutional neural network (CNN) (12). CNNs contain different types of layers, such as convolution, pooling and fully connected layers, each with a specific purpose. CNNs first attracted attention for image classification and object detection, but they have also been applied to natural language processing and forecasting.
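
A hedged sketch of such an architecture is shown below, combining the layer types just named (convolution, pooling, fully connected) in PyTorch; the sizes are arbitrary and not drawn from any specific medical imaging model.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 64 * 64, n_classes)  # fully connected

    def forward(self, x):          # x: (batch, 1, 256, 256) grayscale image
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(1, 1, 256, 256))
print(logits.shape)                # torch.Size([1, 2])
```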

There are other kinds of DNNs, depending on the task the algorithm should perform. Recurrent neural networks (RNNs), for example, are used in forecasting and time-series applications, sentiment analysis and other text applications, because they exploit sequential information: unlike in traditional neural networks, the inputs are not independent of each other, and the output for each element depends on the computations performed on the preceding elements.

Feedforward neural networks have no feedback loops. Each neuron in one layer is connected to every neuron in the next layer. Hence, information is fed forward from one layer to the next in the forward direction only.

Autoencoder neural networks are used to create abstractions, called encodings. Autoencoders desensitize the network to irrelevant abstractions and sensitize it to relevant ones, seeking to model the inputs themselves; the method is therefore considered unsupervised. The learned encodings may then be fed to linear or nonlinear classifiers.
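
The sketch below shows the idea in its simplest form: an encoder that compresses the input and a decoder that reconstructs it, trained against the input itself. Dimensions and framework are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_inputs=784, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_latent), nn.ReLU())
        self.decoder = nn.Linear(n_latent, n_inputs)

    def forward(self, x):
        return self.decoder(self.encoder(x))    # reconstruct from the code

model = TinyAutoencoder()
x = torch.randn(8, 784)                         # unlabelled inputs
loss = nn.functional.mse_loss(model(x), x)      # the target is the input itself
loss.backward()
# model.encoder(x) yields the learned abstractions, which can feed a
# downstream linear or nonlinear classifier.
```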

DNNs, RNNs, CNNs and autoencoders belong to a specific field of AI: deep learning (DL). DL algorithms have been applied to fields including computer vision, speech recognition, natural language processing, bioinformatics, drug design and medical image analysis, where they have produced results comparable to, and in some cases superior to, those of human experts (13,14).


Radiomics

AI applications include different techniques, such as machine learning, neural networks and DL, and they may be applied to process medical images, typically CT and MRI series. Recently, the field of medical imaging has grown at an extraordinary rate, owing in part to an increased number of pattern recognition tools and to the growth of dataset sizes. These advances have enabled processes for the high-throughput extraction of quantitative features, which entail the conversion of images into mineable data and the subsequent analysis of these data for decision support. This whole process is called radiomics (15). It is based on the idea that biomedical images contain information reflecting the underlying pathophysiology, and that these relationships can be extracted by means of quantitative image analysis. Although radiomics is a natural extension of computer-aided diagnosis and detection (CAD) systems, it is substantially different from them. In contrast to CAD systems, which aim at giving a single answer (i.e., the presence of a lesion, or cancer), radiomics explicitly aims at extracting a large number of quantitative features from digital images, storing these data in shared databases, and subsequently mining the data for hypothesis generation, testing, or both. Although radiomics can be applied to a large number of medical problems, it is especially developed in oncology: quantitative image features based on intensity, shape, size or volume, and texture offer information on tumour phenotype and microenvironment (or habitat) that is distinct from that provided by clinical reports, laboratory test results, and genomic or proteomic assays. Such features, joined with other information, can be linked with clinical outcome data and used for evidence-based clinical decision support. The main reason why the radiomics approach has developed chiefly in cancer care is that digital radiologic images are acquired for nearly every patient affected by cancer.

There are several clinical applications of AI and radiomics in radiology. In the last decade, AI has been employed in image processing for disease detection and classification, organ segmentation, lesion segmentation and assessment of treatment response, especially in oncology (16-21). One of the most interesting advantages of AI is the possibility of creating patient-specific medicine: AI can effectively support the analysis of radiomic features and of the correlation of measures of fat and muscle in a person's body with other diseases.


Body composition analysis (BCA)

BCA may be performed using BMI, calculated as weight in kilograms divided by the square of height in meters. BMI, however, does not distinguish between muscle and AT; furthermore, a low BMI may mask excess adiposity, while a high BMI may mask low muscularity (22). Since several studies have demonstrated correlations of body composition with many diseases, such as cardiovascular pathologies, type II diabetes, liver inflammation and fibrosis, and cancer risk and survival (23-25), a direct measurement of body composition is required.

The incidence of several cancers is increased by obesity, supporting the hypothesis that biological interactions may occur between fat and the survival of tumour cells (26). There are several reasons why obesity is linked with cancer: (I) high levels of insulin and insulin-like growth factor-1 (IGF-1) may promote the proliferation of cancer cells; (II) people who are obese show chronic low-level inflammation, which is linked with an increased cancer risk; (III) fat tissue releases higher amounts of oestrogen, which is involved in the development of some cancers, such as breast and endometrial cancers; (IV) fat cells may be involved in the altered regulation of cancer cell growth (27,28). Interestingly, several works have shown that the direct measurement of adiposity can help to highlight medical conditions known to be associated with obesity in cancer patients.

The direct measurement of adiposity is useful to predict surgical complications and short-term surgical outcomes as well as survival and recurrence in colorectal (29,30), breast (31,32), and prostate (33,34) cancers. Due to the escalating incidence of cancer, a better understanding of the relationship between tumorigenesis and fat metabolism is necessary. It is important to characterize the different biologic activity of different fat compartments and to evaluate whether visceral and subcutaneous fat measurements can predict the development of medical complications related to obesity.

Several studies have found that fatty infiltration into muscle fibres and reduced muscle function and quality (defined by low skeletal muscle radiodensity) are associated with an increased risk of death (32,35). Accordingly, imaging tools able to detect severe loss of skeletal muscle mass (sarcopenia) are important for examining associations between measures of body composition and overall mortality. Moreover, the assessment of cachexia, characterized by the progressive reduction of skeletal muscle mass and by functional impairment, with or without fat mass loss, is important for the care and monitoring of cancer patients (36).


Techniques of body composition profiling

There are several non-invasive techniques for BCA. The most widely used way to estimate body fat is the BMI, i.e., body weight normalized by the square of height (kg/m2). Unfortunately, BMI and other anthropometric measures, such as waist circumference and waist-to-hip ratio, are poor predictors of individual fat distribution and metabolic risk (37). Scientists have tried to determine body composition in many different ways, with a wide range of physical principles and devices, and using different models and hypotheses [densitometry, air displacement plethysmography (ADP), bioelectrical impedance analysis (BIA), dual-energy X-ray absorptiometry]. Currently, in vivo measurements of several fat depots and of fat infiltration in organs can be made using tomographic imaging techniques, such as CT and MRI, which are today recognized as the gold standards for body composition assessment (23). CT provides a three-dimensional high-resolution image of the full body, or of specific parts of it, reconstructed from many X-ray projections of the body acquired at different angles. The technique exploits the known differences in X-ray attenuation coefficients between lean soft tissue (LT) and AT, and can therefore be used both to separate these tissues and to determine mixtures of them. The accuracy of CT in evaluating fat in skeletal muscle tissue and in the liver is high, but it drops markedly at low liver-fat fractions (<5%), which limits the use of CT to diagnose low-grade steatosis. In clinical practice, CT-based body composition assessment is in most cases performed by two-dimensional analysis of one or a few axial slices of the body, which still allows a good estimate of AT volume. The reasons for this limitation are easily perceived. First, it is important to keep the scanned part of the body as limited as possible in order to minimize the ionizing radiation dose. Second, manual segmentation of the different compartments in the images is a very labour-intensive task, which may be reduced by limiting the analysis of a full three-dimensional volume to a few slices.
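
As an illustration of the attenuation-based separation just described, the sketch below thresholds a (synthetic) axial slice with Hounsfield-unit ranges commonly reported in the body composition literature; the exact ranges and the pixel spacing are assumptions made for the example.

```python
import numpy as np

# synthetic axial slice of Hounsfield units standing in for a real CT slice
slice_hu = np.random.default_rng(0).integers(-250, 250, size=(512, 512))

# commonly reported HU ranges (assumed here): about -190 to -30 for adipose
# tissue, about -29 to +150 for skeletal muscle
fat_mask = (slice_hu >= -190) & (slice_hu <= -30)
muscle_mask = (slice_hu >= -29) & (slice_hu <= 150)

pixel_area_cm2 = 0.08 * 0.08          # hypothetical 0.8 mm isotropic pixels
print("fat area (cm^2):", fat_mask.sum() * pixel_area_cm2)
print("muscle area (cm^2):", muscle_mask.sum() * pixel_area_cm2)
```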

MRI, conversely, does not use ionizing radiation: it takes advantage of the different magnetic properties of hydrogen nuclei in water and fat to produce images of the soft tissues of the body. A very precise method to quantify AT and LT, as well as diffuse fat infiltration in other organs, uses so-called quantitative fat-water imaging. This is based on fat-water separated (Dixon) imaging, in which the difference between the magnetic resonance frequencies of protons in fat and in water is used to separate the two signals into a fat image and a water image (38). The safety of MRI allows true volumetric three-dimensional imaging in both the adult and the paediatric populations, even in healthy volunteers and infants. The drawbacks of MRI are its limited accessibility and its cost.
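
In its simplest two-point form, the Dixon separation combines the in-phase (IP) and opposed-phase (OP) signals as follows; this is the standard textbook formulation, not any vendor-specific implementation:

```latex
S_{\mathrm{IP}} = W + F, \qquad S_{\mathrm{OP}} = W - F
\quad\Longrightarrow\quad
W = \frac{S_{\mathrm{IP}} + S_{\mathrm{OP}}}{2}, \qquad
F = \frac{S_{\mathrm{IP}} - S_{\mathrm{OP}}}{2}, \qquad
\mathrm{FF} = \frac{F}{W + F}
```

where W and F are the water and fat signals, and FF is the resulting fat fraction used to quantify AT and fat infiltration.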

In either case (CT and MRI), the analysis of these images remains a very time-consuming task for radiologists and requires expertise to be performed manually; hence, advanced techniques that implement automatic detection and segmentation of body composition could be useful in clinical practice.

The purpose of this review is to describe the current status of the imaging techniques used to study measures of body composition, including adiposity and muscle quality, in obese and high-risk cancer patients, and to introduce AI and radiomics applications to body composition management, leading towards patient-specific treatment of several diseases.


AI and radiomics in BCA

Literature search strategy

For the purposes of the present review, a literature search was performed on PubMed, Google Scholar and Embase for publications between January 2016 and October 2019. The key terms were the following: "medical images" and "BMI"; "CT" and "BMI"; "MRI" and "BMI"; "CT" and "adipose tissue"; "CT" and "muscle mass"; "MRI" and "adipose tissue"; "MRI" and "muscle mass"; "Body Composition Assessment CT MRI"; "Body Composition Assessment Radiomics"; "Body composition Artificial Intelligence"; "Artificial Intelligence and visceral adiposity"; "Artificial Intelligence" and "BMI". Retrieved publications were manually selected based on their relevance to our objective.

About 213 articles or reviews were found. Among these review articles, randomized trials, single-centre studies and multi-centric studies, only those published since 2015 were considered, in order to obtain a survey of the literature on the subject as comprehensive and up-to-date as possible.

In the end, 7 scientific papers met our criteria and were considered for this review.

AI and radiomics state-of-the-art overview

Besides personalized medicine, there is growing interest in BCA for the assessment of disease and treatment effects on patients in various pathologies. In body composition assessment, AI and radiomics could be powerful tools.

The practice of radiomics involves discrete steps: (I) acquiring the images; (II) identifying the volumes of interest; (III) segmenting the volumes (i.e., delineating their borders manually or with computer-assisted contouring); (IV) extracting quantitative features from the volumes; (V) using the latter to populate a searchable database; (VI) mining these data to develop classifier models that predict outcomes, either alone or in combination with additional information (genomic, proteomic and other clinical data) (15). For steps (II), (III) and (VI), resorting to AI proves crucial. In recent years, many computer science teams have worked on the use of AI algorithms on medical images. Generally, the measurement of visceral adipose tissue (VAT) using imaging techniques is performed in two steps: finding the image/slice of interest and assessing the body composition (39-42). Slices at the level of the third and/or fourth lumbar vertebra are the most commonly selected for measuring VAT (43,44). ANNs are usually used to perform these operations. Such networks must undergo supervised training on specific datasets from which the algorithm can learn. Next, ANNs are tested against a validation dataset to tune the architecture parameters and evaluate whether the model fits new data. Finally, an unseen and unlabelled dataset is used to assess the network's performance.
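
For step (IV), the open-source pyradiomics package is a common choice. The sketch below shows a typical extraction call; the file names are hypothetical placeholders for a CT series and a segmentation mask produced in steps (II)-(III).

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('firstorder')  # intensity statistics
extractor.enableFeatureClassByName('shape')       # size and shape descriptors
extractor.enableFeatureClassByName('glcm')        # texture features

# 'abdominal_ct.nrrd' and 'vat_mask.nrrd' are hypothetical file names for an
# image volume and its segmentation mask
features = extractor.execute('abdominal_ct.nrrd', 'vat_mask.nrrd')
for name, value in features.items():
    if not name.startswith('diagnostics'):
        print(name, value)  # one row per feature for a searchable database (V)
```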

Belharbi et al. (41) presented a pipeline that applies a CNN to spot a particular slice in a CT scan. The team tackled this task as a regression problem in which the position (height) of the slice is estimated. In their approach, the CT scan is converted into a 2D input using the maximum intensity projection (MIP) method. This step is fundamental to overcome the large computing and memory resources required by CNNs, without loss of important information. Moreover, the input size of a CNN model affects its number of parameters; hence, the direct use of 3D images without dimension reduction is not efficient unless a large training dataset is available. In the learning phase, stochastic gradient descent was used to optimize the parameters. A large number of training samples was required; to overcome the lack of training data, transfer learning was adopted (45,46). CNN architectures previously trained for a different task on computer vision problems, where large datasets exist, were selected: AlexNet, VGG16, VGG19 and GoogleNet. Moreover, a homemade CNN architecture was designed and trained from scratch. Only the weights of the convolutional layers were set using transfer learning; the fully connected layers were initialized randomly. In the decision phase, a sliding-window procedure over the MIP images is performed. The dataset used consists of six hundred forty-two CT exams acquired on four different scanners with different acquisition protocols. All the models were evaluated in a cross-validation procedure by computing the absolute difference between the predicted value and the target as the prediction error. Belharbi's results were compared with a random forest (RF) regression used as an alternative to the CNN; they showed that RF does not perform well on this task. Moreover, the results confirmed the benefit of transfer learning and the value of using DL algorithms in medical problems.
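
The MIP step at the heart of this dimension reduction is a one-line operation per view: for each position in the projection plane, only the maximum intensity along the collapsed axis is kept. A minimal numpy sketch, with a random volume standing in for a real CT scan:

```python
import numpy as np

volume = np.random.default_rng(0).normal(size=(300, 512, 512))  # (z, y, x)

frontal_mip = volume.max(axis=1)    # project along y: a (z, x) coronal view
sagittal_mip = volume.max(axis=2)   # project along x: a (z, y) sagittal view
print(frontal_mip.shape)            # (300, 512), a 2D input for the CNN
```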

Some groups have proposed atlas-based methods for the selection of the L3 slice in a CT scan. Chung et al. (47) implemented an automatic method for muscle, VAT and subcutaneous adipose tissue (SAT) segmentation. Their approach uses an implicit shape model for the analysis of muscles that is robust to topological changes. The free-form deformation (FFD) model, consisting of cubic B-spline interpolation of regular lattice points, was used to parametrize image deformation, and incremental deformations were encoded using principal component analysis (PCA). To determine the performance of their segmentation algorithm, twenty CT images of patients showing normal muscle shape were used. The results showed high agreement between manual and automated segmentations; however, the segmentation failed on images with abnormal muscle shape.

Moghbeli et al. (39) presented an unsupervised method for VAT and SAT segmentation using a self-organizing map (SOM) neural network and a new level set method, called distance regularized level set evolution (DRLSE), on axial magnetic resonance (MR) images of the abdomen. The method was tested on 23 subjects, and for each case three slices (L4-L5, L3-L4 and L2-L3) of the whole-body abdominal MRI were selected. The images were obtained on a 1.5 T Siemens scanner. Manual segmentations were performed by four experienced radiologists and used as the gold standard to evaluate the automatic segmentation method. Standard image analysis algorithms were applied; for example, signal intensity inhomogeneity correction (48,49) was used to eliminate its impact on automatic intensity-based tissue classification. As stated above, a self-organizing map neural network, usually known as a Kohonen network (50), was used. It allows one to visualize and analyse high-dimensional data by projecting them onto a two-dimensional grid. The SOM neural network consists of two layers (input and output neurons, respectively) connected by settable weights (51). This network classifies the image pixels into four classes: air and water, bone marrow, fat, and muscle. A total abdominal tissue mask was created by selecting the second maximum value of the pixel count, and then a ROI restricted to the abdomen was created to decrease the computational cost. For the creation of the SAT mask, a higher-dimensional level set function, DRLSE (52,53), was used. Their method was found to be precise and robust: according to their results, automatic and manual segmentations of VAT and SAT are significantly correlated.
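
A minimal sketch of a single Kohonen/SOM training step is given below to make the mechanism concrete: the grid node closest to the input "wins", and it and its neighbours are pulled toward the sample. The grid size, learning rate and neighbourhood width are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, n_features = 10, 10, 3
weights = rng.random((grid_h, grid_w, n_features))  # settable connection weights

def som_step(x, weights, lr=0.5, sigma=1.5):
    # 1. find the best-matching unit (BMU) for the input sample x
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    # 2. pull every node toward x, weighted by a Gaussian neighbourhood of the BMU
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)
    return weights

for x in rng.random((1000, n_features)):  # e.g., per-pixel intensity vectors
    weights = som_step(x, weights)
```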

In their work, Bridge et al. (40) describe a fully automated analysis of body composition from CT images in a two-step process. The method uses a DenseNet architecture (54) to select the CT slice at the third lumbar level (L3) and a U-net architecture (55) to segment muscle, subcutaneous fat and visceral fat. The selection of the L3 slice was posed as a slice-wise regression problem, which allowed the use of a 2D network model (DenseNet) and reduced the network complexity. Single CT slices, downsampled to 256×256 images, were given to the model as input. The DenseNet model learns to predict the z-coordinate of the L3 slice (zL3) along the craniocaudal axis, with the mean absolute error between the prediction and the regression target as the loss. After the model selected the L3 slice, the full 512×512 image was passed to the U-net model for the segmentation needed in BCA. Batch normalization was added to the model before each activation, and a soft Dice maximization loss (56) was chosen as the loss function of the network to overcome the class imbalance between the three tissue classes and the background class. They experimented with different architecture parameters and selected the configuration with the highest performance. For all the models, the Adam optimizer and data augmentation were applied. Independent cohorts of 595 CT scans (dataset A) and 534 CT scans (dataset B) were used to train and test the algorithms, and Dice scores and correlation coefficients were computed for evaluation. Their results show that fully automated BCA is feasible.
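
A generic soft Dice loss of the kind referenced above (56) can be written as in the sketch below; this is a common formulation, not Bridge et al.'s exact code.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """probs, target: (batch, n_classes, H, W); target is one-hot encoded."""
    dims = (0, 2, 3)                    # sum over batch and spatial dimensions
    intersection = (probs * target).sum(dims)
    denominator = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()            # minimizing this maximizes soft Dice

# three tissue classes plus background, as in the study
probs = torch.softmax(torch.randn(2, 4, 512, 512), dim=1)
target = torch.nn.functional.one_hot(
    torch.randint(0, 4, (2, 512, 512)), num_classes=4
).permute(0, 3, 1, 2).float()
print(soft_dice_loss(probs, target))
```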

Lee et al. (42) demonstrated automated segmentation and quantification of skeletal muscle cross-sectional area (CSA) using a fully convolutional network (FCN) (57) initialized with the weights of an ImageNet pre-trained model. Their method includes grayscale image conversion; therefore, the effects of window and bit settings on segmentation performance were evaluated using different window configurations and bit depths per pixel. Lee et al. used transfer learning to allow fast convergence of the loss function when only a small training dataset is available. The model was trained using the stochastic gradient descent (SGD) algorithm with a fixed, very small learning rate and weight decay (57). The algorithm was applied to four hundred axial CT slices taken at L3. Its performance was evaluated using the Dice similarity coefficient (DSC), which measures the degree of overlap between the ground truth segmentation mask and the FCN-derived mask, and the CSA error, which measures the percentage difference in area between the two masks. The accuracy increased as features from different layers were fused, and it was markedly better than that of a traditional thresholding method without human tuning.
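
Both evaluation metrics are straightforward to compute on binary masks, as in the sketch below; the random masks stand in for a ground truth and an FCN prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.random((512, 512)) > 0.5   # stand-in for the manual mask
predicted = rng.random((512, 512)) > 0.5      # stand-in for the FCN mask

def dsc(a, b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def csa_error(pred, truth):
    # percentage difference in cross-sectional area (pixel units cancel out)
    return 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()

print(dsc(ground_truth, predicted), csa_error(predicted, ground_truth))
```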

Zgallai et al. (58) focused on quantifying SAT and VAT from MR images of obese patients before and after fasting, using DL and CNNs. They used three hundred thirty images, segmented manually by experts and semi-automatically using professional software. The U-net CNN (55) and the VGG-16 CNN were used to perform the automated segmentation. All the images were pre-processed using the Matlab Image Processing Toolbox and OpenCV with Python to eliminate artefacts and noise and to create a region of interest (ROI). Data augmentation methods were applied to increase the number of available images. Running the DL CNN, the SAT surface area was recognized and quantified; within this area, all non-VAT parts were excluded, and another run of the CNN was performed on the remaining ones to quantify the VAT. Their process is fully automatic; therefore, their results are completely reproducible and independent of clinical expertise. Initial applications show that this method produces better results than the semi-automated software.

Liu et al. (59) aimed at automatically quantifying four tissue components in the body torso (SAT, VAT, bone tissue and muscle tissue) using low-dose CT images. Their work proceeds in three phases. First, they construct a fuzzy anatomy model in which all anatomic organs, tissue regions and interfaces are organized hierarchically; the model includes a fuzzy object model for every object/organ in the image and the pairwise relationships between organs. The second phase involves finding the location of each object in any given whole-body image according to the hierarchy set in the model. The result of these first two phases is a fuzzy mask for each object that includes its approximate position, size and shape. In the third phase, the body components are quantified using the localization information and the intensity distributions of the four tissue components, already encoded in the model. The low-dose CT images of 38 patients were used in a 5-fold cross-validation strategy. Their method quantifies the four tissues with less than 5% overall error, and less than one minute is required to process one patient image.

Some teams have developed automated assessment of body composition for thigh MR images. For example, Yang et al. (60) developed and validated a 3D automatic segmentation algorithm based on machine learning for thigh composition segmentation on Dixon MR images. A three-class classification method was implemented and applied to datasets of (I) four contrast images, (II) water and fat images, and (III) unsuppressed images, acquired from one hundred ninety subjects. The most accurate segmentation was achieved by applying the algorithm to the first dataset.

These works address the first steps of the radiomics pipeline, excluding the extraction and analysis of radiomic features. To date, no works have applied the entire radiomics workflow to investigate body composition.


Discussion and conclusions

These studies show that automatic muscle and fat segmentation, though challenging, can be tackled with AI methods. Machine learning and DL techniques have been shown to outperform semi-automated algorithms and, sometimes, even manual segmentation. They also permit better integration with other quantitative clinical measures, such as radiomic features and genomic data. Since radiomics seems to offer an almost limitless source of imaging biomarkers that can be linked with genomics (DNA), transcriptomics (RNA), proteomics (proteins) and metabolomics (metabolites) data, we may well suppose that, after the identification in a CT or MRI series of VAT, abdominal subcutaneous adipose tissue volume (ASAT), liver-fat fraction and muscle fat infiltration (MFI), a radiomic analysis could be performed on the CT and MR images. Just as in cancer treatment, the extracted features could be correlated with clinical outcome data and used in metabolism assessment and as quantitative predictors of the risk of metabolic disorders and of cancer itself.

AI algorithms can also improve the reproducibility and standardization of medical procedures. A limitation of these methods is the need for a large set of data to train the algorithms, which is not always available, especially in medicine. Medical records are currently, for the most part, not structured in standard forms; hence, they have to go through a long and costly process before they can actually be used in the training phase of automatic algorithms. Clinicians and laboratory technicians need to collaborate on the implementation of electronic health records (61). Medical datasets should be shared by physicians via open-source platforms and made automatically available for further scientific research. However, the legal and ethical concerns about such data platforms are enormous, even more so when omics data are considered. Together with other partners, our group is working on the implementation of a European platform that can contain and integrate medical data, from imaging and clinical data to radiomics features. If personalized medicine is the final goal, such datasets are essential.

Lately, there has been huge excitement about the application of AI and radiomic techniques to images, and this has led to increasing investment in start-up companies dealing with these issues. As AI spreads, the interest in ever larger datasets and more types of data will increase. This could lead to even greater accuracy in early detection and targeted prevention in medicine. Certainly, the correlation between disease and body composition should be investigated more deeply.


Acknowledgments

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the Guest Editors (Giuseppe Guglielmi and Alberto Bazzocchi) for the special issue “Body Composition Imaging” published in Quantitative Imaging in Medicine and Surgery. The article was sent for external peer review organized by the Guest Editors and the editorial office.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/qims.2020.03.10). The special issue “Body Composition Imaging” was commissioned by the editorial office without any funding or sponsorship. GG served as the unpaid Guest Editor of the special issue and serves as an unpaid editorial board member of Quantitative Imaging in Medicine and Surgery. The authors have no other conflicts of interest to declare.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Marmett B, Carvalho R, Fortes M, Cazella S. Artificial Intelligence technologies to manage obesity. Vittalle - Rev Ciências Da Saúde 2018;30:73-9.
  2. Haugeland J. Artificial Intelligence: The Very Idea. Cambridge, MA, USA: Massachusetts Institute of Technology, 1985.
  3. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2009.
  4. McCarthy J, Minsky ML, Rochester N, Shannon CE. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955 (outline of the summer conference, actually held in 1956). AI Mag 2006;27:12-4.
  5. O’Leary DE. Artificial Intelligence and Big Data. IEEE Intell Syst 2013;28:96-99. [Crossref]
  6. Samuel AL. Some Studies in Machine Learning Using the Game of Checkers. IBM J Res Dev 1959;3:210-29. [Crossref]
  7. Kaelbling LP, Littman ML, Moore AW. Reinforcement Learning: A Survey. J Artif Int Res 1996;4:237-85. [Crossref]
  8. Theofilatos K, Pavlopoulou N, Papasavvas C, Likothanassis S, Dimitrakopoulos C, Georgopoulos E, Moschopoulos C, Mavroudi S. Predicting protein complexes from weighted protein-protein interaction graphs with a novel unsupervised methodology: Evolutionary enhanced Markov clustering. Artif Intell Med 2015;63:181-9. [Crossref] [PubMed]
  9. Rapakoulia T, Theofilatos K, Kleftogiannis D, Likothanasis S, Tsakalidis A, Mavroudi S. EnsembleGASVR: a novel ensemble method for classifying missense single nucleotide polymorphisms. Bioinformatics 2014;30:2324-33. [Crossref] [PubMed]
  10. Miller DD, Brown EW. Artificial Intelligence in Medical Practice: The Question to the Answer? Am J Med 2018;131:129-33. [Crossref] [PubMed]
  11. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943;5:115-33. [Crossref]
  12. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60-88. [Crossref] [PubMed]
  13. Ciregan D, Meier U, Schmidhuber J. Multi-column deep neural networks for image classification. 2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012:3642-9.
  14. Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ. editors. Advances in Neural Information Processing Systems 25. Curran Associates, Inc., 2012:1097-105.
  15. Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016;278:563-77. [Crossref] [PubMed]
  16. Halabi SS, Prevedello LM, Kalpathy-Cramer J, Mamonov AB, Bilbily A, Cicero M, Pan I, Pereira LA, Sousa RT, Abdala N, Kitamura FC, Thodberg HH, Chen L, Shih G, Andriole K, Kohli MD, Erickson BJ, Flanders AE. The RSNA Pediatric Bone Age Machine Learning Challenge. Radiology 2019;290:498-503. [Crossref] [PubMed]
  17. Sadoughi F, Kazemy Z, Hamedan F, Owji L, Rahmanikatigari M, Azadboni TT. Artificial intelligence methods for the diagnosis of breast cancer by image processing: a review. Breast Cancer 2018;10:219-30. [PubMed]
  18. Ha R, Chin C, Karcich J, Liu MZ, Chang P, Mutasa S, Pascual Van Sant E, Wynn RT, Connolly E, Jambawalikar S. Prior to Initiation of Chemotherapy, Can We Predict Breast Tumor Response? Deep Learning Convolutional Neural Networks Approach Using a Breast MRI Tumor Dataset. J Digit Imaging 2019;32:693-701. [Crossref] [PubMed]
  19. Streba CT, Ionescu M, Gheonea DI, Sandulescu L, Ciurea T, Saftoiu A, Vere CC, Rogoveanu I. Contrast-enhanced ultrasonography parameters in neural network diagnosis of liver tumors. World J Gastroenterol 2012;18:4427-34. [Crossref] [PubMed]
  20. Ben Naceur M, Saouli R, Akil M, Kachouri R. Fully Automatic Brain Tumor Segmentation using End-To-End Incremental Deep Neural Networks in MRI images. Comput Methods Programs Biomed 2018;166:39-49. [Crossref] [PubMed]
  21. Yuan Y, Qin W, Buyyounouski M, Ibragimov B, Hancock S, Han B, Xing L. Prostate cancer classification with multiparametric MRI transfer learning model. Med Phys 2019;46:756-65. [Crossref] [PubMed]
  22. Greenlee H, Unger JM, LeBlanc M, Ramsey S, Hershman DL. Association between Body Mass Index and Cancer Survival in a Pooled Analysis of 22 Clinical Trials. Cancer Epidemiol Biomarkers Prev 2017;26:21-9. [Crossref] [PubMed]
  23. Borga M, West J, Bell JD, Harvey NC, Romu T, Heymsfield SB, Dahlqvist Leinhard O. Advanced body composition assessment: from body mass index to body composition profiling. J Investig Med 2018;66:1-9. [Crossref] [PubMed]
  24. Linge J, Borga M, West J, Tuthill T, Miller MR, Dumitriu A, Thomas EL, Romu T, Tunón P, Bell JD, Dahlqvist Leinhard O. Body Composition Profiling in the UK Biobank Imaging Study: Body Composition Profiling in UK Biobank. Obesity (Silver Spring) 2018;26:1785-95. [Crossref] [PubMed]
  25. Shuster A, Patlas M, Pinthus JH. The clinical importance of visceral adiposity: A critical review of methods for visceral adipose tissue analysis. Br J Radiol 2012;85:1-10. [Crossref] [PubMed]
  26. Ligibel JA, Wollins D. American Society of Clinical Oncology Obesity Initiative: Rationale, Progress, and Future Directions. J Clin Oncol 2016;34:4256-60. [Crossref] [PubMed]
  27. Balentine CJ, Marshall C, Robinson C, Wilks J, Anaya D, Albo D, Berger DH. Validating Quantitative Obesity Measurements in Colorectal Cancer Patients. J Surg Res 2010;164:18-22. [Crossref] [PubMed]
  28. Ibrahim MM. Subcutaneous and visceral adipose tissue: structural and functional differences. Obes Rev 2010;11:11-8. [Crossref] [PubMed]
  29. Akay S, Urkan M, Balyemez U, Ersen M, Tasar M. Is visceral obesity associated with colorectal cancer? The first volumetric study using all CT slices. Diagn Interv Radiol 2019;25:338-45. [Crossref] [PubMed]
  30. Oh TH, Byeon JS, Myung SJ, Yang SK, Choi KS, Chung JW, Kim B, Lee D, Byun JH, Jang SJ, Kim JH. Visceral obesity as a risk factor for colorectal neoplasm. J Gastroenterol Hepatol 2008;23:411-7. [Crossref] [PubMed]
  31. Schapira D V, Clark RA, Wolff PA, Jarrett AR, Kumar NB, Aziz NM. Visceral obesity and breast cancer risk. Cancer 1994;74:632-9. [Crossref] [PubMed]
  32. Caan BJ, Cespedes Feliciano EM, Prado CM, Alexeeff S, Kroenke CH, Bradshaw P, Quesenberry CP, Weltzien EK, Castillo AL, Olobatuyi TA, Chen WY. Association of Muscle and Adiposity Measured by Computed Tomography With Survival in Patients With Nonmetastatic Breast Cancer. Jama Oncol 2018;4:798-804. [Crossref] [PubMed]
  33. von Hafe P, Pina F, Perez A, Tavares M, Barros H. Visceral fat accumulation as a risk factor for prostate cancer. Obes Res 2004;12:1930-5. [Crossref] [PubMed]
  34. Zhang Q, Sun L, Qi J, Yang Z, Huang T, Huo R. Periprostatic adiposity measured on magnetic resonance imaging correlates with prostate cancer aggressiveness. UROL J 2014;11:1793-9. [PubMed]
  35. Rier HN, Jager A, Sleijfer S, van Rosmalen J, Kock MCJM, Levin M-D. Low muscle attenuation is a prognostic factor for survival in metastatic breast cancer patients treated with first line palliative chemotherapy. Breast 2017;31:9-15. [Crossref] [PubMed]
  36. Fearon K, Strasser F, Anker SD, Bosaeus I, Bruera E, Fainsinger RL, Jatoi A, Loprinzi C, MacDonald N, Mantovani G, Davis M, Muscaritoli M, Ottery F, Radbruch L, Ravasco P, Walsh D, Wilcock A, Kaasa S, Baracos VE. Definition and classification of cancer cachexia: an international consensus. Lancet Oncol 2011;12:489-95. [Crossref] [PubMed]
  37. Nuttall FQ. Body Mass Index: Obesity, BMI, and Health: A Critical Review. Nutr Today 2015;50:117-28. [Crossref] [PubMed]
  38. Dixon WT. Simple proton spectroscopic imaging. Radiology 1984;153:189-94. [Crossref] [PubMed]
  39. Moghbeli F, Langarizadeh M, Younesi A, Radmard AR, Rahmanian MS, Orooji A. A Method for Body Fat Composition Analysis in Abdominal Magnetic Resonance Images Via Self-Organizing Map Neural Network. Iran J Med Phys 2018;15:108-116.
  40. Bridge C, Rosenthal M, Wright B, et al. Fully-Automated Analysis of Body Composition from CT in Cancer Patients Using Convolutional Neural Networks. 2018.
  41. Belharbi S, Chatelain C, Herault R, Adam S, Thureau S, Chastan M, Modzelewski R. Spotting L3 slice in CT scans using deep convolutional network and transfer learning. Comput Biol Med 2017;87:95-103. [Crossref] [PubMed]
  42. Lee H, Troschel FM, Tajmir S, Fuchs G, Mario J, Fintelmann FJ, Do S. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. J Digit Imaging 2017;30:487-498. [Crossref] [PubMed]
  43. Mitsiopoulos N, Baumgartner RN, Heymsfield SB, Lyons W, Gallagher D, Ross R. Cadaver validation of skeletal muscle measurement by magnetic resonance imaging and computerized tomography. J Appl Physiol 1998;85:115-22. [Crossref] [PubMed]
  44. Shen W, Punyanitya M, Wang Z, Gallagher D, St-Onge MP, Albu J, Heymsfield SB, Heshka S. Total body skeletal muscle and adipose tissue volumes: estimation from a single abdominal cross-sectional image. J Appl Physiol 2004;97:2333-8. [Crossref] [PubMed]
  45. Bar Y, Diamant I, Wolf L, Lieberman S, Konen E, Greenspan H. Chest pathology detection using deep learning with non-medical training. Int Symp Biomed Imaging 2015;2015:294-7.
  46. Shin H-C, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging 2016;35:1285-98. [Crossref] [PubMed]
  47. Chung H, Cobzas D, Birdsell L, Lieffers J, Baracos V. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis. Proc SPIE 2009;7261. [Crossref]
  48. Sled JG, Zijdenbos AP, Evans AC. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans Med Imaging 1998;17:87-97. [Crossref] [PubMed]
  49. Langarizadeh M, Maghsoudi B, Nilforushan N. Decision Support System for Age-Related Macular Degeneration Using Convolutional Neural Networks. Iran J Med Phys 2017;14:141-8.
  50. Kohonen T. Self-organized formation of topologically correct feature maps. Biol Cybern 1982;43:59-69. [Crossref]
  51. Demirhan A, Güler İ. Combining Stationary Wavelet Transform and Self-Organizing Maps for Brain MR Image Segmentation. Eng Appl Artif Intell 2011;24:358-67. [Crossref]
  52. Caselles V, Catté F, Coll T, Dibos F. A geometric model for active contours in image processing. Numer Math 1993;66:1-31. [Crossref]
  53. Malladi R, Sethian JA, Vemuri BC. Shape modeling with front propagation: a level set approach. IEEE Trans Pattern Anal Mach Intell 1995;17:158-75. [Crossref]
  54. Huang G, Liu Z, Weinberger KQ. Densely Connected Convolutional Networks. CoRR 2016;abs/1608.06993.
  55. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. Lect Notes Comput Sci 2015;9351:234-41. [Crossref]
  56. Milletari F, Navab N, Ahmadi SA. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. 2016 4th Int Conf 3d Vision, 3DV 2016 2016:565-71.
  57. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. 2015 IEEE Conference on Computer Vision and Pattern Recognition, 2015:3431-40.
  58. Zgallai W, Brown T, Murtada A, Ali S, Haji A, Khalil K, Omran M, Dalah E, Faris MAE, Obaideen AM. The Application of Deep Learning to Quantify SAT/VAT in Human Abdominal Area. 2019 Advances in Science and Engineering Technology International Conferences, 2019:1-5.
  59. Liu T, Udupa JK, Miao Q, Tong Y, Torigian DA. Quantification of body-torso-wide tissue composition on low-dose CT images via automatic anatomy recognition. Med Phys 2019;46:1272-85. [PubMed]
  60. Yang YX, Chong MS, Tay L, Yew S, Yeo A, Tan CH. Automated assessment of thigh composition using machine learning for Dixon magnetic resonance images. Magma 2016;29:723-31. [Crossref] [PubMed]
  61. Castaneda C, Nalley K, Mannion C, Bhattacharyya P, Blake P, Pecora A, Goy A, Suh KS. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. J Clin Bioinforma 2015;5:4. [Crossref] [PubMed]
Cite this article as: Attanasio S, Forte SM, Restante G, Gabelloni M, Guglielmi G, Neri E. Artificial intelligence, radiomics and other horizons in body composition assessment. Quant Imaging Med Surg 2020;10(8):1650-1660. doi: 10.21037/qims.2020.03.10
