Deep Learning in Medical Image Analysis



Figure 9

(a) Shared feature learning from patches of different modalities, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), with a discriminative multimodal deep Boltzmann machine (DBM). The yellow circles represent the input patches, and the blue circles show joint feature representation. (b,c) Visualization of the learned weights in Gaussian restricted Boltzmann machines (RBMs) (bottom) and those of the first hidden layer (top) from MRI and PET pathways in a multimodal DBM (29). Each column, with 11 patches in the upper block and the lower block, composes a three-dimensional patch.
Plis et al. validated their deep learning approach on a schizophrenia data set and a Huntington disease data set. Inspired by the work of Plis et al., Kim et al. (121) and Suk et al. (33) independently studied applications of deep learning for fMRI-based brain disease diagnosis. Kim et al. used an SAE to represent whole-brain resting-state functional connectivity patterns for the diagnosis of schizophrenia and the identification of aberrant functional connectivity patterns associated with the disease. They first computed Pearson's correlation coefficients between pairs of 116 regions on the basis of their regional mean blood oxygenation level–dependent (BOLD) signals. After applying Fisher's r-to-z transformation to the coefficients, followed by Gaussian normalization, they fed the resulting pseudo-z-scores into their SAE (this feature construction is sketched below).

More recently, Suk et al. (33) proposed a novel framework that fuses deep learning with a hidden Markov model (HMM) for functional dynamics estimation in resting-state fMRI and successfully used this framework for the diagnosis of mild cognitive impairment (MCI). Specifically, they devised a deep auto-encoder (DAE) by stacking multiple RBMs in order to discover hierarchical nonlinear functional relations among brain regions. Figure 10 shows examples of the learned connection weights in the form of functional networks. This DAE was used to transform the regional mean BOLD signals into an embedding space whose bases can be understood as complex functional networks. After embedding the functional signals, Suk et al. used the HMM to estimate the dynamic characteristics of functional networks inherent in resting-state fMRI via internal states, which can be inferred statistically from observations. By building a generative model with an HMM, they estimated the likelihood that the input resting-state fMRI features belong to each class (i.e., MCI or normal healthy control), then used this information to determine the clinical label of a test subject.
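To make Kim et al.'s feature construction concrete, the following Python/NumPy sketch builds the connectivity feature vector from regional mean BOLD signals via Pearson correlation, Fisher's r-to-z transformation, and Gaussian normalization. The function name, array shapes, and the synthetic example are illustrative assumptions, not details from Reference 121.

import numpy as np

def connectivity_features(bold, eps=1e-6):
    # bold: array of shape (n_timepoints, n_regions), e.g., mean BOLD
    # signals of 116 atlas regions.
    r = np.corrcoef(bold.T)              # Pearson correlation, (n_regions, n_regions)
    r = np.clip(r, -1 + eps, 1 - eps)    # keep the Fisher transform finite
    z = np.arctanh(r)                    # Fisher r-to-z transformation
    iu = np.triu_indices_from(z, k=1)    # symmetric matrix: keep the upper triangle
    feats = z[iu]
    return (feats - feats.mean()) / feats.std()   # Gaussian normalization

# Example: 116 regions, 150 time points of synthetic data -> 6,670 features
x = connectivity_features(np.random.randn(150, 116))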
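Suk et al.'s generative classification step can likewise be sketched with an off-the-shelf HMM library. In the example below, rows of each training matrix are assumed to be DAE-embedded fMRI frames; the embedding dimension, number of hidden states, and synthetic data are illustrative assumptions, and hmmlearn stands in for whatever HMM implementation was actually used. One Gaussian HMM is fitted per class, and a test sequence receives the label of the model under which its log-likelihood is higher.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_class_hmm(X, lengths, n_states=5):
    # X stacks the DAE-embedded sequences of one class; `lengths` gives
    # the number of frames per subject so hmmlearn can split them.
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    hmm.fit(X, lengths)
    return hmm

def classify(seq, hmm_mci, hmm_nc):
    # score() returns the log-likelihood of the sequence under each model.
    return "MCI" if hmm_mci.score(seq) > hmm_nc.score(seq) else "NC"

rng = np.random.default_rng(0)
T, d = 120, 32   # frames per subject and embedding size (both assumed)
hmm_mci = fit_class_hmm(rng.normal(0.3, 1.0, (5 * T, d)), [T] * 5)
hmm_nc = fit_class_hmm(rng.normal(0.0, 1.0, (5 * T, d)), [T] * 5)
print(classify(rng.normal(0.3, 1.0, (T, d)), hmm_mci, hmm_nc))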





Figure 10
Functional networks learned from the first hidden layer of the deep auto-encoder from Reference 33. The functional networks in the left column correspond to (from top to bottom) the default-mode network, executive attention network, visual network, subcortical regions, and cerebellum. The functional networks in the right column show the relations among regions of different networks, cortices, and cerebellum.
Other studies have used CNNs to diagnose brain disease. Brosch et al. (47) performed manifold learning from downsampled MR images by using a deep generative model composed of three convolutional RBMs and two RBM layers. Because computing the convolutions was the computational bottleneck of the training algorithm, they performed training in the frequency domain, where each convolution reduces to a pointwise multiplication. By generating volume samples from their deep generative model, they validated the effectiveness of deep learning for manifold embedding with no explicitly defined similarity measure or proximity graph.

Li et al. (44) constructed a three-layer CNN with two convolutional layers and one fully connected layer. To integrate multimodal neuroimaging data, they designed a 3D CNN architecture that took a volumetric MRI patch as input and produced the corresponding volumetric PET patch as output. When trained end to end on subjects with both data modalities, the network captured the nonlinear relationship between the two modalities. These experiments demonstrated that PET data could be predicted from the input MRI data, and the authors quantitatively evaluated the proposed data completion method by comparing the classification results obtained with the predicted and actual PET images.
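The speedup exploited by Brosch et al. follows from the convolution theorem: convolution in the spatial domain becomes pointwise multiplication in the frequency domain, reducing the per-volume cost from O(N·K) for a K-voxel kernel to roughly O(N log N). The NumPy sketch below illustrates the idea for a 3D volume; it is not their training code, and FFT-based convolution is circular unless the inputs are zero-padded.

import numpy as np

def fft_convolve3d(volume, kernel):
    # Convolution theorem: conv(a, b) = IFFT(FFT(a) * FFT(b)).
    K = np.fft.fftn(kernel, s=volume.shape)   # zero-pad kernel to volume size
    V = np.fft.fftn(volume)
    return np.real(np.fft.ifftn(V * K))       # circular convolution of the inputs

out = fft_convolve3d(np.random.randn(64, 64, 64), np.random.randn(5, 5, 5))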
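Li et al.'s cross-modality mapping can be sketched as a small 3D CNN that regresses a PET patch from an MRI patch and is trained end to end with a mean-squared-error loss. The PyTorch sketch below matches the stated three-layer structure (two convolutional layers plus one fully connected layer), but the patch sizes, channel counts, and optimizer settings are assumptions rather than the published configuration.

import torch
import torch.nn as nn

class MRItoPET(nn.Module):
    def __init__(self, out_size=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 10, kernel_size=5), nn.ReLU(),   # 15^3 -> 11^3
            nn.Conv3d(10, 10, kernel_size=5), nn.ReLU(),  # 11^3 -> 7^3
        )
        self.fc = nn.Linear(10 * 7 ** 3, out_size ** 3)   # fully connected regression
        self.out_size = out_size

    def forward(self, mri):                               # mri: (B, 1, 15, 15, 15)
        h = self.features(mri).flatten(1)
        return self.fc(h).view(-1, 1, *([self.out_size] * 3))

# One end-to-end training step on paired patches (synthetic stand-ins here)
model = MRItoPET()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mri, pet = torch.randn(8, 1, 15, 15, 15), torch.randn(8, 1, 3, 3, 3)
loss = nn.functional.mse_loss(model(mri), pet)
opt.zero_grad(); loss.backward(); opt.step()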


