Deep Learning in Medical Image Analysis



Figure 5 demonstrates the power of feature representations learned by deep learning methods. Figure 5a–c shows a typical image registration result for brain images of an elderly patient, and Figure 5d–f compares different feature representations for finding the correspondence of a template point. Clearly, the deformed subject image in Figure 5c is far from well registered with the template image in Figure 5a, especially around the ventricles. It is very difficult to learn meaningful features from such inaccurate correspondences derived from imperfect image registration, a problem from which many supervised learning methods suffer (83–85). Moreover, hand-designed features [e.g., local patches and the scale-invariant feature transform (SIFT) (86)] either detect too many noncorresponding points when the entire intensity patch is used as the feature vector (Figure 5d) or respond too weakly and thus miss the correspondence when SIFT is used (Figure 5e). Meanwhile, SAE-learned feature representations present the least confusing correspondence information for the subject point under consideration, making it easy to locate the correspondence of the template point in the subject image domain.
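To make the role of unsupervised feature learning concrete, the sketch below shows how a small stacked auto-encoder could be trained on intensity patches and used to produce per-point feature vectors of the kind compared in Figure 5. This is only an illustrative reconstruction, not the implementation used in the work reviewed here: the patch size, layer widths, end-to-end training (rather than layer-wise pre-training), and optimizer settings are all assumptions.

```python
# Illustrative stacked auto-encoder (SAE) for learning patch features.
# Patch size, layer widths, and training settings are assumptions for this
# sketch, not values reported in the paper; SAEs are often pre-trained layer
# by layer, whereas this toy version trains end to end for brevity.
import torch
import torch.nn as nn

PATCH_DIM = 15 * 15 * 15  # flattened 15x15x15 intensity patch (assumed size)

class SAE(nn.Module):
    def __init__(self, in_dim=PATCH_DIM, h1=512, h2=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, h1), nn.Sigmoid(),
            nn.Linear(h1, h2), nn.Sigmoid(),   # h2-dim code = learned feature
        )
        self.decoder = nn.Sequential(
            nn.Linear(h2, h1), nn.Sigmoid(),
            nn.Linear(h1, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def train_sae(patches, epochs=20, lr=1e-3):
    """patches: (N, PATCH_DIM) tensor of intensity patches; no labels needed."""
    model = SAE()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(patches)
        loss = nn.functional.mse_loss(recon, patches)  # reconstruction objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Usage: the encoder output is the feature vector attached to each point.
# model = train_sae(torch.rand(1024, PATCH_DIM))
# _, feats = model(torch.rand(8, PATCH_DIM))  # feats: (8, 128)
```

In practice, the trained encoder would be applied to a patch centered at every voxel of both the template and the subject, yielding the per-voxel features compared below.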

[Figure 5 panels: (a) template; (b) subject; (c) registered subject image; (d) local patches; (e) SIFT; (f) SAE]







Figure 5
Similarity maps identifying the correspondence for the point indicated by the red cross in the template (a) with regard to the subject (b), obtained with hand-designed features (d,e) and with stacked auto-encoder (SAE) features learned through unsupervised deep learning (f). The registered subject image is shown in panel c. Clearly, inaccurate registration results might undermine supervised feature representation learning, which relies strongly on the correspondences across all training images. In panels d–f, the color of each voxel indicates the likelihood of that location being selected as the correspondence. Abbreviation: SIFT, scale-invariant feature transform.
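Given per-voxel feature vectors from any of these representations (raw intensity patch, SIFT descriptor, or SAE code), a similarity map like those in panels d–f can be formed by correlating the template point's feature vector with the features at every candidate voxel of the subject. The sketch below is a generic illustration with assumed array shapes, not code from the paper.

```python
# Sketch: build a similarity map between one template point and all subject
# voxels. The feature source is a stand-in for raw patches, SIFT, or the SAE
# encoder sketched above; shapes and data are placeholders.
import numpy as np

def similarity_map(template_feat, subject_feats):
    """
    template_feat : (D,) feature vector at the template point.
    subject_feats : (X, Y, Z, D) feature vectors at every subject voxel.
    Returns an (X, Y, Z) map of normalized correlations in [-1, 1]; the peak
    marks the most likely correspondence (cf. Figure 5d-f).
    """
    t = (template_feat - template_feat.mean()) / (template_feat.std() + 1e-8)
    s = subject_feats - subject_feats.mean(axis=-1, keepdims=True)
    s /= subject_feats.std(axis=-1, keepdims=True) + 1e-8
    return np.tensordot(s, t, axes=([-1], [0])) / t.size

# Usage with random stand-in data (shapes are assumptions, not the paper's):
feats_subject = np.random.rand(32, 32, 32, 128)
feat_template = np.random.rand(128)
sim = similarity_map(feat_template, feats_subject)
best = np.unravel_index(np.argmax(sim), sim.shape)  # candidate correspondence voxel
```

A sharply peaked map (as for the SAE features in Figure 5f) indicates an unambiguous correspondence, whereas flat or multi-peaked maps (Figure 5d,e) leave the matching ambiguous.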



In order to qualitatively evaluate the registration accuracy, Wu et al. obtained deformable image registration results over various public data sets (Figure 6). Compared with the state-of-the-art registration methods of intensity-based diffeomorphic Demons (87) and feature-based


[Figure 6 panels: (a) template; (b) subject; (c) Demons; (d) HAMMER; (e) HAMMER + SAE]
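For qualitative comparisons of this kind, each subject image is warped into the template space with the deformation field estimated by the respective method and then inspected alongside the template. The resampling step might look like the minimal sketch below; the dense displacement field is a generic stand-in for the output of Demons, HAMMER, or HAMMER + SAE, and the shapes are assumptions.

```python
# Sketch: apply a dense displacement field to a 3-D subject image so it can be
# compared against the template, as in the qualitative panels of Figure 6.
# The displacement field here is a placeholder, not any particular method's output.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(subject, displacement):
    """
    subject      : (X, Y, Z) intensity volume.
    displacement : (3, X, Y, Z) voxel displacements mapping template -> subject.
    Returns the subject resampled into template space.
    """
    grid = np.indices(subject.shape).astype(np.float64)  # (3, X, Y, Z) template coords
    sample_at = grid + displacement                       # where to read in the subject
    return map_coordinates(subject, sample_at, order=1, mode='nearest')

# Usage with stand-in data:
subj = np.random.rand(64, 64, 64)
disp = np.zeros((3, 64, 64, 64))     # identity transform as a placeholder
registered = warp_image(subj, disp)  # equals subj under the identity field
```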
