Annu. Rev. Biomed. Eng. 2017.19:221-248.
Figure 5
Similarity maps identifying the correspondence for the point indicated by the red cross in the template (a) with regard to the subject (b), computed with hand-designed features (d,e) and with stacked auto-encoder (SAE) features learned through unsupervised deep learning (f). The registered subject image is shown in panel c. Clearly, inaccurate registration results might undermine supervised feature representation learning, which relies strongly on the correspondences across all training images. In panels d–f, the different colors of the voxels indicate the likelihood of each voxel being selected as the correspondence for its respective location. Abbreviation: SIFT, scale-invariant feature transform.
the subject point under consideration, making it easy to locate the correspondence of the template point in the subject image domain.
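The correspondence search described above can be sketched as a similarity map: the feature vector at the template point is compared against the feature vector at every subject voxel, and the peak of the map marks the likely correspondence. The sketch below is a minimal 2D illustration, not the authors' implementation; the cosine-similarity metric, array shapes, and function names are assumptions for demonstration.

```python
import numpy as np

def similarity_map(template_feature, subject_features):
    """Cosine similarity between one template point's feature vector
    and the feature vector at every subject voxel.

    template_feature : (d,) feature vector at the template point
                       (e.g., hand-designed or SAE-learned)
    subject_features : (H, W, d) feature vectors over a 2D subject image
    Returns an (H, W) map whose peak marks the likely correspondence.
    """
    t = template_feature / (np.linalg.norm(template_feature) + 1e-12)
    s = subject_features / (
        np.linalg.norm(subject_features, axis=-1, keepdims=True) + 1e-12
    )
    return s @ t  # dot product along the feature axis

# Toy example: plant the template feature at one subject location
# and confirm the similarity map peaks there.
rng = np.random.default_rng(0)
feat = rng.standard_normal(32)
subj = rng.standard_normal((8, 8, 32))
subj[5, 3] = feat
sim = similarity_map(feat, subj)
peak = np.unravel_index(np.argmax(sim), sim.shape)
print(peak)  # the planted location (5, 3)
```

A distinctive (e.g., learned) feature yields a single sharp peak, whereas an ambiguous feature produces many near-equal responses, which is exactly the contrast the similarity maps in Figure 5 visualize.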
To qualitatively evaluate the registration accuracy, Wu et al. obtained deformable image registration results over various public data sets (Figure 6), comparing against the state-of-the-art registration methods of intensity-based diffeomorphic Demons (87) and feature-based
Figure 6 panels: (a) Template, (b) Subject, (c) Demons, (d) HAMMER, (e) HAMMER + SAE.