Deep Learning in Medical Image Analysis


Deep Learning for Detection of Anatomical Structures




Localization and interpretation of anatomical structures in medical images are key steps in the radiological workflow. Radiologists usually accomplish these tasks by identifying certain anatomical signatures, namely image features that can distinguish one anatomical structure from others. Is it possible for a computer to automatically learn such anatomical signatures? The success of such methods essentially depends on how many anatomical signatures can be extracted by computational operations. Whereas early studies often created specific image filters to extract anatomical signatures, more recent research has revealed that deep learning–based approaches have become prevalent for two reasons: (a) deep learning technologies are now mature enough to solve real-world problems, and (b) more and more medical image data sets have become available, facilitating the exploration of big medical image data.



      1. Detection of organs and body parts. Shin et al. (51) used SAEs to separately learn visual and temporal features in order to detect multiple organs in time series of 3D dynamic contrast–enhanced MRI scans, over data sets from two studies of liver metastases and one study of kidney metastases. Unlike conventional SAEs, the SAE in this study applied a pooling operation after each layer, so that features of progressively larger input regions were essentially compressed. Because different organ classes have different properties, the authors trained multiple models, each separating one organ from all of the others in a supervised manner.
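The two ingredients described above, greedy layer-wise autoencoder training and a pooling step between layers that compresses progressively larger input regions, can be sketched as follows. This is a minimal NumPy illustration of the general idea, not Shin et al.'s implementation: the layer sizes, learning rate, tied-weight gradient, and toy data are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AELayer:
    """One autoencoder layer with tied weights: encode, then reconstruct."""
    def __init__(self, n_in, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def train_step(self, x):
        """One gradient step on the mean squared reconstruction error."""
        h = self.encode(x)
        x_hat = sigmoid(h @ self.W.T + self.c)
        err = x_hat - x
        dxhat = err * x_hat * (1 - x_hat)          # decoder pre-activation grad
        dh = (dxhat @ self.W) * h * (1 - h)        # encoder pre-activation grad
        gW = x.T @ dh + dxhat.T @ h                # tied-weight gradient
        self.W -= self.lr * gW / len(x)
        self.b -= self.lr * dh.mean(axis=0)
        self.c -= self.lr * dxhat.mean(axis=0)
        return float((err ** 2).mean())

def pool2(h):
    """Mean-pool adjacent feature pairs, halving the code dimensionality."""
    return h.reshape(len(h), -1, 2).mean(axis=2)

# Greedy layer-wise pretraining: each layer is trained to reconstruct its
# input, then pooling compresses the code before it feeds the next layer.
X = rng.random((64, 16))                       # toy stand-in for image patches
layer1 = AELayer(16, 8)
losses = [layer1.train_step(X) for _ in range(200)]
H1 = pool2(layer1.encode(X))                   # 8 -> 4 features
layer2 = AELayer(4, 4)
for _ in range(200):
    layer2.train_step(H1)
codes = pool2(layer2.encode(H1))               # final 2-feature code per sample
```

In a one-vs-rest setup like the one the authors describe, a separate supervised classifier would then be trained on `codes` for each organ class against all others.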

Roth et al. (93) presented a method for organ- or body part–specific anatomical classification of medical images using deep convolutional networks. Specifically, they trained their deep network on 4,298 axial 2D CT images to learn five parts of the body: neck, lungs, liver, pelvis, and legs. Their experiments achieved an anatomy-specific classification error of 5.9% and an average AUC (area under the receiver-operating-characteristic curve) of 0.998. However, real-world applications may require more fine-grained differentiation than these five body parts (e.g., they may need to distinguish the aortic arch from cardiac sections). To address this limitation, Yan et al. (94, 95) designed a multistage deep learning framework with a CNN to identify the body part
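The basic pipeline of such a body-part classifier, convolutional feature extraction followed by a softmax over the five classes, can be sketched in miniature. This is an illustrative NumPy forward pass only, not Roth et al.'s network: the image size, kernel bank, and weights are random stand-ins for trained parameters, and only the five class labels come from the text above.

```python
import numpy as np

rng = np.random.default_rng(1)
CLASSES = ["neck", "lungs", "liver", "pelvis", "legs"]

def conv2d(img, kernels):
    """Valid 2D convolution with a bank of kernels, followed by ReLU."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)

def maxpool2(x):
    """2x2 max pooling over each feature map."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 32x32 "axial CT slice"; real inputs would be preprocessed scan slices.
img = rng.random((32, 32))
kernels = rng.normal(0.0, 0.1, (4, 5, 5))      # 4 untrained 5x5 filters
feat = maxpool2(conv2d(img, kernels))          # shape (4, 14, 14)
W_fc = rng.normal(0.0, 0.1, (feat.size, len(CLASSES)))
probs = softmax(feat.ravel() @ W_fc)           # class probabilities
pred = CLASSES[int(probs.argmax())]
```

Training such a network would fit the kernels and the fully connected weights by minimizing cross-entropy over labeled slices; with random weights, as here, the prediction is arbitrary.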

Annu. Rev. Biomed. Eng. 2017.19:221–248.
[Figure: comparison of (a) intensity, (b) hand-designed features, and (c) SAE-learned features]

