Deep Learning in Medical Image Analysis
Under a mild assumption on the activation function, a two-layer neural network with a finite number of hidden units can approximate any continuous function (67); it is therefore regarded as a universal approximator. However, a deep architecture (i.e., one with more than two layers) can approximate complex functions to the same accuracy with far fewer units (8). It is thus possible to reduce the number of trainable parameters, enabling training with a relatively small data set (68).
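The parameter savings from depth can be illustrated with simple arithmetic. The sketch below (layer sizes are hypothetical, chosen only for illustration, not taken from the review) counts trainable weights and biases in a fully connected network and compares one wide hidden layer against three narrow ones:

```python
def param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully
    connected network with the given layer sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical sizes: 784 inputs, 10 outputs.
shallow = param_count([784, 1024, 10])        # one wide hidden layer
deep = param_count([784, 128, 128, 128, 10])  # three narrow hidden layers
print(shallow, deep)  # the deeper, narrower net uses far fewer parameters
```

Whether the deep net matches the shallow one's accuracy depends on the task; the point here is only the parameter budget.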
2 $\mathbf{W}^{(1)} = [W^{(1)}_{ji}] \in \mathbb{R}^{M \times D}$; $\mathbf{W}^{(2)} = [W^{(2)}_{kj}] \in \mathbb{R}^{K \times M}$; $\mathbf{b}^{(1)} = [b^{(1)}_{j}] \in \mathbb{R}^{M}$; $\mathbf{b}^{(2)} = [b^{(2)}_{k}] \in \mathbb{R}^{K}$.
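With these shapes, the two-layer network maps an input $\mathbf{x} \in \mathbb{R}^{D}$ to hidden activations $\mathbf{h} = \sigma(\mathbf{W}^{(1)}\mathbf{x} + \mathbf{b}^{(1)})$ and then to an output $\mathbf{o} = \mathbf{W}^{(2)}\mathbf{h} + \mathbf{b}^{(2)}$. A minimal sketch, assuming a sigmoid activation and illustrative dimensions $D$, $M$, $K$ (none of these values come from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 8, 5, 3  # input, hidden, output sizes (illustrative)

W1 = rng.standard_normal((M, D)); b1 = rng.standard_normal(M)  # W(1), b(1)
W2 = rng.standard_normal((K, M)); b2 = rng.standard_normal(K)  # W(2), b(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)  # hidden activations, shape (M,)
    return W2 @ h + b2        # output, shape (K,)

x = rng.standard_normal(D)
y = forward(x)
print(y.shape)  # (3,)
```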
[Figure 2 panels: (a) stacked auto-encoder, (b) deep belief network, (c) deep Boltzmann machine; each panel shows layers v, h(1), …, h(L−1), h(L) from bottom to top.]
Annu. Rev. Biomed. Eng. 2017.19:221–248.
Figure 2
Three representative deep models with vectorized inputs for unsupervised feature learning. The red links, whether directed or undirected, denote full connections between units in two consecutive layers; there are no connections among units within the same layer. Note how the models differ in whether their connections are directed or undirected, and in the directions of the connections, which depict conditional relationships.
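To make the building block of panel a concrete, the sketch below implements a single auto-encoder layer with tied weights: an encoder maps the visible vector v to a hidden code h(1), and a decoder reconstructs v from it. Stacking such layers yields the stacked auto-encoder shown in the figure. All names and dimensions here are illustrative assumptions, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 16, 4  # visible and hidden sizes (illustrative)
W = 0.1 * rng.standard_normal((H, D))  # tied weight matrix
b_enc = np.zeros(H)
b_dec = np.zeros(D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(v):
    return sigmoid(W @ v + b_enc)    # hidden representation h(1)

def decode(h):
    return sigmoid(W.T @ h + b_dec)  # reconstruction of v (tied weights)

def reconstruction_error(v):
    """Mean squared error between v and its reconstruction."""
    return float(np.mean((v - decode(encode(v))) ** 2))

v = rng.random(D)  # a toy visible vector
print(reconstruction_error(v))
```

Training would minimize this reconstruction error by gradient descent on W and the biases; that loop is omitted here for brevity.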