Image and Signal Processing




Michael Wasserman (msw2103) & Chad Plummer (crp2103)

CS4162 - HW 1 (Written Questions)

2/16/06
Aliasing (7 points)


1. What is aliasing? Give examples of spatial and temporal aliasing. Discuss your answer more formally in both spatial and frequency domains (but you can be brief and need not give mathematical formulae).
Aliasing is the misinterpretation of a continuous signal that has been sampled too sparsely, and it typically shows up as a loss of precision and detail. Spatial aliasing is the loss or distortion of detail that results from discretizing continuous image data at an insufficient spatial resolution. The best-known symptom is the “jaggies” problem, in which a partially covered pixel is forced to an all-or-nothing value by a display threshold, producing stair-step artifacts. In the spatial domain, the problem is that sharp features cannot be represented by the relatively low-frequency sinusoids the sampling grid can carry. In the frequency domain, it can be thought of as taking too few samples of high-frequency content and then reconstructing that content as a lower frequency that happens to agree with the data at the sample points. Temporal aliasing is the capture of fast, high-frequency motion at a lower sampling (frame) rate, so that it appears as a different motion altogether. Examples are the wagon-wheel effect (car tires that appear to spin backward) and zoetropes (fast-spinning platters with mounted figurines that appear animated under a strobe light).
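As a concrete sketch of that frequency confusion (assuming NumPy; the 9 Hz signal, 10 Hz sampling rate, and 2-second window are arbitrary illustration choices), the samples of a high-frequency sine are indistinguishable from those of a much lower-frequency one:

```python
# Minimal sketch of 1-D / temporal aliasing: a 9 Hz sine sampled at only 10 Hz
# produces the same samples as a 1 Hz sine, so reconstruction "aliases" down.
import numpy as np

fs = 10.0                         # sampling rate (Hz), below the Nyquist rate for 9 Hz
t = np.arange(0, 2, 1.0 / fs)     # 2 seconds of sample times
high = np.sin(2 * np.pi * 9 * t)  # true high-frequency signal
low = np.sin(2 * np.pi * 1 * t)   # the lower frequency it gets mistaken for

# sin(2*pi*9*t) at t = n/10 equals sin(2*pi*n*9/10) = sin(2*pi*n - 2*pi*n/10) = -sin(2*pi*n/10)
print(np.allclose(high, -low))    # True: the samples are indistinguishable (up to sign)
```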
2. Does aliasing contribute to jaggies (such as jagged lines)? Discuss your answer briefly in both spatial and frequency domains. Be brief (no need for mathematical formulae), but specifically address polygon edges and lines.
Yes, aliasing contributes to jaggies: they are the visible round-off error of forcing a partially covered pixel to a single discrete visibility value. In the spatial domain, polygon edges and lines are infinitely sharp boundaries, so no finite sampling rate can capture them exactly; the hard contrast implies very high-frequency content, which can be misinterpreted as lower-frequency sinusoids. In the frequency domain, this is high frequencies masquerading as low frequencies because the sampling rate or sampling technique is insufficient. Two or more frequencies become indistinguishable at the sample points, and when the wrong one is chosen, a significant amount of the original information is lost. Polygon edges and lines in particular create high-contrast regions that rarely align with the pixel grid, so some form of approximation is unavoidable.
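A small sketch of why edges defeat any fixed sampling rate (assuming NumPy; the scanline length and the inspected frequency bins are arbitrary choices): the spectrum of a hard step decays only slowly, so some of its energy always lies above the Nyquist limit and folds back as aliasing:

```python
# Minimal sketch: a hard (step) edge along one scanline has spectral energy
# that is still nonzero well above the low-frequency bins.
import numpy as np

n = 1024
edge = np.zeros(n)
edge[n // 2:] = 1.0               # an idealized polygon-edge profile along one scanline

spectrum = np.abs(np.fft.rfft(edge))
# Magnitudes at a low, a mid, and a high bin: they fall off only roughly like 1/frequency,
# so high-frequency content never vanishes entirely.
print(spectrum[1], spectrum[101], spectrum[401])
```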
3. What is quantization? Does it contribute to jaggies or aliasing? Again, discuss your answer in both spatial and frequency domains.
Quantization is a lack of intensity resolution, which should not be confused with the lack of spatial resolution behind aliasing and its visible jaggies. Quantization is the discretization of the per-channel value of each pixel, usually performed to save storage space. It can produce artifacts that look similar to jaggies, because hard steps (banding) appear across what were continuous gradients in the original data, but the two problems are separate. In the spatial domain, quantization is a rounding operation that converts continuous data into a piecewise-constant step function. In the frequency domain, it can be thought of as restricting the data to a discrete set of amplitudes.
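A minimal sketch of intensity quantization (assuming NumPy; 256 samples and 8 levels are arbitrary choices) shows a smooth ramp collapsing to a staircase of a few values, i.e., banding:

```python
# Minimal sketch: quantizing a smooth ramp to a handful of intensity levels
# turns it into a staircase, independent of spatial resolution.
import numpy as np

ramp = np.linspace(0.0, 1.0, 256)           # continuous-looking gradient, one sample per column
levels = 8                                  # number of allowed intensity values
quantized = np.round(ramp * (levels - 1)) / (levels - 1)

print(np.unique(quantized))                 # only 8 distinct values remain
print(np.max(np.abs(ramp - quantized)))     # worst-case rounding error, about 1/(2*(levels-1))
```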

Convolution (10 points)
1. What is (continuous) convolution in the spatial domain? Define it mathematically.
Continuous convolution in the spatial domain is the integral of the pointwise product of two functions f and g (the filter and the signal, or vice versa), with one of them shifted and reflected: h(y) = ∫(−∞, ∞) f(x) g(y − x) dx.
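The discrete analogue of this definition can be sketched with NumPy's convolve (the step pulse and the 3-tap box filter below are arbitrary illustration choices):

```python
# Minimal sketch of the discrete analogue of h(y) = integral of f(x) g(y - x) dx:
# np.convolve slides a (flipped) filter over the signal and sums the products.
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])   # a small step pulse
box = np.ones(3) / 3.0                                    # simple 3-tap averaging filter

blurred = np.convolve(signal, box, mode="same")
print(blurred)   # edges of the pulse are smeared: [0, 0.33, 0.67, 1, 0.67, 0.33, 0]
```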
2. What does it become in the frequency (Fourier) domain? Derive this relation mathematically.
In the frequency domain, convolution becomes simple pointwise multiplication of the two transforms: H(u) = F(u) G(u). This is derived from the forward Fourier transform,

F(u) = ∫(−∞, ∞) f(x) e^(−2πiux) dx.

Applying it to h(y) = ∫ f(x) g(y − x) dx gives

H(u) = ∫∫ f(x) g(y − x) e^(−2πiuy) dx dy.

Substituting z = y − x (so y = x + z and e^(−2πiuy) = e^(−2πiux) e^(−2πiuz)) separates the double integral:

H(u) = [∫ f(x) e^(−2πiux) dx] [∫ g(z) e^(−2πiuz) dz] = F(u) G(u).
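This relation can be checked numerically (a sketch assuming NumPy; the signal lengths and random seed are arbitrary): convolving in the spatial domain matches multiplying zero-padded DFTs and transforming back:

```python
# Minimal sketch verifying the convolution theorem numerically. Pointwise
# multiplication of DFTs equals circular convolution, so both signals are
# zero-padded to the full output length to avoid wrap-around.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(16)

direct = np.convolve(f, g)                      # spatial-domain convolution, length 64 + 16 - 1
n = len(direct)
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print(np.allclose(direct, via_fft))             # True: multiplication in frequency == convolution in space
```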
Low-Pass Filter (8 points)

To blur an image, we typically convolve with a low-pass filter. Consider the ideal low-pass filter, which lets through only frequencies smaller than ω (without any attenuation or filtering) but completely eliminates higher frequencies.
1. Mathematically derive the form of this ideal low-pass filter in both the spatial and frequency domains in 1D. Your answer should include a sketch or graph of the function in both domains.
In the frequency domain, the ideal low-pass filter is a box (rectangle) function: H(u) = 1 for |u| ≤ ω and H(u) = 0 otherwise, so frequencies below the cutoff pass unattenuated and everything above is eliminated. Taking the inverse Fourier transform of this box gives the spatial-domain filter, a scaled sinc function: h(x) = ∫(−ω, ω) e^(2πiux) du = sin(2πωx)/(πx) = 2ω sinc(2ωx), where sinc(x) = sin(πx)/(πx). Sketched, the frequency-domain graph is a rectangular pulse centered at zero, and the spatial-domain graph is the oscillating sinc curve with a central peak and lobes that decay slowly. The sinc is negative at some points and rings out to infinity, so in practice it must be truncated and approximated.
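A small numerical sketch of this correspondence (assuming NumPy; the grid size and cutoff bin are arbitrary choices) takes the inverse DFT of a frequency-domain box and recovers the oscillating, negative-lobed, slowly decaying sinc-like kernel:

```python
# Minimal sketch: the inverse transform of a frequency-domain box (pass the lowest
# bins, block everything else) is a sinc-like kernel in the spatial domain.
import numpy as np

n = 512
cutoff = 16                                   # keep the lowest 'cutoff' frequency bins

H = np.zeros(n)
H[:cutoff + 1] = 1.0                          # positive frequencies up to the cutoff
H[-cutoff:] = 1.0                             # matching negative frequencies
h = np.real(np.fft.ifft(H))                   # spatial-domain impulse response
h = np.fft.fftshift(h)                        # center the kernel for inspection

# Central peak, a value from a negative lobe, and confirmation that negative values exist.
print(h[n // 2], h[n // 2 + 20], (h < 0).any())
```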

2. Is this filter commonly used in image processing? Why or why not?
No, not in its exact form. The true sinc filter has infinite support, so accommodating its ringing would require unbounded spatial extent and computation; it also takes negative values, which must be handled specially, and it is comparatively expensive to evaluate. In practice, truncated or otherwise simplified discrete filter approximations are used instead.
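As a sketch of why the approximation matters (assuming NumPy; the kernel widths and signal sizes are arbitrary choices), even a truncated, normalized sinc kernel keeps negative taps and rings around a hard edge, while a simple box filter stays within range:

```python
# Minimal sketch: a truncated sinc kernel still overshoots and undershoots (rings)
# around a hard 0 -> 1 edge, whereas an all-positive box filter does not.
import numpy as np

x = np.arange(-8, 9)                                  # 17-tap truncated sinc
sinc_kernel = np.sinc(x / 2.0)                        # cutoff at half the Nyquist frequency
sinc_kernel /= sinc_kernel.sum()                      # normalize so flat regions keep their value
box_kernel = np.ones(5) / 5.0

edge = np.concatenate([np.zeros(32), np.ones(32)])    # hard 0 -> 1 edge

sinc_out = np.convolve(edge, sinc_kernel, mode="same")
box_out = np.convolve(edge, box_kernel, mode="same")

print(sinc_out.min() < 0, sinc_out.max() > 1)   # True True: undershoot and overshoot (ringing)
print(box_out.min() >= 0, box_out.max() <= 1)   # True True: the box filter stays within [0, 1]
```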