## Digital Line Drawing

- Simple version: set pixels along the line to a color; note: we're already in trouble
- Simple algorithm:
  - Pick the longest dimension, so we get one pixel per row/column
  - Think about octants; assume the 1st octant to make things easy
  - With fewer pixels we'd have gaps; with more pixels we'd have bulges
  - Could compute the nearest pixel center at each step (easy, but expensive)
  - Step: each step either goes up one pixel or not
- Bresenham's algorithm; the midpoint algorithm
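As a concrete sketch, the midpoint/Bresenham step test for a first-octant line can be written with integer arithmetic only (the `set_pixel` callback is a hypothetical stand-in for the framebuffer write):

```python
def draw_line_midpoint(x0, y0, x1, y1, set_pixel):
    """Midpoint (Bresenham) line for the first octant: 0 <= slope <= 1.

    The decision variable d tracks which side of the true line the
    midpoint between the two candidate pixels falls on, using only
    integer additions and comparisons.
    """
    dx, dy = x1 - x0, y1 - y0
    d = 2 * dy - dx          # decision variable at the first midpoint
    y = y0
    for x in range(x0, x1 + 1):
        set_pixel(x, y)
        if d > 0:            # midpoint is below the line: step up
            y += 1
            d -= 2 * dx
        d += 2 * dy
```

One pixel is set per column, as the "pick the longest dimension" rule requires.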

- Aliasing
  - Jaggies (at each step)
  - A 45-degree line is thinner than a horizontal line
  - No control over thickness (it isn't even constant!)
  - Note: filtering after the fact won't help

- Solution: the "little square" pixel model
  - Pixels can be "partially full", not all or nothing
  - Problem 1: need to know what's underneath
    - **Composite** the line over the background (blending)
  - Hacky: use vertical distance (needed for the midpoint test anyway)
    - Fill based on distance
    - At the crossover, 50% coverage in each pixel
    - Fewer (or no) jaggies
    - Still has the thinner-at-an-angle problem
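The "partially full" pixel idea boils down to the standard "over" blend, with coverage as the alpha value; a minimal sketch (function name is illustrative):

```python
def over(src_rgb, alpha, dst_rgb):
    """Composite a partially covering fragment over the background.

    alpha is the pixel's coverage fraction (how "full" the little
    square is); the classic "over" operator blends linearly.
    """
    return tuple(alpha * s + (1.0 - alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))
```

At the crossover row, a white line over black gives 50% gray in each of the two pixels.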

- Think of it as a thick line with a smooth (tent) cross section
  - Smoother (filtered)
  - Technically we're shearing the rectangle, hence thinner
- Real line drawing:
  - Thicker rectangles
  - Still need to soften edges
  - Need to composite

Point Drawing

- Nearest neighbor
  - No control over the size of the point (resolution dependent)
  - Jaggies
  - Jumps from pixel to pixel when the point moves
- Better: a disc with smooth edges
  - Not too big, not too small
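A smooth-edged disc can be drawn by turning distance from the center into coverage; a sketch under assumed radius/softness parameters (the values are illustrative, not from the notes):

```python
import math

def point_coverage(px, py, cx, cy, radius=1.5, soft=1.0):
    """Coverage of pixel (px, py) by a disc centered at (cx, cy).

    Fully covered inside radius - soft, fading linearly to zero at
    radius: a resolution-independent point size, and no jumping from
    pixel to pixel as the center moves.
    """
    d = math.hypot(px - cx, py - cy)
    inner = radius - soft
    if d <= inner:
        return 1.0
    if d >= radius:
        return 0.0
    return (radius - d) / soft   # linear ramp on the edge
```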

## Filtering

- Each point is a function of its neighborhood
- All kinds of filters: any function, any neighborhood
- Linear filters
  - Pixel is a linear combination of its neighbors
  - Averaging
  - Moving averages
  - Sampling as averaging
- Discrete convolution
  - Weighted moving average (mask)
  - Boundary cases:
    - Assume zero
    - Repeat ends
    - Mirror
- Continuous convolution
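The boundary strategies above can be made concrete in a small discrete-convolution routine (a sketch; the kernel is assumed odd-length and centered):

```python
def convolve(signal, kernel, boundary="zero"):
    """Discrete convolution (weighted moving average) with a choice of
    boundary handling: "zero" (pad with zeros), "repeat" (repeat the
    end values), or "mirror" (reflect about the ends).
    """
    n, k = len(signal), len(kernel)
    r = k // 2  # kernel assumed odd-length, centered

    def sample(i):
        if 0 <= i < n:
            return signal[i]
        if boundary == "zero":
            return 0.0
        if boundary == "repeat":
            return signal[0] if i < 0 else signal[-1]
        # mirror: reflect index about the first/last sample
        return signal[-i] if i < 0 else signal[2 * n - 2 - i]

    out = []
    for i in range(n):
        # convolution reverses the kernel: kernel[j] hits signal[i+r-j]
        out.append(sum(kernel[j] * sample(i + r - j) for j in range(k)))
    return out
```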

- Relationship of convolution to the frequency domain
- Blurring
  - As a low-pass filter
  - Different types of LPF:
    - Box
    - Tent
    - Gaussian, B-spline
  - Ideal low-pass filter
- High-pass filtering
  - High-pass vs. sharpening
  - Square wave 1 0 1 0 1 0 … convolved with [ 1/4 1/2 1/4 ] gives .5 .5 .5 .5 .5 …
    - No high frequencies left, so high-pass filtering does nothing
  - Sobel edge detection uses the derivative kernel [ -1 0 1 ]
  - An HPF looks like the opposite of an LPF (1 − LPF): [ -1/2 1 -1/2 ]
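A quick numeric illustration of the square-wave example: blurring with [1/4 1/2 1/4] leaves a constant 0.5 in the interior, so the 1 − LPF high-pass then returns nothing there (repeat-end boundaries assumed):

```python
square = [1, 0] * 6            # 1 0 1 0 ...
lpf = [0.25, 0.5, 0.25]
hpf = [-0.5, 1.0, -0.5]        # the "opposite" of the LPF (1 - LPF)

def conv(sig, ker):
    """Convolution with repeat-end boundaries (kernel odd, centered)."""
    r = len(ker) // 2
    ext = [sig[0]] * r + list(sig) + [sig[-1]] * r
    return [sum(ker[j] * ext[i + len(ker) - 1 - j]
                for j in range(len(ker)))
            for i in range(len(sig))]

blurred = conv(square, lpf)    # interior values are all 0.5
residual = conv(blurred, hpf)  # interior values are all 0: no HF left
```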

- Sharpening by deconvolution
  - Need to know what the blur was
  - Or can try to guess
  - Guess at what the signal was that, when blurred, gave the result

## Sampling and Convolution

- Sampling a continuous signal
  - Test the value at points, but make sure the signal is band-limited first!
  - Could pre-filter (apply a low-pass filter)
  - Or put the filter on the "probe" (compute the convolution at the samples)
- Reconstructing from samples
  - Spike chain: just need to low-pass filter
  - LPF is convolution!
  - Could stick the kernel on the "probe"
  - Linear interpolation as a convolution
  - Catmull-Rom cubic as a convolution (note the overshoot)
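Reconstruction as "kernel on the probe" can be sketched directly: sum each sample times the kernel centered at that sample. With the unit tent this is exactly linear interpolation:

```python
def reconstruct(samples, kernel, t):
    """Value at continuous time t of the spike chain convolved with a
    continuous reconstruction kernel; sample i sits at t = i.
    """
    return sum(s * kernel(t - i) for i, s in enumerate(samples))

def tent(t):
    """Unit tent, 1 - |t| on (-1, 1): linear interpolation as a kernel."""
    return max(0.0, 1.0 - abs(t))
```

For example, halfway between samples the tent gives the average of the two neighbors, and at an integer t it returns the sample itself.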

- Re-sampling
  - Reconstruct (low-pass), pre-filter (low-pass), sample
  - Put the two filters together into a single one
- Ideal case: Nyquist
  - Ideal low-pass filter
  - Why not in practice
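For the common case where the combined filter is a single discrete kernel, the resampling pipeline can be sketched as "low-pass, then keep every step-th sample" (repeat-end boundaries assumed):

```python
def prefilter_and_subsample(signal, kernel, step):
    """Resample at a lower rate: pre-filter with `kernel` (a low-pass,
    applied with repeat-end boundaries), then keep every step-th sample.
    """
    r = len(kernel) // 2
    ext = [signal[0]] * r + list(signal) + [signal[-1]] * r
    filtered = [sum(kernel[j] * ext[i + len(kernel) - 1 - j]
                    for j in range(len(kernel)))
                for i in range(len(signal))]
    return filtered[::step]
```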

## Question 1:

Be sure you can do discrete convolutions!
For example, if:

f = [ 1 2 1 3 1 4 1 5 1 6 2 5 3 4 3 ]

g = 1/3 [1 1 1]

h = 1/6 [1 2 3] (remember: convolutions reverse)
Make sure you can compute f*g, and f*h.

Make sure you know how to deal with the boundaries. (zero pad, repeat ends, …)
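A small self-check for this question, assuming zero-padded boundaries (the same style of check works for the other boundary rules):

```python
f = [1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 2, 5, 3, 4, 3]
g = [1/3, 1/3, 1/3]
h = [1/6, 2/6, 3/6]

def conv_zero(sig, ker):
    """Discrete convolution with zero padding; remember the kernel is
    reversed: ker[j] multiplies sig[i + r - j]."""
    r = len(ker) // 2
    ext = [0.0] * r + list(sig) + [0.0] * r
    return [sum(ker[j] * ext[i + len(ker) - 1 - j]
                for j in range(len(ker)))
            for i in range(len(sig))]

fg = conv_zero(f, g)
fh = conv_zero(f, h)
```

Work a few entries by hand first, then compare against the printed lists.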

## Question 2:

Make sure you can reconstruct a signal given the discrete signal and the sampling kernel.
Consider reconstructing the signal from the following samples (the first sample was at t=0):

f = [ 1 2 1 3 1 2 2 ]
Compute the value of the reconstructed signal at t=1.5, t=2, and t=3.25 with the following reconstruction filters (your answer for each should be 3 numbers).

2.A The unit box (g(t) = 1 if -.5 < t <= .5, 0 otherwise)

Note: in the book, this is the continuous case and r=1/2

2.B The unit tent (g(t) = (1+t) if -1 < t <= 0, (1-t) if 0 < t < 1, and 0 otherwise)

Note: in the book, this is the tent filter with r=1

## Question 3:

Make sure you understand the idea of resampling by pre-filtering.
Consider resampling the following signal:

f = [ 0 0 4 4 0 0 4 0 4 0 4 0 0 0 4 0 0 0 ]

using the pre-filtering kernel 1/4 [1 2 1].
3.A If you resample the signal at half the sampling rate, what result would you get?

3.B If you made a small change in how you sampled in 3.A (say, chose even instead of odd values), would the results be very much different? What does this say about the adequacy of the kernel for doing this resampling?

3.C If you resample the signal at 1/3 the sampling rate (pick every third sample), what result would you get?

3.D If you made a small change in how you sampled in 3.C (say, shifted the samples a little), could the results be very much different? What does this say about the adequacy of the kernel for doing this resampling?
