The contrast between the two processes can be seen in another way.
In the iterative case, the program
variables provide a complete description of the state of the process at any point. If we stopped the
computation between steps, all we would need to do to resume the computation is to supply the
interpreter with the values of the three program variables. Not so with the recursive process. In this
case there is some additional "hidden" information, maintained by the interpreter and not contained in
the program variables, which indicates "where the process is" in negotiating the chain of deferred
operations. The longer the chain, the more information must be maintained.
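To make the chain of deferred operations concrete, recall the recursive factorial procedure from earlier in this section, together with the shape of the expansion the interpreter must keep track of:

```scheme
; Recursive factorial: each call defers a multiplication
; until the recursive call returns.
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

; The substitution model shows the growing chain of
; deferred multiplications for (factorial 4):
;
;   (factorial 4)
;   (* 4 (factorial 3))
;   (* 4 (* 3 (factorial 2)))
;   (* 4 (* 3 (* 2 (factorial 1))))
;   (* 4 (* 3 (* 2 1)))
;   24
```

The "hidden" information is exactly this pending expression: the interpreter must remember every multiplication that is still waiting for its operand.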
In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive
process with the notion of a recursive
procedure. When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the
procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive,
we are speaking about how the process evolves, not about the syntax of how a procedure is written. It
may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an
iterative process. However, the process really is iterative: Its state is captured completely by its three
state variables, and an interpreter need keep track of only three variables in order to execute the
process.
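For reference, here is the iterative factorial from earlier in this section; the three state variables that completely capture the state of the process are product, counter, and max-count:

```scheme
(define (factorial n)
  (fact-iter 1 1 n))

; The entire state of the computation lives in these three
; arguments; no operations are deferred across calls.
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))
```

If the computation were stopped between steps, supplying the current values of product, counter, and max-count would be enough to resume it.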
One reason that the distinction between process and procedure may be confusing is that most
implementations of common languages (including Ada, Pascal, and C) are designed in such a way that
the interpretation of any recursive procedure consumes an amount of memory that grows with the
number of procedure calls, even when the process described is, in principle, iterative. As a
consequence, these languages can describe iterative processes only by resorting to special-purpose
"looping constructs" such as do, repeat, until, for, and while. The implementation of
Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in
constant space, even if the iterative process is described by a recursive procedure. An implementation
with this property is called tail-recursive. With a tail-recursive implementation, iteration can be
expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful
only as syntactic sugar.
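As an illustration (the procedure names here are ours, not the text's), an ordinary counting loop can be expressed with nothing but procedure calls. Because the recursive call to loop below is in tail position, a tail-recursive implementation executes it in constant space, exactly as a looping construct would:

```scheme
; Sum the integers from a through b using only the
; ordinary procedure call mechanism -- no looping construct.
(define (sum-integers a b)
  (define (loop i acc)
    (if (> i b)
        acc
        (loop (+ i 1) (+ acc i))))  ; tail call: no deferred work
  (loop a 0))

(sum-integers 1 100)  ; => 5050
```

The state of the iteration is held entirely in the arguments i and acc, so the interpreter never accumulates deferred operations.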
Exercise 1.9. Each of the following two procedures defines a method for adding two positive integers
in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its
argument by 1.
(define (+ a b)
(if (= a 0)
b
(inc (+ (dec a) b))))
(define (+ a b)
(if (= a 0)
b
(+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by each procedure in evaluating
(+ 4 5). Are these processes iterative or recursive?
Exercise 1.10. The following procedure computes a mathematical function called Ackermann’s
function.
(define (A x y)
(cond ((= y 0) 0)
((= x 0) (* 2 y))
((= y 1) 2)
(else (A (- x 1)
(A x (- y 1))))))
What are the values of the following expressions?
(A 1 10)
(A 2 4)
(A 3 3)
Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for
positive integer values of n. For example, (k n) computes 5n².
1.2.2 Tree Recursion
Another common pattern of computation is called tree recursion. As an example, consider computing
the sequence of Fibonacci numbers, in which each number is the sum of the preceding two:

0, 1, 1, 2, 3, 5, 8, 13, 21, ...

In general, the Fibonacci numbers can be defined by the rule

Fib(n) = 0                          if n = 0
Fib(n) = 1                          if n = 1
Fib(n) = Fib(n - 1) + Fib(n - 2)    otherwise

We can immediately translate this definition into a recursive procedure for computing Fibonacci
numbers:
(define (fib n)
(cond ((= n 0) 0)
((= n 1) 1)
(else (+ (fib (- n 1))
(fib (- n 2))))))
Figure 1.5: The tree-recursive process generated in computing (fib 5).
Consider the pattern of this computation. To compute (fib 5), we compute (fib 4) and
(fib 3). To compute (fib 4), we compute (fib 3) and (fib 2). In general, the evolved process
looks like a tree, as shown in figure 1.5. Notice that the branches split into two at each level (except at
the bottom); this reflects the fact that the fib procedure calls itself twice each time it is invoked.
This procedure is instructive as a prototypical tree recursion, but it is a terrible way to compute
Fibonacci numbers because it does so much redundant computation. Notice in figure 1.5 that the entire
computation of (fib 3) -- almost half the work -- is duplicated. In fact, it is not hard to show that
the number of times the procedure will compute (fib 1) or (fib 0) (the number of leaves in the
above tree, in general) is precisely Fib(n + 1). To get an idea of how bad this is, one can show that the
value of Fib(n) grows exponentially with n. More precisely (see exercise 1.13), Fib(n) is the closest
integer to φⁿ/√5, where φ = (1 + √5)/2 ≈ 1.6180 is the golden ratio, which satisfies the equation
φ² = φ + 1.
Thus, the process uses a number of steps that grows exponentially with the input. On the other hand,
the space required grows only linearly with the input, because we need keep track only of which nodes
are above us in the tree at any point in the computation. In general, the number of steps required by a
tree-recursive process will be proportional to the number of nodes in the tree, while the space required
will be proportional to the maximum depth of the tree.
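The claim that the leaf count for (fib n) is Fib(n + 1) can be checked with a small counting procedure (ours, not the text's): each call with n less than 2 is a leaf of the tree, and any other call contributes the leaves of its two subtrees.

```scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

; Number of leaves in the tree generated by (fib n):
; calls with n < 2 terminate immediately (leaves); any
; other call branches into two subtrees.
(define (leaves n)
  (if (< n 2)
      1
      (+ (leaves (- n 1))
         (leaves (- n 2)))))

(leaves 5)  ; => 8, which is indeed (fib 6)
```

Since leaves follows the same recurrence as fib but starts from 1 rather than 0, it runs exactly one step ahead of the Fibonacci sequence, which is the Fib(n + 1) result.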