52
This formula was discovered by the seventeenth-century English mathematician John Wallis.
53
It would be clearer and less intimidating to people learning Lisp if a name more obvious than
lambda, such as make-procedure, were used. But the convention is firmly entrenched. The
notation is adopted from the λ calculus, a mathematical formalism introduced by the mathematical
logician Alonzo Church (1941). Church developed the λ calculus to provide a rigorous foundation for
studying the notions of function and function application. The λ calculus has become a basic tool for
mathematical investigations of the semantics of programming languages.
54
Understanding internal definitions well enough to be sure a program means what we intend it to
mean requires a more elaborate model of the evaluation process than we have presented in this
chapter. The subtleties do not arise with internal definitions of procedures, however. We will return to
this issue in section 4.1.6, after we learn more about evaluation.
55
We have used 0.001 as a representative ‘‘small’’ number to indicate a tolerance for the acceptable
error in a calculation. The appropriate tolerance for a real calculation depends upon the problem to be
solved and the limitations of the computer and the algorithm. This is often a very subtle consideration,
requiring help from a numerical analyst or some other kind of magician.
56
This can be accomplished using error, which takes as arguments a number of items that are
printed as error messages.
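For example, a procedure might use error to reject an invalid argument. (The procedure safe-sqrt below is an illustrative name, not one defined in the text.)

```scheme
;; error signals an error, printing its arguments as the message.
;; Here it guards against taking the square root of a negative number.
(define (safe-sqrt x)
  (if (< x 0)
      (error "sqrt called with a negative argument:" x)
      (sqrt x)))
```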
57
Try this during a boring lecture: Set your calculator to radians mode and then repeatedly press the
cos button until you obtain the fixed point.
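The same experiment can be sketched in Scheme: repeatedly apply cos until successive values are close together. (The procedure name and the tolerance here are illustrative choices, not definitions from the text.)

```scheme
(define tolerance 0.00001)

;; Apply f repeatedly, starting from guess, until two successive
;; values differ by less than the tolerance.
(define (iterate-to-fixed-point f guess)
  (let ((next (f guess)))
    (if (< (abs (- next guess)) tolerance)
        next
        (iterate-to-fixed-point f next))))

(iterate-to-fixed-point cos 1.0)   ; converges to approximately 0.739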
58
(pronounced ‘‘maps to’’) is the mathematician’s way of writing
lambda
. y x/y means
(lambda(y) (/ x y))
, that is, the function whose value at y is x/y.
59
Observe that this is a combination whose operator is itself a combination. Exercise 1.4 already
demonstrated the ability
to form such combinations, but that was only a toy example. Here we begin to
see the real need for such combinations -- when applying a procedure that is obtained as the value
returned by a higher-order procedure.
60
See exercise 1.45 for a further generalization.
61
Elementary calculus books usually describe Newton’s method in terms of the sequence of
approximations x
n+1
= x
n
- g(x
n
)/Dg(x
n
). Having language for talking about processes and using the
idea of fixed points simplifies the description of the method.
62
Newton’s method does not always converge to an answer, but it can be shown that in favorable
cases each iteration doubles the number-of-digits accuracy of the approximation to the solution. In
such cases, Newton’s method will converge much more rapidly than the half-interval method.
63
For finding square roots, Newton’s method converges rapidly to the correct solution from any
starting point.
64
The notion of first-class status of programming-language elements is due to the British computer
scientist Christopher Strachey (1916-1975).
65
We’ll see examples of this after we introduce data structures in chapter 2.
66
The major implementation cost of first-class procedures is that allowing procedures to be returned
as values requires reserving storage for a procedure’s free variables even while the procedure is not
executing. In the Scheme implementation we will study in section 4.1, these variables are stored in the
procedure’s environment.
[Go to first, previous, next page; contents; index]
[Go to first, previous, next page; contents; index]
Chapter 2
Building Abstractions with Data
We now come to the decisive step of mathematical
abstraction: we forget about what the symbols stand for.
...
[The mathematician] need not be idle; there are many
operations which he may carry out with these symbols,
without ever having to look at the things they stand for.
Hermann Weyl, The Mathematical Way of Thinking
We concentrated in chapter 1 on computational processes and on the role of procedures in program
design. We saw how to use primitive data (numbers) and primitive operations (arithmetic operations),
how to combine procedures to form compound procedures through composition, conditionals, and the
use of parameters, and how to abstract procedures by using
define
. We saw that a procedure can be
regarded as a pattern for the
local evolution of a process, and we classified, reasoned about, and
performed simple algorithmic analyses of some common patterns for processes as embodied in
procedures. We also saw that higher-order procedures enhance the power of our language by enabling
us to manipulate, and thereby to reason in terms of, general methods of computation. This is much of
the essence of programming.
In this chapter we are going to look at more complex data. All the procedures in chapter 1 operate on
simple numerical data, and simple data are not sufficient for many of the problems we wish to address
using computation. Programs are typically designed to model complex phenomena, and more often
than not one must construct computational objects that have several parts in order to model real-world
phenomena that have several aspects. Thus, whereas our focus in chapter 1 was on building
abstractions by combining procedures to form compound procedures, we turn in this chapter to another
key aspect of any programming language: the means it provides for building abstractions by
combining data objects to form compound data.
Why do we want compound data in a programming language? For the same reasons that we want
compound procedures: to elevate the conceptual level at which we can design our programs, to
increase the modularity of our designs, and to enhance the expressive power of our language. Just as
the ability to define procedures enables us to deal with processes at a higher conceptual level than that
of the primitive operations of the language, the ability to construct compound data objects enables us
to deal with data at a higher conceptual level than that of the primitive data objects of the language.
Consider the task of designing a system to perform arithmetic with rational numbers. We could
imagine an operation
add-rat
that takes two rational numbers and produces their sum.
In terms of
simple data, a rational number can be thought of as two integers: a numerator and a denominator.
Thus, we could design a program in which each rational number would be represented by two integers
(a numerator and a denominator) and where
add-rat
would be implemented by two procedures (one
producing the numerator of the sum and one producing the denominator). But this would be awkward,
because we would then need to explicitly keep track of which numerators corresponded to which
denominators. In a system intended to perform many operations on many rational numbers, such
bookkeeping details would clutter the programs substantially, to say nothing of what they would do to