Structure and Interpretation of Computer Programs




‘‘I don’t see what difference that could make,’’ says Louis. ‘‘I do,’’ says Eva. ‘‘By writing the procedure like that, you have transformed the Θ(log n) process into a Θ(n) process.’’ Explain.



Exercise 1.27.  Demonstrate that the Carmichael numbers listed in footnote 47 really do fool the Fermat test. That is, write a procedure that takes an integer n and tests whether a^n is congruent to a modulo n for every a < n, and try your procedure on the given Carmichael numbers.
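
A minimal sketch of one such procedure, assuming the expmod procedure of section 1.2.6 is available (the name carmichael-test is ours, not the text’s):

(define (carmichael-test n)
  ;; true when a^n is congruent to a modulo n for every a < n
  (define (try a)
    (cond ((= a n) true)
          ((= (expmod a n n) a) (try (+ a 1)))
          (else false)))
  (try 1))

Applied to 561, 1105, 1729, 2465, 2821, and 6601, this returns true even though none of those numbers is prime, which is exactly what it means to fool the Fermat test.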

Exercise 1.28.  One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test (Miller 1976; Rabin 1980). This starts from an alternate form of Fermat’s Little Theorem, which states that if n is a prime number and a is any positive integer less than n, then a raised to the (n - 1)st power is congruent to 1 modulo n. To test the primality of a number n by the Miller-Rabin test, we pick a random number a < n and raise a to the (n - 1)st power modulo n using the expmod procedure. However, whenever we perform the squaring step in expmod, we check to see if we have discovered a ‘‘nontrivial square root of 1 modulo n,’’ that is, a number not equal to 1 or n - 1 whose square is equal to 1 modulo n. It is possible to prove that if such a nontrivial square root of 1 exists, then n is not prime. It is also possible to prove that if n is an odd number that is not prime, then, for at least half the numbers a < n, computing a^(n-1) in this way will reveal a nontrivial square root of 1 modulo n. (This is why the Miller-Rabin test cannot be fooled.) Modify the expmod procedure to signal if it discovers a nontrivial square root of 1, and use this to implement the Miller-Rabin test with a procedure analogous to fermat-test. Check your procedure by testing various known primes and non-primes. Hint: One convenient way to make expmod signal is to have it return 0.
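
A minimal sketch of one possible solution, assuming the square procedure of section 1.1.4 and MIT Scheme’s random, which returns a nonnegative integer less than its argument (the helper name signal-on-nontrivial-sqrt is ours):

;; Squares x modulo m, but returns the signal value 0 if x is a
;; nontrivial square root of 1 modulo m.
(define (signal-on-nontrivial-sqrt x m)
  (if (and (not (= x 1))
           (not (= x (- m 1)))
           (= (remainder (square x) m) 1))
      0
      (remainder (square x) m)))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (signal-on-nontrivial-sqrt (expmod base (/ exp 2) m) m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

;; Analogous to fermat-test: a prime must yield exactly 1.
(define (miller-rabin-test n)
  (define (try-it a)
    (= (expmod a (- n 1) n) 1))
  (try-it (+ 1 (random (- n 1)))))

Note that once a 0 is produced it propagates unchanged through the remaining squarings and multiplications, and that 0 cannot arise any other way when n is prime, since no power of a positive a < n is then divisible by n.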



29 In a real program we would probably use the block structure introduced in the last section to hide the definition of fact-iter:

(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))

We avoided doing this here so as to minimize the number of things to think about at once.

30 When we discuss the implementation of procedures on register machines in chapter 5, we will see that any iterative process can be realized ‘‘in hardware’’ as a machine that has a fixed set of registers and no auxiliary memory. In contrast, realizing a recursive process requires a machine that uses an auxiliary data structure known as a stack.

31 Tail recursion has long been known as a compiler optimization trick. A coherent semantic basis for tail recursion was provided by Carl Hewitt (1977), who explained it in terms of the ‘‘message-passing’’ model of computation that we shall discuss in chapter 3. Inspired by this, Gerald Jay Sussman and Guy Lewis Steele Jr. (see Steele 1975) constructed a tail-recursive interpreter for Scheme. Steele later showed how tail recursion is a consequence of the natural way to compile procedure calls (Steele 1977). The IEEE standard for Scheme requires that Scheme implementations be tail-recursive.

32 An example of this was hinted at in section 1.1.3: The interpreter itself evaluates expressions using a tree-recursive process.


33 For example, work through in detail how the reduction rule applies to the problem of making change for 10 cents using pennies and nickels.

34 One approach to coping with redundant computations is to arrange matters so that we automatically construct a table of values as they are computed. Each time we are asked to apply the procedure to some argument, we first look to see if the value is already stored in the table, in which case we avoid performing the redundant computation. This strategy, known as tabulation or memoization, can be implemented in a straightforward way. Tabulation can sometimes be used to transform processes that require an exponential number of steps (such as count-change) into processes whose space and time requirements grow linearly with the input. See exercise 3.27.
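
A minimal sketch of the idea in bare Scheme, using an association list as the table (the names memoize and memo-fib are ours; assignment via set! is not introduced until chapter 3, which is where exercise 3.27 develops this properly):

(define (memoize f)
  (let ((table '()))            ; association list of (argument . value) pairs
    (lambda (x)
      (let ((cached (assoc x table)))
        (if cached
            (cdr cached)        ; already computed: reuse the stored value
            (let ((result (f x)))
              (set! table (cons (cons x result) table))
              result))))))

(define memo-fib
  (memoize
   (lambda (n)
     (if (< n 2)
         n
         (+ (memo-fib (- n 1))
            (memo-fib (- n 2)))))))

Since each distinct argument is computed at most once, (memo-fib n) requires a number of steps that grows linearly with n, whereas the unmemoized tree recursion of section 1.2.2 grows exponentially.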

35 The elements of Pascal’s triangle are called the binomial coefficients, because the nth row consists of the coefficients of the terms in the expansion of (x + y)^n. This pattern for computing the coefficients appeared in Blaise Pascal’s 1653 seminal work on probability theory, Traité du triangle arithmétique. According to Knuth (1973), the same pattern appears in the Szu-yuen Yü-chien (‘‘The Precious Mirror of the Four Elements’’), published by the Chinese mathematician Chu Shih-chieh in 1303, in the works of the twelfth-century Persian poet and mathematician Omar Khayyam, and in the works of the twelfth-century Hindu mathematician Bháscara Áchárya.

36 These statements mask a great deal of oversimplification. For instance, if we count process steps as ‘‘machine operations’’ we are making the assumption that the number of machine operations needed to perform, say, a multiplication is independent of the size of the numbers to be multiplied, which is false if the numbers are sufficiently large. Similar remarks hold for the estimates of space. Like the design and description of a process, the analysis of a process can be carried out at various levels of abstraction.

37 More precisely, the number of multiplications required is equal to 1 less than the log base 2 of n plus the number of ones in the binary representation of n. This total is always less than twice the log base 2 of n. (By this count, for example, n = 1000, which is 1111101000 in binary with six ones, requires 9 + 6 - 1 = 14 multiplications, against a bound of about 20.) The arbitrary constants k1 and k2 in the definition of order notation imply that, for a logarithmic process, the base to which logarithms are taken does not matter, so all such processes are described as Θ(log n).



38 You may wonder why anyone would care about raising numbers to the 1000th power. See section 1.2.6.

39 This iterative algorithm is ancient. It appears in the Chandah-sutra by Áchárya Pingala, written before 200 B.C. See Knuth 1981, section 4.6.3, for a full discussion and analysis of this and other methods of exponentiation.

40 This algorithm, which is sometimes known as the ‘‘Russian peasant method’’ of multiplication, is ancient. Examples of its use are found in the Rhind Papyrus, one of the two oldest mathematical documents in existence, written about 1700 B.C. (and copied from an even older document) by an Egyptian scribe named A’h-mose.

41 This exercise was suggested to us by Joe Stoy, based on an example in Kaldewaij 1990.



42 Euclid’s Algorithm is so called because it appears in Euclid’s Elements (Book 7, ca. 300 B.C.). According to Knuth (1973), it can be considered the oldest known nontrivial algorithm. The ancient Egyptian method of multiplication (exercise 1.18) is surely older, but, as Knuth explains, Euclid’s algorithm is the oldest known to have been presented as a general algorithm, rather than as a set of illustrative examples.


