The following procedure runs the
test a given number of times, as specified by a parameter. Its value is
true if the test succeeds every time, and false otherwise.
(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))
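As a quick illustration (a sketch, assuming the fermat-test procedure defined earlier in this section), ten rounds of the test distinguish a prime from a nearby composite:

```scheme
;; Assuming fermat-test from earlier in this section.
(fast-prime? 1999 10)    ; 1999 is prime, so every round passes: true
(fast-prime? 19999 10)   ; 19999 = 7 * 2857; almost certainly false
```

Note that a prime always passes the Fermat test, so the first call returns true with certainty; only the composite case is probabilistic.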
Probabilistic methods
The Fermat test differs in character from most familiar algorithms, in which one computes an answer
that is guaranteed to be correct. Here, the answer obtained is only probably correct. More precisely, if
n ever fails the Fermat test, we can be certain that n is not prime. But the fact that n passes the test,
while an extremely strong indication, is still not a guarantee that n is prime. What we would like to say
is that for any number n, if we perform the test enough times and find that n always passes the test,
then the probability of error in our primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist numbers that fool the Fermat test:
numbers n that are not prime and yet have the property that a^n is congruent to a modulo n for all
integers a < n. Such numbers are extremely rare, so the Fermat test is quite reliable in practice.
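These Fermat-fooling composites are known as Carmichael numbers; the smallest is 561 = 3 · 11 · 17. A sketch of how one might verify the defining property (assuming the expmod procedure from earlier in this section, which computes base^exp modulo m):

```scheme
;; Assuming expmod from earlier in this section:
;; (expmod base exp m) computes base^exp modulo m.
(define (fooled-by? n)
  ;; true if a^n is congruent to a (mod n) for every a < n
  (define (check a)
    (cond ((= a n) true)
          ((= (expmod a n n) a) (check (+ a 1)))
          (else false)))
  (check 1))

(fooled-by? 561)   ; 561 = 3 * 11 * 17 is composite, yet this is true
```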
There are variations of the Fermat test that cannot be fooled. In these tests, as with the Fermat method,
one tests the primality of an integer n by choosing a random integer a < n and checking some condition
that depends upon n and a. (See exercise 1.28 for an example of such a test.) On the other hand, in
contrast to the Fermat test, one can prove that, for any n, the condition does not hold for most of the
integers a < n unless n is prime. Thus, if n passes the test for some random choice of a, the chances are
better than even that n is prime. If n passes the test for two random choices of a, the chances are better
than 3 out of 4 that n is prime. By running the test with more and more randomly chosen values of a
we can make the probability of error as small as we like.
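To make this concrete: since a composite n passes each round with probability less than 1/2, the chance that a composite survives k independent rounds is less than (1/2)^k. After 20 rounds, for example, the error probability is below 1/2^20 = 1/1,048,576 -- less than one in a million.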
The existence of tests for which one can prove that the chance of error becomes arbitrarily small has
sparked interest in algorithms of this type, which have come to be known as probabilistic algorithms.
There is a great deal of research activity in this area, and probabilistic algorithms have been fruitfully
applied to many fields.
Exercise 1.21. Use the smallest-divisor procedure to find the smallest divisor of each of the
following numbers: 199, 1999, 19999.
Exercise 1.22. Most Lisp implementations include a primitive called runtime that returns an integer
that specifies the amount of time the system has been running (measured, for example, in
microseconds). The following timed-prime-test procedure, when called with an integer n, prints
n and checks to see if n is prime. If n is prime, the procedure prints three asterisks followed by the
amount of time used in performing the test.
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
Using this procedure, write a procedure search-for-primes that checks the primality of
consecutive odd integers in a specified range. Use your procedure to find the three smallest primes
larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed
to test each prime. Since the testing algorithm has order of growth of Θ(√n), you should expect that
testing for primes around 10,000 should take about √10 times as long as testing for primes around
1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the
√n prediction? Is your result compatible with the notion that programs on your machine run in time
proportional to the number of steps required for the computation?
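One possible shape for search-for-primes, sketched here on the assumption that timed-prime-test behaves as shown above (the iteration scheme and naming are illustrative, not prescribed by the exercise):

```scheme
;; A sketch: run timed-prime-test on every odd integer
;; from start to end, inclusive.
(define (search-for-primes start end)
  (define (iter n)
    (if (<= n end)
        (begin (timed-prime-test n)
               (iter (+ n 2)))))
  (iter (if (odd? start) start (+ start 1))))
```

Since timed-prime-test prints its result rather than returning whether n was prime, picking out the three smallest primes still requires inspecting the printed output.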
Exercise 1.23. The smallest-divisor procedure shown at the start of this section does lots of
needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to
see if it is divisible by any larger even numbers. This suggests that the values used for
test-divisor should not be 2, 3, 4, 5, 6, ..., but rather 2, 3, 5, 7, 9, .... To implement this
change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input
plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of
(+ test-divisor 1). With timed-prime-test incorporating this modified version of
smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this
modification halves the number of test steps, you should expect it to run about twice as fast. Is this
expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how
do you explain the fact that it is different from 2?
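The exercise's description pins next down completely; written out, one straightforward reading is:

```scheme
;; Returns 3 when given 2; otherwise skips to the next odd candidate.
(define (next test-divisor)
  (if (= test-divisor 2)
      3
      (+ test-divisor 2)))
```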
Exercise 1.24. Modify the timed-prime-test procedure of exercise 1.22 to use fast-prime?
(the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test
has Θ(log n) growth, how would you expect the time to test primes near 1,000,000 to compare with
the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy
you find?
Exercise 1.25. Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod.
After all, she says, since we already know how to compute exponentials, we could have simply written

(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
Is she correct? Would this procedure serve as well for our fast prime tester? Explain.
Exercise 1.26. Louis Reasoner is having great difficulty doing exercise 1.24. His fast-prime? test
seems to run more slowly than his prime? test. Louis calls his friend Eva Lu Ator over to help. When
they examine Louis's code, they find that he has rewritten the expmod procedure to use an explicit
multiplication, rather than calling square:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))