Structure and Interpretation of Computer Programs



(define (user-print object)
  (cond ((compound-procedure? object)
         (display (list 'compound-procedure
                        (procedure-parameters object)
                        (procedure-body object)
                        '<procedure-env>)))
        ((compiled-procedure? object)
         (display '<compiled-procedure>))
        (else (display object))))
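
As an illustration (a hypothetical interaction, in the style of the transcripts of section 5.4.4), defining an interpreted procedure and then examining it displays the procedure without its environment part:

;;; EC-Eval input:
(define (square x) (* x x))
;;; EC-Eval value:
ok
;;; EC-Eval input:
square
;;; EC-Eval value:
(compound-procedure (x) ((* x x)) <procedure-env>)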

51  We can do even better by extending the compiler to allow compiled code to call interpreted procedures. See exercise 5.47.

52  Independent of the strategy of execution, we incur significant overhead if we insist that errors encountered in execution of a user program be detected and signaled, rather than being allowed to kill the system or produce wrong answers. For example, an out-of-bounds array reference can be detected by checking the validity of the reference before performing it. The overhead of checking, however, can be many times the cost of the array reference itself, and a programmer should weigh speed against safety in determining whether such a check is desirable. A good compiler should be able to produce code with such checks, should avoid redundant checks, and should allow programmers to control the extent and type of error checking in the compiled code.

Compilers for popular languages, such as C and C++, put hardly any error-checking operations into running code, so as to make things run as fast as possible. As a result, it falls to programmers to explicitly provide error checking. Unfortunately, people often neglect to do this, even in critical applications where speed is not a constraint. Their programs lead fast and dangerous lives. For example, the notorious "Worm" that paralyzed the Internet in 1988 exploited the UNIX(TM) operating system's failure to check whether the input buffer has overflowed in the finger daemon. (See Spafford 1989.)
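
As a minimal sketch of the kind of check described above (safe-vector-ref is a hypothetical name, not a procedure from the text), a bounds-checked reference in Scheme might look like this:

(define (safe-vector-ref v i)
  ;; Check the validity of the reference before performing it.
  (if (and (integer? i) (>= i 0) (< i (vector-length v)))
      (vector-ref v i)
      (error "Index out of bounds -- SAFE-VECTOR-REF" i)))

The conditional test is executed on every reference, which is why its cost can easily exceed that of the vector-ref itself.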


53  Of course, with either the interpretation or the compilation strategy we must also implement for the new machine storage allocation, input and output, and all the various operations that we took as "primitive" in our discussion of the evaluator and compiler. One strategy for minimizing work here is to write as many of these operations as possible in Lisp and then compile them for the new machine. Ultimately, everything reduces to a small kernel (such as garbage collection and the mechanism for applying actual machine primitives) that is hand-coded for the new machine.
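
For instance (a sketch of the strategy, not part of the porting kernel described in the text), list operations that the evaluator treats as primitive, such as append and assoc, can themselves be written in Lisp and compiled for the new machine, leaving only truly machine-dependent operations to be hand-coded:

;; These "primitives" need no hand-coding; they compile like any other Lisp code.
(define (append x y)
  (if (null? x)
      y
      (cons (car x) (append (cdr x) y))))

(define (assoc key records)
  (cond ((null? records) false)
        ((equal? key (caar records)) (car records))
        (else (assoc key (cdr records)))))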

54  This strategy leads to amusing tests of correctness of the compiler, such as checking whether the compilation of a program on the new machine, using the compiled compiler, is identical with the compilation of the program on the original Lisp system. Tracking down the source of differences is fun but often frustrating, because the results are extremely sensitive to minuscule details.
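
A sketch of such a consistency check, where compile is the compiler of section 5.5 and compile-on-new-machine is a hypothetical procedure standing for a run of the compiled compiler on the target machine:

;; Compare the object code produced by the compiler running on the original
;; Lisp system with the object code produced by the compiled compiler
;; running on the new machine.  compile-on-new-machine is hypothetical.
(define (compiler-consistent? program)
  (equal? (compile program 'val 'next)
          (compile-on-new-machine program 'val 'next)))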


References

Abelson, Harold, Andrew Berlin, Jacob Katzenelson, William McAllister, Guillermo Rozas, Gerald Jay Sussman, and Jack Wisdom. 1992. The Supercomputer Toolkit: A general framework for special-purpose computing. International Journal of High-Speed Electronics 3(3):337-361.

Allen, John. 1978. Anatomy of Lisp. New York: McGraw-Hill.

ANSI X3.226-1994. American National Standard for Information Systems -- Programming Language -- Common Lisp.

Appel, Andrew W. 1987. Garbage collection can be faster than stack allocation. Information Processing Letters 25(4):275-279.

Backus, John. 1978. Can programming be liberated from the von Neumann style? Communications of the ACM 21(8):613-641.

Baker, Henry G., Jr. 1978. List processing in real time on a serial computer. Communications of the ACM 21(4):280-293.

Batali, John, Neil Mayle, Howard Shrobe, Gerald Jay Sussman, and Daniel Weise. 1982. The Scheme-81 architecture -- System and chip. In Proceedings of the MIT Conference on Advanced Research in VLSI, edited by Paul Penfield, Jr. Dedham, MA: Artech House.

Borning, Alan. 1977. ThingLab -- An object-oriented system for building simulations using constraints. In Proceedings of the 5th International Joint Conference on Artificial Intelligence.

Borodin, Alan, and Ian Munro. 1975. The Computational Complexity of Algebraic and Numeric Problems. New York: American Elsevier.

Chaitin, Gregory J. 1975. Randomness and mathematical proof. Scientific American 232(5):47-52.

Church, Alonzo. 1941. The Calculi of Lambda-Conversion. Princeton, N.J.: Princeton University Press.

Clark, Keith L. 1978. Negation as failure. In Logic and Data Bases. New York: Plenum Press, pp. 293-322.

Clinger, William. 1982. Nondeterministic call by need is neither lazy nor by name. In Proceedings of the ACM Symposium on Lisp and Functional Programming, pp. 226-234.

Clinger, William, and Jonathan Rees. 1991. Macros that work. In Proceedings of the 1991 ACM Conference on Principles of Programming Languages, pp. 155-162.

Colmerauer A., H. Kanoui, R. Pasero, and P. Roussel. 1973. Un système de communication homme-machine en français. Technical report, Groupe Intelligence Artificielle, Université d'Aix Marseille, Luminy.



