Iterations – Norberg – Software Development at EMCC
Figure 3. Compiling operation in A-0.
Hopper presented the A-0 compiler at the Association for Computing
Machinery (ACM) meeting in Pittsburgh in May 1952. She carried 200
copies of her presentation to the meeting, and returned with 100 copies to
Philadelphia.42 Richard Ridgway, in a paper delivered in September 1952,
offered some comparative data for the use of A-0 acquired around the time
of Hopper’s talk. The group compared the calculation of the problem
discussed above using the conventional method of program development
and running and using the compiler. In the conventional method, 740
minutes were needed of the programmer’s time, 105 minutes for auxiliary
manpower and equipment, and 35 minutes to run the problem on the
UNIVAC. The equivalent numbers using the compiler were 20, 20, and
8.5 minutes, respectively. Thus, problem solution required 880 minutes
conventionally and 48.5 minutes using a compiler.43 In spite of this
advantage in time, the A-0 compiler did not come into general use,
because EMCC was in the process of increasing its generality (A-1) and then
developed a more effective compiler by 1955 (A-2).
The time specified for programmer minutes on a given problem is useful
only in this comparison; it translates into only about an hour and a
half. A more detailed analysis of the conventional method for UNIVAC
would show that the problem setup could sometimes take weeks,
especially with a new problem. Consider the four stages of problem
analysis in the EMCC programming group. Lloyd Stowe noted in early 1951
that the programmer’s task could be divided into four parts: analysis of
the problem; preparation of block diagrams and flow charts; coding; and
checking, which included preparation of time estimates, the running of
sample problems through the code if needed, and “bookkeeping.”44 He
noted that in one case he spent seven weeks on the preparation of a problem.
After the first preparation, he received more information about the
problem; after a second preparation, still more information surfaced
that the people with the problem had not previously recognized as
necessary, so a third effort was required. This example is reminiscent of the later problem
in the 1960s of trying to obtain an expert’s knowledge for developing an
expert system. In the latter case, understanding a problem requires
consultation with the proposers of the problem and analysis of the
information provided. Since Stowe concentrated on commercial problems,
he was particularly interested in the nature and amount of input data. The
Census Bureau job, which Stowe, Snyder, and Gilpin were working on at
this time, was expected to involve an input of 151,000,000 punch
cards.45 Programmers
asked themselves such questions as: what form does the data have? What
is the required form and volume of the output data? How will the output
be used? Will the output be reused? And so on. From this, the programmer could
craft a block diagram or flow chart of the problem, in classic von
Neumann style. The block diagrams helped Stowe, and presumably other
programmers, to identify omissions, inconsistencies, and errors. The
general order of problem solution was set down, sometimes in great detail:
“take this number, add it to that one, divide it by two, and get a
percentage.”46 Next came the flow chart procedure.
The most laborious part of the process followed. The flow charts had to be
translated into the language of the machine, a process the compiler was
designed to circumvent. To accomplish the simple task of adding two
numbers, a significant number of lines of code were written.
The programmer must instruct the machine in its particular code to bring one
number from the storage. He must tell it in another
operation to add another
number from the storage and must further instruct it to take the sum and send it
back to storage. These three operations for the computer are implied by one
operation in the flow chart. It is frequently difficult to look at the flow chart and
say it is going to need 752 lines of coding; it is almost impossible without
experience with the particular type of problem. Some of the most innocuous
looking boxes on the flow chart may take lines and lines of coding.47
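The expansion described in the quotation — one flow-chart box becoming a load, an add, and a store — can be sketched as a toy simulation. This is a schematic illustration only, not actual UNIVAC C-10 code; the instruction names and the accumulator model are assumptions made for clarity.

```python
# A schematic sketch (not actual UNIVAC C-10 code) of how the single
# flow-chart operation "c = a + b" expands into three machine steps:
# bring one number from storage, add another, and send the sum back.

def run(program, storage):
    """Interpret a tiny one-accumulator machine; each instruction is
    a pair (operation, storage_address)."""
    accumulator = 0
    for op, addr in program:
        if op == "LOAD":      # bring a number from storage
            accumulator = storage[addr]
        elif op == "ADD":     # add another number from storage
            accumulator += storage[addr]
        elif op == "STORE":   # send the result back to storage
            storage[addr] = accumulator
    return storage

storage = {"a": 2, "b": 3, "c": 0}
# One flow-chart box, three coded operations:
program = [("LOAD", "a"), ("ADD", "b"), ("STORE", "c")]
print(run(program, storage)["c"])  # -> 5
```

The point of the passage survives in miniature: even the most innocuous-looking box multiplies into several coded lines, and estimating that multiplication in advance took experience.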
It was at this point in the process that a decision could be made between
two possible solutions, if multiple possibilities were under investigation.
After coding, an independent programmer at EMCC checked the entire
program. At EMCC, the coders tried to have at least two independent
Iterations – Norberg – Software Development at EMCC
17
checks made of the code at this point. In the
event the problem needed to
be put aside for a higher priority concern, Stowe would try to write a
report on the work to that point so that when he returned to the problem he
would not have to start anew. Then came specific operating instructions.
These instructed the operator what to do if something went wrong with the
routine, how the tapes were to be mounted, how long was the expected
run. Even these instructions were occasionally not enough. Sometimes, the
programmer actually accompanied the operator during the program run to
be able to handle programming errors if they turned up.
One last point about the conventional programming activity should be
noted. As computer people emphasize repeatedly, internal memory space
and computer running time were at a premium. To conserve time when
running a new problem for the first time, the coder entered rerun aspects
(i.e., checkpoints) to the program to prevent having to go back to the
beginning of the problem each time an error was corrected and the
problem was rerun. “The problem should be arranged in such a logical
form that it is necessary to go back only just so far and start over, not go
back and completely rerun the entire problem.”48 This procedure saved
time, and, perhaps more important, money.
In his talk to the seminar, Stowe emphasized the need for training of
coders and programmers, a point also stressed by Mauchly to Remington
Rand management at this time. The programmer needed to be familiar with
the logic of the computer system, but did not need to be a computer
design specialist; programming could be taught. A background in
particular problems would be a decided advantage. He
stopped short of calling for a training program. If nothing else, Stowe’s
presentation illustrated the need and desire of EMCC programming people
for more sophisticated programming tools. The Hopper group’s work on A-0
was designed to meet this need. But in 1951 A-0 did not go far enough.
Hopper realized that even writing the input specifications for the A-0
compiler was long and cumbersome. She and her group adopted a three-
address code for the 12 alpha-decimal characters. They imposed this on
top of A-0, and wrote a translator to put on the front end of A-0. Thus, the
A-2 compiler came to be.49 In the previous compilers (A-0 and A-1), the
problem analyst prepared the problem for solution and submitted the steps
to a coder for preparation, just as was done in coding any problem. In A-2,
the analyst circumvented this step through the use of a “Pseudo-code.”
The Pseudo-code instructions were recorded on tape and read into the
computer. The compiler read these instructions and assembled the entire
program for running. By this time, the C-10 code was in use on the
UNIVAC.
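The front-end idea described above can be sketched in miniature: a translator expands each three-address pseudo-code line — an operation, two operands, and a result location — into the per-operation steps that a coder previously wrote by hand. The instruction format and operation names here are invented for illustration; the actual A-2 pseudo-code and C-10 code differed.

```python
# An illustrative sketch (the real A-2 pseudo-code format differed) of a
# three-address instruction being translated, by a front end, into the
# lower-level steps formerly prepared by a human coder.

def translate(pseudo_line):
    """Expand one three-address pseudo-code line into machine-style steps."""
    op, src1, src2, dest = pseudo_line.split()
    return [("LOAD", src1), (op, src2), ("STORE", dest)]

def execute(steps, storage):
    """Run the expanded steps on a toy one-accumulator machine."""
    accumulator = 0
    for op, addr in steps:
        if op == "LOAD":
            accumulator = storage[addr]
        elif op == "ADD":
            accumulator += storage[addr]
        elif op == "MUL":
            accumulator *= storage[addr]
        elif op == "STORE":
            storage[addr] = accumulator
    return storage

storage = {"x": 4, "y": 5, "z": 0}
steps = translate("ADD x y z")       # one pseudo-code line -> three steps
print(execute(steps, storage)["z"])  # -> 9
```

The analyst writes the single pseudo-code line; the translation to individual operations, formerly the coder's job, happens mechanically — which is the step A-2 circumvented.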