
PROGRAMMING LANGUAGES

8.2 Late Binding of Machine Code

In the traditional conception (Example 1.7), compilation is a one-time activity, sharply distinguished from program execution. The compiler produces a target program, typically in machine language, which can subsequently be executed many times for many different inputs.



8.2.1 Just-in-Time and Dynamic Compilation

To promote the Java language and virtual machine, Sun Microsystems coined the slogan “write once, run anywhere”—the idea being that programs distributed as Java byte code (JBC) could run on a very wide range of platforms. Source code, of course, is also portable, but byte code is much more compact, and can be interpreted without additional preprocessing.


Dynamic Compilation

We have noted that a language implementation may choose to delay JIT compilation to reduce the impact on program start-up latency. In some cases, compilation must be delayed, either because the source or byte code was not created or discovered until run time, or because we wish to perform optimizations that depend on information gathered during execution. Consider, for example, the in-lining of a virtual method call in Java. Sometimes the dynamic type of the receiving object is obvious:


C o = new C( args );
o.bar(); // no question what type this is

Other times it is not:

static void f(C o) {
    o.bar();
}

Here the compiler can in-line the call only if it knows that f will never be passed an instance of a class derived from C. A dynamic compiler can perform the optimization if it verifies that there exists no class derived from C anywhere in the (current version of the) program. It must keep notes of what it has done, however: if dynamic linking subsequently extends the program with code that defines a new class D derived from C, the in-line optimization may need to be undone.
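For instance, dynamic loading of a class like the following would invalidate the optimization (a hypothetical sketch; the text does not show C's definition):

class C {
    void bar() { System.out.println("C.bar"); }
}

// Loaded later via dynamic linking; its override of bar() invalidates
// any call site at which C.bar() was in-lined.
class D extends C {
    @Override
    void bar() { System.out.println("D.bar"); }
}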



Alternatively, the compiler can in-line the call but guard the optimized code with an explicit run-time type check:

static void f(C o) {
    if (o.getClass() == C.class) {
        ... // code with in-lined calls -- much faster
    } else {
        ... // code without in-lined calls
    }
}

An Example System: The HotSpot Java Compiler

HotSpot is Sun's principal JVM and JIT compiler for desktop and server systems. It was first released in 1999, and is available as open source. HotSpot takes its name from its use of dynamic compilation to improve the performance of hot code paths. It comes in client and server versions, the first of which is tuned for systems in which a human user frequently starts new programs. It translates JBC to static single assignment (SSA) form and performs a few straightforward machine-independent optimizations.
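As a quick illustration (the class below is our sketch, not from the text), HotSpot's JIT activity can be observed with its standard -XX:+PrintCompilation flag; a loop like the following quickly turns square into a hot path:

// javac Hot.java
// java -XX:+PrintCompilation Hot   (look for lines mentioning Hot::square)
public class Hot {
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++)
            sum += square(i);   // executed a million times: a hot path
        System.out.println(sum);
    }
}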

The .NET Common Language Runtime (CLR) likewise JIT-compiles CIL methods to machine code. As noted in Section 10.2, C# 3.0 includes lambda expressions reminiscent of those in functional languages:



Func<int, int> square_func = x => x * x;

Here square_func is a function from integers to integers that multiplies its parameter (x) by itself, and returns the product. It is analogous to the following in Scheme.



(let ((square-func (lambda (x) (* x x)))) ...

Given the C# declaration, we can write

y = square_func(3); // 9
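(For comparison, the same function can be written as a Java 8 lambda using the standard IntUnaryOperator interface; a sketch, not from the text, and note that Java has no direct analogue of the expression trees discussed next:)

import java.util.function.IntUnaryOperator;

public class SquareDemo {
    public static void main(String[] args) {
        IntUnaryOperator squareFunc = x -> x * x;     // lambda, as in the C# example
        System.out.println(squareFunc.applyAsInt(3)); // 9
    }
}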

But just as Lisp allows a function to be represented as a list, so too does C# allow a lambda expression to be represented as a syntax tree:

Expression<Func<int, int>> square_tree = x => x * x;

Various methods of library class Expression can now be used to explore and manipulate the tree. When desired, the tree can be converted to CIL code:

square_func = square_tree.Compile();

These operations are roughly analogous to the following in Scheme.



(let* ((square-tree '(lambda (x) (* x x))) ; note the quote mark

(square-func (eval square-tree (scheme-report-environment 5))))

...


The difference in practice is that while Scheme's eval checks the syntactic validity of the lambda expression and creates the metadata needed for dynamic type checking, the typical implementation leaves the function in list (tree) form, and interprets it when called. C#'s Compile, by contrast, is expected to produce CIL code, which will in turn be JIT-compiled and directly executed when the function is called. Many Lisp dialects and implementations have employed an explicit mix of interpretation and compilation. Common Lisp includes a compile function that takes the name of an existing (interpretable) function as argument. As a side effect, it invokes the compiler on that function, after which the function will (presumably) run much faster:



(defun square (x) (* x x)) ; outermost level function declaration

(square 3) ; 9

(compile 'square)

(square 3) ; also 9 (but faster :-)

CMU Common Lisp, a widely used open-source implementation of the language, incorporates two interpreters and a compiler with two back ends.
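Java offers a loose analogue of Lisp's compile in the standard javax.tools API, which lets a running program invoke the system compiler on dynamically generated source (a minimal sketch, assuming a full JDK rather than a bare JRE; class and file names are ours):

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RuntimeCompile {
    public static void main(String[] args) throws IOException {
        // Generate a tiny class at run time and write it to disk.
        String source =
            "public class Square { public static int square(int x) { return x * x; } }";
        Path file = Path.of("Square.java");
        Files.writeString(file, source);

        // Invoke the system Java compiler on the generated source.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null) {        // null on a JRE without a compiler
            System.err.println("no system compiler available");
            return;
        }
        int status = compiler.run(null, null, null, file.toString());
        System.out.println(status == 0 ? "compiled Square.class" : "compilation failed");

        // Square.class can now be loaded (e.g., with a ClassLoader) and run;
        // HotSpot will JIT-compile its byte code on demand.
    }
}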



Perl supports run-time evaluation as well. Among other mechanisms, the /e modifier on the s/// substitution operator causes the replacement text to be evaluated as Perl code (a doubled modifier, as here, evaluates the result a second time):

$foo = "abc";
$foo =~ s/b/2 + 3/ee; # replace b with the value of 2 + 3

print "$foo\n"; # prints a5c

Perl can also be directed, via library calls or the perlcc command-line script (itself written in Perl), to translate source code to either byte code or machine code. In the former case, the output is an “executable” file beginning with #! /usr/bin/perl. If invoked from the shell, this file will feed itself back into Perl 5, which will notice that the rest of the file contains byte code instead of source, and will perform a quick reconstruction of the syntax tree, ready for interpretation. If directed to produce machine code, perlcc generates a C program, which it then runs through the C compiler.



8.2.2 Binary Translation

Just-in-time and dynamic compilers assume the availability of source code or of byte code that retains all of the semantic information of the source. There are times, however, when it can be useful to recompile object code. This process is known as binary translation. It allows already-compiled programs to be run on a machine with a different instruction set architecture. Readers may be familiar, for example, with Apple’s Rosetta system, which allows programs compiled for older PowerPC-based Macintosh computers to run on newer x86-based Macs.


Dynamic Optimization

In a long-running program, a dynamic translator may revisit hot paths and optimize them more aggressively. A similar strategy can also be applied to programs that don't need translation—that is, to programs that already exist as machine code for the underlying architecture. This sort of dynamic optimization has been reported to improve performance by as much as 20% over already-optimized code, by exploiting run-time profiling information. Much of the technology of dynamic optimization was pioneered by the Dynamo project at HP Labs in the late 1990s. Dynamo was designed to transparently enhance the performance of applications for the PA-RISC instruction set. A subsequent version, DynamoRIO, was written for the x86. Dynamo's key innovation was the concept of a partial execution trace: a hot path whose basic blocks can be reorganized, optimized, and cached as a linear sequence. An example of such a trace appears in Figure 15.4. Procedure print_matching takes a set and a predicate as arguments, and prints all elements of the set that match the predicate. At run time, Dynamo may discover that the procedure is frequently called with a particular predicate p that is almost never true.
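The text gives no source for print_matching; a sketch of its shape in Java (names and types are ours):

import java.util.Set;
import java.util.function.Predicate;

public class PrintMatching {
    // Prints all elements of the set that satisfy the predicate.
    static <T> void printMatching(Set<T> set, Predicate<T> pred) {
        for (T elem : set)
            if (pred.test(elem))          // with a rarely-true predicate, the
                System.out.println(elem); // untaken branch dominates the hot path
    }
}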


