HotpathVM: An Effective JIT Compiler for
Resource-constrained Devices
Andreas Gal
Donald Bren School of Information and
Computer Science
University of California, Irvine
Irvine, CA 92697-3425, USA
gal@uci.edu
Christian W. Probst
Informatics and Mathematical Modelling
Technical University of Denmark
2800 Kongens Lyngby, Denmark
probst@imm.dtu.dk
Michael Franz
Donald Bren School of Information and
Computer Science
University of California, Irvine
Irvine, CA 92697-3425, USA
franz@uci.edu
Keywords
Dynamic Compilation, Embedded and Resource-constrained
Systems, Mixed-mode Interpretive/compiled Systems, Software
Trace Scheduling, Static Single Assignment Form, Virtual Ma-
chines
Abstract
We present a just-in-time compiler for a Java VM that is small
enough to fit on resource-constrained devices, yet is surprisingly ef-
fective. Our system dynamically identifies traces of frequently ex-
ecuted bytecode instructions (which may span several basic blocks
across several methods) and compiles them via Static Single
Assignment (SSA) construction. Our novel use of SSA form in this
context allows us to hoist instructions across trace side-exits without
necessitating expensive compensation code in off-trace paths. The
overall memory consumption (code and data) of our system is only
150 kBytes, yet benchmarks show speedups that in some cases
rival those of heavyweight just-in-time compilers.
1. Introduction
A decade after the arrival of Java, great progress has been made
in improving the run-time performance of platform-independent
virtual-machine based software. However, using such machine-
independent software on resource-constrained devices such as mo-
bile phones and PDAs remains a challenge, as both interpretation
and just-in-time compilation of the intermediate VM language run
into technological limitations.
Running virtual-machine based code strictly in interpreted
mode brings with it severe performance overheads, and as a
result requires running the device's processor at a much higher clock
speed than if native code were executed instead. This in turn leads
to increased power consumption and reduced battery autonomy,
and may require more expensive processors than a
pure native-code solution. Just-in-time compilation, on the other
hand, produces more efficient native code as an end result, but the
process of getting to that native code may be very costly for a
resource-constrained device to perform in the first place.
VEE’06
June 14–16, 2006, Ottawa, Ontario, Canada.
Copyright © 2006 ACM 1-59593-332-6/06/0006...$5.00.
For example, Sun’s Java HotSpot Virtual Machine 1.4.2 for
PowerPC includes a just-in-time compiler that achieves an impres-
sive speedup of over 1500% compared to pure interpretation. How-
ever, this comes at the price of a total VM size of approximately
7MB, of which about 90% can be attributed to the just-in-time
compiler. Clearly, such resources (which do not even include dynamic
memory requirements) are not available on most current embedded
devices.
As a consequence, distinct embedded just-in-time compilers
have emerged, in which trade-offs are being made between re-
source consumption of the just-in-time compiler and the ultimate
execution performance of the code being run on top of the VM.
As a representative of this class of VM-JIT compilers, Insignia’s
Jeode EVM [16] achieves a speedup of 600% over pure interpreta-
tion [14].
Embedded just-in-time compilers achieve their results using
significantly fewer resources than their larger counterparts mostly
by using simpler algorithms. A commonly cited example is the
use of linear-scan register allocation instead of a graph-coloring
approach, which not only reduces the run-time of the algorithm,
but also greatly diminishes the memory footprint. Embedded just-
in-time compilers also tend to use far less ambitious data struc-
tures than “unconstrained” compilers—for example, while the use
of Static Single Assignment form [6] is fairly standard in just-in-
time compilers, the time and memory needed to convert just the
10% most frequently executed methods to SSA form using tradi-
tional techniques would far exceed the resources of most embedded
computers.
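To make the cost contrast concrete, the following is a minimal sketch of linear-scan register allocation over precomputed live intervals. This is an illustrative generic algorithm in Python, not HotpathVM's actual allocator; the `(name, start, end)` interval representation is an assumption made purely for illustration:

```python
# Illustrative sketch of linear-scan register allocation: live intervals are
# walked in order of start point, expired intervals return their registers to
# the free pool, and when no register is free the interval that ends furthest
# in the future is spilled to memory.
def linear_scan(intervals, num_regs):
    """intervals: iterable of (name, start, end) live intervals."""
    SPILL = "spill"
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by start point
    active = []                                          # live, sorted by end
    free = list(range(num_regs))
    assignment = {}                                      # name -> reg or SPILL
    for name, start, end in intervals:
        # Expire intervals that ended before this one starts.
        for old in active[:]:
            if old[2] <= start:
                active.remove(old)
                free.append(assignment[old[0]])
        if free:
            assignment[name] = free.pop()
            active.append((name, start, end))
        else:
            # Spill whichever interval ends last: either the current one
            # or the active interval with the furthest end point.
            victim = active[-1]
            if victim[2] > end:
                assignment[name] = assignment[victim[0]]
                assignment[victim[0]] = SPILL
                active.remove(victim)
                active.append((name, start, end))
            else:
                assignment[name] = SPILL
                continue
        active.sort(key=lambda iv: iv[2])
    return assignment
```

The appeal for embedded use is visible in the sketch itself: one pass over intervals sorted by start point, a small active list, and no interference graph at all, which is where a graph-coloring allocator spends most of its time and memory.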
In this paper, we present a just-in-time compiler that pursues a
new dynamic-compilation approach. Our compiler is an add-on to
the JamVM [17], a virtual machine for embedded devices. Unlike
other just-in-time compilers that are “intertwined” with the virtual
machine hosting them, ours requires changing no more than 20
lines of JamVM’s source code. The first prototype of our compiler
was in fact designed as an add-on for Sun’s KVM [23, 24] virtual
machine. Porting the compiler to JamVM required only minimal
changes to both our JIT compiler and the JamVM source
base. Our JIT compiler runs in a total footprint of 150 kBytes
(code and data included), yet for regular code it still achieves
speedups similar to those of heavyweight JIT compilers.
Key to the success of our approach is trace-based compilation
using SSA form. As in other systems before it, the HotpathVM
JIT compiler dynamically identifies execution traces that are exe-
cuted frequently—we build dynamic traces from bytecode (which
would have been interpreted anyway) rather than from native code,
so that the relative overhead of trace recording is much less criti-
cal. But the real novelty of our system comes to bear after a hot