

System-level testing is one of the major expenses in developing a complex product that incorporates embedded computing. The need to minimize time to market while simultaneously producing a thoroughly tested product presents tremendous challenges. Increasing levels of complexity in system hardware and software are making this problem more severe with each new generation of products. Additionally, any significant changes made to an existing product's hardware or software must be thoroughly regression-tested to confirm that the changes do not produce unintended effects.

Embedded computing is becoming more pervasive in systems that are safety-critical, which makes the need for thorough testing even more acute. Clearly, there is an urgent need to accelerate and automate system-level testing as much as possible within the constraints of test system cost and development effort.

Hardware-in-the-loop (HIL) simulation is a technique for performing system-level testing of embedded systems in a comprehensive, cost-effective, and repeatable manner. HIL simulation is most often used in the development and testing of embedded systems, when those systems cannot be tested easily, thoroughly, and repeatably in their operational environments.

HIL simulation requires the development of a real-time simulation that models some parts of the embedded system under test (SUT) and all significant interactions with its operational environment. The simulation monitors the SUT's output signals and injects synthetically generated input signals into the SUT at appropriate points. Output signals from the SUT typically include actuator commands and operator display information. Inputs to the SUT might include sensor signals and commands from an operator. The outputs from the embedded system serve as inputs to the simulation, and the simulation generates outputs that become inputs to the embedded system. Figure 1 shows a high-level view of an example HIL simulation.

Embedded Systems Programming, February 1999

By Jim A. Ledin

Hardware-in-the-Loop Simulation

Hardware-in-the-loop (HIL) simulation is a technique for performing system-level testing of embedded systems in a comprehensive, cost-effective, and repeatable manner. This article provides an overview of the techniques of HIL simulation, along with hardware and software requirements, implementation methods, algorithms, and rules of thumb.





In the figure, the SUT is represented as having interfaces to sensors and actuators, as well as an operator interface that displays information and accepts command inputs. In this example, operator commands are synthetically generated by the simulation to stimulate the SUT during testing. The use of synthetically generated operator commands allows the automation of test sequences and permits precise repeatability of tests. It may also be possible to run the simulation in a mode in which humans observe the operator displays and generate inputs. This mode may be valuable for usability testing and operator training, but the ability to precisely repeat test sequences is lost.

You can apply the HIL simulation test concept to a wide variety of systems, from relatively simple devices such as a room temperature controller to complex systems like an aircraft flight control system. HIL simulation has historically been used in the development and testing of complex and costly systems such as military tactical missiles, aircraft flight control systems, satellite control systems, and nuclear reactor controllers.

Ongoing performance advances and price reductions in computer hardware have made it possible to develop cost-effective HIL simulations for a much wider range of products. This article is intended to promote the application of HIL simulation to areas where this technique may not have been used in the past.

HIL simulation can help you to develop products more quickly and cost-effectively with improved test thoroughness. Another benefit of HIL simulation is that it significantly reduces the possibility of serious problems being discovered after a product has gone into production. During the product development phase, HIL simulation is a valuable tool for performing design optimization and hardware/software debugging.

However, for an HIL simulation (or any simulation, for that matter) to produce reliable outputs, it is critical to demonstrate that the simulated environment is an adequate representation of reality. This is where the concepts of simulation verification and validation must come into play.

Simulation verification and validation

Before the HIL simulation can be used to produce useful results, there must first be a demonstration that the SUT and its simulated environment in the HIL simulation sufficiently represent the operational system and environment. The major steps in the process of demonstrating and documenting that the correctness and fidelity of the simulation are adequate are called verification and validation.

Verification is the process of demonstrating that the HIL simulation contains an accurate implementation of the conceptual mathematical models of the SUT and its environment. This step can be performed before a prototype of the SUT has even been developed. Verification is typically performed by comparing HIL simulation results with the results of analytical calculations or results from independently developed simulations of the SUT. Other techniques, such as source code peer reviews, can be effective verification tools.

Validation consists of demonstrating that the HIL simulation models the embedded system and the real-world operational environment with acceptable accuracy. A standard approach to validation is to use the results of system operational tests for comparison against simulation results. This type of validation test involves running the embedded system in the HIL simulation through a test scenario that is identical to one that was performed by an actual system in an operational environment. The results of the two tests are compared and any differences are analyzed to determine if they represent a significant deviation between the simulation and the real world.

FIGURE 1. Example HIL simulation: the embedded system (SUT) sends actuator control signals and operator displays to the real-time simulation, which returns sensor input signals and operator commands. (Illustration by Ben Fishman.)

If the operational test results don't accurately match the HIL simulation results, improving the fidelity of the simulation modeling in particular areas may be necessary. Defects in the simulation software or hardware interfacing may also become apparent when comparing simulation results to operational test results. If changes must be made to the HIL simulation to correct problems, you'll have to rerun any validation tests that have already been performed to confirm adequate simulation performance.

By carefully selecting the scenarios for operational system tests, you can validate the simulation for a limited number of specific test conditions and then proceed on the assumption that the simulation is valid over interpolations between those conditions. You must take care to demonstrate that the intervals over which interpolation is being performed are smooth and do not contain any hidden surprises.

After all deviations between the performance of the embedded system in the operational environment and in the HIL simulation have been reduced to an acceptable level and properly documented, the HIL simulation is considered validated. HIL simulation tests can then be used to replace or augment operational tests of the system as long as the environment being simulated falls within the validated range of the simulation.

When working with a validated simulation, you must always be careful to understand the range of conditions over which the simulation has been validated. If a proposed test is for a situation where the simulation has not been validated and an extrapolation from validated conditions cannot be justified, you must use the simulation only with extreme caution. In these cases, the simulation results should be accompanied by a statement of the assumptions that had to be made in using the simulation.

When making modifications to a validated simulation, you'll have to maintain adequate software configuration control and carry out regression tests to verify that simulation performance isn't adversely affected as changes are made. Restoring and running a previous version of a simulation is sometimes necessary. This situation can occur when a question arises as to whether an unexpected result is caused by actual behavior of the SUT,



Real-time integration algorithms

In an HIL simulation, the interactions between the SUT and the real-world environment are usually modeled as a continuous system defined by a set of nonlinear differential equations. This system must be arranged as a set of first-order differential equations, which are solved numerically in the real-time simulation.

Differential equations can be solved approximately by advancing time in small, discrete steps and approximating the solution at each step using a numerical integration algorithm. In a non-real-time simulation, a variety of algorithms exists for varying the time step size so that a predefined degree of accuracy is maintained. These methods are usually unsuitable for use in a real-time environment due to requirements to sample inputs and update outputs at precise time intervals.

Some popular fixed-step-size integration algorithms, such as the fourth-order Runge-Kutta algorithm, require inputs for their derivative sub-step calculations that occur at a time in the future relative to the time of the current sub-step. This makes them unsuitable for use in a real-time application.

Algorithms that are suitable for real time generally use fixed time step sizes and only require inputs for derivative evaluation that are available at the current time or earlier. The simplest algorithm that meets these criteria is Euler integration. The formula for integrating a state x from the current time step (denoted by the subscript n) to a time h seconds into the future (subscript n+1) with a current derivative ẋ_n is:

x_{n+1} = x_n + h·ẋ_n

While it has the advantage of simplicity, Euler integration has poor performance characteristics in terms of the error in approximating the solution. It's a first-order algorithm, so the accumulated error in the solution is proportional to the step size h.
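As a concrete illustration (mine, not from the article), the Euler step above is a one-line C function:

```c
/* One Euler integration step: advance the state x_n to x_{n+1}
   using the current derivative xdot_n and frame time h:
   x_{n+1} = x_n + h * xdot_n */
double euler_step(double x_n, double xdot_n, double h)
{
    return x_n + h * xdot_n;
}
```

For example, with x_n = 1.0, ẋ_n = -1.0, and h = 0.1, one step yields 0.9.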

An algorithm that provides much better accuracy while remaining suitable for real-time use is the Adams-Bashforth second-order algorithm. It uses the current value of the derivative ẋ_n as well as the previous frame's derivative ẋ_{n−1}:

x_{n+1} = x_n + h·((3/2)·ẋ_n − (1/2)·ẋ_{n−1})

This is a second-order method, with error proportional to h². It is the most commonly used integration method in real-time continuous system simulations. Its main drawback is that a significant transient error can occur if there is a discontinuity in the derivative function. It also requires an initial step using the Euler algorithm, because ẋ_{n−1} isn't available during the first simulation frame.

Many additional real-time integration algorithms are available at orders from second to fourth. Some, such as versions of the Runge-Kutta methods modified to be compatible with real-time inputs, provide additional features such as superior performance in dealing with discontinuous derivatives. This property is valuable when interfacing a model of a continuous system to a model of a discrete system.

In practice, the Adams-Bashforth second-order method is usually suitable. If this method doesn't work well, you may need to reduce the integration step time h.
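One way the Adams-Bashforth step might be coded, including the Euler startup frame the text describes; the struct layout and names are illustrative assumptions, not from the article:

```c
/* Per-state bookkeeping for the AB-2 integrator. */
typedef struct {
    double x;          /* current state value x_n                 */
    double xdot_prev;  /* previous frame's derivative xdot_{n-1}  */
    int    have_prev;  /* 0 until one frame has completed         */
} ab2_state;

/* One Adams-Bashforth second-order step:
   x_{n+1} = x_n + h * (3/2 * xdot_n - 1/2 * xdot_{n-1}).
   The first frame falls back to Euler because xdot_{n-1}
   does not exist yet. */
double ab2_step(ab2_state *s, double xdot_n, double h)
{
    if (!s->have_prev) {
        s->x += h * xdot_n;   /* Euler startup frame */
        s->have_prev = 1;
    } else {
        s->x += h * (1.5 * xdot_n - 0.5 * s->xdot_prev);
    }
    s->xdot_prev = xdot_n;    /* save for the next frame */
    return s->x;
}
```

Integrating ẋ = −x from x = 1 with h = 0.1: the Euler startup gives 0.9, and the first true AB-2 step gives 0.9 + 0.1·(1.5·(−0.9) − 0.5·(−1.0)) = 0.815.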







or is the result of a change that has been made to the simulation software. You can quickly resolve these questions if the simulation software is under proper configuration management. Reasonably priced software version control tools are available that can make the configuration management process robust and relatively easy.

Verification and validation are indispensable parts of simulation development. If the simulation does not undergo an adequate verification and validation process, its results will lack credibility with decision makers and significant project risks may be created. To minimize these risks, you must allocate adequate resources for a verification and validation effort from the beginning of an HIL simulation development project.

HIL simulation hardware and software

To develop an HIL simulation, you'll need suitable computing and I/O hardware, as well as software to perform the real-time simulation modeling and I/O operations. Let's examine the requirements for these components in more detail.

Simulation computer hardware

In addition to the SUT, the hardware used in an HIL simulation must include:

- A computer system capable of meeting the real-time performance requirements of the simulation
- Facilities on the simulation computer (or on a connected host computer) to allow operator control of the simulation as well as simulation data collection, storage, analysis, and display
- A set of I/O interfaces between the simulation computer and the SUT

The real-time performance requirements for the simulation computer depend on the characteristics of the embedded system to be tested and its operational environment, such as:

- The SUT's I/O update rates and I/O data transfer speeds
- The bandwidth of the dynamic system composed of the SUT and its environment
- The complexity of the SUT elements and operational environment to be modeled in the simulation software

I/O devices

Many different categories of I/O devices are used in embedded systems. In an HIL simulation, an I/O device must be installed in the simulation computer that connects to each SUT I/O port of interest. I/O interfaces are available from several sources that support signal types such as:

- Analog (DACs and ADCs)
- Discrete digital (for example, TTL or differential)
- Serial (for example, RS-232 or RS-422)
- Real-time data bus (for example, MIL-STD-1553, CAN, or ARINC-429)
- Instrumentation bus (IEEE-488, for example)
- Network (Ethernet, for example)
- Device simulators (for simulating LVDT transducers, thermocouples, and the like)

For an SUT with low I/O rates and a simulated environment that isn't overly complex, an ordinary PC running a non-real-time operating system such as Windows NT may be capable of running a valid and useful HIL simulation. For complex, high-I/O-rate SUTs, a high-performance computer system is essential. In these applications, you'll need more than just raw CPU speed. The simulation computer must also have well-defined and repeatable real-time performance characteristics. A high-performance simulation might require that the software update all the simulation models and perform I/O at precise intervals of a few hundred microseconds.

The simulation computer must provide system-level software that supports real-time computing and doesn't allow code execution to be blocked in inappropriate ways. Most general-purpose operating systems don't provide sufficient support of real-time features to be useful in anything other than a low-I/O-rate HIL simulation. This condition may necessitate the use of an RTOS or a dedicated real-time software environment on the simulation computer.

A summary of requirements for a high-performance real-time simulation computer system includes:

- High-performance CPUs
- Support for real-time operations
- High data transfer rates
- Support for a variety of I/O devices

In years past, these requirements were met by specially designed, extremely expensive simulation computers. More recently, many HIL simulation developers have turned to the VME bus as the basis for their simulation computer systems. In the future, newer buses such as CompactPCI may provide a lower-cost foundation for developers of real-time simulations.

Simulation software structure

The simulation software contains sections of code that perform the tasks needed during real-time simulation. A diagram of the basic software flow of an HIL simulation is shown in Figure 2.

As examination of the flow diagram reveals, the HIL simulation software can be separated into three basic parts:

- Initialization of the simulation software and external hardware
- A dynamic loop that includes I/O, simulation model evaluation, and state variable integration
- Shutdown of the simulation software and external hardware
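The three-part structure above can be sketched in C. Everything here is a placeholder of mine: the hook functions stand in for a real simulation's model and I/O code, and a real implementation would block on a hardware interval timer rather than the stub shown.

```c
/* Count of completed frames, so the skeleton is observable. */
static int frames_run = 0;

/* Placeholder hooks -- a real HIL simulation would put its
   I/O, model, and integration code behind these. */
static void read_inputs(void)          { /* sample SUT output signals */ }
static void evaluate_models(void)      { /* compute state derivatives */ }
static void write_outputs(void)        { /* drive SUT input signals   */ }
static void integrate_states(void)     { /* advance the state vector  */ }
static void wait_for_frame_timer(void) { /* block until the next frame
                                            boundary (stubbed here)   */ }

/* The basic flow: initialize, run the dynamic loop until the
   end of the run, then shut down. */
void run_simulation(int n_frames)
{
    /* initialization of simulation software and hardware goes here */
    while (frames_run < n_frames) {
        read_inputs();
        evaluate_models();
        write_outputs();
        integrate_states();
        wait_for_frame_timer();   /* the frame time h elapses here */
        frames_run++;
    }
    /* shutdown of simulation software and hardware goes here */
}
```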

At the bottom of each pass through the dynamic loop, an interval timer must expire before execution of the next frame begins. The length of this interval, known as the simulation frame time, is a critical parameter for the HIL simulation. The frame time must be short enough to maintain simulation model accuracy and numerical stability. At the same time, it must be long enough to tolerate the worst-case time to complete all the calculations and I/O operations in the dynamic loop. A shorter frame time requirement implies higher performance requirements for the simulation computer hardware. Alternatively, a shorter frame time may require simplification of the simulation models so that their calculations can complete in the available time. As the frame time is lengthened, simulation accuracy degrades. At a certain point, the numerical integration algorithm becomes unstable. The following formula is a rule of thumb for determining the longest acceptable frame time for a simulation model:

h ≤ t_s / 20

In this formula, t_s is the shortest time constant in seconds of the simulated dynamic system and h is the frame time in seconds. As h is increased above the range given by the formula, the accuracy of the simulation will begin to suffer and eventually it will become numerically unstable.
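The rule of thumb is simple arithmetic; as a sketch (the function name is mine, not the article's):

```c
/* Longest acceptable frame time per the h <= t_s/20 rule of thumb,
   where t_s is the shortest time constant (in seconds) of the
   simulated dynamic system. */
double max_frame_time(double t_s)
{
    return t_s / 20.0;
}
```

For example, a system whose shortest time constant is 50 ms calls for a frame time of no more than 2.5 ms.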

Multiframing

Some subsystems modeled in a simulation may have time constants that vary significantly from those of other subsystems. When this occurs, improving simulation performance is possible by using the technique of multiframing. A multiframe simulation has more than one frame rate. The frame times h_1, h_2, and so forth are generally set at integer multiples of a common factor, with the fastest frame time called h_f. The simulation updates each model at the times appropriate for that model's frame rate.

Faster frame rate models that use values computed by slower models may need to use interpolation or extrapolation techniques to compute input values appropriate for the current time. State variables in a slow frame are suitable for interpolation, and algebraic variables must be extrapolated. The interpolation or extrapolation may be done using methods of first or higher order.
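First-order versions of both operations are one-liners; here is a sketch with names of my choosing:

```c
/* First-order interpolation between two known slow-frame values,
   usable for state variables (both endpoints are available).
   frac is the fraction of the slow frame elapsed, in 0..1. */
double interp_linear(double y0, double y1, double frac)
{
    return y0 + frac * (y1 - y0);
}

/* First-order extrapolation from the latest slow-frame value and
   its rate of change, usable for algebraic variables (the next
   value isn't known yet).  dt is the time elapsed since the
   value was computed. */
double extrap_linear(double y, double ydot, double dt)
{
    return y + ydot * dt;
}
```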

If the simulation computer supports multithreading, a scheduling technique such as Rate Monotonic Scheduling can manage the execution of the models and their I/O operations at the different frame rates (see the sidebar on Rate Monotonic Scheduling). Appropriate inter-thread communication techniques must be used to allow data to be passed among the models in a controlled manner. A multiframe simulation can provide comparable accuracy to a single-frame-rate simulation that updates all its models at the h_f rate while requiring far less CPU power.

Simulation programming languages and environments


Over the years, a number of specialized programming languages and environments have been developed to ease the task of developing simulations of dynamic systems. Some simulation languages are text-based, while other development environments are graphical. Graphical tools allow the user to construct diagrams describing the system to be simulated. Many of these tools are intended for use only in non-real-time applications, and therefore aren't suitable for developing HIL simulations. In addition, many of these languages and tools are either proprietary or are supported by only a single company.

Common attributes of full-featured dynamic simulation languages and graphical development environments include:

- Numeric integration of state variables





- Support for multidimensional interpolation functions
- Support for matrix and vector data types and mathematical operations
- A library of simulation component models such as transfer functions, limiters, or quaternions

For a simulation language or graphical programming tool to be useful for HIL simulation, it must be able to generate code optimized for use on a real-time simulation computer. It should also provide a simulation operator interface that allows the user to:

- Examine and modify simulation variables
- Perform simulation runs
- Collect and store simulation data
- Analyze, display, and print simulation results
- Debug and optimize the simulation



In addition to the specialized simulation languages and graphical development tools, it is common to use general-purpose programming languages such as Fortran and C to develop HIL simulations. Many HIL simulations have been written from scratch, using only the tools and libraries supplied with the target system compiler. While this approach can succeed, it requires a great deal of effort developing efficient and correct methods for numerical integration, interpolation function generation, coordinate transformation, and so forth.

Developing simulation models

Developing simulation models is a field unto itself. For a relatively simple application, it may be possible to develop a model that consists of a few equations describing the physics of the subsystem or phenomenon being modeled. For a more complex system, a model might be based on a large number of complex mathematical algorithms and voluminous tables of experimentally measured data.

A real-time simulation model of a continuous dynamic system is usually implemented as a set of simultaneous, nonlinear, first-order ordinary differential equations. These equations are supplemented by algebraic equations that are used to compute intermediate and other miscellaneous values. This entire set of equations is used to compute the derivatives of the state variables during each simulation frame. After evaluating the derivatives, the state variables are integrated numerically to produce the state variable values that will be valid at the start of the next frame.

For subsystems that operate in a discrete-time manner, models can be developed in terms of difference equations. In a difference equation, the next-frame value of a state is computed during the evaluation of the current frame. At the end of the frame, the discrete states are updated to their next-frame values. Discrete states can be used to implement algorithms such as digital filters and to model subsystems such as DAC output signals.

Continuous-system models and discrete-system models can be combined in a simulation as long as integration algorithms are used for the continuous system that can accommodate the discontinuities introduced by the discrete-time model.
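As an illustration of a difference equation (an example of mine, not drawn from the article), a first-order digital low-pass filter computes its next-frame state from the current state and input:

```c
/* One frame of a first-order discrete low-pass filter:
   y_{n+1} = y_n + a * (u_n - y_n), with smoothing factor
   0 < a <= 1.  The returned value becomes the state at the
   start of the next frame. */
double lowpass_next(double y_n, double u_n, double a)
{
    return y_n + a * (u_n - y_n);
}
```

Driving the filter with a constant input of 1.0 from a zero initial state and a = 0.5 gives 0.5 after one frame and 0.75 after two, converging toward the input.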

FIGURE 2. HIL simulation software flow: after initializing the simulation and hardware, the dynamic loop repeats until the end of the run: read from input devices, evaluate simulation models, write to output devices, integrate state variables, and delay until it is time to start the next frame. The simulation and hardware are then shut down.

In general, simulation models are not precise representations of their




physical counterparts. Assumptions and simplifications must be made to develop simulation models. The fewer assumptions made, the more complex a model must be. Some major limitations on model complexity are:

- The time it takes to design, develop, verify, and validate the simulation model
- The effort required to collect and analyze experimental data needed to develop the model
- The execution time and memory space that the model's implementation consumes on the simulation computer

The developer must design and implement a model that has adequate fidelity for its intended purpose while remaining within these constraints.

Many techniques have been employed in HIL simulations to implement models of adequate fidelity while remaining within simulation computer execution time and memory limits. Some of these techniques include:


- Precompute as much as possible during program initialization. Although the compiler's optimizer can help with techniques such as common subexpression elimination, you can enhance performance further by removing calculations that produce unchanging values from the simulation dynamic frame.
- Use interpolation functions to approximate the evaluation of complex expressions. Doing a multidimensional lookup and interpolation may be faster than evaluating an expression containing several transcendental function calls, floating-point divisions, or other time-consuming operations.
- In a real-time simulation, the worst-case execution time for a model is all that matters, rather than the average time. Therefore, focus on optimizing only those execution paths that contribute to the worst-case execution time. Good profiling and debugging tools are invaluable during this process.
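As a sketch of the interpolation-function technique, a one-dimensional lookup over equally spaced breakpoints might look like this; the names and the clamping behavior at the table ends are my assumptions:

```c
/* Linear interpolation into a table of y values located at equally
   spaced x breakpoints starting at x0 with spacing dx.  Inputs
   outside the table are clamped to the end values. */
double table_interp(const double *y, int n, double x0, double dx, double x)
{
    double pos = (x - x0) / dx;       /* fractional index into the table */
    if (pos <= 0.0)   return y[0];
    if (pos >= n - 1) return y[n - 1];
    int i = (int)pos;
    double frac = pos - i;
    return y[i] + frac * (y[i + 1] - y[i]);
}
```

A two-dimensional version follows the same pattern with one interpolation per axis; either can replace a costly closed-form expression evaluated inside the dynamic frame.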

Although developing and validating a simulation model that meets the requirements discussed above is challenging, the model can then be used in a variety of applications in addition to real-time simulation. For example, the model can be used by product developers to perform their own system analyses, and they can modify it to examine the effects of proposed product changes and enhancements. These changes can then be implemented in the HIL simulation for further testing. This sharing of simulation models among different groups can result in tremendous benefits to the product development process.

Implementation issues

Several issues in the development and operation of an HIL simulation can be problematic. Some areas that have created difficulties in past HIL simulations are examined in this section.



Selection of appropriate interface points. When you are interfacing an embedded system to a simulation computer, it isn't always obvious what the best point in the system is at which to implement the interface. As an example, consider the application shown in Figure 3, in which a video camera monitors items moving along a production line. The signal from the camera is captured by an image processor, which extracts a feature set from the image. This feature set is then processed by the production line control processor.

An HIL simulation used for development of the production line control system would require an interface to the information coming in from the video camera. Where should this interface be located? One possibility is that the simulation computer simulates the operation of the production line, video camera, and image processor, and synthesizes feature sets for transmission to the control processor. This method would probably have the lowest cost because it would be done entirely in the digital domain and the feature set data is relatively low-bandwidth as compared to the video signal. However, in this approach the video camera and image processor aren't tested in the HIL simulation. If the operational details of the camera and image processor aren't thoroughly understood and accurately modeled, this approach may be inadequate.

A more costly alternative would be to have the simulation computer synthetically generate a scene in real time and transfer it to the image processor with the same format and timing the video camera uses. This would require additional processing and I/O hardware in the simulation computer, along with software to control them. With this technique, the image processor is tested in the HIL simulation, and testing can be performed repeatably. However, the video camera still isn't being tested.

Another, even more expensive alternative would be to set up a video projection system that places the real-time, synthetically generated image on a screen in front of the video camera.

FIGURE 3. Example production line control system: a video camera views the production line and sends a video signal to an image processor, which passes a feature set to the control processor.




This allows the entire subsystem to be tested, including the video camera, image processor, and production line control computer. Drawbacks to this approach, besides the hardware cost and development expense, include the need to align and calibrate the projected image so that it appears realistic to the system. Although this technique may seem farfetched for this application, it is used effectively in the test of some tactical missile sensors.

Finally, you could use a simulated or real production line to create the scene viewed by the video camera in real time. This method may or may not be feasible, depending on considerations such as whether such production lines exist, how complex it would be to build a simulated line, and so forth.

This example demonstrates how important it is to identify the requirements for an HIL simulation and select the appropriate points at which to interface the system under test to the simulation computer. Selection of the proper interface point depends on the simulation purpose, the adequacy of models of subsystems that might be simulated rather than implemented as real hardware in the simulation, and the funding available to develop the HIL simulation.



Budget realistically for simulation development and ongoing operations. An HIL simulation can be an expensive proposition. Its value will be realized, though, if overall project cost and risk can be reduced compared to the cost if the simulation had not been used. The initial development of an HIL simulation can be quite expensive, and this must be understood during project planning. Continuing operation of the simulation will also require competent technical staffing to implement upgrades and troubleshoot problems.

The project phase at which the HIL simulation is brought on-line can have a significant impact on overall product development cost. The availability of an HIL simulation early in the project can be invaluable to system designers who need a tool for studying alternatives and refining their designs.



Consider in advance how to deal with unexpected results. Inevitably, there will be times when a simulation doesn’t produce the expected results. If this occurs while a deadline is creating severe time pressure, these results may be ignored. This situation is especially likely to occur if the simulation hasn’t gone through a thorough validation process and won the support of all interested parties.

Some preplanning should be done to decide what actions to take in these situations. Ideally, sufficient resources should be devoted to determining the reason for the discrepancy. If it turns out that there is a system problem, the HIL simulation will have demonstrated its value.

58 FEBRUARY 1999 Embedded Systems Programming

Rate Monotonic Scheduling in real-time simulations

Rate Monotonic Scheduling (RMS) is a method for scheduling real-time tasks so that each task’s deadlines will always be met. The technique is applicable to run-time environments that support prioritized, preemptive multitasking or multithreading. Here, we use the term task to mean either a task or a thread. The mathematical basis of RMS has been rigorously developed and analyzed (visit the Software Engineering Institute at www.sei.cmu.edu and search for “Rate Monotonic Analysis”).

To use RMS, the execution frequency of each task must first be identified. In our HIL simulation application, this frequency will be the inverse of the task’s integration step size, h. The scheduling priority of the tasks is then assigned so that the highest-frequency task has the highest priority, on down to the lowest-frequency task, which has the lowest priority. This assignment of priorities as a monotonic function of task execution rate is what gives RMS its name.

The execution time of each task cannot be so long that it causes itself or any lower-priority task to miss deadlines. Upper bounds for each task’s execution time can be determined such that the entire system is guaranteed not to miss any deadlines as long as all tasks stay within their allotted execution times.

Applying RMS to real-time simulation must involve consideration of the following elements:

• Task switch overhead must be accommodated in developing timing requirements
• Data must be passed among the tasks in a properly timed manner
• Extrapolation and interpolation may need to be performed on data passed between different-rate tasks
• I/O operation timing must be considered when selecting integration time step sizes
• State variable integration must be coordinated with the intertask data transfers

If these items are properly handled, RMS can provide significant benefits in multiframe real-time simulations. Multiframe simulation is used when subsystems must be simulated at different frame rates. Using RMS allows these different frame rates to run on a single CPU while I/O operations occur at the correct times for each frame rate.
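Passing data between different-rate tasks often relies on extrapolation of the kind listed above. As a minimal sketch (the function and its linear model are illustrative, not from the article), a fast task can estimate a slow task's output between updates from the last sample and its rate of change:

```python
def extrapolate(last_value, last_slope, t_since_update):
    """Estimate the current value of a slowly updated signal using
    simple first-order (linear) extrapolation from the most recent
    sample and its rate of change."""
    return last_value + last_slope * t_since_update

# A slow task last reported y = 2.0 with slope 0.5 per second,
# 0.02 s ago; the fast task consumes the extrapolated estimate.
estimate = extrapolate(2.0, 0.5, 0.02)   # 2.0 + 0.5 * 0.02 = 2.01
```

A real simulation would also need interpolation on the return path and careful handling of update timing; this only illustrates the basic idea.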

If the RMS-based multiframe simulation is carefully designed, it should be possible to move tasks to different CPUs with relative ease as new CPUs are added to the system. Tasks can be relocated among CPUs to provide optimization through load balancing.

RMS is a tool that developers can use to implement high-performance, real-time simulations that make the best possible use of available resources.
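The deadline guarantee described in the sidebar is usually checked against the classic Liu and Layland utilization bound. This sketch assumes each task's worst-case execution time and period are known; the function name and task tuples are illustrative:

```python
def rms_schedulable(tasks):
    """Sufficient (not necessary) RMS schedulability test: total CPU
    utilization must not exceed n * (2^(1/n) - 1) for n tasks.
    Each task is a (worst_case_exec_time, period) pair in the same
    time units."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Three tasks, (C, T) in milliseconds: utilization is 0.28,
# comfortably under the three-task bound of about 0.78.
print(rms_schedulable([(1.0, 10.0), (2.0, 20.0), (4.0, 50.0)]))  # True
```

A task set that fails this test may still be schedulable; exact response-time analysis is needed in that case.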




Simulations can produce massive amounts of data. It isn’t unusual for an HIL simulation to be capable of generating and storing 100K of data per second of operation. If this simulation is in operation for a full day, it will produce several gigabytes of data. What do you do with all this data?
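The volume claim is easy to check with a little arithmetic (assuming "100K" means 100,000 bytes and a 24-hour operating day):

```python
# Daily data volume at the quoted logging rate.
rate = 100_000                   # bytes per second (assumed)
seconds_per_day = 24 * 60 * 60   # 86,400 seconds
total = rate * seconds_per_day   # 8,640,000,000 bytes
print(total / 1e9)               # 8.64 -> several gigabytes per day
```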

A plan must be in place for performing data reduction, analysis, and archiving. Users who request test runs of the simulation must specify what types of data output they require. All data that may be of interest at a later date must be archived by the simulation operations staff.

Data analysis and plotting programs are the most general tools used to examine the raw simulation output data. More specialized automated tools can be a great help for generating summary reports and for other tasks, such as scanning the data to make sure tolerances aren’t violated.
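A tolerance-scanning tool of the kind just mentioned can be quite simple. This sketch assumes a made-up record format (timestamped dictionaries of signal values) and an illustrative limits table:

```python
def scan_tolerances(records, limits):
    """Return (time, name, value) for every logged sample that falls
    outside its allowed band. 'records' is an iterable of
    (time, {signal_name: value}) pairs; 'limits' maps signal names to
    (low, high) tuples. Both formats are assumptions for illustration."""
    violations = []
    for t, sample in records:
        for name, value in sample.items():
            low, high = limits.get(name, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                violations.append((t, name, value))
    return violations

log = [(0.00, {"actuator_cmd": 0.2}),
       (0.01, {"actuator_cmd": 1.3})]
print(scan_tolerances(log, {"actuator_cmd": (-1.0, 1.0)}))
# [(0.01, 'actuator_cmd', 1.3)]
```

A production version would stream records from archived files rather than hold them in memory, given the data volumes involved.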

Configuration management must be part of the process from the beginning. The software used in the HIL simulation must be maintained under adequate configuration management at all times. Making any change to the software in such a way that it cannot be undone is simply an unacceptable scenario.

Sometimes an anomaly will occur in a test situation that didn’t occur during a similar test in the past. Of course, in the time since the previous test, both the system under test and the simulation have most likely undergone significant changes. If the simulation software has been under configuration control, it will be possible to restore the software version used in the previous test and run it again. In attempting this, a problem may arise if the hardware interfaces or embedded system communication protocols have changed between the two simulation versions. In this case, additional effort will be required to identify the source of the anomaly.

Reasonably priced software configuration management tools are available that make the process relatively painless. There really is no excuse for not having your simulation under configuration control.



Consider having the users of the results provide a simulation accreditation. As previously mentioned, verification and validation of the simulation are critical steps in the simulation’s development. Performing an additional step called accreditation often makes sense. An accredited simulation has had its verification and validation tests developed based partially on inputs from the ultimate users of the simulation results. Those users then examine the results of the verification and validation tests and have the authority to approve the simulation as demonstrating acceptable fidelity.

Involving the expert users in the verification and validation process can provide powerful feedback that results in an improved simulation and more extensive simulation usage than would otherwise occur. Accrediting the simulation can provide crucial “buy-in” from people who might otherwise be dismissive when presented with simulation results that don’t match their expectations.

A reasonable simulation?

The development of an HIL simulation can be a complex and time-consuming process. Getting system experts to accept the results of an HIL simulation as valid is often a formidable obstacle. A simulated operational environment will never be a perfect representation of the real thing. Given the costs and potential difficulties involved, when does it make economic and technical sense to develop and use an HIL simulation?

An HIL simulation is a cost-effective and technically valid approach in the following situations:

• When the cost of an operational system test failure may be unacceptably high, such as when testing aircraft, missiles, and satellites
• When the cost of developing and operating the HIL simulation can be saved by reductions in the number of operational tests. For example, this may be the case with an automotive antilock brake control system that must be tested under a wide variety of operator input and road surface conditions
• When it is necessary to duplicate test conditions precisely. This allows comprehensive regression testing to be performed as changes are made to the SUT
• When it would be valuable to perform development testing on system component prototypes. This allows subsystems to be thoroughly tested before a full system prototype has even been constructed
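One common way to duplicate test conditions precisely (not spelled out in the article, but widely used) is to derive all pseudo-random scenario inputs from a recorded seed, as in this sketch with a made-up road-profile generator:

```python
import random

def make_road_profile(seed, n_points):
    """Generate a pseudo-random road-surface height profile that can
    be reproduced exactly later by replaying the same seed. The
    profile model and value range here are illustrative only."""
    rng = random.Random(seed)  # dedicated generator, isolated state
    return [rng.uniform(-0.05, 0.05) for _ in range(n_points)]

# Recording the seed is enough to rerun an identical test later.
first = make_road_profile(seed=42, n_points=1000)
replay = make_road_profile(seed=42, n_points=1000)
print(first == replay)  # True -> identical test conditions on demand
```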

HIL simulation is a valuable technique that has been used for decades in the development and testing of complex systems such as missiles, aircraft, and spacecraft. By taking advantage of low-cost, high-powered computers and I/O devices, the advantages of HIL simulation can be realized by a much broader range of system developers. A properly designed and implemented HIL simulation can help you develop your products faster and test them more thoroughly, at a cost that may be significantly less than the cost of using traditional system test methods.


Jim A. Ledin, PE, is an electrical engineer in Camarillo, CA. He has worked in the field of HIL simulation for the past 14 years. He is a principal developer of HIL simulations for several U.S. Navy tactical missile systems at the Naval Air Warfare Center in Point Mugu, CA. He can be reached by email at jledin@ix.netcom.com.




