The data captured from sensors or used to command actuators in ARAF is based on ISO/IEC 23005-5, Data formats for interaction devices (MPEG-V Part 5).
MPEG-V provides an architecture and specifies the associated information representations to enable the representation of context and to ensure interoperability between virtual worlds. With respect to ARAF, MPEG-V specifies the interaction between the virtual world and the real world by supporting access to different input/output devices, e.g. sensors, actuators, vision and rendering devices, and robotics.
ARAF supports two ways of connecting the scene to sensors and actuators. The first is to use the InputSensor and OutputActuator nodes. The second is based on dedicated nodes in the scene graph that map directly to the sensor/actuator (e.g. the CameraSensor PROTO).
Usage of InputSensor and Script Nodes
The InputSensor node is used to receive MPEG-V sensor data in a scene. It should be noted that the data is pushed into the scene and applied immediately upon reception. Figure 3 represents the architecture for accessing MPEG-V sensor data in ARAF scenes.
Figure 3 — Diagram of the architecture for accessing MPEG-V sensor data
As specified in ISO/IEC 14496-1, in order to add a new device for the InputSensor node it is necessary to define:
- the content of the Device Data Frame (DDF) definition, which sets the order and type of the data coming from the device and thus mandates the content of the InputSensor buffer;
- the deviceName string designating the new device;
- optionally, the devSpecInfo field of UIConfig.
Orientation Sensor
The definition of MPEG-V Orientation Sensor DDF is the following:
MPEGVOrientationSensorType [
SFVec3f angles
]
The angles are specified as Euler angles as defined in ISO/IEC 23005-5. The deviceName is “MPEG-V:siv:OrientationSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
Position Sensor
The definition of MPEG-V Position Sensor DDF is the following:
MPEGVPositionSensorType [
SFVec3f position
]
The position is specified in meters. The deviceName is “MPEG-V:siv:PositionSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
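Position readings pushed into the scene are typically noisy, so a Script handler may smooth them before applying them to a Transform. The sketch below is a plain ECMAScript illustration, not part of the specification; the helper name and the smoothing factor are assumptions:

```javascript
// Exponential moving average over 3D position samples (illustrative helper,
// not defined by ISO/IEC 23000-13). alpha in (0, 1]: higher = less smoothing.
function makePositionSmoother(alpha) {
  var last = null; // previously smoothed position
  return function (pos) { // pos: {x, y, z} in meters, as delivered by the sensor
    if (last === null) {
      last = { x: pos.x, y: pos.y, z: pos.z };
    } else {
      last.x += alpha * (pos.x - last.x);
      last.y += alpha * (pos.y - last.y);
      last.z += alpha * (pos.z - last.z);
    }
    return last;
  };
}
```

The returned closure can be called from the InputSensor event handler on every sensor frame, so the scene object follows a smoothed trajectory instead of the raw samples.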
Acceleration Sensor
The definition of MPEG-V Acceleration Sensor DDF is the following:
MPEGVAccelerationSensorType [
SFVec3f acceleration
]
The deviceName is “MPEG-V:siv:AccelerationSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
Angular Velocity Sensor
The definition of MPEG-V Angular Velocity Sensor DDF is the following:
MPEGVAngularVelocitySensorType [
SFVec3f angularVelocity
]
The deviceName is “MPEG-V:siv:AngularVelocitySensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
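Angular velocity readings are rates, so a Script typically integrates them over the time between two sensor frames to track an orientation angle. A minimal sketch in plain ECMAScript; the helper name is illustrative, and the time step is derived from the refresh frame rate requested in devSpecInfo:

```javascript
// Integrate one angular-velocity sample (degrees per second) over a single
// sensor frame. fps is the refresh frame rate requested via devSpecInfo.
// Illustrative helper, not defined by the specification.
function integrateAngle(currentAngle, angularVelocity, fps) {
  var dt = 1 / fps;                      // seconds per sensor frame
  var next = currentAngle + angularVelocity * dt;
  return ((next % 360) + 360) % 360;     // keep the angle in [0, 360)
}
```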
Global Positioning System Sensor
The definition of MPEG-V Global Positioning System Sensor DDF is the following:
MPEGVGPSSensorType [
SFVec2f location
]
The deviceName is “MPEG-V:siv:GPSSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
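A typical use of GPS readings in an AR scene is to compute the distance between the user and a geolocated point of interest. The sketch below uses the standard haversine formula; the function name and the Earth-radius constant are illustrative assumptions, not part of the specification:

```javascript
// Great-circle distance in meters between two (latitude, longitude) pairs
// given in degrees, computed with the haversine formula.
// Illustrative helper, not defined by ISO/IEC 23000-13.
function gpsDistance(lat1, lon1, lat2, lon2) {
  var R = 6371000;                        // mean Earth radius in meters
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLon = (lon2 - lon1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}
```

In a Script node, this would be called from the GPS InputSensor handler with the SFVec2f location and the coordinates of the point of interest.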
Altitude Sensor
The definition of MPEG-V Altitude Sensor DDF is the following:
MPEGVAltitudeSensorType [
SFFloat altitude
]
The deviceName is “MPEG-V:siv:AltitudeSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
Geomagnetic Sensor
The definition of MPEG-V Geomagnetic Sensor DDF is the following:
MPEGVGeomagneticSensorType [
SFVec3f geomagnetic
]
The deviceName is “MPEG-V:siv:GeomagneticSensorType”. The UIConfig.devSpecInfo contains one 32-bit integer specifying the desired refresh frame rate for the sensor.
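Assuming the device lies flat, the horizontal components of the geomagnetic vector give a rough compass heading; in practice, tilt compensation using the orientation sensor would also be needed. A simplified sketch, where the axis convention (x pointing north, y pointing east) is an assumption made for illustration:

```javascript
// Rough compass heading in degrees [0, 360), assuming the device is held
// flat so only the horizontal geomagnetic components matter.
// The axis convention (x = north, y = east) is an illustrative assumption.
function compassHeading(mx, my) {
  var deg = Math.atan2(my, mx) * 180 / Math.PI;
  return (deg + 360) % 360;              // normalize to [0, 360)
}
```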
Example of integrating sensors in the ARAF scene
The following example shows how the InputSensor and Script nodes can be used to access MPEG-V sensors.
DEF SCRIPT Script {
  eventIn SFVec3f updateOrientation
  . . . .
  url ["javascript:
    function updateOrientation(rot)
    {
      if ( objrot.children.length == 0 )
        return;
      // Euler angles (degrees) delivered by the orientation sensor
      Azimuth = rot.x;
      Pitch = rot.y;
      Roll = rot.z;
      // degrees to radians, halved for the quaternion conversion
      conv = Math.PI / 180 / 2;
      c1 = Math.cos(Azimuth * conv);
      s1 = Math.sin(Azimuth * conv);
      c2 = Math.cos(Pitch * conv);
      s2 = Math.sin(Pitch * conv);
      c3 = Math.cos(Roll * conv);
      s3 = Math.sin(Roll * conv);
      c1c2 = c1*c2;
      s1s2 = s1*s2;
      // quaternion components from the Euler angles
      w = c1c2*c3 - s1s2*s3;
      x = c1c2*s3 + s1s2*c3;
      y = s1*c2*c3 + c1*s2*s3;
      z = c1*s2*c3 - s1*c2*s3;
      // quaternion to axis-angle representation
      angle = 2 * Math.acos(w);
      norm = x*x + y*y + z*z;
      if ( norm < 0.001 ) {
        // angle close to zero: axis direction is irrelevant
        x = 1;
        y = z = 0;
      }
      else {
        norm = Math.sqrt(norm);
        x /= norm;
        y /= norm;
        z /= norm;
      }
      // Y and Z are swapped to map the sensor axes onto the scene axes
      objrot.rotation = new SFRotation( x, z, y, angle );
    }
    . . . .
  "]
}
DEF ORIENT_SENS InputSensor {
  url [50]
  buffer {
    REPLACE SCRIPT.updateOrientation BY 0 0 0
  }
}
Here "url [50]" refers to the object descriptor for the orientation sensor, defined as follows:
ObjectDescriptor {
  objectDescriptorID 50
  esDescr [
    ES_Descriptor {
      ES_ID 50
      decConfigDescr DecoderConfigDescriptor {
        streamType 10
        decSpecificInfo UIConfig {
          deviceName "MPEG-V:siv:OrientationSensorType"
        }
      }
    }
  ]
}
Accessing the Camera
The camera frames are directly accessed by the ARAF player. Figure 5 presents the diagram for accessing the camera video stream.
Figure 5 — Diagram of the architecture for accessing the camera frames
The following two elements are defined:
- a URN for the camera, used to initialize the input stream:
  - for the back camera of the device: hw://camera/back;
  - for the front camera of the device: hw://camera/front;
- a type of video stream that does not need to be decoded (RAW decoder). As specified in ISO/IEC 14496-1:2012, the following decoder specific info for the RAW decoder is defined:
class RAWVideoConfig extends DecoderSpecificInfo : bit(8) tag=DecSpecificInfoTag {
  unsigned int(16) width;
  unsigned int(16) height;
  unsigned int(8) bit_depth;
  unsigned int(32) stride;
  unsigned int(32) coding4CC;
  unsigned int(8) fps;
  unsigned int(1) use_frame_packing;
  unsigned int(7) frame_packing;
}
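The RAWVideoConfig fields allow the player to size its frame buffers before any video data arrives: with a packed pixel layout, one frame occupies stride bytes per row times the number of rows. A sketch of this sizing arithmetic; the helper names are illustrative, not part of the specification:

```javascript
// Bytes occupied by one RAW video frame, assuming a packed pixel layout in
// which `stride` is the number of bytes per row (stride >= width * bytes per
// pixel, to allow for row padding). Illustrative helpers only.
function rawFrameSize(height, stride) {
  return height * stride;
}

// Sustained bandwidth in bytes per second for the uncompressed stream,
// using the fps field of RAWVideoConfig.
function rawStreamBandwidth(height, stride, fps) {
  return rawFrameSize(height, stride) * fps;
}
```

For example, a 640x480 frame with a 1280-byte stride (16 bits per pixel, no padding) occupies 614 400 bytes, which at 30 fps amounts to roughly 18.4 MB/s of uncompressed data.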
Usage of OutputActuator and Script Nodes
The OutputActuator PROTO is used to transmit data from the scene to MPEG-V actuators. It should be noted that the data produced by the scene is applied immediately when received by the actuators. Figure 4 represents the architecture for commanding MPEG-V actuators from ARAF scenes.
Figure 4 — Diagram of the architecture for commanding MPEG-V actuators
In order to add new devices, the same mechanism is used as for InputSensor; it is therefore necessary to define:
- the content of the Device Data Frame (DDF) definition, which sets the order and type of the data sent to the device and thus mandates the content of the OutputActuator buffer;
- the deviceName string designating the new device;
- optionally, the devSpecInfo field of UIConfig.
Light Actuator
The definition of MPEG-V Light Actuator DDF is the following:
MPEGVLightActuatorType [
SFFloat intensity
SFColor color
]
The deviceName is “MPEG-V:siv:LightActuatorType”. The light actuator will keep its current state (intensity and color) as long as a new command is not initiated.
Vibration Actuator
The definition of MPEG-V Vibration Actuator DDF is the following:
MPEGVVibrationActuatorType [
SFFloat intensity
]
The deviceName is “MPEG-V:siv:VibrationActuatorType”. The vibration actuator will keep its current state (intensity) as long as a new command is not initiated.
Tactile Actuator
The definition of MPEG-V Tactile Actuator DDF is the following:
MPEGVTactileActuatorType [
MFFloat intensity
]
The deviceName is “MPEG-V:siv:TactileActuatorType”. The tactile actuator will keep its current state (intensity) as long as a new command is not initiated.
Flash Actuator
The definition of MPEG-V Flash Actuator DDF is the following:
MPEGVFlashActuatorType[
SFFloat intensity
SFColor color
SFFloat frequency
]
The deviceName is “MPEG-V:siv:FlashActuatorType”. The flash actuator will keep its current state (intensity, color, and frequency) as long as a new command is not initiated.
Heating Actuator
The definition of MPEG-V Heating Actuator DDF is the following:
MPEGVHeatingActuatorType[
SFFloat intensity
]
The deviceName is “MPEG-V:siv:HeatingActuatorType”. The heating actuator will keep its current state (intensity) as long as a new command is not initiated.
Cooling Actuator
The definition of MPEG-V Cooling Actuator DDF is the following:
MPEGVCoolingActuatorType[
SFFloat intensity
]
The deviceName is “MPEG-V:siv:CoolingActuatorType”. The cooling actuator will keep its current state (intensity) as long as a new command is not initiated.
Wind Actuator
The definition of MPEG-V Wind Actuator DDF is the following:
MPEGVWindActuatorType[
SFFloat intensity
]
The deviceName is “MPEG-V:siv:WindActuatorType”. The wind actuator will keep its current state (intensity) as long as a new command is not initiated.
Sprayer Actuator
The definition of MPEG-V Sprayer Actuator DDF is the following:
MPEGVSprayerActuatorType[
SFFloat intensity
SFInt32 sprayingType
]
The deviceName is “MPEG-V:siv:SprayerActuatorType”. The sprayer actuator will keep its current state (sprayingType and intensity) as long as a new command is not initiated.
Scent Actuator
The definition of MPEG-V Scent Actuator DDF is the following:
MPEGVScentActuatorType[
SFFloat intensity
SFInt32 scent
]
The deviceName is “MPEG-V:siv:ScentActuatorType”. The scent actuator will keep its current state (scent and intensity) as long as a new command is not initiated.
Fog Actuator
The definition of MPEG-V Fog Actuator DDF is the following:
MPEGVFogActuatorType[
SFFloat intensity
]
The deviceName is “MPEG-V:siv:FogActuatorType”. The fog actuator will keep its current state (intensity) as long as a new command is not initiated.
Rigid Body Motion Actuator
The definition of MPEG-V Rigid Body Motion Actuator DDF is the following:
MPEGVRigidBodyMotionActuatorType[
MFVec3f direction
MFVec3f speed
MFVec3f acceleration
MFVec3f angle
MFVec3f angleSpeed
MFVec3f angleAcceleration
]
The deviceName is “MPEG-V:siv:RigidBodyMotionActuatorType”. The rigid body motion actuator will keep its current state (direction, speed, acceleration, angle, angleSpeed, and angleAcceleration) as long as a new command is not initiated. Each multi-valued field contains 3D values for the X, Y, and Z components. The following table shows the mapping between the fields of the DDF and the fields of MPEG-V:siv:RigidBodyMotionActuatorType.
Mapping between DDF and MPEG-V Rigid Body Motion Actuator Type

MPEGVRigidBodyMotionActuatorType | MPEG-V:siv:RigidBodyMotionActuatorType
direction[0]                     | directionX of MoveTowardType
direction[1]                     | directionY of MoveTowardType
direction[2]                     | directionZ of MoveTowardType
speed[0]                         | speedX of MoveTowardType
speed[1]                         | speedY of MoveTowardType
speed[2]                         | speedZ of MoveTowardType
acceleration[0]                  | accelerationX of MoveTowardType
acceleration[1]                  | accelerationY of MoveTowardType
acceleration[2]                  | accelerationZ of MoveTowardType
angle[0]                         | pitchAngle of InclineType
angle[1]                         | yawAngle of InclineType
angle[2]                         | rollAngle of InclineType
angleSpeed[0]                    | pitchSpeed of InclineType
angleSpeed[1]                    | yawSpeed of InclineType
angleSpeed[2]                    | rollSpeed of InclineType
angleAcceleration[0]             | pitchAcceleration of InclineType
angleAcceleration[1]             | yawAcceleration of InclineType
angleAcceleration[2]             | rollAcceleration of InclineType
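The angle-related rows of the mapping table above can be expressed programmatically in a Script that prepares commands for the actuator. The sketch below is illustrative only; the object layout and function name are assumptions, not normative:

```javascript
// Map the three DDF angle triplets onto the MPEG-V InclineType field names,
// following the table above: index 0 = pitch, 1 = yaw, 2 = roll.
// Illustrative helper, not defined by ISO/IEC 23000-13.
function toInclineType(angle, angleSpeed, angleAcceleration) {
  return {
    pitchAngle: angle[0],
    yawAngle: angle[1],
    rollAngle: angle[2],
    pitchSpeed: angleSpeed[0],
    yawSpeed: angleSpeed[1],
    rollSpeed: angleSpeed[2],
    pitchAcceleration: angleAcceleration[0],
    yawAcceleration: angleAcceleration[1],
    rollAcceleration: angleAcceleration[2]
  };
}
```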
Kinesthetic Actuator
The definition of MPEG-V Kinesthetic Actuator DDF is the following:
MPEGVKinestheticActuatorType [
MFVec3f position
MFVec3f orientation
MFVec3f force
MFVec3f torque
]
The deviceName is “MPEG-V:siv:KinestheticActuatorType”. The kinesthetic actuator will keep its current state (position, orientation, force, and torque) as long as a new command is not initiated. Each multi-valued field contains 3D values for the X, Y, and Z components.