ISO/IEC JTC 1/SC 29 N


ReImgComp (Remote Image Recognition Registration Composition)





MAREC provides one or more ARAF-compliant processing server URLs and a video source URL where the recognition-tracking-composition process shall be performed. The ARAF Browser sends video frames to the processing server, which detects and recognizes target resources that are already stored in remote database(s). The processing server is in charge of performing the composition between the video frames and the corresponding augmentation media of the recognized target resources. The composed video frames are sent back to the ARAF Browser as described below in Functionality and Semantics.


BIFS Textual Description

EXTERNPROTO ReImgComp [

exposedField SFString videoSource ""

exposedField MFInt32 streamingType []

exposedField MFString processingServerURL []

exposedField SFInt32 enabled 0

exposedField MFVec2f recognitionRegion []

exposedField SFInt32 fps -1

exposedField MFString onAugmentedStream []

eventOut SFInt32 onError

] "org:mpeg:remote_image_recognition_registration_composition"

Functionality and semantics

MAREC provides one or more ARAF-compliant processing server URLs and the video source URL, which can be a local or remote real-time capture from a 2D camera, or a locally or remotely stored, pre-recorded 2D video. The ARAF Browser uses the provided processing server URLs as external resources able to perform the recognition (and tracking) of the target resources that are already stored in remote databases, as well as the registration and the composition of the video frames that are retrieved from the ARAF Browser. The result produced by the processing server is a URL to the composed video stream in which the corresponding augmentation media of the recognized target resources is overlaid.

A compliant Processing Server shall understand the HTTP requests presented in the following table:



ARAF Browser request: pServer/alive
Request type: GET
Description: Get the unique key and the server parameters
Processing Server response: unique key (64-bit), communication protocol codes
Description: The key shall be used to identify future requests from the ARAF Browser. The communication protocol codes specify the streaming communication protocols that are supported by the processing server. The ARAF Browser decides which protocol is used by considering the server capabilities and the MAREC preference (see streamingType).

ARAF Browser request: pServer, key + video stream
Request type: POST / RTSP / RTP / DASH
Description: Send the video stream to the server
Processing Server response: Composed video stream
Description: The server response is the composed video stream where the associated augmentation resources are overlaid on top of the original video.
















ARAF Browser – Processing Server Communication Workflow

Communication Protocol Workflow:

  1. The ARAF Browser interrogates the Processing Server (GET /alive) in order to detect its status and to receive the server parameters. The server returns:

    1. a unique key that shall be transmitted by ARAF Browser in future requests,

    2. the list of codes describing the supported streaming communication protocols. See table Communication Protocols for the supported codes and their meaning.

The server response must be in the following format:

key=[unique 64-bit key]&comm_protocol_codes=[communication protocol codes]

Example of a possible server response:

key=2e45325f4f&comm_protocol_codes=0
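As a sketch, the alive response can be parsed with a few lines of Python. The function name is illustrative, and the assumption that multiple protocol codes would arrive comma-separated is mine; the normative example above only shows a single code.

```python
from urllib.parse import parse_qs

def parse_alive_response(body: str):
    """Parse 'key=<64-bit key>&comm_protocol_codes=<code>[,<code>...]'.

    Returns the session key and the list of protocol codes the
    processing server supports (see table Communication Protocols).
    """
    fields = parse_qs(body)
    key = fields["key"][0]
    codes = [int(c) for c in fields["comm_protocol_codes"][0].split(",")]
    return key, codes
```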

  2. Once the key has been received, the ARAF Browser knows that the processing server is ready to perform the recognition-tracking-composition process. The ARAF Browser decides on one communication protocol for streaming the video data, based on the user preference (if specified) and on the server's response.

The ARAF Browser sends video frames to the processing server using the chosen streaming protocol. The server response is a URL pointing to the composed video stream where the associated augmentation resources are overlaid on top of the original video.

  3. The loop starts over from step 2 whenever the ARAF Browser has to initiate a new streaming connection with the processing server.

videoSource is an SFString specifying the URI/URL of the video on which the recognition process shall be performed. The videoSource can be one of the following:

  1. Live 2D video camera feed

    1. a URI to one of the cameras available on the end user’s device. The possible values are specified in Table Camera URIs.

    2. a URL to an external camera providing live camera feed.

  2. A URL to a prerecorded video file stored

  • locally on the end user’s device.

  • remotely on an external repository in the Web.

The accepted video formats are specified in table Video formats.

The accepted communication protocols are specified in table Communication Protocols.




worldFacingCamera: refers to the primary camera, usually located at the back of the device (back camera)
userFacingCamera: refers to the secondary camera, usually located at the front of the device (front camera)

Camera URIs

Raw video data: ISO/IEC 14496-1:2010/Amd2:2014, Support for raw audio-visual data
MPEG-4 Visual: ISO/IEC 14496-2:2004, Visual
MPEG-4 AVC: ISO/IEC 14496-10:2012, Advanced Video Coding
Proprietary: see Annex B (ARAF support for proprietary formats)

Video formats

RTP (code 0): RFC 3550-2003, Real-time Transport Protocol
RTSP (code 1): RFC 2326-2013 version 2.0, Real Time Streaming Protocol
HTTP (code 2): RFC 2616-1999, Hypertext Transfer Protocol
DASH (code 3): ISO/IEC 23009-1:2012, Dynamic Adaptive Streaming over HTTP

Communication Protocols

processingServerURL is an MFString used by the MAREC to specify one or multiple web addresses where ARAF-compliant processing servers are available. A valid URL is one that points to a processing server that is able to understand the ARAF Browser requests and to perform the recognition, tracking and composition of target resources as defined in the prototype description and in table ARAF Browser – Processing Server Communication Workflow.

streamingType is an MFInt32 field used by the MAREC to specify the desired streaming protocol(s) for the video data that is sent to the processing server. If multiple codes are specified by the MAREC, the ARAF Browser chooses the first streaming protocol that matches the server capabilities. The possible pre-defined codes of the communication protocols and their meaning are listed in table Communication Protocols. If the MAREC does not specify any code, the ARAF Browser uses a default one, based on the processing server capabilities. The MAREC should not specify any code unless he knows that the processing server gives better results with a particular protocol. The field is optional.
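The selection rule described above can be sketched as follows; this is an illustrative helper, not part of the standard. The first MAREC-preferred code supported by the server wins, and with no usable MAREC preference the browser falls back to a server-supported code.

```python
def choose_protocol(marec_codes, server_codes):
    """Pick the streaming protocol code per the streamingType semantics.

    marec_codes:  codes preferred by the MAREC, in order (may be empty)
    server_codes: codes advertised by the processing server
    Returns a code, or None when no usable protocol exists (onError -6).
    """
    for code in marec_codes:          # first MAREC preference the server supports
        if code in server_codes:
            return code
    if server_codes:                  # no (matching) preference: browser default
        return server_codes[0]
    return None
```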

enabled is an SFInt32 value indicating whether the recognition-tracking-composition process is enabled (running). The MAREC can control the status of the process, or he can let the ARAF Browser decide whether the recognition-tracking-composition process should be running or not. The following table specifies the supported integer values of the enabled field.


-1: The ARAF Browser decides when the recognition-tracking-composition process is enabled. If not supported, the process is always disabled unless a value of 0 or 1 is set by the MAREC.
0 (default): The recognition-tracking-composition process is disabled.
1: The recognition-tracking-composition process is enabled.

The recognition-tracking-composition process is inactive while enabled is 0.



While enabled is 1, we differentiate the following cases, based on the video source:

  • local live video camera feed: the frames coming from the local live video camera feed are considered by the ARAF Browser in the recognition-tracking-composition process.

  • remote live video camera feed: the frames coming from the remote live video camera stream are considered by the ARAF Browser in the recognition-tracking-composition process. Technically the only difference between the first case and the second one is the source of the video frames. In this case, a streaming protocol should be used to fetch the remote video camera stream.

  • local prerecorded video file: as long as enabled is 1, the ARAF Browser plays the video file and the corresponding video frames are used in the process. Whenever enabled is 0 the video playback is paused. When enabled is set back to 1, the video resumes from the point where it was last paused. The video playback restarts from the beginning when the end of the video stream is reached and enabled is 1.

  • remote prerecorded video file: same as the previous case, except that the remote file has to be downloaded first. If a streaming protocol is being used, the ARAF Browser may request (if possible) video frames whenever enabled is 1, as if it were playing back the video remotely.

The MAREC has the possibility of choosing the quality of his MAR experience by imposing a desired number of frames received per second. The MAREC controls this by setting the fps field. The ARAF Browser shall not present the composed video stream received from the processing server as long as the requirement imposed by the MAREC is not fulfilled, i.e. while the number of frames per second of the composed video stream is lower than the value of fps that has been set by the MAREC.
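The presentation rule amounts to a one-line check. In this sketch the fps default of -1 is taken to mean "no requirement", matching the field's default value; the function name is illustrative.

```python
def may_present(measured_fps: float, required_fps: int) -> bool:
    """True when the composed stream meets the MAREC fps requirement.

    required_fps is the fps field; its default of -1 disables the check.
    """
    return required_fps < 0 or measured_fps >= required_fps
```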

recognitionRegion is an MFVec2f field specifying two 2D points, relative to the center of the video frame, that delimit the region on which the recognition (and tracking) algorithm is performed. The first point indicates the lower-left corner and the second one the upper-right corner of a rectangle. By using this field, the MAREC suggests that only the area inside the rectangle is to be used in the recognition (and tracking) process, not the entire video frame. The recognition (and tracking) process can be sped up by using a video frame region rather than the whole frame, but on the other hand the way the original video frame is pre-processed (e.g. cropped) may introduce delays. The ARAF Browser cannot ensure that using a recognition region improves the overall processing speed.
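A minimal sketch of the cropping step, under my assumptions that the scene uses pixel metrics, that the two points have y growing upward, and that the output rectangle uses image coordinates with the origin at the top-left corner:

```python
def region_to_pixels(region, frame_w, frame_h):
    """Convert recognitionRegion points into a pixel crop (x, y, w, h).

    region: ((llx, lly), (urx, ury)) relative to the frame center,
    with y growing upward; the returned rectangle uses image
    coordinates with the origin at the top-left corner.
    """
    (llx, lly), (urx, ury) = region
    cx, cy = frame_w / 2.0, frame_h / 2.0
    x0, x1 = cx + llx, cx + urx
    y0, y1 = cy - ury, cy - lly      # flip: image y grows downward
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)
```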

onAugmentedStream is an output event of type MFString where the URL of the composed video stream is referenced. The protocol used by the ARAF Browser to fetch the augmented stream can be any of the ones specified in table Communication Protocols. If the address pointing to the augmented stream file is not supported by the ARAF Browser an error code is triggered (see table Error codes).

onError is an output event of type SFInt32.

Table Error codes presented below specifies onError possible values and their meaning.



-1: The video source URL is invalid or not supported.
-5: Unknown error.
-6: None of the available communication protocols are supported by the processing server.
-7: The augmented stream cannot be fetched. There might be a communication protocol incompatibility, or the augmented stream is not available at the given address.

Error codes

Switch

XSD Description



























Functionality and semantics

As specified in ISO/IEC 14772-1:1997, section 6.46.

The Switch grouping node traverses zero or one of the nodes specified in the choice field.

ISO/IEC 14772-1:1997, section 4.6.5, Grouping and children nodes, describes details on the types of nodes that are legal values for choice.

The whichChoice field specifies the index of the child to traverse, with the first child having index 0. If whichChoice is less than zero or greater than the number of nodes in the choice field, nothing is chosen.
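The traversal rule above amounts to a simple bounds check; a sketch (function name illustrative):

```python
def traversed_child(choice, which_choice):
    """Return the child a Switch traverses, or None when whichChoice
    is out of range (negative, or past the last child)."""
    if 0 <= which_choice < len(choice):
        return choice[which_choice]
    return None
```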

All nodes under a Switch continue to receive and send events regardless of the value of whichChoice. For example, if an active TimeSensor is contained within an inactive choice of a Switch, the TimeSensor sends events regardless of the Switch's state.

With the following restriction specified in ISO/IEC 14496-11 (BIFS), section 7.2.2.122.2:

If some of the child sub-graphs contain audio content (i.e., the subgraphs contain Sound nodes), the child sounds are switched on and off according to the value of the whichChoice field. That is, only sound that corresponds to Sound nodes in the whichChoice’th subgraph of this node are played. The others are muted.

Transform

XSD Description



































Functionality and semantics

As specified in ISO/IEC 14772-1:1997, section 6.52.

The Transform node is a grouping node that defines a coordinate system for its children that is relative to the coordinate systems of its ancestors. See ISO/IEC 14772-1:1997, section 4.4.4, Transformation hierarchy, and ISO/IEC 14772-1:1997, section 4.4.5, Standard units and coordinate system, for a description of coordinate systems and transformations.

ISO/IEC 14772-1:1997, section 4.6.5, Grouping and children nodes, provides a description of the children, addChildren, and removeChildren fields and eventIns.

The bboxCenter and bboxSize fields specify a bounding box that encloses the children of the Transform node. This is a hint that may be used for optimization purposes. The results are undefined if the specified bounding box is smaller than the actual bounding box of the children at any time. A default bboxSize value, (-1, -1, -1), implies that the bounding box is not specified and, if needed, shall be calculated by the browser. The bounding box shall be large enough at all times to enclose the union of the group's children's bounding boxes; it shall not include any transformations performed by the group itself (i.e., the bounding box is defined in the local coordinate system of the children). The results are undefined if the specified bounding box is smaller than the true bounding box of the group. A description of the bboxCenter and bboxSize fields is provided in ISO/IEC 14772-1:1997, section 4.6.4, Bounding boxes.

The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order):


  • a (possibly) non-uniform scale about an arbitrary point;

  • a rotation about an arbitrary point and axis;

  • a translation.

The center field specifies a translation offset from the origin of the local coordinate system (0,0,0). The rotation field specifies a rotation of the coordinate system. The scale field specifies a non-uniform scale of the coordinate system. scale values shall be greater than zero. The scaleOrientation specifies a rotation of the coordinate system before the scale (to specify scales in arbitrary orientations). The scaleOrientation applies only to the scale operation. The translation field specifies a translation to the coordinate system.
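ISO/IEC 14772-1:1997 expresses this composition in matrix form as P' = T × C × R × SR × S × -SR × -C × P. The composition can be checked with a short numpy sketch; the helper names are illustrative, not part of the specification.

```python
import numpy as np

def rotation4(axis, angle):
    """4x4 homogeneous rotation about an arbitrary axis (Rodrigues)."""
    x, y, z = np.asarray(axis, float) / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    t = 1.0 - c
    m = np.eye(4)
    m[:3, :3] = [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
                 [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
                 [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]
    return m

def translation4(v):
    m = np.eye(4)
    m[:3, 3] = v
    return m

def scale4(v):
    return np.diag([v[0], v[1], v[2], 1.0])

def transform_matrix(translation, center, rotation, scale, scale_orientation):
    """Compose the Transform node fields; rotations are (axis, angle) pairs."""
    T, C = translation4(translation), translation4(center)
    R = rotation4(*rotation)
    SR = rotation4(*scale_orientation)
    S = scale4(scale)
    neg_C = translation4(-np.asarray(center, float))
    # SR.T is the inverse of the pure rotation SR
    return T @ C @ R @ SR @ S @ SR.T @ neg_C
```

For example, rotating the point (2, 0, 0) by 90 degrees about the z-axis around center (1, 0, 0) lands it at (1, 1, 0).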

As specified in ISO/IEC 14496-11 (BIFS), section 7.2.2.131.2:

If some of the child subgraphs contain audio content (i.e., the subgraphs contain Sound nodes), the child sounds are transformed and mixed as follows. If each of the child sounds is a spatially presented sound, the Transform node applies to the local coordinate system of the Sound nodes to alter the apparent spatial location and direction. If the children are not spatially presented but have equal numbers of channels, the Transform node has no effect on the children's sounds; the child sounds are summed equally to produce the audio output at this node. If some children are spatially presented and some are not, or if the children do not all have equal numbers of channels, the semantics are not defined.


Transform2D

XSD Description



































Functionality and semantics

As specified in ISO/IEC 14496-11 (BIFS), section 7.2.2.132.2

The Transform2D node allows the translation, rotation and scaling of its 2D children objects. The rotation field specifies a rotation of the child objects, in radians, which occurs about the point specified by center. The scale field specifies a 2D scaling of the child objects. The scaling operation takes place following a rotation of the 2D coordinate system that is specified, in radians, by the scaleOrientation field. The rotation of the co-ordinate system is notional and purely for the purpose of applying the scaling and is undone before any further actions are performed. No permanent rotation of the co-ordinate system is implied.

The translation field specifies a 2D vector which translates the child objects. The scaling, rotation and translation are applied in the following order: scale, rotate, translate. The children field contains a list of zero or more children nodes which are grouped by the Transform2D node. The addChildren and removeChildren eventIns are used to add or remove child nodes from the children field of the node. Children are added to the end of the list of children and special note should be taken of the implications of this for implicit drawing orders.
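A point-level sketch of these semantics, with angles in radians; the helper names are illustrative:

```python
import math

def _rot2(x, y, a):
    """Rotate a 2D point by angle a (radians) about the origin."""
    c, s = math.cos(a), math.sin(a)
    return c * x - s * y, s * x + c * y

def transform2d_point(p, center=(0, 0), scale=(1, 1), scale_orientation=0.0,
                      rotation=0.0, translation=(0, 0)):
    """Apply Transform2D in order: scale (in the scaleOrientation frame),
    rotate about center, then translate."""
    x, y = p[0] - center[0], p[1] - center[1]   # work about the center
    x, y = _rot2(x, y, -scale_orientation)      # notional rotation...
    x, y = x * scale[0], y * scale[1]
    x, y = _rot2(x, y, scale_orientation)       # ...undone after scaling
    x, y = _rot2(x, y, rotation)
    return x + center[0] + translation[0], y + center[1] + translation[1]
```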

If some of the child subgraphs contain audio content (i.e., the subgraphs contain Sound nodes), the child sounds are transformed and mixed as follows. If each of the child sounds is a spatially presented sound, the Transform2D node applies to the local coordinate system of the Sound2D nodes to alter the apparent spatial location and direction. If the children are not spatially presented but have equal numbers of channels, the Transform2D node has no effect on the childrens’ sounds. After any such transformation, the combination of sounds is performed as described in ISO/IEC 14496-11, section 7.2.2.117.2.

The child sounds are summed equally to produce the audio output at this node. If some children are spatially presented and some are not, or if the children do not all have equal numbers of channels, the semantics are not defined.

Viewpoint

XSD Description

























Functionality and semantics

As specified in ISO/IEC 14772-1:1997, section 6.53.

The Viewpoint node defines a specific location in the local coordinate system from which the user may view the scene. Viewpoint nodes are bindable children nodes (see ISO/IEC 14772-1:1997, section 4.6.10, Bindable children nodes) and thus there exists a Viewpoint node stack in the browser in which the top-most Viewpoint node on the stack is the currently active Viewpoint node. If a TRUE value is sent to the set_bind eventIn of a Viewpoint node, it is moved to the top of the Viewpoint node stack and activated. When a Viewpoint node is at the top of the stack, the user's view is conceptually re-parented as a child of the Viewpoint node. All subsequent changes to the Viewpoint node's coordinate system change the user's view (e.g., changes to any ancestor transformation nodes or to the Viewpoint node's position or orientation fields). Sending a set_bind FALSE event removes the Viewpoint node from the stack and produces isBound FALSE and bindTime events. If the popped Viewpoint node is at the top of the viewpoint stack, the user's view is re-parented to the next entry in the stack. More details on binding stacks can be found in ISO/IEC 14772-1:1997, section 4.6.10, Bindable children nodes. When a Viewpoint node is moved to the top of the stack, the existing top of stack Viewpoint node sends an isBound FALSE event and is pushed down the stack.

An author can automatically move the user's view through the world by binding the user to a Viewpoint node and then animating either the Viewpoint node or the transformations above it. Browsers shall allow the user view to be navigated relative to the coordinate system defined by the Viewpoint node (and the transformations above it) even if the Viewpoint node or its ancestors' transformations are being animated.

The bindTime eventOut sends the time at which the Viewpoint node is bound or unbound. This can happen:



  1. during loading;

  2. when a set_bind event is sent to the Viewpoint node;

  3. when the browser binds to the Viewpoint node through its user interface described below.

The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation. In the default position and orientation, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. Viewpoint nodes are affected by the transformation hierarchy.

Navigation types (see ISO/IEC 14772-1:1997, section 6.29, NavigationInfo) that require a definition of a down vector (e.g., terrain following) shall use the negative Y-axis of the coordinate system of the currently bound Viewpoint node. Likewise, navigation types that require a definition of an up vector shall use the positive Y-axis of the coordinate system of the currently bound Viewpoint node. The orientation field of the Viewpoint node does not affect the definition of the down or up vectors. This allows the author to separate the viewing direction from the gravity direction.

The jump field specifies whether the user's view "jumps" to the position and orientation of a bound Viewpoint node or remains unchanged. This jump is instantaneous and discontinuous in that no collisions are performed and no ProximitySensor nodes are checked in between the starting and ending jump points. If the user's position before the jump is inside a ProximitySensor the exitTime of that sensor shall send the same timestamp as the bind eventIn. Similarly, if the user's position after the jump is inside a ProximitySensor the enterTime of that sensor shall send the same timestamp as the bind eventIn. Regardless of the value of jump at bind time, the relative viewing transformation between the user's view and the current Viewpoint node shall be stored with the current Viewpoint node for later use when un-jumping (i.e., popping the Viewpoint node binding stack from a Viewpoint node with jump TRUE). The following summarizes the bind stack rules (see ISO/IEC 14772-1:1997, section 4.6.10, Bindable children nodes) with additional rules regarding Viewpoint nodes (displayed in boldface type):



  a. During read, the first encountered Viewpoint node is bound by pushing it to the top of the Viewpoint node stack. If a Viewpoint node name is specified in the URL that is being read, this named Viewpoint node is considered to be the first encountered Viewpoint node. Nodes contained within Inline nodes, within the strings passed to the Browser.createVrmlFromString() method, or within files passed to the Browser.createVrmlFromURL() method (see ISO/IEC 14772-1:1997, section 4.12.10, Browser script interface) are not candidates for the first encountered Viewpoint node. The first node within a prototype instance is a valid candidate for the first encountered Viewpoint node. The first encountered Viewpoint node sends an isBound TRUE event.

  b. When a set_bind TRUE event is received by a Viewpoint node,

  • If it is not on the top of the stack: The relative transformation from the current top of stack Viewpoint node to the user's view is stored with the current top of stack Viewpoint node. The current top of stack node sends an isBound FALSE event. The new node is moved to the top of the stack and becomes the currently bound Viewpoint node. The new Viewpoint node (top of stack) sends an isBound TRUE event. If jump is TRUE for the new Viewpoint node, the user's view is instantaneously "jumped" to match the values in the position and orientation fields of the new Viewpoint node.

  • If the node is already at the top of the stack, this event has no effect.

  c. When a set_bind FALSE event is received by a Viewpoint node in the stack, it is removed from the stack. If it was on the top of the stack,

  • it sends an isBound FALSE event,

  • the next node in the stack becomes the currently bound Viewpoint node (i.e., pop) and issues an isBound TRUE event,

  • if its jump field value is TRUE, the user's view is instantaneously "jumped" to the position and orientation of the next Viewpoint node in the stack with the stored relative transformation of this next Viewpoint node applied.

  d. If a set_bind FALSE event is received by a node not in the stack, the event is ignored and isBound events are not sent.

  e. When a node replaces another node at the top of the stack, the isBound TRUE and FALSE events from the two nodes are sent simultaneously (i.e., with identical timestamps).

  f. If a bound node is deleted, it behaves as if it received a set_bind FALSE event (see c.).

The jump field may change after a Viewpoint node is bound. The rules described above still apply. If jump was TRUE when the Viewpoint node is bound, but changed to FALSE before the set_bind FALSE is sent, the Viewpoint node does not un-jump during unbind. If jump was FALSE when the Viewpoint node is bound, but changed to TRUE before the set_bind FALSE is sent, the Viewpoint node does perform the un-jump during unbind.

Note that there are two other mechanisms that result in the binding of a new Viewpoint:



  1. An Anchor node's url field specifies a "#ViewpointName".

  2. A script invokes the loadURL() method and the URL argument specifies a "#ViewpointName".

Both of these mechanisms override the jump field value of the specified Viewpoint node (#ViewpointName) and assume that jump is TRUE when binding to the new Viewpoint. The behaviour of the viewer transition to the newly bound Viewpoint depends on the currently bound NavigationInfo node's type field value (see ISO/IEC 14772-1:1997, section 6.29, NavigationInfo).

The fieldOfView field specifies a preferred minimum viewing angle from this viewpoint in radians. A small field of view roughly corresponds to a telephoto lens; a large field of view roughly corresponds to a wide-angle lens. The field of view shall be greater than zero and smaller than π. The value of fieldOfView represents the minimum viewing angle in any direction axis perpendicular to the view. For example, a browser with a rectangular viewing projection shall have the following relationship:

display width     tan(FOVhorizontal/2)
--------------- = --------------------
display height    tan(FOVvertical/2)


where the smaller of display width or display height determines which angle equals the fieldOfView (the larger angle is computed using the relationship described above). The larger angle shall not exceed π and may force the smaller angle to be less than fieldOfView in order to sustain the aspect ratio.
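Under these rules, both viewing angles follow from fieldOfView and the display aspect ratio; a sketch (function name illustrative):

```python
import math

def viewing_angles(field_of_view, width, height):
    """Return (horizontal, vertical) viewing angles in radians.

    fieldOfView applies to the smaller display dimension; the larger
    angle follows from width/height == tan(fov_h/2) / tan(fov_v/2).
    """
    if width <= height:
        fov_h = field_of_view
        fov_v = 2 * math.atan(math.tan(fov_h / 2) * height / width)
    else:
        fov_v = field_of_view
        fov_h = 2 * math.atan(math.tan(fov_v / 2) * width / height)
    return fov_h, fov_v
```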

The description field specifies a textual description of the Viewpoint node. This may be used by browser-specific user interfaces. If a Viewpoint's description field is empty it is recommended that the browser not present this Viewpoint in its browser-specific user interface.

The URL syntax ".../scene.wrl#ViewpointName" specifies the user's initial view when loading "scene.wrl" to be the first Viewpoint node in the VRML file that appears as DEF ViewpointName Viewpoint {...}. This overrides the first Viewpoint node in the VRML file as the initial user view, and a set_bind TRUE message is sent to the Viewpoint node named "ViewpointName". If the Viewpoint node named "ViewpointName" is not found, the browser shall use the first Viewpoint node in the VRML file (i.e. the normal default behaviour). The URL syntax "#ViewpointName" (i.e. no file name) specifies a viewpoint within the existing VRML file. If this URL is loaded (e.g. Anchor node's url field or loadURL() method is invoked by a Script node), the Viewpoint node named "ViewpointName" is bound (a set_bind TRUE event is sent to this Viewpoint node).

The results are undefined if a Viewpoint node is bound and is the child of an LOD, Switch, or any node or prototype that disables its children. If a Viewpoint node is bound that results in collision with geometry, the browser shall perform its self-defined navigation adjustments as if the user navigated to this point (see ISO/IEC 14772-1:1997, section 6.8, Collision).


Viewport

XSD Description



























Functionality and semantics

As specified in ISO/IEC 14496-11 (BIFS), section 7.2.2.137.2

A Viewport node can be placed in the viewport field of a Layer2D or CompositeTexture2D node or in the scene tree as a 2D node. It defines a new viewport and implicitly establishes a new local coordinate system. The bounds of the new viewport are defined by the size and position field. The new local coordinate system’s origin is at the center of the parent node in the parent’s local coordinate system.

The orientation field specifies the rotation which is applied to the viewport in the parent node’s local coordinate system with respect to the X-axis. Viewport nodes are bindable nodes (see ISO/IEC 14496-11, section 7.1.1.2.14) and thus there exists a Viewport node stack which follows the same rules as other bindable nodes (e.g. Background2D).

The description field specifies a textual description of the Viewport node. The alignment and fit fields specify how the viewing area is mapped to the rendering area of the parent node (i.e. Layer2D, CompositeTexture2D, or the 2D top-node).

If the fit field is set to 0, the viewing area is scaled to fit the rendering area without preserving the aspect ratio. If the fit field is set to 1, the viewing area is scaled, preserving the aspect ratio, to fit entirely inside the rendering area. If the fit field is set to 2, the viewing area is scaled, preserving the aspect ratio, to cover the rendering area entirely. For values 1 and 2, the scaling operation is performed possibly after rotation as specified by the orientation field.
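The three fit modes correspond to stretch, meet, and slice scalings; a sketch (function name illustrative):

```python
def fit_scale(view_w, view_h, rend_w, rend_h, fit):
    """Scale factors (sx, sy) mapping the viewing area to the rendering
    area: fit 0 stretches, fit 1 fits entirely inside (aspect kept),
    fit 2 covers the rendering area entirely (aspect kept)."""
    sx, sy = rend_w / view_w, rend_h / view_h
    if fit == 0:
        return sx, sy
    s = min(sx, sy) if fit == 1 else max(sx, sy)
    return s, s
```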

The alignment field is an MFInt32 field that contains two values. The first value specifies alignment along the X-axis and the second value specifies alignment along the Y-axis. Each value belongs to the following set of SFInt32 values: -1, 0, 1. An empty alignment field is equivalent to the default value. When the fit field is set to 0, the alignment field is ignored.


Form

XSD Description

<complexType name="FormType">
  <sequence>
    <element ref="xmta:IS" minOccurs="0"/>
    <element name="children" form="qualified" minOccurs="0">
      <complexType>
        <group ref="xmta:SF2DNodeType" minOccurs="0" maxOccurs="unbounded"/>
      </complexType>
    </element>
  </sequence>
  <attribute name="size" type="xmta:SFVec2f" use="optional" default="-1 -1"/>
  <attribute name="groups" type="xmta:MFInt32" use="optional"/>
  <attribute name="constraints" type="xmta:MFString" use="optional"/>
  <attribute name="groupsIndex" type="xmta:MFInt32" use="optional"/>
  <attributeGroup ref="xmta:DefUseGroup"/>
</complexType>
<element name="Form" type="xmta:FormType"/>
Functionality and semantics

As specified in ISO/IEC 14496-11 (BIFS), section 7.2.2.62.2.

The Form node specifies the placement of its children according to relative alignment and distribution constraints. Distribution spreads objects regularly, with an equal spacing between them.

The children field shall specify a list of nodes that are to be arranged. The children’s position is implicit and order is important.

The size field specifies the width and height of the layout frame.

The groups field specifies the list of groups of objects on which the constraints can be applied. The children of the Form node are numbered from 1 to n, 0 being reserved for a reference to the form itself. A group is a list of child indices, terminated by a -1.

The constraints and the groupsIndex fields specify the list of constraints. One constraint is constituted by a constraint type from the constraints field, coupled with a set of group indices terminated by a –1 contained in the groupsIndex field. There shall be as many strings in constraints as there are –1-terminated sets in groupsIndex. The n-th constraint string shall be applied to the n-th set in the groupsIndex field. A value of 0 in the groupsIndex field references the form node itself, otherwise a groupsIndex field value is a 1-based index into the group field.
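The pairing of constraint strings with their -1-terminated index sets can be sketched as follows (helper names illustrative; the constraint-type strings in the test are placeholders):

```python
def split_terminated(values, terminator=-1):
    """Split a flat MFInt32 list into its -1-terminated sets."""
    sets, current = [], []
    for v in values:
        if v == terminator:
            sets.append(current)
            current = []
        else:
            current.append(v)
    return sets

def pair_constraints(constraints, groups_index):
    """Pair the n-th constraint string with the n-th set in groupsIndex."""
    sets = split_terminated(groups_index)
    if len(constraints) != len(sets):
        raise ValueError("constraints and groupsIndex sets must match")
    return list(zip(constraints, sets))
```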

Constraints belong to two categories: alignment and distribution constraints.

Groups referred to in the tables below are groups whose indices appear in the list following the constraint type. When rank is mentioned, it refers to the rank in that list.

The semantics of the <s> suffix, when present in the name of a constraint, is the following: it shall be a number, integer when the scene uses pixel metrics and float otherwise, which specifies the space mentioned in the semantics of the constraint.

In case the form itself is specified in an alignment constraint (group index 0), the form rectangle shall be used as the base of the alignment computation, and the other groups in the constraint list shall be aligned as specified by the constraint.


