PRIMA Perception Recognition and Integration for Observing and Modeling Activity

  • James L. Crowley, Prof. I.N.P. Grenoble

  • Augustin Lux, Prof. I.N.P. Grenoble

  • Patrick Reignier, MdC. Univ. Joseph Fourier

  • Dominique Vaufreydaz, MdC UPMF


The PRIMA Group Leaders



The PRIMA Group Members



The PRIMA Group, May 2006

  • Permanent Staff:

    • James L. Crowley, Prof. I.N.P. Grenoble
    • Augustin Lux, Prof. I.N.P. Grenoble
    • Patrick Reignier, MdC. U.J.F.
    • Dominique Vaufreydaz, MdC. UPMF.
  • Assistant:

    • Caroline Ouari (INPG)
  • Contractual Engineers

    • Alba Ferrer, IE INRIA
    • Mathieu Langet, IE INPG


The PRIMA Group, May 2006

  • Doctoral Students :

    • Stan Borkowski (EGIDE Grant)
    • Suphot Chunwiphat (Thailand Grant)
    • Thi-Thanh-Hai Tran (EGIDE Grant)
    • Matthieu Anne (CIFRE Grant - France Telecom)
    • Olivier Bertrand (ENS Cachan Grant)
    • Nicolas Gourier (INRIA Grant)
    • Julien Letessier (INRIA Grant)
    • Sonia Zaidenberg (CNRS - BDI Grant)
    • Oliver Brdiczka (INRIA Grant)
    • Remi Emonet (MENSR Grant)


Plan for the Review

  • 1) Presentation of Scientific Project

    • Objectives
    • Research Problems and Results
    • Assessment 2003 - 2006
    • Evolutions for 2007-2010


Objective of Project PRIMA

  • Develop the scientific and technological foundations for context-aware interactive environments

  • Interactive Environment:

  • An environment capable of perceiving, acting, communicating, and interacting with users.



Experimental Platform: FAME Augmented Meeting Environment

  • 8 Cameras (7 steerable, 1 fixed wide-angle)

  • 8 Microphones (acoustic sensors)

  • 6 Biprocessors (3 GHz)

  • 3 Video Interaction Devices (camera-projector pairs)



Augmented Meeting Environment



Research Problems



Research Problems

    • Context-aware interactive environments
    • New forms of man-machine interaction (using perception)
    • Real Time, View Invariant, Computer Vision
    • Autonomic Architectures for Multi-Modal Perception


Software Architecture for Observing Activity

  • Sensors and actuators: Interface to the physical world.

  • Perception and action: Perceives entities and assigns them to roles.

  • Situation: Filters events; describes the relevant actors and props for services.

  • (User) Services: Implicit or explicit, event driven (see the sketch below).

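The four layers above can be read as an event-driven pipeline. The listing below is a minimal Python sketch under that reading; every name in it (perceive, SituationLayer, the "actor-present" event, the toy role rule) is an illustrative assumption, not the PRIMA software.

```python
# Minimal sketch of the four-layer flow: sensors -> perception -> situation
# -> services. All names and the toy role rule are illustrative assumptions.

from typing import Callable, Dict, List

Observation = Dict[str, float]          # raw values from the sensor layer
PerceivedEntity = Dict[str, object]     # properties plus an assigned role


def perceive(observation: Observation) -> List[PerceivedEntity]:
    """Perception layer: detect entities and assign them to roles."""
    # Toy rule; a real system would use trained detectors and classifiers.
    role = "actor" if observation.get("motion", 0.0) > 0.5 else "prop"
    return [{"properties": observation, "role": role}]


class SituationLayer:
    """Filters events and describes the relevant actors and props to services."""

    def __init__(self) -> None:
        self._services: List[Callable[[str, List[PerceivedEntity]], None]] = []

    def subscribe(self, service: Callable[[str, List[PerceivedEntity]], None]) -> None:
        self._services.append(service)

    def update(self, entities: List[PerceivedEntity]) -> None:
        actors = [e for e in entities if e["role"] == "actor"]
        if actors:  # only situation-relevant configurations become events
            for service in self._services:
                service("actor-present", actors)


# Usage: an explicit, event-driven user service reacting to one sensor frame.
situation = SituationLayer()
situation.subscribe(lambda name, ents: print("service notified:", name, len(ents)))
situation.update(perceive({"motion": 0.9}))
```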


Situation Graph

  • Situation: A configuration of entities playing roles

  • Configuration: Set of Relations (Predicates) over entities.

  • Entity: Actors or Objects

  • Roles: Abstract descriptions of Persons or objects

  • A situation graph describes a state space of situations and the actions of the system for each situation



Situation and Context

  • Basic Concepts:

  • Property: Any value observed by a process

  • Entity: A “correlated” set of properties

  • Composite entity: A composition of entities

  • Relation: A predicate defined over entities

  • Actor: An entity that can act.

  • Role: Interpretation assigned to an entity or actor

  • Situation: A configuration of roles and relations.



Situation and Context

  • Role: Interpretation assigned to an entity or actor

  • Relation: A predicate over entities and actors

  • Situation: A configuration of roles and relations.

  • A situation graph describes the state space of situations and the actions of the system for each situation

  • Approach: Compile a federation of processes to observe the roles (actors and entities) and relations that define situations (a data-model sketch follows below).

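As a concrete reading of these definitions, the listing below sketches the situation model as plain Python types: entities carry properties, roles and relations are predicates, and a situation is a configuration of both. The class names and the example "presentation" situation are illustrative assumptions, not the PRIMA implementation.

```python
# A minimal sketch of the situation model as plain Python types.
# Class names and the example situation are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class Entity:
    """A 'correlated' set of properties; an actor is an entity that can act."""
    name: str
    properties: Dict[str, float] = field(default_factory=dict)
    can_act: bool = False


# A role is an interpretation assigned to an entity: here, a predicate that
# accepts or rejects an entity for that role.
Role = Callable[[Entity], bool]

# A relation is a predicate defined over a tuple of entities.
Relation = Callable[[Tuple[Entity, ...]], bool]


@dataclass
class Situation:
    """A configuration: roles and relations that must hold simultaneously."""
    name: str
    roles: Dict[str, Role]
    relations: List[Tuple[Relation, Tuple[str, ...]]]

    def holds(self, assignment: Dict[str, Entity]) -> bool:
        """Check whether a role assignment satisfies this situation."""
        for role_name, accepts in self.roles.items():
            if role_name not in assignment or not accepts(assignment[role_name]):
                return False
        return all(relation(tuple(assignment[n] for n in names))
                   for relation, names in self.relations)


# Usage: a "presentation" situation with a speaker standing near the screen.
is_speaker: Role = lambda e: e.can_act and e.properties.get("speaking", 0) > 0.5
near: Relation = lambda pair: abs(pair[0].properties["x"] - pair[1].properties["x"]) < 1.0

presentation = Situation(
    name="presentation",
    roles={"speaker": is_speaker, "screen": lambda e: not e.can_act},
    relations=[(near, ("speaker", "screen"))],
)

alice = Entity("alice", {"speaking": 0.9, "x": 1.2}, can_act=True)
screen = Entity("screen", {"x": 1.5})
print(presentation.holds({"speaker": alice, "screen": screen}))  # True
```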


Acquiring Situation Models

  • Objective:

    • Automatic acquisition of situation models.
  • Approach:

    • Start with a simple stereotypical model for the scenario
    • Develop using Supervised Incremental Learning
  • Recognition:

    • Detect Roles with Linear Classifiers
    • Recognize Situations using a probabilistic model (see the sketch below)

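A minimal sketch of the recognition step: a linear classifier assigns a role to each observed entity, and a simple probabilistic model picks the most likely situation from the detected roles. The weights, feature names, and situation tables below are purely illustrative assumptions; in practice they would be obtained by supervised incremental training as described above.

```python
# Sketch of role detection with linear classifiers followed by probabilistic
# situation recognition. All weights, features, and tables are illustrative.

import numpy as np
from typing import List

# --- Role detection with linear classifiers ---------------------------------
ROLE_NAMES = ["speaker", "listener"]

# One weight vector and bias per role; features = [audio activity, stillness].
W = np.array([[2.0, -1.0],     # speaker: high audio activity, low stillness
              [-2.0, 1.0]])    # listener: low audio activity, high stillness
b = np.array([0.0, 0.0])

def detect_role(features: np.ndarray) -> str:
    """Return the role whose linear score is highest for this entity."""
    return ROLE_NAMES[int(np.argmax(W @ features + b))]

# --- Probabilistic situation recognition -------------------------------------
# P(role | situation): an illustrative likelihood table.
SITUATIONS = {
    "lecture":    {"speaker": 0.8, "listener": 0.2},
    "discussion": {"speaker": 0.5, "listener": 0.5},
}

def recognize_situation(roles: List[str]) -> str:
    """Pick the situation with the highest log-likelihood of the observed roles."""
    scores = {name: sum(np.log(p[r]) for r in roles) for name, p in SITUATIONS.items()}
    return max(scores, key=scores.get)

# Usage: two entities observed as [audio_activity, stillness] feature vectors.
entities = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
roles = [detect_role(f) for f in entities]
print(roles, "->", recognize_situation(roles))   # ['speaker', 'listener'] -> discussion
```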

Video Acquisition System V2.0



Audio-Visual Acquisition System



Research Problems

    • Context-aware interactive environments
    • New forms of man-machine interaction (using perception)
    • Real Time, View Invariant, Computer Vision
    • Autonomic Architectures for Multi-Modal Perception


Steerable Camera Projector Pair





Portable Display Surface



Rectification by Homography

  • For each rectified pixel (x, y), project it through the homography to the corresponding pixel of the original image and compute an interpolated intensity (see the sketch below)

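The listing below is a minimal sketch of this inverse mapping, assuming the homography H is already known (e.g. from camera-projector calibration) and using bilinear interpolation; it is illustrative, not the real-time implementation used for the PDS.

```python
# Rectification by inverse mapping: each output pixel (x, y) is projected
# through the homography H back into the original image and its intensity
# is bilinearly interpolated. H is assumed to be known.

import numpy as np

def rectify(image, H, out_shape):
    """Return a rectified image of size out_shape = (height, width)."""
    h_out, w_out = out_shape
    out = np.zeros((h_out, w_out), dtype=np.float32)

    # Homogeneous coordinates of every rectified pixel, in row-major order.
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

    # Project into the original image and normalise the homogeneous coordinate.
    src = H @ pts
    sx, sy = src[0] / src[2], src[1] / src[2]

    # Bilinear interpolation of the original intensity at (sx, sy).
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    valid = (x0 >= 0) & (x0 < image.shape[1] - 1) & (y0 >= 0) & (y0 < image.shape[0] - 1)
    ax, ay = (sx - x0)[valid], (sy - y0)[valid]
    x0, y0 = x0[valid], y0[valid]
    interp = ((1 - ax) * (1 - ay) * image[y0, x0]
              + ax * (1 - ay) * image[y0, x0 + 1]
              + (1 - ax) * ay * image[y0 + 1, x0]
              + ax * ay * image[y0 + 1, x0 + 1])
    out.ravel()[valid] = interp
    return out

# Usage: the identity homography leaves the image unchanged (border excluded).
img = np.random.rand(120, 160).astype(np.float32)
rectified = rectify(img, np.eye(3), img.shape)
```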


Real Time Rectification for the PDS



Luminance-based button widget



Striplet – the occlusion detector



Striplet – the occlusion detector



Striplet – the occlusion detector



Striplet-based SPOD



Projected Calculator



Research Problems

    • Context-aware interactive environments
    • New forms of man-machine interaction (using perception)
    • Real Time, View Invariant Computer Vision
    • Autonomic Architectures for Multi-Modal Perception


Chromatic Gaussian Basis



Real Time, View Invariant Computer Vision

  • Results

    • Scale and orientation normalised Receptive Fields computed at video rate. (BrandDetect system, IST CAVIAR)
    • Real time indexing and recognition (Thesis F. Pelisson)
    • Robust Visual Features for Face Detection
      • (Thesis N. Gourier)
    • Direct Computation of Time to Crash
      • (Masters A. Negre)
    • Natural Interest "Ridges"
      • (Thesis H. Tran)


Scale and Orientation Normalised Gaussian RF's



Natural Interest Points (Scale Invariant "Salient" image features)



Natural Ridge Detection [Tran04]

  • Compute Derivatives at different Scales.

  • For each point (x,y,scale)

    • Compute second derivatives: fxx, fyy, fxy
    • Compute the eigenvalues and eigenvectors of the Hessian matrix
    • Detect a local extremum in the direction corresponding to the largest eigenvalue
    • Assemble ridge points (see the sketch below)

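A minimal sketch of this ridge test at a single scale, using Gaussian derivative filters from SciPy. The threshold, the nearest-neighbour sampling along the eigenvector, and the synthetic test image are illustrative assumptions rather than the full method of [Tran04].

```python
# Ridge test at one scale: Gaussian second derivatives give the Hessian at
# each pixel; a pixel is kept if the smoothed intensity is a local extremum
# along the eigenvector of the largest-magnitude eigenvalue.

import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_points(image, sigma, threshold=1e-3):
    """Boolean mask of pixels passing the ridge test at scale sigma."""
    smoothed = gaussian_filter(image, sigma)
    fxx = gaussian_filter(image, sigma, order=(0, 2))   # d2/dx2 (axis 1 = x)
    fyy = gaussian_filter(image, sigma, order=(2, 0))   # d2/dy2 (axis 0 = y)
    fxy = gaussian_filter(image, sigma, order=(1, 1))

    mask = np.zeros(image.shape, dtype=bool)
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            hessian = np.array([[fxx[y, x], fxy[y, x]],
                                [fxy[y, x], fyy[y, x]]])
            vals, vecs = np.linalg.eigh(hessian)
            k = int(np.argmax(np.abs(vals)))             # strongest curvature
            if abs(vals[k]) < threshold:
                continue
            # Step to the nearest neighbours along that eigenvector.
            nx, ny = int(round(vecs[0, k])), int(round(vecs[1, k]))
            centre = smoothed[y, x]
            ahead, behind = smoothed[y + ny, x + nx], smoothed[y - ny, x - nx]
            # Ridge (or valley) point: local extremum along the direction.
            if (centre > ahead and centre > behind) or (centre < ahead and centre < behind):
                mask[y, x] = True
    return mask

# Usage: a bright horizontal line is detected as a ridge along row 32.
img = np.zeros((64, 64))
img[32, :] = 1.0
print(np.flatnonzero(ridge_points(img, sigma=2.0)[:, 32]))   # -> [32]
```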



Real Time, View Invariant Computer Vision

  • Current activity

    • Robust Visual Features for Face Detection
    • Direct Computation of Time to Crash
    • Natural Interest "Ridges" for perceptual organisation.


Research Problems

    • Context-aware interactive environments
    • New forms of man-machine interaction (using perception)
    • Real Time, View Invariant, Computer Vision
    • Autonomic Architectures for Multi-Modal Perception


Supervised Perceptual Process

  • Supervisor Provides:

    • Execution Scheduler
    • Command Interpreter
    • Parameter Regulator
    • Description of State and Capabilities (see the sketch below)

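The listing below sketches what such a supervisor can look like as a single class exposing the four capabilities listed above; the command vocabulary and the toy detection module are assumptions, not the PRIMA process architecture.

```python
# Minimal sketch of a supervised perceptual process: the supervisor schedules
# execution, interprets commands, regulates parameters, and describes its own
# state and capabilities. All names and the toy module are assumptions.

from typing import Any, Callable, Dict


class ProcessSupervisor:
    def __init__(self, module: Callable[[Dict[str, float], Any], Any]) -> None:
        self.module = module                      # the transformation to run
        self.parameters: Dict[str, float] = {"threshold": 0.5}
        self.state = "idle"

    # Execution scheduler: run one processing cycle on new data.
    def step(self, data: Any) -> Any:
        self.state = "running"
        result = self.module(self.parameters, data)
        self.state = "idle"
        return result

    # Command interpreter: a small vocabulary of external commands.
    def execute(self, command: str, **args: float) -> Any:
        if command == "set":
            self.parameters.update(args)          # parameter regulation
        elif command == "describe":
            return self.describe()
        else:
            raise ValueError(f"unknown command: {command}")

    # Auto-description of state and capabilities.
    def describe(self) -> Dict[str, Any]:
        return {"state": self.state,
                "parameters": dict(self.parameters),
                "commands": ["set", "describe"]}


# Usage with a toy detection module.
detector = lambda params, x: x > params["threshold"]
sup = ProcessSupervisor(detector)
sup.execute("set", threshold=0.8)
print(sup.step(0.9), sup.execute("describe"))
```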


Detection and Tracking of Entities

  • Entities: Correlated sets of blobs

    • Blob Detectors: Background difference, motion, color, receptive-field histograms
    • Entity Grouper: Assigns roles to blobs such as body, hands, face, or eyes (see the sketch below)

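A minimal sketch of this detection-and-grouping step: background difference followed by connected-component labelling yields blobs, and a grouper assigns coarse roles from blob geometry. The thresholds and role rules are illustrative assumptions, not the PRIMA detectors.

```python
# Blob detection by background difference plus a simple entity grouper that
# assigns roles from blob position and size. Thresholds and rules are toy.

import numpy as np
from scipy import ndimage

def detect_blobs(frame: np.ndarray, background: np.ndarray, thresh: float = 0.1):
    """Return bounding boxes (slice tuples) of connected foreground regions."""
    mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
    labels, _ = ndimage.label(mask)
    return ndimage.find_objects(labels)

def group_entities(blobs):
    """Assign a coarse role to each blob from its size and vertical position."""
    entities = []
    for sl in blobs:
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h > 3 * w:
            role = "body"
        elif sl[0].start < 20:       # near the top of the image
            role = "face"
        else:
            role = "hand"
        entities.append({"bbox": sl, "role": role})
    return entities

# Usage on synthetic frames: one tall blob (body) and one small low blob (hand).
bg = np.zeros((120, 160))
frame = bg.copy()
frame[30:110, 40:55] = 1.0     # tall region
frame[90:100, 100:110] = 1.0   # small region
print([e["role"] for e in group_entities(detect_blobs(frame, bg))])
```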

Autonomic Properties provided by process supervisor

  • Auto-regulatory: The process controller can adapt parameters to maintain a desired process state.

  • Auto-descriptive: The process controller provides descriptions of the capabilities and the current state of the process.

  • Auto-critical: Process estimates confidence for all properties and events.

  • Self-monitoring: Maintains a description of process state and quality of service



Self-monitoring Perceptual Process

  • The process monitors the likelihood of its output

  • When performance degrades, the process adapts its processing (modules, parameters, and data)



Autonomic Parameter Regulation

  • Parameter regulation provides robust adaptation to changes in operating conditions (see the sketch below)

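A minimal sketch of such a regulation loop: the process reports a self-assessed quality measure and the supervisor applies a proportional update to a parameter so that the measure tracks a reference value. The gain, the update rule, and the toy process are illustrative assumptions.

```python
# Proportional regulation of a process parameter from a self-assessed
# quality measure. The gain, rule, and toy process are illustrative.

def regulate(process, param, reference, gain=0.5, steps=20):
    """Adapt `param` so that the monitored quality approaches `reference`."""
    for _ in range(steps):
        quality = process(param)        # auto-critical: self-assessed quality
        error = reference - quality
        # Quality falls as this parameter grows, so move against the error.
        param -= gain * error           # auto-regulatory: parameter update
    return param

# Toy process whose quality decreases as its threshold parameter grows.
toy_process = lambda threshold: max(0.0, 1.0 - threshold)

tuned = regulate(toy_process, param=0.9, reference=0.7)
print(round(tuned, 2), round(toy_process(tuned), 2))   # -> 0.3 0.7
```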


Research Contracts (2003-2006)

  • National and Industrial:

    • ROBEA HR+ : Human-Robot Interaction (with LAAS and ICP)
    • ROBEA ParkNav: Perception and action in dynamic environments
    • RNTL ContAct: Context Aware Perception (with XRCE)
    • Contract HARP (Context aware Services - France Telecom)
  • IST - FP VI:

  • IST - FP V:

    • Project IST - CAVIAR: Context Aware Vision for Surveillance
    • Project IST - FAME: Multi-modal perception for services
    • Project IST - DETECT : Publicity Detection in Broadcast Video
    • Project FET - DC GLOSS : Global Smart Spaces
    • Thematic Network: FGNet (« Face and Gesture »)
    • Thematic Network: ECVision - Cognitive Vision


Collaborations

  • INRIA Projects

    • EMOTION (INRIA RA): Vision for Autonomous Robots; ParkNav, ROBEA (CNRS), Theses of C. Braillon and A. Negre
    • ORION (Sophia): Cognitive Vision (ECVision), Modeling Human Activity
  • Academic:

    • IIHM, Laboratoire CLIPS: Human-Computer Interaction, Smart Spaces; Mapping Project, IST Projects GLOSS, FAME, Thesis: J. Letessier
  • Univ. of Karlsruhe (Multimodal interaction): IST FAME and CHIL.

  • Industry

    • France Telecom: (Lannion and Meylan) Project HARP, Thesis of M. Anne.
    • Xerox Research Centre Europe: Project RNTL/Proact Cont'Act
    • IBM Research (Prague, New York): Situation Modeling, Autonomic Software Architectures, Project CHIL


Knowledge Dissemination



Conferences and Workshops Organised



APP Registered Software



Start-up: Blue Eye Video



Blue Eye Video Activity Sensor (PETS 2002 Data)



Blue Eye Video Activity Sensor (Distributed Sensor Networks)



Evolutions for 2006-2010

  • Context-aware interactive environments

    • Adaptation and Development of Activity Models
  • New forms of man-machine interaction

    • Affective Interaction
  • Real Time, View Invariant, Computer Vision

    • Embedded View-invariant Visual Perception
  • Autonomic Architectures for Multi-Modal Perception

    • Learning for Monitoring and Regulation
    • Dynamic Service Composition


Automatic Adaptation and Development of Models for Human Activity



Affective interaction

  • Interactive objects that recognize interest and affect and that learn to perceive and evoke emotions in humans.



Embedded View-invariant Visual Perception

  • Embedded Real Time View Invariant Vision in phones and PDAs (work with STMicroelectronics)



Distributed Autonomic Systems for Multi-modal Perception



PRIMA Perception Recognition and Integration for Observing and Modeling Activity

  • James L. Crowley, Prof. I.N.P. Grenoble

  • Augustin Lux, Prof. I.N.P. Grenoble

  • Patrick Reignier, MdC. Univ. Joseph Fourier

  • Dominique Vaufreydaz, MdC UPMF


