MultiModal Interaction Engine

This is a top-level view of the multimodal interaction engine used in GestureWorks Gameplay and GestureWorks Fusion to create blended HCI control schemes.

The area in grey marked “Interaction Point Manager & Feature Fusion” contains methods for rich pose extraction and context-driven skeletal feature fusion. These methods allow features to be filtered and selectively combined into rich interaction points which, based on user-defined parameters, can be passed into the gesture pipeline for motion analytics.
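To make the filter-and-fuse step concrete, the sketch below shows one way skeletal features might be filtered against user-defined parameters and fused into a single interaction point. This is a minimal illustration, not the GestureWorks Core API: the types (SkeletalFeature, FusionParams, InteractionPoint) and the simple centroid fusion rule are assumptions; the engine's actual pose extraction and fusion methods are richer.

```cpp
// Hypothetical sketch of feature filtering and fusion into an interaction point.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// One skeletal feature extracted from a device frame,
// e.g. a fingertip position with a tracker confidence.
struct SkeletalFeature {
    std::string name;   // e.g. "index_tip"
    float x, y, z;      // position in device space
    float confidence;   // 0..1
};

// User-defined parameters controlling which features may contribute.
struct FusionParams {
    float minConfidence;                // drop low-confidence features
    std::vector<std::string> allowed;   // features permitted to fuse
};

// A fused interaction point ready for the gesture pipeline.
struct InteractionPoint { float x = 0, y = 0, z = 0; int count = 0; };

// Drop features that fail the user-defined filters, then fuse the
// survivors into one point (here, a simple centroid).
InteractionPoint fuse(const std::vector<SkeletalFeature>& features,
                      const FusionParams& p) {
    InteractionPoint pt;
    for (const auto& f : features) {
        if (f.confidence < p.minConfidence) continue;
        if (std::find(p.allowed.begin(), p.allowed.end(), f.name) == p.allowed.end())
            continue;
        pt.x += f.x; pt.y += f.y; pt.z += f.z;
        ++pt.count;
    }
    if (pt.count) { pt.x /= pt.count; pt.y /= pt.count; pt.z /= pt.count; }
    return pt;
}

int main() {
    FusionParams params{0.6f, {"index_tip", "thumb_tip"}};
    std::vector<SkeletalFeature> frame = {
        {"index_tip", 1.0f, 2.0f, 0.0f, 0.9f},
        {"thumb_tip", 2.0f, 1.0f, 0.0f, 0.8f},
        {"wrist",     0.0f, 0.0f, 0.0f, 0.95f},  // filtered out: not in allowed list
    };
    InteractionPoint pt = fuse(frame, params);
    std::cout << "fused point: " << pt.x << ", " << pt.y << "\n";  // 1.5, 1.5
}
```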

Once the temporal-spatial properties of the interaction points have been classified, in the grey area marked “Gesture Manager & Context Fusion” the interaction points are selectively filtered and fused (using typed context fusion) and passed to the gesture event manager, which performs gesture conflict mitigation, sequence analysis, and gesture mapping before dispatching a fully qualified multimodal gesture event.
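The gesture event stage can be sketched in the same spirit. The snippet below illustrates two of the steps named above, conflict mitigation (keeping only the best-scoring candidate when several gestures match) and gesture mapping (translating the winner into an application-level event). All types and names here are hypothetical, not the actual GestureWorks gesture event manager, and sequence analysis is omitted.

```cpp
// Hypothetical sketch of gesture conflict mitigation and gesture mapping.
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A candidate gesture produced by motion analysis of the fused
// interaction points, with a classifier confidence score.
struct GestureCandidate {
    std::string name;   // e.g. "swipe_left", "pinch"
    float score;        // 0..1 match confidence
};

// Conflict mitigation: several gestures can match the same motion;
// keep only the best-scoring candidate above a threshold.
const GestureCandidate* resolveConflict(const std::vector<GestureCandidate>& cands,
                                        float threshold) {
    const GestureCandidate* best = nullptr;
    for (const auto& c : cands)
        if (c.score >= threshold && (!best || c.score > best->score))
            best = &c;
    return best;
}

// Gesture mapping: translate the winning gesture into an
// application-level event name before dispatch.
std::string mapGesture(const std::string& gesture) {
    static const std::map<std::string, std::string> table = {
        {"swipe_left", "page_next"},
        {"pinch",      "zoom"},
    };
    auto it = table.find(gesture);
    return it != table.end() ? it->second : "unmapped";
}

int main() {
    std::vector<GestureCandidate> frame = {{"swipe_left", 0.82f}, {"pinch", 0.41f}};
    if (const auto* winner = resolveConflict(frame, 0.5f))
        std::cout << "dispatch: " << mapGesture(winner->name) << "\n";  // page_next
}
```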

As seen above, this user-configurable interaction engine uses multiple typed inputs from a variety of devices to create multimodal interaction schemes. It can be used to create rich, fluid HCI schemes for applications ranging from desktop, gaming, and mobile to head-mounted-display-based AR and VR.
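As a final illustration of blending typed input from multiple devices, the sketch below routes heterogeneous input samples through a single engine by registering one handler per input type. The InputType set, the MultimodalEngine class, and the device names are all assumptions made for the example.

```cpp
// Hypothetical sketch: typed input from several devices feeding one engine.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Input modalities the engine can blend; this set is illustrative.
enum class InputType { Touch, MotionController, Voice, Gaze };

// A raw, typed input sample from one device.
struct InputSample {
    InputType type;
    std::string device;   // e.g. "tablet", "headset_mic"
    float value;          // simplified payload
};

// The engine routes each sample to the handler registered for its type,
// so heterogeneous devices can share one interaction scheme.
class MultimodalEngine {
public:
    using Handler = std::function<void(const InputSample&)>;
    void registerHandler(InputType type, Handler h) { handlers_[type] = std::move(h); }
    void process(const std::vector<InputSample>& frame) {
        for (const auto& s : frame) {
            auto it = handlers_.find(s.type);
            if (it != handlers_.end()) it->second(s);
        }
    }
private:
    std::map<InputType, Handler> handlers_;
};

int main() {
    MultimodalEngine engine;
    engine.registerHandler(InputType::Touch, [](const InputSample& s) {
        std::cout << "touch from " << s.device << ": " << s.value << "\n";
    });
    engine.registerHandler(InputType::Voice, [](const InputSample& s) {
        std::cout << "voice from " << s.device << ": " << s.value << "\n";
    });
    engine.process({{InputType::Touch, "tablet", 0.7f},
                    {InputType::Voice, "headset_mic", 1.0f}});
}
```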

For more information about Device Markup Language (DML) and structured methods for setting up multiple input devices, see: deviceml.org
For more information about Virtual Control Markup Language (VCML) and blended HCI control schemes, see: virtualcontrolml.org


See also: Interaction Point Index
