Welcome to the GestureML Wiki
Gesture Markup Language (GML) is an XML-based multitouch gesture user interface language. GML can be used in combination with CML to create rich, dynamically defined user experiences.
GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between interactive objects in an application. Gesture Markup Language has been designed to enhance the development of multi-user, multitouch, and other HCI-device-driven applications.
The declarative form of GML can be used to create complete, human-readable descriptions of multitouch gesture actions and to specify how events and commands are generated in an application layer. When GML is used with a GestureWorks engine in combination with Creative Markup Language (CML), objects can be dynamically constructed along with well-defined, dynamic display properties and interactive behaviors.
Central to the design of GML's structure is the conceptual framework of Objects, Containers, Gestures, and Manipulators (OCGM). In conjunction with OCGM, GML includes methods for defining Human Computer Interaction (HCI) design principles such as affordances and feedback. One of the primary goals of GML is to present a standard markup language for integrating a complete range of Natural User Interface (NUI) modes and models, allowing for the creation of multiple discrete or blended user interfaces. GML can be used to construct gestures for a wide variety of input methods such as tangible objects, touch surfaces, body tracking, accelerometers, voice, and brain-wave devices. When combined with CML, GML is designed to enable the development of the complete spectrum of post-WIMP NUIs (or RBIs) such as organic UIs, zoomable UIs, augmented reality, haptics, multi-user systems, and fully immersive multitouch environments. Key features of GML include:
- Custom multitouch gesture definition
- Runtime gesture editing
- Gesture action matching
- Gesture mapping
- Gesture value boundaries
- Gesture delta limits
- Gesture set definition
- Device specific gestures
- Input specific gestures
- Continuous and discrete gesture definitions
- CML support
GML has been developed as an open standard that can be used to rapidly create and share gestures for a wide variety of Human Computer Interaction (HCI) devices. Promoting these features through a user-friendly method for shaping complex interactions provides a cornerstone with which to build the next generation of dynamic, production-level HCI applications.
Current implementations of GML in the form of an external gesture engine (as in GestureWorks3) present 100+ base gestures that can be integrated directly into an application layer using an open gesture event protocol. This model effectively provides a virtually unlimited number of possible gestures, each with the potential to be recast or refined after related applications have been compiled and distributed. This approach puts interaction development directly into the hands of the UX designer and even allows for end-user management.
Examples of Use
From a UI development standpoint, multitouch gestures are relatively new, and in many cases best practices for UX development have remained closely linked to application type and to the available devices or modes. To effectively explore new UX paradigms, a complete gesture description must not only provide inherent flexibility in the way gestural input is recognized and mapped within applications but also remain outside the compiled application. Loosely coupling gesture recognition to the application in this manner provides a standard method for dynamically defining gestures. This model allows users to define equivalent gestures, or variable gesture modes, for different input and device types without requiring further application-level development.
For example, as multitouch input devices continue to increase the number of supported touch points and grow in size, touch screen UX is seeing a shift towards full-hand multitouch and multi-user application spaces. Providing methods by which developers can create gestures that use a two-finger pinch to zoom or a five-finger zoom will be an essential step in developing multitouch software. Examples of this approach can be seen at GestureWorks.com and OpenExhibits.org.
Each gesture defined in a GML document uses a fully customizable system that can be conceptually broken down into a four-step process:
The first step is the definition of the gesture action. This definition is used to match the behavior of the input device and trigger entry into the gesture pipeline. It can be as simple as defining the minimum number of touch points or as detailed as describing a vector path.
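For instance, the <match> block of the “n-drag” gesture shown in the example below accepts a cluster of one to five touch points, with no minimum translation required before the gesture is triggered:

    <match>
        <action>
            <initial>
                <!-- accept between 1 and 5 touch points; a translation_threshold of 0 means any movement triggers the gesture -->
                <cluster point_number="0" point_number_min="1" point_number_max="5" translation_threshold="0"/>
            </initial>
        </action>
    </match>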
The second step is the assignment of the analysis module. Currently, GML allows you to select an analysis module from the set of built-in compiled algorithms. However, the GML specification is also designed to accommodate custom code blocks that can be evaluated at run time and inserted directly into the gesture processing pipeline.
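In the “n-drag” example below, the <analysis> block selects the built-in drag module from the compiled library and declares the two values it returns:

    <analysis>
        <algorithm>
            <!-- use the built-in, compiled "drag" analysis module -->
            <library module="drag"/>
            <returns>
                <!-- values returned by the algorithm for downstream processing and mapping -->
                <property id="drag_dx"/>
                <property id="drag_dy"/>
            </returns>
        </algorithm>
    </analysis>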
The third step is the establishment of post-processing filters. For example, values returned from the gesture analysis algorithm can be passed through a simple low-pass filter, which helps smooth out high-frequency noise that can present in the form of touch point “jitter”. This “noise filter” helps smooth out such errors and reduce the wobble effect. The values returned from the noise filter can in turn be fed into a secondary “inertial” filter that gives gestures the effect of inertial mass and friction, attributing pseudo-physical behavior to the touch objects associated with the gesture. In this way multiple cumulative filters can be applied to the gesture pipeline, in much the same way that multiple filters can be stacked on a display object in popular image editing applications.
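The <processing> block of the “n-drag” example below applies a single inertial filter; additional filters placed in the same block are applied cumulatively:

    <processing>
        <inertial_filter>
            <!-- carry residual motion after the touch points are released, decayed by a friction coefficient -->
            <property ref="drag_dx" release_inertia="true" friction="0.996"/>
            <property ref="drag_dy" release_inertia="true" friction="0.996"/>
        </inertial_filter>
    </processing>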
The fourth and final step in defining a gesture using GML is a description of how to map the values returned from analysis and processing directly to a defined touch object property, or to a gesture event value for a gesture event dispatched on the touch object.
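In the “n-drag” example below, the <mapping> block binds each returned value to a target value on the dispatched gesture event, gated by delta thresholds:

    <mapping>
        <update>
            <gesture_event>
                <!-- map drag_dx/drag_dy to the event's x and y values; per-update changes outside [delta_min, delta_max] are ignored -->
                <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.01" delta_max="100"/>
                <property ref="drag_dy" target="y" delta_threshold="true" delta_min="0.01" delta_max="100"/>
            </gesture_event>
        </update>
    </mapping>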
With these four steps, GML can be used to define surface gestures by performing configured geometric analysis on clusters of points or on single touch points. The returned values can then be easily processed and assigned to customizable display object properties. This can be done at runtime without re-compiling, which effectively separates gesture interactions from application code, externalizing the scripting of touch UI/UX and enabling interaction designers to work alongside application developers.
A single GML document can be used to define all the gestures used in an application. These gestures can be divided into groups called gesture sets. Each gesture set consists of a series of defined gestures, or “gesture objects”, which can be selectively applied to any touch object defined in the CML or in the application code.
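A minimal sketch of this document structure is shown below; the root and set element names (GestureMarkupLanguage, Gesture_set) follow the GestureWorks GML schema, though exact names may vary between versions, and the set name is illustrative only:

    <GestureMarkupLanguage>
        <Gesture_set gesture_set_name="basic-manipulation">
            <Gesture id="n-drag" type="drag">
                <!-- match, analysis, processing, and mapping blocks as in the example below -->
            </Gesture>
            <Gesture id="n-rotate" type="rotate"> <!-- ... --> </Gesture>
            <Gesture id="n-scale" type="scale"> <!-- ... --> </Gesture>
        </Gesture_set>
    </GestureMarkupLanguage>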
GML Example Syntax
<Gesture id="n-drag" type="drag">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="5" translation_threshold="0"/>
            </initial>
        </action>
    </match>
    <analysis>
        <algorithm>
            <library module="drag"/>
            <returns>
                <property id="drag_dx"/>
                <property id="drag_dy"/>
            </returns>
        </algorithm>
    </analysis>
    <processing>
        <inertial_filter>
            <property ref="drag_dx" release_inertia="true" friction="0.996"/>
            <property ref="drag_dy" release_inertia="true" friction="0.996"/>
        </inertial_filter>
    </processing>
    <mapping>
        <update>
            <gesture_event>
                <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.01" delta_max="100"/>
                <property ref="drag_dy" target="y" delta_threshold="true" delta_min="0.01" delta_max="100"/>
            </gesture_event>
        </update>
    </mapping>
</Gesture>
Editing Gestures in GML
Editing point min max number
Editing gesture name
Activating and adjusting delta limits
Activating and adjusting gesture boundaries
Adding code comments to gestures
Editing hold event duration
Editing tap event translation max threshold
Editing double tap interevent duration
Editing flick gesture acceleration min threshold
Editing swipe gesture acceleration max threshold
Editing rotate gesture touch inertia
Editing drag gesture release inertia
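As a rough sketch of a few of the edits listed above, the following fragments reuse attributes from the “n-drag” example; the values shown are illustrative only:

    <!-- editing point min/max number: require two to four touch points -->
    <cluster point_number="0" point_number_min="2" point_number_max="4" translation_threshold="0"/>

    <!-- activating and adjusting delta limits on a mapped property -->
    <property ref="drag_dx" target="x" delta_threshold="true" delta_min="0.05" delta_max="500"/>

    <!-- editing drag gesture release inertia and friction -->
    <property ref="drag_dx" release_inertia="true" friction="0.9"/>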
GML Example Index
Simple GestureML Descriptions (surface touch gestures)
Simple N point drag gesture “n-drag”
One point drag gesture “1-finger-drag”
Two point drag gesture “2-finger-drag”
Three point drag gesture “3-finger-drag”
Four point drag gesture “4-finger-drag”
Five point drag gesture “5-finger-drag”
Simple N point rotate gesture “n-rotate”
Two point rotate gesture “2-finger-rotate”
Three point rotate gesture “3-finger-rotate”
Four point rotate gesture “4-finger-rotate”
Five point rotate gesture “5-finger-rotate”
One point pivot gesture “1-finger-pivot”
Simple N point scale gesture “n-scale”
Two point scale gesture “2-finger-scale”
Three point scale gesture “3-finger-scale”
Four point scale gesture “4-finger-scale”
Five point scale gesture “5-finger-scale”
Simple N point hold
Simple N point tap
Simple N point double tap
Simple N point triple tap
Simple N point flick gesture “n-flick”
Simple N point swipe gesture “n-swipe”
Simple N point scroll gesture “n-scroll”
Three point tilt “3-finger-tilt”
Five point orient “5-finger-orient”
Advanced GestureML Descriptions (surface gestures)
N point noise filtered rotate gesture “n-rotate”
N point drag gesture with physics filter “n-drag”
N point scale gesture with physics filter “n-scale”
N point rotate gesture with physics filter “n-rotate”
One point pivot gesture with physics filter “1-finger-pivot”
N point rotate gesture with physics & noise filter “n-rotate”
Simple mapping with target change
Setting property boundaries
Setting delta thresholds
Geometrically defined clusters
Pressure augmented gestures
Key Benefits of GML
- Consistent model for understanding and describing gesture analysis
- Separation of interactions and behaviors from content
- Easy-to-read XML structure
- A range of gestures can be defined for a single application
- Allows for crowdsourcing gesture development
- Device agnostic
- Input method agnostic
- NUI + OCGM structure for developing flexible UX models
- XML-based open standard, easy to post and share GML
- Clear separation between touch input protocol and gesture definition
- Simple method for describing a complete gesture library
- Native transformations
- Ad-hoc blended interactions (cumulative transformations)
- Manageable complexity (gesture block principle)
Proposed Expansion of Schema
- Custom gesture event definitions, type, value
- Explicit gesture event targeting
- Uploadable user profiles and preferred interfaces
- Device based gesture definitions
- Direct UI/UX state integration
- Gesture sequence definitions
- Compound gesture definitions
- Direct gesture feedback definitions
- Direct gesture command and event definitions
Frameworks Using GML
- GestureWorks3: ActionScript3 framework for use with Flash, Flex, and AIR (uses GML, CML, and CSS)
- OpenExhibits2: ActionScript3 framework for use with Flash, Flex, and AIR (uses GML, CML, and CSS)