Quantifying Gesture Form Analysis (quantitative GFA): Adapting GFA to Gestures in Sign Language and Developing an Interdisciplinary Coding System
Summary of Project Results
This research fellowship promoted the elaboration of gesture form analysis (GFA for short), an approach to gesture conceptualisation which decodes the process of seeing meaningful gesture in the movement of the hand, or any other articulator. Seven spatial operations in different permutations account for the spatial processing from the hand to the referent form in all gesture types. The modular nature of GFA also makes structural comparison possible, since every gesture type is defined by one to six modular building blocks which are constituents of identifying the referent of a gesture. This referent may be a physical object, as is often the case in stereotypical index-finger pointing, but it can just as well be a region of interest, a complete volumetric scene that includes multiple physical objects, or no physical object at all, as when pointing to objects in an imaginary scene or pointing abstractly, for instance metaphorically to a discourse topic. All these loci of interest, ranging from a 0D point via a 1D path structure or a 2D area up to a 3D volume, constitute different referent forms that are created in the mind of the observer by pointing gestures. That pointing gestures involve multidimensional forms is an insight that contributes to dissolving the divide between “iconic” and “deictic” gestures (McNeill, 1992), two gesture groups which do not lend themselves to quantitative investigation because they are not mutually exclusive. In fact, it could be shown that the multidimensional referent forms of gestures almost always share spatial elements commonly associated with both iconicity and deixis. The outcome of this fellowship was to advance GFA theoretically and empirically, to widen its applicability to sign language, and to develop a coding system that allows quantitative coding of data from any corpus containing video material of gestural action.
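The modular structure described above can be illustrated with a minimal data-model sketch. Note the assumptions: the summary does not name the seven spatial operations, so the labels `op1`–`op7` are placeholders, and the class and field names (`GestureType`, `blocks`, `referent_dim`) are hypothetical, not part of the published coding system.

```python
from dataclasses import dataclass
from typing import Tuple

# Placeholder labels: the seven spatial operations are not enumerated in the
# summary, so "op1".."op7" stand in for illustration only.
SPATIAL_OPERATIONS = tuple(f"op{i}" for i in range(1, 8))

# Referent forms span dimensionalities from a 0D point to a 3D volume.
REFERENT_DIMENSIONS = {0: "point", 1: "path", 2: "area", 3: "volume"}

@dataclass(frozen=True)
class GestureType:
    """A gesture type modelled as a sequence of one to six modular building
    blocks, each drawn from the seven spatial operations, together with the
    dimensionality of the referent form it creates in the observer's mind."""
    blocks: Tuple[str, ...]
    referent_dim: int

    def __post_init__(self) -> None:
        assert 1 <= len(self.blocks) <= 6, "GFA allows 1 to 6 building blocks"
        assert all(b in SPATIAL_OPERATIONS for b in self.blocks)
        assert self.referent_dim in REFERENT_DIMENSIONS

    def referent_form(self) -> str:
        return REFERENT_DIMENSIONS[self.referent_dim]

# Example: a stereotypical index-finger point whose referent is a 0D point
# (the two operations chosen here are arbitrary placeholders).
pointing = GestureType(blocks=("op1", "op2"), referent_dim=0)
```

Because gesture types are defined by which building blocks they combine, two types can be compared structurally by comparing their `blocks` sequences, which is what makes the approach amenable to quantitative coding.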
The coding system, precedent codings, as well as video and 3D data were made available at http://hdl.handle.net/11022/1009-0000-0007-C34C-8. As a surprising side effect of collaborations with scholars from the Universidade de São Paulo, the investigation of gesture categories in this project also yielded a system of gestures that can be implemented in gesture controls for personal transportation, connecting the driver to the world outside the car and enabling a driver to guide an otherwise autonomous car by gesture.
Project-Related Publications (Selection)
- Hassemer, J. & Winter, B. (2016). Producing and perceiving gestures conveying height or shape. Gesture 15(3), 404-424. (See online at https://doi.org/10.1075/gest.15.3.07has)