The 20th century was a period of vast growth in dance, especially in Western cultures, with multiple genres being created and codified techniques being developed. Along with the explosion of new works (danced texts) came an upsurge of research into dance and its acceptance as a scholarly discipline. However, research into movement, and into movement-based arts, depends greatly on the ability to peruse documentation beyond static written texts and photographic (still) images. Thus, as visual capture technologies developed, film and/or video (i.e. visual data) became the preferred means of recording and studying a dance work. As beneficial as access to film has been to the discipline, this method of preserving and accessing dance presents its own challenges.
Word   | Sound     | Lead Consonant | Vowel Sound | End Consonant
Strike | s tr i ke | s              | i           | ke
The current practice of viewing hours of film hinders researchers' ability to (a) find movement-derived data, (b) find that data quickly, (c) find data that is accurately described, and (d) reuse the data. Further, while there are standards for preserving video, there are no standards for providing access to it, and any attempt to mine data from a moving image is fraught with difficulty. Therefore, a new model is required, one that exploits advances in computer software and hardware and can enhance research and innovation in movement-based studies in the humanities.
With funds from an NEH Level II Digital Start-up Grant, the ARTeFACT Movement Thesaurus (AMT) uses motion capture technologies to study movement patterns through a corpus of movement-derived data. In the third phase of the ARTeFACT project, a multi-disciplinary project first developed at the University of Virginia, the AMT includes over 200 movements derived from codified techniques: ballet, jazz, modern dance, and tai chi. Prior to motion capture, we defined and categorized each movement ('STEP') in order to develop an ontology (saved as XML files). An eight-camera Vicon system captures individual movements and movement phrases typically seen in the studio and on stage. Using custom MATLAB software, the 3-D data of individual movements are quantified through mathematical interpretation of joint positions, with the ground-truth data from the Vicon motion capture system serving as input for developing the algorithms. In the future, the program, idMove (developed for the DH SUG), will be modified to use only 2-D data.
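To illustrate the kind of mathematical interpretation of joint positions described above, the following sketch (written in Python rather than the project's MATLAB code, and using hypothetical marker values) computes a single joint angle, such as knee flexion, from three captured 3-D positions:

    import numpy as np

    def joint_angle(proximal, joint, distal):
        # Angle (in degrees) at `joint` formed by the segments to `proximal`
        # and `distal`, e.g. hip-knee-ankle markers for knee flexion.
        u = np.asarray(proximal) - np.asarray(joint)
        v = np.asarray(distal) - np.asarray(joint)
        cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # One frame of hypothetical hip, knee, and ankle positions (in metres).
    hip, knee, ankle = (0.0, 0.9, 0.0), (0.05, 0.5, 0.1), (0.05, 0.1, 0.05)
    print(round(joint_angle(hip, knee, ankle), 1))

The sketch is illustrative only; it is not the idMove software itself.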
The second planned component is the development of algorithms to convert 2-D images from video into files of position data (in practice this second component, creating position data from video, would be applied first, in order to generate the data for movement identification). These two components will be worked on in parallel. In addition, work will be undertaken to examine and improve the robustness of the algorithms when data sets are incomplete. We will validate that the code works with dancers of different morphologies and levels of ability. There are also often times in dances when body parts are obscured by other dancers or by the dancer's own body, and we will develop our algorithms to work when sections of movements cannot be seen. Finally, we will consider movement phrases (series of dance moves); the current software is designed for films of a single dance move, and we will develop the ability to identify the individual moves within a string of moves.
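One simple strategy for coping with occluded body parts, offered here only as an illustration and not as the project's chosen method, is to interpolate a marker's position across the frames in which it cannot be seen:

    import numpy as np

    def fill_occluded(trajectory):
        # `trajectory` is an (n_frames, 3) array of one marker's positions;
        # frames in which the marker was occluded are recorded as NaN.
        traj = np.asarray(trajectory, dtype=float).copy()
        frames = np.arange(len(traj))
        for axis in range(traj.shape[1]):
            missing = np.isnan(traj[:, axis])
            if missing.any() and not missing.all():
                traj[missing, axis] = np.interp(
                    frames[missing], frames[~missing], traj[~missing, axis])
        return traj

    # Hypothetical x/y/z track with two occluded frames.
    track = [[0.0, 1.0, 0.2], [np.nan] * 3, [np.nan] * 3, [0.3, 1.0, 0.2]]
    print(fill_occluded(track))

More sophisticated approaches (e.g. model-based prediction of the hidden limb) may ultimately be required, but gap-filling of this kind indicates how incomplete data sets can still be processed.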
As choreographers rarely create dances based solely on individual STEPS, preferring to use them to create a new vocabulary according to the requirements of the dance work, we are moving beyond the codified technical movements to incorporate conceptualized movements into the AMT. At this time, we are using Lakoff and Johnson's work on conceptual metaphor as a basis for pattern recognition of embodied semantics, and we plan to use corpus linguistics methods as the basis for a statistical analysis of the STEPS (words) 'spoken' in a dance work. This approach is admittedly problematic, in that a movement phrase does not parallel a written phrase; nevertheless, we continue to work with the idea of statistical analysis against distinct movement vocabularies created as representative of a concept. In this way we are creating a lexicon of dance based both on technical descriptions of the moves (STEPS) and on theme-based moves. We are striving toward a future in which researchers will be able to upload videos and have the dance 'annotated' by the AMT software for data mining of movement-based texts.
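As a sketch of how corpus linguistics methods might be applied to STEPS treated as words, the fragment below counts the frequencies of individual STEPs and of two-STEP sequences in a hypothetical annotated phrase (the step labels are illustrative examples, not drawn from the AMT):

    from collections import Counter

    def step_statistics(steps, n=2):
        # Frequencies of individual STEPs and of n-STEP sequences in a danced 'text'.
        unigrams = Counter(steps)
        ngrams = Counter(zip(*(steps[i:] for i in range(n))))
        return unigrams, ngrams

    phrase = ["plie", "passe", "fondu", "passe", "frappe", "passe", "fondu"]
    counts, bigrams = step_statistics(phrase)
    print(counts.most_common(3))   # most frequent STEPs
    print(bigrams.most_common(2))  # most frequent two-STEP sequences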
Dance movement, as a non-verbal language, cuts across cultures without the need for 'translation': the body speaks through a kinesthetic voice. While appreciating that there may be cultural differences at work in choreography, in Western theatre dance there is a generalized understanding of movement techniques and vocabulary, so most viewers can respond to a work and grasp the meaning of a dance. In other words, there is a set of movements that can be read either through an understanding of the technical form or through mutual conceptual frameworks. That said, the most common verbal languages used in dance are English and French, with steps codified to such an extent that dancers and researchers the world over understand a passé, a fondu, a frappé, a fouetté, a flat back, a brush knee, etc. Thus, we have begun loading the AMT with codified movements. This will allow researchers to view these movements, performed by a subject-matter expert, via the step name or via individual movements of body parts (at this time, the knee and foot). By extending the AMT to include conceptual movements, we will enable researchers to search based on an idea (the first conceptual set we are including incorporates movements based on the conceptual metaphor Conflict).
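A minimal sketch of such a lookup, assuming a hypothetical in-memory structure rather than the AMT's actual XML ontology, might index each entry both by step name and by the body parts whose movements define it:

    # Hypothetical entries: each records a STEP name, its source technique,
    # and the body parts indexed for it (currently the knee and foot).
    ENTRIES = [
        {"step": "passe", "technique": "ballet", "body_parts": {"knee", "foot"}},
        {"step": "brush knee", "technique": "tai chi", "body_parts": {"knee"}},
    ]

    def find_by_step(name):
        return [e for e in ENTRIES if e["step"] == name]

    def find_by_body_part(part):
        return [e for e in ENTRIES if part in e["body_parts"]]

    print(find_by_step("passe"))       # look up a codified STEP by name
    print(find_by_body_part("knee"))   # look up STEPs by body part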
We will present the NEH-funded portion of the ARTeFACT project: the AMT. This is a major step toward providing access to movement-derived data through data mining technologies. By using motion capture technologies, we are developing a sophisticated, open-source tool that can help make film searchable for single movements and movement phrases. By bringing together engineers, movement specialists, and mathematicians, we will forge ahead to break new ground in movement research and take one step closer to the creation of an automated means of mining danced texts and filmed movements.
Ahmad, K., A. Salway, J. Lansdale, H. Selvaraj, and B. Verma (1998). (An)Notating Dance: Multimedia Storage and Retrieval. In Proceedings of the International Conference on Computational Intelligence and Multimedia Applications. Singapore: World Scientific, p. 788.
Bailey, H., M. Bachler, S. Buckingham Shum, A. Le Blanc, S. Popat, A. Rowley, and M. Turner (2009). Dancing on the Grid: Using e-Science Tools to Extend Choreographic Research. Philosophical Transactions of the Royal Society A 367(1898): 2793.
Coartney, J., and S. Wiesner (2009). Performance as Digital Text: Capturing Signals and Secret Messages in a Media-Rich Experience. Literary and Linguistic Computing 24: 153.
Lakoff, G., and M. Johnson (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.
Lakoff, G., and M. Johnson (1980). Metaphors We Live By. Chicago: U of Chicago P.
Starkweather, J. A. (2003). Overview: Computer-Aided Approaches to Content Recognition. In G. Gerbner et al. (eds.), The Analysis of Communication Content. New York: John Wiley & Sons, p. 339.
Turner, V. (1974). Dramas, Fields, and Metaphors. Ithaca: Cornell UP.
Wiesner, S. L. (2011). Framing Dance Writing: A Corpus Linguistics Approach. Saarbrücken: Lambert Academic Publishing.