Physicalizing Time Through Orientational Metaphors for Generating Rhythmic Gestures

Electronic Visualisation and the Arts (EVA)

9 - 13 July 2018

Keywords: Interactive performance, Generating rhythmic gestures, Dance and music interaction

Abstract

Possibilities for cross-disciplinary interactive performance continue to grow as new tools are developed and adapted. Yet the qualitative aspects of cross-disciplinary interaction have not advanced at the same rate. We suggest that new models for understanding gesture in different media will support the development of nuanced interaction for interactive performance. We have explored this premise by considering models for generating musical rhythmic gestures that enable implicit interaction between the gestures of a dancer and the generated music. We create and implement a model for generating dynamic rhythmic gestures that flow in, around, or out of goal points. Goal points can be layered and quantized to a meter, providing the rhythmic structure expected in music, while the figurations enable the generated rhythms to flow with the performer, responding to the more qualitative aspects of the performance.
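
As a rough illustration of the goal-point idea, here is a minimal Python sketch. It assumes nothing about the authors' actual implementation (the names figuration and quantize, and the mode labels, are ours): events flow in, around, or out of a goal point, and only the goal itself is snapped to the metric grid, so the surrounding figuration keeps its expressive, unquantized offsets.

```python
import random

SUBDIVISION = 4            # quantization grid: sixteenth notes in a 4/4 bar

def quantize(t, grid=1.0 / SUBDIVISION):
    """Snap a time (in beats) onto the metric grid."""
    return round(t / grid) * grid

def figuration(goal_beat, mode, n_events=3, spread=0.5):
    """Generate event times that flow in, around, or out of a goal point."""
    goal = quantize(goal_beat)           # only the goal is metrically fixed
    if mode == "in":                     # approach: events lead up to the goal
        return sorted(goal - random.uniform(0, spread) for _ in range(n_events)) + [goal]
    if mode == "around":                 # circle: events scatter on both sides
        return sorted(goal + random.uniform(-spread, spread) for _ in range(n_events))
    if mode == "out":                    # depart: the goal first, then trailing events
        return [goal] + sorted(goal + random.uniform(0, spread) for _ in range(n_events))
    raise ValueError(mode)

# One goal point on the third beat of a 4/4 bar, rendered three ways:
for mode in ("in", "around", "out"):
    print(mode, [round(t, 2) for t in figuration(goal_beat=2.0, mode=mode)])
```

Layering several such goal points at different metric positions would give the quantized rhythmic backbone the abstract describes, while the unquantized figurations carry the qualitative, performer-responsive motion.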

Most cited references (4)

Mental representations for musical meter.

Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates one meter from another and coincides with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music.
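
The multileveled hierarchy of relative accent strength that this study reports can be made concrete with a small sketch (our illustration, not the paper's materials): in 4/4 at sixteenth-note resolution, a position's accent strength is the number of metric levels on which it falls.

```python
def accent_strength(position, bar_length=16, densities=(1, 2, 4, 8, 16)):
    """Accent strength of a sixteenth-note position in a 4/4 bar: the number
    of metric levels (bar, half, quarter, eighth, sixteenth, given as events
    per bar) whose grid the position lands on."""
    return sum(1 for d in densities if position % (bar_length // d) == 0)

print([accent_strength(p) for p in range(16)])
# -> [5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```

The downbeat is strongest (5), the half-bar position next (4), then the remaining quarter-note beats (3), mirroring the kind of inferred accent hierarchy used to explain the goodness-of-fit and memory-confusion results.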

Twelve-Tone Rhythmic Structure and the Electronic Medium

M. Babbitt (1962)

Generating Time: Rhythmic Perception, Prediction and Production with Recurrent Neural Networks

In the quest for a convincing musical agent that performs in real time alongside human performers, the issues surrounding expressively timed rhythm must be addressed. Current beat tracking methods are not sufficient to follow rhythms automatically when dealing with varying tempo and expressive timing. In the generation of rhythm, some existing interactive systems ignore the pulse entirely, or fix a tempo after some time spent listening to input. Since music unfolds in time, we take the view that musical timing needs to be at the core of a music generation system. Our research explores a connectionist machine learning approach to expressive rhythm generation, based on cognitive and neurological models. Two neural network models are combined within one integrated system. A Gradient Frequency Neural Network (GFNN) models the perception of periodicities by resonating nonlinearly with the musical input, creating a hierarchy of strong and weak oscillations that relate to the metrical structure. A Long Short-term Memory Recurrent Neural Network (LSTM) models longer-term temporal relations based on the GFNN output. The output of the system is a prediction of when in time the next rhythmic event is likely to occur. These predictions can be used to produce new rhythms, forming a generative model. We have trained the system on a dataset of expressively performed piano solos and evaluated its ability to accurately predict rhythmic events. Based on the encouraging results, we conclude that the GFNN-LSTM model has great potential to add the ability to follow and generate expressive rhythmic structures to real-time interactive systems.
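
The two-stage architecture can be sketched as follows. This is a toy simplification under stated assumptions, not the authors' code: a bank of damped linear resonators stands in for the nonlinear GFNN, and a small PyTorch LSTM reads the bank's amplitude envelopes to predict whether an onset occurs at the next step.

```python
import numpy as np
import torch
import torch.nn as nn

SR = 100                              # onset-signal sample rate (Hz)
FREQS = np.geomspace(0.5, 8.0, 32)    # log-spaced oscillation rates, 0.5-8 Hz

def oscillator_bank(onsets, damping=0.98):
    """Drive damped complex resonators with an onset impulse train.
    A linear stand-in for the GFNN: returns a (time, n_oscillators) amplitude
    matrix, largest for oscillators that resonate with the input rhythm."""
    z = np.zeros(len(FREQS), dtype=complex)
    out = np.empty((len(onsets), len(FREQS)))
    for t, x in enumerate(onsets):
        z = damping * z * np.exp(2j * np.pi * FREQS / SR) + x
        out[t] = np.abs(z)
    return out

class RhythmPredictor(nn.Module):
    """LSTM over oscillator amplitudes -> probability of an onset next step."""
    def __init__(self, n_osc=len(FREQS), hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_osc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, n_osc)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h[:, -1]))

# Two seconds of quarter notes at 120 BPM (2 Hz) as an impulse train:
onsets = np.zeros(2 * SR)
onsets[:: SR // 2] = 1.0
features = torch.tensor(oscillator_bank(onsets), dtype=torch.float32)[None]
print(RhythmPredictor()(features))    # untrained, illustrative output
```

Trained on expressively performed data, the LSTM's output over time would give the event-time predictions the abstract describes; the real GFNN replaces the linear bank with nonlinear oscillators that entrain to the input.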

Author and article information

Conference: Electronic Visualisation and the Arts (EVA), London, UK, 9 - 13 July 2018
Proceedings: Electronic Workshops in Computing (eWiC), July 2018, pp. 280-286
Affiliations: Columbia College Chicago, Chicago, IL, USA; Illinois State University, Bloomington, IL, USA
DOI: 10.14236/ewic/EVA2018.54

© Corness et al. Published by BCS Learning and Development Ltd. Proceedings of EVA London 2018, UK

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
