
      Open Access

      YomeciLand x Bunjil Place: The sounding body as play

      proceedings-article
      Proceedings of EVA London 2020 (EVA 2020)
      AI and the Arts: Artificial Imagination
      6th July – 9th July 2020
      Play, Sounding, Performance, Interactive installation, Sound recognition

            Abstract

            The advancing capabilities of computational systems to learn and adapt autonomously from datasets have provided new opportunities for designers and artists in their creative practice. This paper examines YomeciLand x Bunjil Place (Nguyen 2019), a playable sound-responsive installation that uses audio recognition to capture, recognise and categorise human sounds as a form of input to evolve a virtual environment of ‘artificial’ lifeforms. The potential of artificial intelligence in creative practice has recently drawn considerable interest; however, our understanding of its application in sound practice is only emerging. The project is analysed in relation to three key themes: artificial intelligence for sound recognition, the ‘sounding body’ as play, and digital audiovisual composition as performance. In doing so, the research presents a framework for how artificial intelligence can aid sound recognition in a sound-responsive installation, with YomeciLand x Bunjil Place shared as a case study to demonstrate this in practice.
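
            A minimal, hypothetical sketch of the kind of pipeline the abstract describes: recognised human sounds mapped to behaviours that evolve a population of virtual lifeforms. This is not the authors' implementation; classify_clip, Lifeform and SOUND_TO_BEHAVIOUR are placeholder names, and classify_clip stands in for whatever audio-event classifier the installation uses (the references point to Audio Set-style CNN audio classification).

# Hypothetical sketch only; every name here is a placeholder, not the installation's code.
import random
from dataclasses import dataclass, field

@dataclass
class Lifeform:
    energy: float = 1.0
    traits: dict = field(default_factory=dict)

# Assumed mapping from recognised human sounds to lifeform behaviours.
SOUND_TO_BEHAVIOUR = {
    "clapping": "multiply",
    "whistling": "glow",
    "speech": "gather",
    "laughter": "scatter",
}

def classify_clip(waveform):
    """Placeholder for a real audio-event classifier (e.g. a CNN trained on an
    Audio Set-like ontology); returns a (label, confidence) pair."""
    label = random.choice(list(SOUND_TO_BEHAVIOUR))  # stand-in prediction
    return label, random.uniform(0.5, 1.0)

def evolve(world, waveform, threshold=0.6):
    """Apply one recognition result to the population of lifeforms."""
    label, confidence = classify_clip(waveform)
    if confidence < threshold:
        return world  # ignore low-confidence recognitions
    behaviour = SOUND_TO_BEHAVIOUR[label]
    if behaviour == "multiply":
        world.append(Lifeform())
    else:
        for creature in world:
            creature.energy += confidence
            creature.traits[behaviour] = creature.traits.get(behaviour, 0) + 1
    return world

world = [Lifeform() for _ in range(3)]
world = evolve(world, waveform=None)  # a real system would pass a microphone buffer here

            In the installation itself the recognition output would presumably arrive continuously from live microphone input; the sketch shows only the mapping from a classification result to a change in the virtual environment.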


            Author and article information

            Contributors
            Conference
            July 2020
            Pages: 91–95
            Affiliations
            [0001] RMIT University, Melbourne VIC 3000, Australia
            [0002] RMIT University Vietnam, Ho Chi Minh City, Vietnam
            Article
            DOI: 10.14236/ewic/EVA2020.15
            ID: 2644efb3-8047-49fd-912e-040d57117f71
            © Riley et al. Published by BCS Learning & Development Ltd. Proceedings of EVA London 2020

            This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

            Proceedings of EVA London 2020
            EVA 2020
            30
            London
            6th July – 9th July 2020
            Electronic Workshops in Computing (eWiC)
            AI and the Arts: Artificial Imagination
            History
            Product

            ISSN: 1477-9358 (BCS Learning & Development)

            Self URI (article page): https://www.scienceopen.com/hosted-document?doi=10.14236/ewic/EVA2020.15
            Self URI (journal page): https://ewic.bcs.org/
            Categories
            Electronic Workshops in Computing

            Applied computer science, Computer science, Security & Cryptology, Graphics & Multimedia design, General computer science, Human-computer interaction
            Sound recognition, Performance, Play, Interactive installation, Sounding

            REFERENCES

            1. (2009). Play and the experience of interactive art. Doctoral dissertation.

            2. (2004). Cultural probes and the value of uncertainty. Interactions, 11(5), 53–56.

            3. (2017). Audio Set: An ontology and human-labeled dataset for audio events. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5–9 March 2017.

            4. (2017). CNN architectures for large-scale audio classification. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5–9 March 2017.

            5. Nguyen (2019). YomeciLand x Bunjil Place (interactive installation).

            6. (2010). Developing a language of interactivity through the theory of play. Doctoral dissertation.

            7. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

            8. (2018). Multi-level attention model for weakly supervised audio classification. arXiv preprint arXiv:1803.02353.
