Invisible War: An Audio-Visual Installation with Laser Light and Twitter API Data

Ziwei Wu, Artist and Researcher, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, zwubu@connect.ust.hk
Shuai Xu, Artist and Researcher, Alibaba (China) Co., Ltd., 969 West Wen Yi Road, Yu Hang District, Hangzhou, China, xushuai2018@foxmail.com
Yingyi Wang, Artist and Researcher, China Academy of Arts, Hangzhou, Zhejiang, China, wang_yingyi0708@sina.com


INTRODUCTION
"Invisible War" is an audio-visual installation consisting of a series of parametric data sculptures that aim to visualise the forms of cyber violence around us. This is achieved through a unique artistic approach: fetching data from the Twitter API and mapping it onto an immersive environment built with laser light. The idea of this project is to explore the connections between the Internet environment and cyber violence through media arts. For the data visualisation, we fetch comments from social media that are perceived as offensive, inflammatory and abusive. In situations that can be characterised as cyber violence, the Internet no longer appears as a space that encourages free and equal speech, but as a space full of hateful emotions conveyed by netizens who look for ways to vent their anger in disregard of fairness and justice.
With our data-generated artwork "Invisible War," we intend to make the invisible visible by revealing the conditions under which cyber violence happens. This is realised by giving form to the fetched social media data through artistic methodologies used in media arts, in order to create a new way for viewers to experience the Internet environment by arousing effects of immersion and phantasmagoria towards their surroundings. Ideally, this project contributes to the contemporary discourse around cyber violence, which has gained increasing significance in the 21st century.

INSPIRATION AND CONCEPT
This project was inspired by a real experience of cyber violence that happened to a friend of ours, who was rampantly abused in words by netizens over an unrelated matter. No matter what she said or how she tried to explain, nothing helped to stop the violence. This experience of being immersed in malicious information deeply affected her working status and even her family's daily life. What happened next was that the same netizens who attacked her quickly turned to another hot topic with a new victim; their act of sending malicious messages across the screen had simply become a gesture of venting their anger.
In the era of big data, most people enjoy the conveniences brought by technological developments, but not all of us consider the negative effects, or how easily we can be "stripped bare" in online and offline spaces by the same technologies. The anticipated impact of this work is therefore to reveal the invisible negative effects of the Internet environment and to explore the phenomenon of cyber violence through a visual and audio presentation generated from real-time data. Laser light is used to simulate the movement of sending messages and to recreate the violent atmosphere among the viewers, who are expected to experience the situation of cyber violence in a different form. The work attempts to reveal how some netizens vent their anger by tapping on the keyboard and delivering demagogic and slanderous comments.
By combining data visualisation, augmented data sculptures and audio-visual effects generated by a laser light installation, this project debuts new advances in technology that allow viewers to interpret cyber violence through aesthetic realisation. We use UE4 as the main platform to trigger visuals, sound and laser light simultaneously. Real-time data from the Twitter API is fetched and fed into UE4 to generate visual effects. For the final installation, we will connect external laser light devices to the visual and audio effects.
These data sets constitute the building blocks of the unique algorithms used in the multi-dimensional visual structures on display. Viewers can also observe how the trending topics continuously change.

TECHNICAL APPROACHES
From the software design point of view, the system is designed to meet three criteria:
1. The system is capable of running modern standard graphics.
2. The extendibility of the system allows future updates and adaptation.
3. The architecture allows multiple clients (exhibitions of the artwork) to run concurrently.
There are three major components in our system: a Twitter API fetch system (written in Node.js), a visualisation system built in Unreal Engine, and a socket.io communication mechanism that allows the two components to communicate with each other. We use the Twitter API and a Node.js Twitter module as the key components for data fetching. By filtering the search results, we are able to keep updating the stream on one specific topic asynchronously. Using the socket.io protocol, a stable and consistent connection is established between the Node.js code deployed on a remote server and the Unreal Engine code running locally. The Node.js code starts a socket.io server that distributes the filtered tweet stream fetched from the Twitter API.
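The filter-and-distribute step above can be sketched as follows. This is a minimal illustration in Node.js-style JavaScript, not the actual project code: the function and field names (`matchesTopic`, `toParticleEvent`, the event name `particle`) are assumptions, and the Twitter stream client and socket.io wiring are only indicated in comments.

```javascript
// Keep only tweets that mention the tracked topic (case-insensitive).
function matchesTopic(tweet, topic) {
  return tweet.text.toLowerCase().includes(topic.toLowerCase());
}

// Shape a raw tweet into the compact event the visualisation consumes.
function toParticleEvent(tweet) {
  return {
    id: tweet.id,
    length: tweet.text.length,                 // could drive particle size
    isRetweet: tweet.text.startsWith('RT @'),  // could drive the recursion effect
    timestamp: tweet.created_at,
  };
}

// Hypothetical wiring (requires the socket.io package and a Twitter
// stream client; shown only to indicate where these helpers would sit):
//
// const io = require('socket.io')(3000);
// stream.on('tweet', (t) => {
//   if (matchesTopic(t, trackedTopic)) io.emit('particle', toParticleEvent(t));
// });
```

Keeping the payload small and flat like this makes it straightforward for the Unreal Engine client to parse each event into particle parameters.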
In Unreal Engine, a particle system is adapted as the main component of the visualisation. When a data stream on a specific subject is received from Twitter, a particle is generated and triggered within this system; the speed and attributes of the particle are mapped in correspondence with the messages sent by Twitter users. Retweets are also represented in the system, in order to recreate the recursive process and impact of a discussion happening on Twitter.
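The mapping from message attributes to particle parameters can be illustrated with a small sketch. The real mapping runs inside Unreal Engine; the JavaScript below, with its parameter names and clamping ranges, is an assumption made only to show the shape of the logic.

```javascript
// Illustrative tweet-to-particle mapping (names and ranges are assumptions).
function tweetToParticle(tweet) {
  const clamp = (v, lo, hi) => Math.min(hi, Math.max(lo, v));
  return {
    // Longer messages travel faster, evoking more forceful "attacks".
    speed: clamp(tweet.text.length / 28, 0.5, 10),
    // Retweets spawn child particles, recreating the recursive spread.
    children: clamp(tweet.retweet_count, 0, 50),
    // Spawn position jittered so simultaneous tweets do not overlap.
    spawnX: Math.random() * 2 - 1,
  };
}
```

Clamping each parameter keeps extreme outliers (very long tweets, viral retweet counts) from breaking the visual balance of the scene.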

FURTHER DEVELOPMENT
There are numerous possibilities for developing the current software architecture. Benefiting from the open-source community and the extendable nature of socket.io, the system could be extended in various ways:

Hardware interaction (laser/ lighting fixtures)
Our original goal was to exhibit this work in physical space; however, at the time of writing, this has become infeasible due to the COVID-19 outbreak. If conditions allow, the current system could be converted into a physical format in the future, for example by adding controllable lighting fixtures. The open-source community allows us to integrate lighting control via the DMX512 protocol (a standard for digital communication networks commonly used to control stage lighting and effects).
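At the protocol level, a DMX512 universe is a frame consisting of a start code byte (0x00 for standard dimmer data) followed by up to 512 channel values in the range 0-255. The sketch below builds such a frame in JavaScript; the transport layer (an RS-485 or Art-Net adapter and its driver library) is deliberately omitted, and the function name is an assumption.

```javascript
// Build one DMX512 universe frame: a NULL start code (0x00) followed
// by up to 512 channel bytes, each clamped to 0-255.
function buildDmxFrame(channels) {
  if (channels.length > 512) {
    throw new Error('DMX512 carries at most 512 channels per universe');
  }
  const frame = new Uint8Array(1 + channels.length);
  frame[0] = 0x00; // NULL start code for standard dimmer data
  channels.forEach((v, i) => {
    frame[i + 1] = Math.min(255, Math.max(0, Math.round(v)));
  });
  return frame;
}

// Example: map a particle intensity in [0, 1] to a single dimmer channel.
const intensity = 0.75;
const frame = buildDmxFrame([intensity * 255]);
```

A frame like this could be regenerated on every particle event, so the fixtures flash in step with the visualisation.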

NLP analysis
It is possible to incorporate sentiment analysis and machine learning advances into this project. Existing research on social media has pointed out several possibilities that could be adapted by an art project in this direction. From the architecture point of view, TensorFlow.js running in Node.js could be used if we wish to add machine learning capacities to the system.
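As a placeholder for such a pipeline, a deliberately simple lexicon-based score can already flag potentially hostile messages before any model is trained. The word lists and function name below are illustrative assumptions, not part of the project.

```javascript
// Toy lexicon-based sentiment score: counts hostile vs. positive words.
// A real system would replace this with a trained model (e.g. TensorFlow.js).
const HOSTILE = new Set(['hate', 'stupid', 'ugly', 'liar']);
const POSITIVE = new Set(['love', 'great', 'kind', 'thanks']);

function sentimentScore(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  let score = 0;
  for (const w of words) {
    if (HOSTILE.has(w)) score -= 1;
    if (POSITIVE.has(w)) score += 1;
  }
  return score; // negative values flag potentially abusive messages
}
```

Even this crude score could be mapped to a particle attribute (colour, for instance), so that hostile messages read differently from neutral ones in the installation.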