Research Workshop 2: Technical and conceptual innovations

The EVA London Research Workshop is one of the most distinctive elements of our conference, and one we have been keen to develop over many years. Postgraduate students, at Masters or PhD level, and unaffiliated artists may feel excluded from prestigious conferences until their research is complete and they can submit a full paper proposal. Beyond their tutors, supervisors and mentors, EVA London provides an almost unique opportunity to present projects that can truly be described as ‘Work in Progress’. With an audience of international academics and acknowledged experts in the field, the Research Workshop presentations have often led to very positive interest and support, and sometimes to future collaborations, or to returning to EVA London a year later with a completed piece of research and a successful full conference proposal. In previous years the presentations have been very popular with our delegates, and as Chair of the Research Workshops I can think of a number of occasions where an audience question beginning “Have you thought of...” has led to very exciting new lines of discovery. Sadly, in 2020 we will miss that particular interaction; however, we have, as always, selected an exciting, ground-breaking and quite eclectic group of Research Workshop delegates. We hope that by publishing their papers here, either grouped around themes or published individually, you will be keen to contact our Research Workshop authors to discuss and develop ideas, as if we had all been able to meet up together in July 2020. The themes here are the technical, data visualisation and body music.


Graham Diprose, Co-Chair EVA London
Our second grouped Research Workshop paper may be titled 'Technical and conceptual innovations', but as you will read, it is no less based on strong and appealing visual ideas. As stated in my introduction to grouped Research Workshop Paper 1, 'Explorations in the Visual Arts', there is an inherent blurring between visual and technical research in any EVA London paper, and the Research Workshop format allows us to explore the inferred and real edges of these boundaries with new and innovative ideas.
Many of these are very clearly ongoing research projects, often at the beginning or an early stage. After 30 years as a university lecturer at London College of Communication, I made a very strange request of my Dean about three years before I retired: I asked whether, as well as working with the Design School Masters students, I could become part of our Foundation Course and take on a group of 25, mainly teenagers, with all their raw untapped talent and belief that anything was possible. It is safe to say that both these students and I had the time of our lives, and many are still in regular touch with me some 10 years later.
There is a parallel here, although all the authors in Research Workshop Paper 2 are at Master's level or above. Each is at an early stage of what we hope will be a wonderful future career as an artist, designer, innovator and researcher. Readers of this paper will appreciate that I consider myself very fortunate to have the role of developing the EVA London Research Workshop, a format which some other international EVA conferences are also progressing. Gathering together Research Workshop papers and finding and meeting delegates is enormous fun. Am I allowed to say that research can be great fun? I do hope so! All of us who teach in academia will have experienced that wonderful moment when some very early-stage talent that we spotted and helped to nurture becomes an acknowledged success in their own right. Over the seven or eight years that we have run the EVA London Research Workshop, I can think of numerous contributors for whom this was their first serious attempt at writing an academic paper and presenting to a conference audience, and for whom both skills are now second nature. So please keep a lookout for the names and ideas in all of our grouped and individual Research Workshop papers here. It is very sad that we cannot all meet up at EVA London in this summer of 2020, but I really hope a time will come when we can arrange this.
Finally, as I ended with Paper 1: whether you are an academic with digital arts students, are still studying yourself, or are an unaffiliated artist, we are very keen each year to hear from you if you might like to join the following year's EVA London Research Workshop. Just get in touch!

Introduction
Recently there has been mounting interest in the relationship between scientific technology and human beings. One popular field is human life extension, which connects scientific technology and human beings most directly, and which has prompted a number of research institutions to invest in it. Since 2007, more than $4 billion has been invested directly in research that extends human life, including by companies such as Google, Amazon, and Facebook. Some researchers oppose this work as unnatural, arguing that life extension is harmful to the development of humanity and our entire society from both a sociological and an anthropological perspective.
It is therefore well worthwhile to discuss, from various perspectives, what scientific technology has brought to humanity through life extension. This paper will start by describing the fictional story of the film I am planning, the technology that I will employ, and the plans for its final presentation. Following the introduction of the film, the future concept and final conclusion will be presented. Allow me to explain in advance that this work should be seen as a long-term project. It will be presented purely as video work, based on both the sci-fi film and the mock documentary.

'Baqi' is a 280-year-old female artist
'Baqi' (my imaginary character in this mock autobiography) is currently a 280-year-old female artist, who was born in China in 1994. As one of the very first generation to extend her life through scientific technology, she has decided to present her whole life to the public before submitting her death application to the government. The mock autobiography will start from the highly developed technological environment of Baqi's youth and her teenage years.
Following the timeline, she goes on to talk about her career as a professional artist in adulthood, as well as the turning point in her life at which she achieved extended life. At the end of the film, the whole process of submitting her death application to the government will be represented.

Methodology
Machine learning and deep learning will be employed in this film to generate the story of the mock autobiography from online information. Firstly, some of Baqi's keywords, relating to her identity, will be listed, such as female, artist, Chinese etc. Then posts matching those keywords will be collected from social media. Finally, the autobiographical film script will be generated using machine learning, and I will then follow this script to make the film.
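The data-collection stage described above can be sketched in miniature. The snippet below stands in for the real pipeline: posts are assumed to be already available as plain text, and simple keyword matching substitutes for the social-media collection and machine-learning generation steps; the function names and the `min_hits` threshold are illustrative only.

```python
# Minimal sketch of the data-collection stage: keep posts that mention
# enough of Baqi's identity keywords. Real collection and ML-based
# script generation would replace this toy matching.

KEYWORDS = {"female", "artist", "chinese"}  # identity keywords for 'Baqi'

def collect_posts(posts, keywords=KEYWORDS, min_hits=2):
    """Keep posts mentioning at least `min_hits` identity keywords."""
    selected = []
    for post in posts:
        words = set(post.lower().split())
        if len(words & keywords) >= min_hits:
            selected.append(post)
    return selected

posts = [
    "A Chinese female artist opens her first solo show",
    "Weather update for the weekend",
    "Interview with an artist about life extension",
]
corpus = collect_posts(posts)
# `corpus` is the raw material a script-generation model would be fed
```

In practice the matching would be far looser (embeddings rather than literal words), but the shape of the stage — identity keywords in, filtered corpus out — is the same.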
Since all of the information for the script will be collected from online sources, the film will be uploaded back onto YouTube as a series of vlogs (separate episodes), each with its own small topic. That is to say, this series of videos will be filmed from a first-person point of view.

Social issues
Various social issues will be discussed through the topic of life extension, so that human beings can reflect on their current behaviour. For example, once lifespan itself becomes unequal, will it be treated as capital and operated on as such?

Conclusion
What will happen when the earth can no longer hold such a large population? Some older generations hold conservative, outdated attitudes to certain issues, such as immigration and gender. If these people were still alive, what would happen to society? At present, many European countries retain the streetscape of a hundred years ago, while today's China has been turned upside down compared with ten years ago. What kind of experience would it be to live through such historical change in person?

Introduction
Using cameras to obtain accurate body dimensions, rather than measuring them manually, is a developing research direction in the apparel (fashion) industry with huge practical potential. Systems such as 3D body scanners and Motion Capture (Mo-Cap) technologies are becoming increasingly capable of capturing accurate human body measurements compared with traditional methods. With recent improvements in Mo-Cap technology, researchers can measure the human body's expansion and contraction in motion. However, such technologies are still too expensive and require too much space to set up. With advances in AI-powered technologies, such as deep learning and computer vision, we can capture high-quality data for human body reconstruction in seconds with smartphones.
As the habit of shopping online for clothing becomes more popular, so does the tendency to return garments with poor fit, often due to a mismatch between what is experienced visually online and the real-life experience of wearing the order. The high rate of returns (for some items up to and over 80%) represents a major challenge for the industry.

Garment fit
To secure accurate measurements, it is clear that consistent body landmarks are important for basic pattern development (Bye et al. 2006). A clear understanding of the human body can therefore help us to obtain accurate data for garment fit. The theory of garment fit concerns the relationship between the human body and garments: according to Elizabeth et al. (2008) and Ashdown et al. (1995), "apparel fit" is the relationship of the human body to the garment. Several factors influence fit, including comfort and visual fit, and it is important that retailers understand these well in order to satisfy customers' needs.

Body Measurement Methods
The evolution of body measurement can be divided into three main approaches: linear methods, 3D body scanners and smartphone body scanning (Figure 3.1).
Linear methods. Traditionally, pattern drafting in the apparel industry involved applying a tape measure to the body's surface to obtain linear measurements, and then drafting a pattern from those measurements based on approximation and mathematical foundations (Paek 2009). Linear measurements are taken between two points of the body. To record the essential two-dimensional data relating to three-dimensional form, traditional measuring devices such as tape measures, anthropometers and callipers were used.

3D body scanners.
These systems capture the full body of an individual in three dimensions, with high potential to provide valuable 3D data of the human body and improve garment fit. The client stands and holds a pose between the cameras and the sensors for the duration of the scan. By "stitching" the images together, the 3D software reconstructs the final 3D model.

Smartphone body scanning.
The advancements in AI-powered technologies, such as deep learning and computer vision, have enabled the next generation of smartphone scanning solutions. To provide body data within seconds or minutes, most of these solutions require just two photos (front and side) from the customer. This technology promises to reduce significantly the costs in time and money and to improve efficiency (Fan et al. 2004; Paek 2009).
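The geometry behind the two-photo approach can be illustrated with a toy calculation: treat a body cross-section (say, the waist) as an ellipse whose semi-axes come from the widths seen in the front and side photos, and estimate its circumference with Ramanujan's approximation. This is a deliberate simplification for illustration; production systems fit learned 3D body models rather than ellipses, and the function names here are hypothetical.

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation for the circumference of an ellipse
    with semi-axes a and b."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def waist_estimate(front_width_cm: float, side_depth_cm: float) -> float:
    """Toy girth estimate: the widths visible in the front and side
    photos become the axes of an elliptical cross-section."""
    return ellipse_circumference(front_width_cm / 2, side_depth_cm / 2)
```

For a circular cross-section the formula reduces to 2πr, which gives a quick sanity check on the implementation.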

Datasets
To the best of my knowledge, none of the datasets currently in use for human body data were collected specifically to explore the task of human body measurement. Below we survey existing datasets that have been used for human body data and focus on a specific set of attributes.

Challenges
Among the most significant challenges are: (1) the complexity of the human body in non-rigid areas; (2) variability in the human physique; (3) the complexity of human skeletal structure; (4) the impact of breathing, when the body expands and contracts; (5) the variability in lighting conditions, causing shadows; (6) depth and the loss of 3D data which results from observing the pose from 2D planar image projections; and (7) the complexity of capturing parts of the human body which are covered by loose clothes.

Methodology
The purpose of our research is to study, and advance where needed, how state-of-the-art body and motion capture technology can be made a reliable and affordable option for the apparel/fashion sector. In this paper we review many of the available body scanning technologies (including 3D body scanners, Mo-Cap systems and mobile body scanning). Another, more challenging focus of our work is to study how the body's movements, as well as its changes, impact the experience of wearing a piece of clothing. We also conduct a state-of-the-art survey of how recent machine learning methods are providing new ways to conduct such studies and analyses as a function of new and growing databases. Capturing the human body in motion by obtaining data from a video instead of several photos can yield extra 3D information, and can be considered a way of calculating the surface of the human body in more detail.

Objectives:
• Is the Mo-Cap system a reliable method of measuring the human body's reconstruction in motion?
• Are there any discrepancies between a body measurement extracted from the Mo-Cap system and a measurement made with a 3D body scanner?
• Are there any changes over the human body surface during activities?
• What are the maximum and minimum changes of the selected upper body measurements in motion?
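The last three questions amount to simple summary statistics once per-frame girths are available. The sketch below shows that analysis on synthetic numbers; the frame values, the scanner baseline and the dictionary keys are all illustrative placeholders, not measured data.

```python
# Sketch: summarising per-frame girth estimates from a Mo-Cap session
# against a static 3D-scanner baseline. All numbers are synthetic.

def motion_summary(frames_cm, scanner_cm):
    """Max/min girth over the motion, their range (expansion and
    contraction), and the mean discrepancy versus the static scan."""
    lo, hi = min(frames_cm), max(frames_cm)
    mean = sum(frames_cm) / len(frames_cm)
    return {
        "min_cm": lo,
        "max_cm": hi,
        "range_cm": hi - lo,        # surface change during the activity
        "bias_cm": mean - scanner_cm,  # Mo-Cap vs 3D-scanner discrepancy
    }

chest_frames = [96.0, 97.5, 99.2, 98.1, 96.4]  # hypothetical per-frame girths
summary = motion_summary(chest_frames, scanner_cm=96.5)
```

A non-zero `range_cm` answers the "changes over the body surface" question; a consistently non-zero `bias_cm` would indicate a systematic discrepancy between the two capture methods.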

Introduction
I am a first-year MA Interaction Design student at the University of the Arts London. I am interested in both the negative and positive sides of using interactive design as a new medium, considering any limitations it places on an audience's perception of art, for example for younger people or the disabled, and how such technologies may affect art in the future. I plan to explore whether it is possible to appreciate the aesthetics of a new piece of equipment while simultaneously learning how to use it.

Re-designing the UAL student identity card
Traditional art used to plan our perception of information in advance: each installation would be positioned in a specific place with the artist's name attached. Interactive design points us to a very different cultural setting. We have to read instructions which tell us how to use a work; then we have to go through the process of learning its own unique navigational metaphors. This inspired me to create a self-initiated project with a unique ability to change in real time and respond to the presence of a viewer. Such an experience allows us all to discover new ways of thinking and perception, and to build a bridge between reality and cyberspace.

My idea
Interactive design can help us rethink a wide range of problems today, since it acts as a mediator not only between an artist and a viewer, but also by bringing into the dialogue those phenomena and objects whose nature is problematic. Every UAL student has their own ID card, with a unique number and photograph, which allows access not only to the University but also to classes, workshops, libraries, the canteen and other resources. My idea was to re-design the ID card so that students would be able to use it as a personal tool to check and manage their stress level in real time. I wanted to find a solution to help support the mental health of UAL students, with a focus on monitoring anxiety and depression.

The concept
Any changes in the wearer's heartbeat are visualised as different colours on the watch screen, so that they can track their emotional state and heart health. The watch screen has been designed to be simple enough to operate in a state of panic, and even in the dark, thanks to a built-in light. When the device is in the user's hand, the pulse sensor picks up their heart rate and mimics it through a softly glowing light animation and a slight pulsing sensation (Circuit Playground Express), turning quantitative data into qualitative visual information through the aesthetics of colour. When the heartbeat is too low or too high, the watch will automatically visualise this on the screen and give the wearer instructions on what to do next, e.g. breathe deeply, or call a doctor. It will also help other people to see if the person is feeling unwell, so that they can receive help immediately, or at least be checked on. In this way, it is no longer just your own personal data, but something that can inspire students to help each other through social engagement. I wanted people to feel comfortable talking about their minds in the same way that we talk about our bodies. Sharing experiences and insights is a very powerful tool in normalising any such conversation.
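The watch logic can be sketched as a simple threshold mapping from beats per minute to a colour and a piece of advice. The thresholds, colours and messages below are illustrative placeholders, not clinical values, and the function name is hypothetical; on the Circuit Playground Express the same mapping would drive the on-board LEDs.

```python
def heart_status(bpm: int) -> tuple:
    """Map a heart-rate reading to a display colour and advice.
    Thresholds are illustrative placeholders, not clinical values."""
    if bpm < 50:
        return ("blue", "Heart rate low: sit down and ask for help if dizzy")
    if bpm <= 100:
        return ("green", "Heart rate normal")
    if bpm <= 140:
        return ("amber", "Heart rate raised: breathe deeply and rest")
    return ("red", "Heart rate very high: call a doctor")

colour, advice = heart_status(72)
```

Because the colour is visible to bystanders as well as the wearer, the same mapping serves both the private (self-monitoring) and social (others can see you need help) goals described above.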

Introduction
This paper explores recent developments in dynamic music notation, focusing particularly on accessibility and expression, two of the most important factors in current music making.
Anyone who has witnessed a child learning western common practice notation (CPN) will know how complex it is in practice and what an effect this complexity can have on their access to musical expression. For many years proposals have been made to simplify, extend or reinvent the centuries-old system, generally without much success; the barrier between those who can and cannot 'read' music remains as obdurate as ever.
The idea that CPN is indeed part of the problem has gained significant traction; the majority of the world's music is based on improvisation, which uses no notation at all, or at most some very basic material (for instance a set list). Similar efforts have been made towards liberating performers from the need to assimilate into mental and muscle memory the physical complexities of performing instrumentally; the NIME (New Interfaces for Musical Expression) conference has a twenty-year history of such experiments.
And yet few would deny that music notation is uniquely capable of producing some of the most remarkably expressive and impressive music, or that most music is produced by musicians who are able to play physical, acoustically-based instruments, whether they can read music or not.

Background
For a number of cultural reasons, Common Practice Notation (CPN) has generally increased in complexity since its inception in the 9th century (Bent et al. 2001). One of the most prominent reasons is probably the necessity for composers and musicians to please, flatter and impress their patrons (church, aristocracy, bourgeoisie, arts councils, etc.). In addition, the development of the idea of the composer as hero (Goehr 2007) led to the more commercially focused advantages of the finalised, printed and so 'inflexible' score.
During the twentieth century, accepted musical norms were challenged. Many performers and audiences were honest about their attitude to 'the precise notation which results in imprecise performance' (Foss 1963), prompting Milton Babbitt to counter: "Who Cares if You Listen?" (Babbitt 1958). Earle Brown investigated another notational route in the seminal December 1952 (Figure 5.1), a work that has proved influential in presaging many subsequent notational experiments.

Dynamic scores
Technical advances in music notation have emphasised particular features. The major 'pro' notators (Finale, Sibelius and more recently Dorico) focus on CPN, with varying degrees of support for more extended practices (Dimpker 2013) depending on their perceived clientele. This has led those who are interested in more experimental ideas to move away from such 'restrictive' software towards more graphics-based tools such as Adobe Illustrator (Bean 2015). Others have used more graphics-focused programming languages such as OpenFrameworks (see Figure 5.3 by Ryan Ross Smith). There are, however, a couple of software packages that enable increasing flexibility of expression in this balance between CPN and more graphic, extended and dynamic practices: the Max environment, which hosts MaxScore (Didkovsky & Hajdu 2008), and bach (Agostini & Ghisi 2015). INScore (Fober 2017) presents another approach, uniquely using the screen as a 'scene' on which varying objects, including blocks of CPN, can be placed and manipulated programmatically.

Unthinking things
Unthinking Things is a composition for sixteen-voice choir and electronics which uses dynamic notations. The piece was originally commissioned in 2017 by the St Augustine's Singers in Cambridge, UK, and first performed in March 2018.
Later that year, an extended version was commissioned, which had its first performance (https://www.youtube.com/watch?v=d07-BEYq_6g) in March 2020. I have had a number of years' experience of composing using dynamic notation (e.g. Calder's Violin, Hoadley 2012), but have previously collaborated with professional performers, who by definition have the ability to generate a 'performance' no matter how challenging the score and its circumstances. The members of the choir, however, while very good amateur singers, are not professional, and most have no particular experience with contemporary music, which presented particular challenges. One of the major influences when composing the piece was Cornelius Cardew, and in particular his monumental work The Great Learning. Written significantly in order to promote accessibility in performance while not compromising on expression, it uses many forms of extended and improvisatory notation (Figure 5.2). So while Unthinking Things includes CPN for the choir (e.g. Figures 5.5 c and d), it also includes specifically graphic, though still dynamic, notations (e.g. Figures 5.5 a and b). The sections from which Figures 5.5 a and b are taken require the choir to play stones and resonant pieces of metal respectively. These represent two of the unthinking things that Bishop George Berkeley described in his Treatise Concerning the Principles of Human Knowledge. I plan to include other unthinking things in the sequence, including wood and water.
Having attended all the rehearsals for the 2018 and 2020 performances, I found that the choir responded well to all forms of dynamic notation. This was particularly true of the graphic forms, in which either a cursor ran across a series of static images (Figure 5.5 a: stones) or the objects themselves moved across the screen and were played when they reached a fixed cursor (Figure 5.5 b: metals). Using both methods was itself an experiment to see which would work better, with the ambiguous but happy result that both work well.
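Timing-wise the two cursor mechanisms are mirror images of each other: an object is 'played' when cursor and object coincide, so the trigger time is simply distance over speed whether the cursor or the objects move. The toy calculation below makes that symmetry explicit; it assumes normalised screen coordinates and a constant speed, and the names and numbers are illustrative.

```python
def trigger_times(object_positions, speed, cursor_start=0.0):
    """Times at which a cursor moving at constant speed reaches each
    object (moving-cursor case). By symmetry, the same times apply when
    the objects move at that speed towards a fixed cursor instead."""
    return [(x - cursor_start) / speed for x in object_positions]

stones = [0.1, 0.35, 0.6, 0.9]            # normalised x-positions on screen
times = trigger_times(stones, speed=0.1)  # screen-widths per second
```

Which variant feels better to sing from is a perceptual question, not a timing one, which is why the rehearsals had to settle it empirically.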

Accessibility
One of the main features of many of these systems, including those used in Unthinking Things, is the investigation of greater accessibility to music. Even for amateurs familiar with CPN, there are aspects of contemporary music practice and notation that remain opaque. For others, including children, who are unfamiliar with even the most basic CPN, this can be a crucial issue. While audiences very much enjoy experiencing unique and magical live performances, there are many ways of composing music that is more accessible, sometimes drawing on approaches from the past, sometimes composing deliberately for this demographic. There are also many examples of (often non-'classical') music which use more basic notations and/or improvisation (such as a simple set list).
My immediate reaction to the stone and metal sections of Unthinking Things as well as to elements of Ryan Ross Smith's 'mechanisms' was to consider how such notations might be used by, for instance, children, and then how they might be used as forms of 'intermediate' notations, gradually approaching more 'mature' notations such as CPN. My workshop proposal was to discuss and investigate these ideas.

Unthinking Things is written using the SuperCollider audio programming environment, which generates all sound live. Open Sound Control is used to control the INScore augmented score viewer, which generates all visual aspects.
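As a flavour of how this control channel works, here is a minimal sketch of building an Open Sound Control message by hand in Python. In the piece itself SuperCollider sends these messages; the address `/ITL/scene/cursor` and the `x` attribute below are illustrative, as real addresses depend on the names of the objects placed in the INScore scene.

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: UTF-8, NUL-terminated, padded to 4 bytes."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build a binary OSC message: address, type-tag string, then
    big-endian encoded arguments."""
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)
        elif isinstance(arg, bool) or not isinstance(arg, int):
            raise TypeError(f"unsupported OSC argument: {arg!r}")
        else:
            tags += "i"
            payload += struct.pack(">i", arg)
    return osc_string(address) + osc_string(tags) + payload

# Hypothetical example: move a cursor object in an INScore scene.
msg = osc_message("/ITL/scene/cursor", 1)
```

Sending `msg` over UDP to INScore's port would update the score in real time, which is what makes the notation dynamic rather than fixed.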

Conclusions
As with any deep issue, the more one considers dynamic notation the more compelling it becomes, and the more aspects of music, its creation and performance become involved. As developments (or the lack of them) among the major software notators show, it is very likely impossible to invent a comprehensive system encompassing all the expressive needs of composers and performers (see, for instance, some of the detailed discussions at https://www.w3.org/community/music-notation/). Add to this the need for dynamism, and it becomes clear that for the moment there will necessarily be compromises between these conflicting expressive requirements. However, these systems do enable more visually explicit, if not formalised, forms of notation, as can be seen in Figures 5.3 and 5.5 in particular.
It is of course possible that the Digital Audio Workstation itself is taking over the role of notation in new and innovative ways, just as digital reproductions of acoustic instruments are now common in some musical environments. However, it seems more likely that where liveness, subtlety and precision are still required, notation technologies will continue to develop and mature, even if in more sophisticated environments.

Richard Hoadley, Unthinking Things (2018-20).