Physical to Digital to Physical: Capturing human performances for multi-disciplinary theatre visualisation

A collaborative, digital design environment delivered through pre-visualisation can be enhanced with motion capture technology to include human performance in design decisions, putting the priority back on the human subjects in theatre. For choreographers, the development of a dance design can be assisted by incorporating a digitised 3D version of the dancers into a real-time visualisation alongside all other design elements. This encourages design iteration between all creatives involved in the production to happen at an early phase, when a lack of understanding or design collaboration can have cost and artistic implications further down the process. This research seeks to understand and define new methods of design visualisation that specifically incorporate human performance via motion capture, starting with digital prototypes that demonstrate the current state of the art and suggesting future processes to be tested in real-world scenarios.


INTRODUCTION
Theatre design often relies on tangible design information being passed to production and technical departments directly via physical methods: plans, models, verbal language and body language (Pavelka & Chitty 2015). I term this type of process 'physical to physical'. When digital methods are introduced into this process, it is primarily for the purpose of transferring physical information in a digital format, such as 2D CAD drawings or photographic scans of reference artwork (Carver & White 2003). Whilst these processes are in themselves digital, they are only extensions of physical media, replicated and passed on in a format that is more convenient to the users, or that benefits a large production team needing multiple copies of the same data (Mitchell 2009, p.79). The issue with this partial digital prototype is that the design process doesn't benefit from the wide array of digital tools that can help designers and technicians adapt and develop the product through new iterations (Palmer 2001). This paper is part of a body of research focusing on the challenge of transferring physical design information to a digital space and the interactions this creates amongst a design team. For the digital space to be adopted fully, as many disciplines of a production design as possible need to be created in, or transferred to, digital form. An example would be an artist painting a scenic backcloth design directly in a digital painting tool such as Photoshop, taking the ideas directly from the artist's vision to a digital product. There is also 3D scanning or photogrammetry of design references such as model boxes, reference materials, material samples and location/venue data; this will all be covered in a later experiment.
As the theatre industry matures in its use of 3D visualisation, methods for bringing each creative production design role into the digital world are being tested. Productions benefit greatly from the collaborative and iterative nature of the digital visualisation environment when designers are able to experience their ideas digitally. For this to work, as many creative elements as possible must be captured and digitally re-created so that they can exist inside a virtual, digital environment. Once designing in a visualiser has concluded, the creative output needs to be reconnected with the real world, hence going from physical, to digital, and back to physical again.
This body of research covers many areas of technology, process and design but, in this paper, the focus is the existing state of the art in motion capture for visualisation, with tests using digital mock-ups of a theatrical design.
These tests are designed to bring human movement into digital visualisation so that it can be combined with all other design elements in a single platform, where the design team can collaboratively review and analyse the essential attributes of the production design. Further work is being undertaken to test performers wearing augmented reality headsets to assist their spatial understanding of the space in coordination with the digitised versions of themselves.
To test the impact of this process on a multidisciplinary design team, and to understand the impact it has on design iteration, further tests will be conducted using motion capture suits to bring a choreographer's work into a visualisation in real time, allowing a real-world scenario to be played out.

MOTION CAPTURE IN THEATRE
As the lines between gaming, film and theatre blur around the digital edge, we are starting to see motion capture performance influencing theatre design. "Playable shows are the future" (Barrett 2018); however, this is mainly as a way of creating new layers of spectacle for an audience and not necessarily as an aid for design teams to assess their productions.
An example of motion capture being used in a highly visible way in theatre is the Royal Shakespeare Company's production of The Tempest (Borsuk 2019). For the role of Ariel, performed by Mark Quartley, the costume incorporated an Xsens inertial motion capture suit, which allowed him to perform offstage and be recreated as a digital double projected onto the scenery, but also to appear as a live actor and digital double simultaneously in front of the audience.
The innovation in this use of motion capture technology is that it brought live motion capture data to a performance, rather than the pre-recorded, cleaned-up data typical of film and game cinematics. Had a pre-recorded approach been used on The Tempest, it would have stalled the design process, as any change to the performance or positioning of the character would have needed to be re-rendered or cleaned up before it could be seen on stage in coordination with all the other design elements.
Being instantaneous allowed the performer and the design team to develop the performance and the wider production, using the combined visual elements seen together on stage to make critical design choices. This example demonstrates the risk and reward of designing iteratively in a digital environment.

Design iteration through visualisation
By introducing visualisation as a method for design prior to the staged production period, much of the technical and artistic work can be developed in advance and design iteration can occur naturally in a digital environment.
The reason this paper begins with 'physical to digital' is because we need to move all our physical assets into the digital space before we can engage in this process of digital iteration. As in physical theatre, this is made harder when one or many components of the design are missing during the process. In real life, if scenery is incomplete, some of the lights are not working, or a role is being played by an understudy, it devalues the work being undertaken by the rest of the team, who can never experience their contribution to the wider design in its entirety because the design elements that play off one another are not all present in the physical space.
The same is true of visualisation and, although many elements (scenery, lighting, audio) can be recreated digitally using today's CGI and gaming technology, it requires a technology bridge to bring performers into the same digital environment. Without them, the visualisation is passive, lacking narrative, a sense of time, and the relationship between the physical elements, the story and the audience.
By exploring the ways in which motion capture can be incorporated into a visualisation experience, it is hoped that we will develop a greater understanding of the uses of motion capture in design visualisation. Kade et al. (2016, p.10) discovered through their research that actors can experience stronger emotions and reactions, and therefore give a more powerful performance, by rehearsing their scenes using VR as visual and auditory stimulation. In one example, Kade (n.d., p.38) suggests that a director could provide realistic explosions and environmental effects through VR, which would create a stronger performance because the effect of these explosions would be more realistic than could ever be achieved on stage.

Ethics of immersive media in design visualisation
However, this opens an ethical dilemma: the power of immersive media could be too emotional for the performer and even create long-lasting symptoms such as post-traumatic stress. For example, imagine a scene where a mother has to grieve for her child, who died suddenly. Performers train to create their own emotional reactions by visualising in their mind's eye something that would elicit the reaction befitting the performance (Kade 2015, p.77). With the power of AR, a director could create a simulation of a child who appears holographically to an actor during a rehearsal and is suddenly and brutally hit by a car or truck.
Depending on the scale of reaction the director intends to create, there may be simulated blood, dismemberment and even the audible sounds of someone being killed in an impact with a vehicle. If a performer weren't ready for this experience, the shock would permeate beyond their own emotional control mechanisms, particularly if they have personal experience of such a situation. The reaction would move from being a performance to being real life, and whilst some directors use method acting in this way, it potentially crosses an ethical line when a director expects a digital artist to act on their behalf in delivering shock treatment to an actor without thorough testing and medical research to support it.

COLLABORATIVE MULTI-DISCIPLINARY DESIGN WITH MOTION CAPTURE
The following examples demonstrate the evolutionary development of a hypothetical production design. They examine the motion capture performance in the context of the overall design and offer insights into the nature of the relationship between motion-captured performance and individual designed attributes.
The experiment was produced in Unity Engine using open-source content, scenic objects, characters and motion capture data that have been collated into this project for testing and analysis before experimentation begins with real actors and design teams.
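As an illustration of the kind of data being collated, open-source motion capture clips are commonly distributed in the BVH (Biovision Hierarchy) format, which stores a named joint hierarchy followed by per-frame channel values. The sketch below is a minimal, hypothetical example (not part of the Unity project itself) showing how the joint names can be read from a BVH hierarchy:

```python
# Minimal, hypothetical sketch: extracting joint names from the HIERARCHY
# section of a BVH motion-capture file, a common format for open-source clips.

def read_bvh_joints(text: str) -> list:
    """Return joint names in the order they appear in a BVH hierarchy."""
    joints = []
    for line in text.splitlines():
        parts = line.strip().split()
        # ROOT introduces the skeleton root; JOINT introduces each child bone.
        if len(parts) >= 2 and parts[0] in ("ROOT", "JOINT"):
            joints.append(parts[1])
    return joints

sample = """HIERARCHY
ROOT Hips
{
    JOINT Spine
    {
        JOINT LeftShoulder
    }
}"""

print(read_bvh_joints(sample))  # ['Hips', 'Spine', 'LeftShoulder']
```

In practice a game engine's importer would also map these joints onto a character rig, but the named-hierarchy structure above is what makes clips from different sources interchangeable.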
The plan for future research is to follow the role of the choreographer and help them share their designs digitally for inclusion in the digital visualisation. The objective would be to capture all of the ensemble dancers' parts for the entire production and merge them into a single visualisation where every part shares the same digital space.
Other pre-visualisation tools such as AR and VR could be brought into the process to assist the choreographer in the development of their ideas, and also to allow the entire production team to review the amalgamated design elements in one model. This would allow us to test the collaborative working relationships of a multi-disciplinary team. Figure 1 starts with a simple wooden mannequin, based on an artist's reference model, as it demonstrates form and posture without revealing details of a performer. This draws attention to the "uncanny valley" effect, where life-like images fall short of total realism and leave the audience uncomfortable with the not-quite-alive form they are viewing (Rubin 2018). Some people using and engaging with the final visualisation may be more comfortable with an obviously non-human mannequin doll with lifelike movements, or a 3D scan/model of a human being that more closely resembles the real performer.
Ideally, to support the idea of a collaborative design visualisation, we would want an accurate and representative model of the human performer, such as in Figure 2, to be consistent with the other design elements, such as costume, which wouldn't look believable or be usefully represented on a wooden mannequin. But this may open up ethical issues such as stereotyping race, build, height and gender before the show has been cast. A wooden mannequin suggests no decisions or preconceptions have been made around the casting of a performer. This could later be updated to include the performer's likeness once they are known, but would the design team assembled to review the visualisation at an early stage be comfortable making decisions when their reference for a human being is a wooden mannequin with lifelike movements? Would the uncanniness of the visual representation of a human's movements in an obviously inanimate object be too much for the subjects to accept? As these elements are included, there is greater opportunity to compare design elements against each other, to understand how they coordinate and work together, and ultimately to create an opportunity for design iteration and development of the overall piece.
We can start to see how the colour of the wooden boards doesn't fit with the colour of the light and the colour of the wood on the barrels. The lights in the house are too deep an orange and draw the eye, and the scenery flying in at speed is distracting for both the audience and the dancer.
The costume doesn't work with the dance that has been choreographed: the epaulettes on the shoulders dig in when the dancer's arm is lifted over his head, and the fabric around his legs flaps between them and gets caught. As the fabric simulation is based on physical properties, we can consider this eventuality likely in real life. Even though all of the design elements are clearly digital representations of something from real life and, in some cases, don't look photo-realistic, there is enough information in the scene to guide a design team to understand the composition of the design elements.
We can enhance the visual quality using photorealistic tools such as ray-traced rendering directly out of the Unity editor using the Octane render engine. Octane is a GPU rendering solution and, although it still takes time to render the image, the assets from the real-time model can be repurposed without modification to create better-quality rendered images. Figure 5 shows this process, and straight away we can see there is more complex lighting information, which may differ from the intended lighting created in Unity in real time. This is because an element of post-production and effects is applied to the rendering as it processes the frames.
These settings can be manipulated in the Unity editor, but the results aren't visible until a test frame has been created. They are mostly based on controls familiar to photographers, such as f-stops, exposure times and manual focus, so the image in Figure 5 more closely resembles a photograph of the 3D model in Unity than an accurate ray-traced image from the Unity editor. This reduces the believability of the images, because they are manipulated by the "photographer", who wants to create an image that has clarity and impresses the audience visually.
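To illustrate how these photographic controls interact, the sketch below uses the standard exposure-value relation EV = log2(N²/t) at ISO 100, where N is the f-number and t the shutter time in seconds. This is a general photographic formula offered as an illustration of the principle, not a description of Octane's internal implementation:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Exposure value (at ISO 100) for a given f-stop and shutter time:
    EV = log2(N^2 / t). Higher EV means less light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Opening the aperture by one stop (f/8 -> f/5.6) roughly doubles the light,
# so the EV drops by about one and the "photographed" frame appears brighter.
print(exposure_value(8.0, 1.0))   # 6.0
print(exposure_value(5.6, 1.0))   # ~5.0
```

The point for the design team is that two renders of the same scene can differ in brightness purely because of these camera-style settings, not because the lighting design itself changed.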
A design team can be involved in this process to ensure the finished look resembles their intent, but it removes the benefits of real-time visualisation if a render has to be regularly updated to understand where the design sits at a particular point in time. Even disregarding the time this process takes, the constant stopping and re-developing of images will hinder the freedom to iterate. Figure 6 demonstrates a solution to this problem using a new system in Unity (at the time of writing still in development) called the High Definition Render Pipeline, Unity's answer to concerns over its lack of visual fidelity. The quality of the lighting, textures and post-production effects is so high that it resembles, if not surpasses, the Octane-rendered looks, but without the need to render. These images can be created in real time directly in the editor, with the ability to control lights and scenery and move objects around live in the scene whilst the animations are running. This view can be seen directly on a screen or shared with VR and AR devices using a lightweight version of this new render pipeline. For the choreographic experiment, the design team would benefit from seeing all of these elements together. It would be a new experience for a choreographer to explore the dance for the first time with the intended lighting, scenery and props visible to them holographically through an AR headset. Once the captured content is available to be viewed back in the amalgamated visualisation model, it would be possible to see how the lighting reacts with the dancers as they move about the space.
An example of the sort of discovery that requires design iteration would be compromises made because props are positioned incorrectly on stage, perhaps blocking the optimum lighting position and requiring scenery or lighting to be moved to accommodate the dance. There are three design disciplines involved in finding a compromise here: the choreographer, the set designer and the lighting designer, overseen of course by the director. The choices are to light the dancer another way, move the scenery and the lights, or recreate the choreography. Each idea can be tested live; barrels can be moved, lights repositioned, and dances recaptured or "tweaked" manually in the animation settings in Unity. This allows all alternative compromises to be compared and agreed upon, which may even surface an idea that solves all of the problems in one go, such as cutting a hole in a prop and placing a light inside it. This is something that could then be designed into the set and allowed for during construction and budgeting.
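The kind of manual "tweak" described above can be thought of as a small geometric adjustment to the captured data. The sketch below is a hypothetical illustration (all names and coordinates are invented for the example): a dancer's recorded floor positions are shifted sideways until they clear a repositioned prop, leaving the timing of the performance untouched:

```python
# Hypothetical illustration of "tweaking" a captured path: shift a dancer's
# recorded floor positions sideways until they clear a repositioned prop.
# All names and coordinates here are invented for the example.

def clears_prop(path, prop, radius):
    """True if every (x, z) point on the path stays outside the prop's radius."""
    px, pz = prop
    return all((x - px) ** 2 + (z - pz) ** 2 > radius ** 2 for x, z in path)

def shift_path(path, dx):
    """Offset every point sideways by dx, leaving the frame timing untouched."""
    return [(x + dx, z) for x, z in path]

path = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0)]   # captured floor positions (metres)
barrel = (0.6, 1.0)                            # prop moved into the path
if not clears_prop(path, barrel, 0.5):
    path = shift_path(path, -0.6)              # nudge the dance stage-left

print(clears_prop(path, barrel, 0.5))  # True: the adjusted path now clears the barrel
```

A real adjustment in the animation settings would act on the character's root motion rather than a bare list of points, but the decision it supports, whether to move the prop, the light or the dance, is the same.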

THE EFFECT ON THE DESIGN TEAM
The challenge for a design team in the early stages of creation is meshing everyone's creative direction into one vision: the director's. A strong director can do this verbally but is always limited by the preconceptions and experiences of the other artists, who may unwittingly be heading in a different artistic direction. There is a layer of trust where the director conveys a creative direction and has to assume that the rest of the team is following along.
For some elements of the production, such as scenic design, costume and audio, it is possible to review work in progress and ensure that the design maintains the original vision. This is because these elements can be prototyped or built before working in the venue, making it easier for a design team to assess them together. Even so, there is an assumption that the director has enough experience to absorb these elements individually and then visualise them in their own mind's eye to imagine how they might work together when eventually introduced to each other on stage.
For disciplines such as lighting, the final product is only visible at the final stage of the process. The equipment requires installation, the venue must be available, and the other design elements must be present to receive the light; this means that the design team and the lighting designer never know whether they are correctly sharing the design vision until the dress rehearsals, possibly even later.
Choreography is often one of the first elements to be created, shortly after, or in tandem with, the scenery. As it is such a dynamic element of the design, it can continually change and compromise to accommodate the other design elements. However, this isn't a perfect solution, as dancers have to learn new parts and every change creates a ripple of further design changes that have to follow from other departments. Changes under these circumstances aren't always positive design iteration, as they are reactionary. Design changes work best in a circular process, where the ripple eventually cycles back to the originator of the change with the compromise.
The advantage to a choreographer is the ability to define their part of the production digitally at a very early stage, so that every other design element can be considered alongside it. Changes and iterations to the design can happen as often as the team wants, with little overhead in terms of time or money to prototype them and receive instant visual feedback.

CONCLUSION
Bringing the human element into a digital design completes what has been missing from theatrical pre-visualisation in the past. When lighting a show for real, even if the cast are not available, stage managers will act as stand-ins so that the lighting can be focused on a human; the human being is, of course, the main reason most people come to the theatre, and few productions have been produced without their presence in the design. Given that the audience's focus is drawn to the performers the most, and that the director, for better or worse, often gives them priority over other design elements, it should be considered necessary to include them as an element in the visualisation too. For visualisations to be engaged with and perform as effectively as real life, the visualisation needs to contain digitally all of the same elements that exist physically.
In the next phase of this research, work will be undertaken to test these processes with a practising choreographer, creating motion-captured performances to be seen live or pre-recorded as part of a wider digital 3D model of a production. A design team will contribute to this process, treating the early stages of the digital design in the same way they would a real production and assessing the effectiveness of the total pre-visualisation process in their own roles.