HoloDash: A holographic interface for autonomous vehicles

The HoloDash system is being developed as an interface for both "connected cars" and fully autonomous vehicles. Although the interface has applications as a driver information system, its use in fully driverless vehicles might prove more impactful. It is described as an HHVI: Holographic Human-Vehicle Interface.


INTRODUCTION
HoloDash is an extension of the HoloTube system, which uses a reflected screen image within a clear plastic tube, enabling the display of motion-captured avatars and other information. The interface is driven by a Leap Motion sensor and utilises its set of gestures, which have been assigned to various controls with the aim of making the gestural interface as fluid as possible. Testing of HoloDash at Ravensbourne, using a mock-up of a car dashboard, enabled DoubleMe to rate the effectiveness of this approach and to compare various schemas for the gestures. At that point the system was still intended for use in both driver-operated and driverless vehicles; however, this version of HoloDash has been developed to work with a specific autonomous vehicle case study. User testing at Ravensbourne revealed the following:
1. Potential users are positive about gesture control.
2. They still have to work out how the system can best be deployed.
3. They prefer to control simpler functions through gesture.
4. Potential users are positive about the use of human avatars.
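The gesture-to-control assignment described above can be sketched as a simple dispatch table. The gesture names below correspond to the four built-in gesture types of the Leap Motion SDK (swipe, circle, key tap, screen tap); the control assignments themselves are hypothetical illustrations, since DoubleMe's actual schema is not published here.

```python
# Hypothetical mapping of Leap Motion gesture types to HoloDash controls.
# The gesture names match the SDK's built-in types; the assigned controls
# are illustrative only -- they are not DoubleMe's actual schema.
GESTURE_CONTROLS = {
    "swipe": "next_avatar",       # e.g. cycle through avatar content
    "circle": "adjust_volume",    # e.g. rotary-style audio control
    "key_tap": "select_item",     # e.g. confirm the highlighted option
    "screen_tap": "toggle_menu",  # e.g. show/hide the main menu
}

def dispatch(gesture_type: str) -> str:
    """Return the control action for a recognised gesture, or 'ignored'."""
    return GESTURE_CONTROLS.get(gesture_type, "ignored")
```

Keeping the schema in a single table makes it straightforward to swap alternative gesture assignments in and out during user testing of this kind.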

THE GREENWICH DRIVERLESS VEHICLE
The HoloDash system has been proposed for use in several scenarios involving the driverless vehicles being trialled by the GATEway project at North Greenwich.
The GATEway project at North Greenwich, run with Digital Greenwich and the Transport Research Laboratory (TRL), is developing an autonomous vehicle for use within the new developments on the Peninsula and to connect the area to the historic core of Greenwich. The latter route is aimed primarily at tourists moving between the O2 Arena and the heritage sites within Maritime Greenwich.

The concept is for a slow multi-occupant shuttle that moves on dedicated routes in areas potentially shared with pedestrians. The vehicle uses LIDAR to scan its environment and will provide connectivity with other transport hubs.
[The] pod uses a special software system called Selenium that enables real-time navigation, planning and perception, developed by Oxbotica. [...] Over an eight-hour period of operation, a single Gateway shuttle [will] collect 4 terabytes of data (Fearn 2017).
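The quoted figure of 4 terabytes per eight-hour shift implies a substantial sustained data rate, which a quick calculation makes concrete (assuming decimal terabytes, i.e. 10^12 bytes):

```python
# Sustained data rate implied by the quoted GATEway figure:
# 4 TB collected over an 8-hour operating period (1 TB = 10**12 bytes).
data_bytes = 4 * 10**12
seconds = 8 * 3600  # eight hours in seconds

rate_mb_per_s = data_bytes / seconds / 10**6  # megabytes per second
print(round(rate_mb_per_s, 1))  # ~138.9 MB/s sustained
```

At roughly 0.5 TB per hour, or about 139 MB/s sustained, both on-board storage and any off-vehicle data link would need to be provisioned well above consumer norms.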
The HoloDash concept is intended to produce journey-specific in-car content for use on several themed routes between Maritime Greenwich and the North Greenwich peninsula.

Figure 3: Interior combines projection mapping and HoloDash motion-controlled screen
Based on results from DoubleMe's initial user testing of the HoloDash interface, there was a clear preference for deploying it in driverless rather than driver-engaged situations. The gestural system lends itself more to contexts where it can command the user's full attention than to substituting for the dashboard touchscreens found in most modern cars, and the anthropomorphic content tends to require more user focus in any case.
The system will also involve more multimodal interaction through audio cues, combining projection-mapped content throughout the interior of the driverless vehicle with the HoloDash itself [see Figure 2]. This is partly because of the opportunities afforded by projection within the vehicle, and partly because other modes of communication are essential for engaging the passengers of the vehicle if, for instance, they are partially sighted: There are two main reasons for adopting a multimodal approach to designing the in-car UIs. First, the mainly visual interaction used in existing vehicles will no longer be sufficient to communicate alerts from the car to the driver, because users' attention may be directed at something else [...] Second, with the potential for older persons and persons with disabilities to become drivers, they may have a sensory impairment (Ferati et al.).
The use of AR in these situations could lead to new opportunities in advertising, especially in terms of personalised content: If passengers do decide to look out of the window, augmented reality integration and heads up display technology will mean that in milliseconds the car could interpret a giant QR code on a billboard and use it to display a targeted advertisement (Pryor, 2017).

NEXT STAGES
Two key factors will influence the perception of autonomous vehicles: trust in the vehicles themselves and their in-built intelligence; and the transparency of the systems for navigation, information and entertainment. An HHVI will assist this process of building trust through the use of human avatars and motion control.