Profile-Aware Multi-Device Interfaces: An MPEG-21-Based Approach for Accessible User Interfaces

The wide diversity of consumer devices has led to new methodologies and techniques for making digital content available over a broad range of devices with minimal effort. In particular, the design of the interactive parts of a system has been the subject of many research efforts, because these parts are the most visible and are critical for the usability (and thus use) of a system. What is missing in many current approaches is the ability to combine these new methodologies and techniques with a user-centric approach, to ensure that the preferences and requirements of a specific user are taken into account besides the device adaptations. In this paper we analyse the applicability of MPEG-21 Part 7, Digital Item Adaptation, for the adaptation of a user interface to user characteristics. We show how the high-level XML-based user interface description language UIML, in combination with an MPEG-21-based user profile, enables designers to create accessible and personalised multi-device user interfaces. Using this combination results in user interfaces that can be deployed on a broad range of devices while taking user preferences into account with minimal effort. This approach enhances the accessibility of digital items on various platforms, since all interactions with digital items should be supported by an appropriate user interface.


INTRODUCTION
The design of user interfaces has a great influence on the usability of a system. The complexity of the interface design process is not only related to the complexity of the software system the user interface is being built for, but is also influenced by other factors such as the user or the context of use. Because of the evolution of different types of consumer devices, such as PDAs, cellphones and set-top boxes, together with a continuously growing user audience, creating accessible user interfaces is becoming a more difficult challenge every day. A multitude of consumer devices can offer the same services by adapting the user interface to the capabilities of the system. A wide distribution of these services through different channels (mobile, interactive digital television, web, …) forces the designer to make the user interface "granny-proof", i.e. usable by anyone. This leads to a contradiction: the user interface of a service should be generalised on the one hand, because it has to be deployed on multiple devices, and specialised towards different user profiles on the other hand, to make it more usable. We address this challenge by combining high-level user interface descriptions, to create multi-device user interfaces, with user profiles, to determine the user's preferences and requirements for a user interface.
XML-based high-level user interface description languages (UIDLs) [9] have proven to be very suitable for supporting multi-device user interface design. These UIDLs are labelled "high-level" because the interface is abstracted away from the concrete implementation: e.g. a UIDL could define a "choice from a range" that can be mapped onto a slider widget, a list widget or a text entry widget when the UIDL is transformed into the concrete final user interface. We will use UIML [2] as a high-level UIDL; this language offers several advantages for creating multi-device user interfaces, such as a separation of concerns, domain-specific vocabularies and a structured description of the user interface.
On the content side, the MPEG-21 specification [4,12] is a framework that embodies and relates several existing MPEG standards. It aims to enable the use of multimedia resources on a wide range of devices, independently of the network infrastructure. As such it is an emerging standard for universal multimedia access. The work presented in this paper focuses on Part 7 of the MPEG-21 specification, Digital Item Adaptation [13] (see figure 1), and its influence on the adaptation of the user interface. In combination with a high-level UIDL this promises to offer a powerful means to build effective personal multi-device multimedia interfaces. To demonstrate the feasibility of this approach we implemented a user agent that combines a UIML document with an MPEG-21-based profile and renders the adapted user interface.
The remainder of this paper is structured as follows: section 2 gives an overview of related work in this area; both XML-based UIDLs and existing approaches towards user profiling are presented here. Next, section 3 gives more detail about the adaptation of the final user interface according to the user preferences. To support this, section 4 shows a case study in which the usability of this approach is assessed. Finally, section 5 concludes this paper with a discussion of the obtained results and gives an overview of future work.

RELATED WORK
XML-based high-level UIDLs have already been widely investigated and have proven to be suitable for multi-device user interface creation [9]. Besides the integration of a user model in a few existing UIDLs, only the traditional web browsers show practical use of user agents that can take a user profile into account. UsiXML [8] is an example of a language that takes the user profile into account, and can provide the intended users with a generated interface that meets their specific requirements. The drawback is the lack of a straightforward definition of a user profile: this information is only stored in mapping rules that can influence the user interface, instead of in a separate profile that serves as input for such mapping rules. The most suitable UIDL for our goals is UIML, which is described in detail in section 3.1. The separation of user interface style properties from other aspects of a user interface provides the necessary benefits to build multi-device and personalised user interfaces. Our implementation uses Uiml.net, an extensible open source implementation of this specification that can be used on multiple devices [10].
MPEG-21 is a well-known standard framework, but most of the existing research focuses on multimedia content transformation, such as scalable video coding. Although MPEG-21 has an extensive user profile definition, it has rarely been used to adapt the user interface itself. Other related work using MPEG-21 can be found in [11]: this work emphasises universal multimedia access by adapting the content to the user's environment. The Digital Item Adaptation Engine, shown in the middle of figure 1, is the core of MPEG-21 content adaptation. Since the internals of this engine are not specified in the MPEG-21 standard, its operation is dependent on the specific implementation. In this work we are mainly interested in the Usage Environment Description Tools specification as input for a user agent that can adapt the user interface according to this information. Listing 1 shows an example of a part of an MPEG-21 document that specifies the visual impairment of the user. The use of intelligent agents in combination with MPEG-21 can also be found in other work such as [5] and [7], but with a different scope than proposed here. An intelligent agent is defined as a software entity that perceives and acts in an environment, being social, proactive, reactive and autonomous. The goal of the intelligent agents in [7] is to reduce computer-user interaction and to enhance the user's experience while consuming multimedia items. Just as in [7], the work presented in [5] is focused on content delivery, but this time a broader context is used, such as the location and surroundings of the user. The agent's main task is to take user preferences into account solely when selecting content, as most existing agent implementations do. Intelligent agents intercept the request a user makes and, instead of returning a complete set of choices concerning the chosen multimedia item, make a significant number of choices for the user automatically. We want to take this approach one step further by letting intelligent agents adapt the user interface itself instead of only deciding which content to deliver, enhancing the user's experience as well as the accessibility of the content itself.
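Listing 1 itself is not reproduced here; as an indication, a usage environment fragment describing a colour-vision deficiency could look roughly as follows. Element and type names follow our reading of the Digital Item Adaptation schema and should be checked against the normative standard before use:

```xml
<!-- Hedged sketch of an MPEG-21 DIA usage environment fragment;
     element and type names approximate the DIA schema and may
     differ in detail from the standard. -->
<DIA xmlns="urn:mpeg:mpeg21:2003:01-DIA-NS"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Description xsi:type="UsageEnvironmentType">
    <UsageEnvironmentProperty xsi:type="UsersType">
      <User>
        <UserCharacteristic xsi:type="VisualImpairmentType">
          <ColorVisionDeficiency>
            <DeficiencyType>red-deficiency</DeficiencyType>
            <DeficiencyDegree>0.8</DeficiencyDegree>
          </ColorVisionDeficiency>
        </UserCharacteristic>
      </User>
    </UsageEnvironmentProperty>
  </Description>
</DIA>
```

A user agent reading such a profile knows both the kind of impairment and its severity, and can scale its adaptations accordingly.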

High-Level User Interface Descriptions with UIML
A UIML document consists of several parts [1], shown in figure 2. The interface section describes four aspects of the user interface:

Structure: describes the "hierarchy" of the user interface. It defines the different parts contained in the user interface, and the interactor name of each part.

Style: describes properties of the parts defined in the structure. This allows properties of the interactors, such as text and colour, to be changed.

Content: separates the content of the interface from the other parts.

Behaviour: defines rules with actions that are triggered when some condition is met; this offers an event mechanism to the user interface designer.
Vocabularies are referred to as peers in the UIML specification: they contain the mapping onto the concrete user interface toolkit. The logic section describes the mappings onto the software interface used to communicate with the application logic. The style section of UIML is partly generic and partly platform-specific. The former contains the set of properties that can be applied independently of the target widget set and consumer device being used; the latter contains the set of properties that are specific to a widget set and device. If the style description is limited to the set of generic properties, the user interface can be transferred to a broader range of consumer devices. Nevertheless, the platform-specific properties make sure the special capabilities of a particular widget set or device can also be used, which is important to build more attractive user interfaces.
Listing 2 shows an example of a very basic UIML document. In the structure part, a container and a button are defined as "c_body" and "b_hello". Throughout the document we refer to these parts with these identifiers. In the style part, we define the style properties of the user interface elements: the body has a size, and the button gets a position and a label. The content part is left out in this example. In the behaviour section, an action is defined that is executed when a specified condition is fulfilled: when the user clicks the button (b_hello == ButtonPressed), the function "Console.println" is called with the specified parameter. Finally, the peers section links to an external vocabulary, shown in Listing 3, defining the mappings between the classes and properties used in this document and the "swf" (System.Windows.Forms) specific classes and properties.
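Since the listings are not reproduced here, a hedged reconstruction of such a document, based on the description above, might look as follows; tag and attribute names follow common UIML conventions and may differ in detail from the paper's Listing 2:

```xml
<!-- Sketch of a minimal UIML document: a container with one button
     that prints a message when pressed. Names are illustrative. -->
<uiml>
  <interface>
    <structure>
      <part id="c_body" class="Container">
        <part id="b_hello" class="Button"/>
      </part>
    </structure>
    <style>
      <property part-name="c_body" name="size">200,100</property>
      <property part-name="b_hello" name="position">20,20</property>
      <property part-name="b_hello" name="text">Say hello</property>
    </style>
    <behavior>
      <rule>
        <condition>
          <event part-name="b_hello" class="ButtonPressed"/>
        </condition>
        <action>
          <call name="Console.println">
            <param>Hello world!</param>
          </call>
        </action>
      </rule>
    </behavior>
  </interface>
  <peers>
    <presentation base="swf"/>
  </peers>
</uiml>
```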

Profile-aware properties
A user interface property can be expressed as a triple (p, n, v), where p identifies an interface part, n is the property name and v its value; a user profile should not remove properties that were already set. This definition implies that a user agent can change the value of a property that was specified in the user interface description, or set a value for a property that was not specified in this description. Figure 3 shows the user agent's primary function: it serves as a property adaptation mechanism that transforms the property triples contained in a UIDL according to the profile. The user agent executes a profile on a set of interface properties by interpreting the data in the profile and relating it to the properties of the user interface. The user agent only changes the style properties of a UIML document; the other parts, structure, content and behaviour, are left untouched. Since the behaviour and the structure of a user interface are mostly device-independent, this poses no problem. The content could differ according to the device being used; we rely on other MPEG-21-based mechanisms to support content adaptation, such as the systems shown in [5] or [7]. Content adaptation such as scalable video coding is outside the scope of this work; it is mostly dependent on the device profile and almost always depends on functionality of the service provider to transform the content appropriately. After the user agent has updated the properties, the UIML document is reassembled and can be rendered on screen. Notice that the properties of the resulting UIML document can no longer be divided into generic and device-specific properties, but can be labelled "profile-specific", because the properties are now targeted towards a particular device and user.
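The adaptation step above can be sketched in a few lines; this is a minimal illustration under our own assumptions, not the paper's Uiml.net implementation, and all names are hypothetical:

```python
# Sketch of the user agent's property-adaptation mechanism: a profile
# is interpreted as a set of rules that rewrite or add
# (part, property) -> value entries in the style section of a UIML
# document. Existing properties are never removed, as required above.

def adapt_properties(style, profile_rules):
    """Apply profile rules to a style mapping keyed by (part, property)."""
    adapted = dict(style)  # copy: never remove a property already set
    for (part, prop), value in profile_rules.items():
        adapted[(part, prop)] = value  # overwrite existing or add new
    return adapted

# A visually impaired profile enlarges the button text and hides its icon.
style = {("b_hello", "font-size"): "10pt",
         ("b_hello", "icon"): "hello.png"}
rules = {("b_hello", "font-size"): "18pt",
         ("b_hello", "icon-visible"): "false"}
adapted = adapt_properties(style, rules)
```

The rule set itself would be derived from the MPEG-21 profile; only the rewriting of triples is shown here.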
In our approach the structure of an interface is also considered a property. For example, the Gnome usability guidelines [3] contain an entry that states: "Arrange controls in your dialog in the direction that people read". The direction people read can be part of the user profile and requires more complex behaviour of the user agent, since it also has to rearrange the structure defined by the user interface specification. The solution is to consider the arrangement (direction) of controls as a property of their parent part: since parts are hierarchically nested, each subpart has a property that defines its placement within the parent part.

Encoding Accessibility
To provide the reader with a concrete idea of the responsibilities of the user agent software, table 1 lists some examples of MPEG-21 user characteristics and their effect on the user interface properties. The left column lists the MPEG-21 characteristic, and the right column shows which user interface properties will be manipulated to accommodate this characteristic. The properties that can be manipulated are defined in a generic UIML vocabulary that can be used to create multi-device interfaces. The mapping from the left column to the right column is executed by the user agent as part of the mapping function identified in section 3.2. For this purpose the rule shown in the right column of table 1 is written as a triple (p, n, v), and the user agent changes the value v according to some predefined adaptation strategy. For example, the Colorvisiondeficiency characteristic can impose a change of the values v1 of (p, n1, v1) and v2 of (p, n2, v2), where p matches all parts that belong to the class "labels", n1 is the foregroundcolour property and n2 is the backgroundcolour property.

Displaypresentation: change the content property of the affected user interface parts.

Audio: change the visibility property of every button linked to an audio fragment, and change the visibility property of substitute labels with an explanatory text.

Colorvisiondeficiency: change the foregroundcolour and backgroundcolour properties of labels.

MobilityCharacteristics: depending on the mobility of the user, e.g. highway vehicular, urban vehicular or pedestrian, the size and visibility properties are changed.

Illuminationcharacteristics: change the colour property of the user interface parts.

Notice that these rules should not be hard-coded in the user agent: if required, a set of universally applicable rules is selected. By making the properties of a user interface explicit through a high-level UIDL, customising the user interface towards a more usable, personal and accessible result becomes more straightforward and flexible.
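One way such a selectable rule set could be represented is as plain data mapping each characteristic to the part selectors and properties it touches. This is an illustrative sketch, not the paper's rule format, and all names are hypothetical:

```python
# Hypothetical encoding of Table 1's mapping rules: each MPEG-21
# characteristic selects the (part selector, property) pairs the
# user agent will manipulate.
MAPPING_RULES = {
    "Colorvisiondeficiency": [("class:labels", "foregroundcolour"),
                              ("class:labels", "backgroundcolour")],
    "Audio": [("class:audio-buttons", "visibility"),
              ("class:substitute-labels", "visibility")],
    "MobilityCharacteristics": [("*", "size"), ("*", "visibility")],
    "Illuminationcharacteristics": [("*", "colour")],
}

def affected_properties(profile_characteristics):
    """Collect the (part selector, property) pairs a profile touches."""
    pairs = []
    for characteristic in profile_characteristics:
        pairs.extend(MAPPING_RULES.get(characteristic, []))
    return pairs

pairs = affected_properties(["Colorvisiondeficiency", "Audio"])
```

Keeping the rules in data rather than in code is what allows a different, universally applicable rule set to be swapped in when required.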

CASE STUDY: A LOCATION-AWARE NOTE-TAKING SERVICE
In previous work, we created a mobile guide framework to support the deployment of custom location-aware services as layers on top of a map-based interface [14]. Nowadays, such services are accessed by a very diverse user audience. As an example we built a service similar to the GeoNote concept [6]: physical locations can be related to pieces of information. An annotation has a content type and can contain text, movies, audio, images or self-made drawings. A user can create an annotation at any point, add information to it and save it. Figure 4 shows this service: figure 4(a) provides a map-based interface that shows where notes are positioned, and figure 4(b) shows the dialog for adding, reading or editing notes. Notice that the user profile specifies a preference for buttons with large text instead of icons for people with a visual impairment.
Since only the dialog in figures 4(b) and 4(c) is built using UIML, there is a clear difference between the adaptation of the foreground dialog and the background map-based interface. The former adapts according to the user characteristics, while the latter does not change its representation. An application can be made completely "profile-aware" if it uses UIML for the complete user interface description. The benefit of our approach is the possibility to extend an existing user interface with new, UIML-based dialogs that are profile-aware.
Several properties of this interface are adapted according to the user profile. For example, when the current user is visually impaired, we can change the style of the UIML document by increasing the font size, enlarging the buttons and making the icons in this control invisible (or vice versa).
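In terms of the UIML style section, such an adaptation amounts to the user agent rewriting a handful of property values. The fragment below is a hypothetical illustration; the part and property names are our own, not taken from the case study's actual document:

```xml
<!-- Before adaptation (default profile): -->
<property part-name="b_add_note" name="font-size">10pt</property>
<property part-name="b_add_note" name="icon-visible">true</property>

<!-- After the user agent applies a visually impaired profile: -->
<property part-name="b_add_note" name="font-size">18pt</property>
<property part-name="b_add_note" name="icon-visible">false</property>
```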

CONCLUSIONS AND FUTURE WORK
We described an approach to create personalised and more accessible multi-device user interfaces by combining user profiles with a high-level XML-based user interface description language. A user agent processes the user characteristics described in an MPEG-21-compliant user profile, and selects and manipulates properties of the user interface accordingly. UIML is used to describe these user interfaces because it provides a clean separation of the user interface structure, style, content and behaviour. UIML is used to create multi-device user interfaces, but it only describes the presentation of a user interface. In a broader context, the user profile influences not only the presentation, but also the tasks the user executes and the navigation through the user interface. The next step in our work is to also take into account the task model that specifies the different tasks the user interface supports, and to adapt this model according to the user profile.
We demonstrated the use of this approach with an example case study. A location-aware service for a PDA could adapt its user interface according to the user profile, although its interface was written only once in UIML. This case study shows the approach is feasible and can reduce the time needed to build user interfaces that adapt according to a user profile and a device. The designer can create this type of interface by means of two orthogonal specifications (MPEG-21 and UIML) that are used independently of each other.

FIGURE 2: Structure of a UIML document

FIGURE 3: Applying a user profile to a high-level user interface description

Figure 4(c) shows what the same user interface looks like with the profile of a visually impaired person.

Accessible Design in the Digital World Conference 2005

(a) Service running on PDA (b) Normal sight (c) Visually impaired

FIGURE 4: Location-aware note-taking service with different user profiles

TABLE 1: Example Profile Mapping Rules