Technology-enhanced Learning for Music with I-maestro Framework and Tools

This paper presents a project called i-Maestro (www.i-maestro.org) which develops interactive multimedia environments for technology-enhanced music education. Guided by an analysis of pedagogical needs, the project develops enabling technologies to support music performance and theory training, including tools based on augmented instruments, gesture analysis, audio analysis and processing, score following, symbolic music representation, cooperative support and exercise generation for tuition, self-learning, and collaborative work scenarios. This paper briefly describes the context and background of the project, together with an overview of the framework and a number of different tools to support technology-enhanced music learning and teaching.


INTRODUCTION
i-Maestro [1,18,12] explores novel solutions for music training in both theory and performance, with a particular focus on bowed string instruments. Music performance is not simply a matter of playing the right note at the right time. Among the many challenging aspects of music education, we are particularly interested in linking music practice and theory training, looking at interactivity, expressivity and accessibility. New pedagogical approaches are being studied with interactive cooperative and self-learning environments, and computer-assisted tuition in classrooms including gesture interfaces and augmented instruments.

i-Maestro
Guided by an analysis of pedagogical needs, the project develops enabling technologies to support music performance and theory training, including tools based on augmented instruments, gesture analysis, audio analysis and processing, score following, symbolic music representation, cooperative support and exercise generation for tuition, self-learning, and collaborative work scenarios. Key objectives of the project include:
• Basic research and development to support and enhance music learning and teaching
• Exploration of new pedagogical approaches in music education to improve access to musical knowledge
• Enhancement of the connection between practice and theory training
• Creation of an interactive multimedia environment with tools and services for technology-enhanced music education
i-Maestro offers a flexible, interactive multimedia framework and supporting tools that build on recent innovations in computer and information technologies. It offers pedagogic solutions and tools to maximise efficiency, motivation and interest in the learning process and to improve accessibility to musical knowledge. The i-Maestro components are described in the following sections.

FRAMEWORK AND TOOLS
An overview of the framework is given in Figure 1. In addition, guidelines for accessibility in technology-enhanced music training have been developed [2,5]. This section highlights several i-Maestro tools that support different aspects of music learning and teaching.

Symbolic Music Representation
Music notation is one of the fundamentals of music education. i-Maestro is promoting MPEG Symbolic Music Representation (SMR), an ISO standard for the representation of music notation with enhanced multimedia features [3,4,8,9,15]. An MPEG SMR player/decoder has been implemented within the IM1 MPEG-4 reference software (see Figure 2).

Music Training Supports
The i-Maestro Sound and Gesture Lab includes advanced audio analysis, gesture- and score-following algorithms that provide feedback and accompaniment, allowing new kinds of musical interaction (see Figure 3a). The Gesture Follower can track a performed gesture in real time and compare it with pre-recorded gestures for a variety of pedagogical applications (see Figure 3b). The Augmented Violin allows bowing gestures to be tracked and studied (see Figure 3c). The i-Maestro 3D Augmented Mirror (AMIR) [11,13,14,17,18] captures and visualises the performance in 3D. It offers a number of different analyses to support the teaching and learning of bowing technique and body posture. The tool provides interactive multimodal feedback, online and offline, with visualisation and sonification [19]. Figure 5 shows the bow tracking with automated bowing annotation on SMR. Using data from the extracted features and analyses, we developed an approach to visualise the similarity of bowing features in order to facilitate their study in relation to bowing technique and musical expression descriptors.
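To illustrate the idea of comparing a performed gesture against pre-recorded templates, the following is a minimal sketch. The actual Gesture Follower uses probabilistic real-time models; this sketch substitutes a simple dynamic time warping (DTW) comparison, and the template names and trace values are purely illustrative.

```python
from math import inf

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D gesture traces,
    tolerant of differences in tempo between performances."""
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Pre-recorded template gestures (e.g. bow-velocity traces); illustrative values.
templates = {
    "detache": [0.1, 0.6, 0.9, 0.6, 0.1],
    "martele": [0.9, 0.8, 0.2, 0.1, 0.0],
}

def closest_template(performed):
    """Name of the template gesture most similar to the performed trace."""
    return min(templates, key=lambda name: dtw_distance(performed, templates[name]))

# A live trace that rises and falls like the detache template, at a slower tempo:
print(closest_template([0.2, 0.5, 0.8, 1.0, 0.7, 0.2]))
```

DTW is used here only because it handles traces of different lengths; any distance over time series could stand in for the comparison step.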
We use the k-means clustering method to compute the mean of a particular dataset in order to study the differences in captured data for a range of these qualities. For example, by recording data for the same musical material played with different bowing styles (e.g. tenuto, staccato, spiccato) and for each technique P_1, P_2, …, P_N, a training dataset of feature vectors may be collected. For an input dataset p_j, for example a capture of a student practising, we find the distance ε to each cluster and select the cluster with the minimum ε. This indicates that the data from the capture is similar to that in the selected cluster (and hence corresponds to the bowing style or expression represented by that cluster).
To visualise the extracted features and the relationships between analyses of different bow strokes, we have created the mav.graph.3dobject. This object currently takes up to three different feature inputs (note that the clustering analysis itself is not limited to three features), from arbitrary mav.stats objects, and plots clusters in three-dimensional space. Figure 7 illustrates how the features can be displayed to show the grouping and variation of bow strokes based on analysis of the bow-velocity, bow-section, and bow-length features from a recorded performance, by linking each feature to one of the dimensions of the graph. Bow strokes played with a similar expression appear in the same area of the graph as clusters, and as a new stroke is added the matching cluster is highlighted to show that the stroke was similar. The more similar the bow strokes are, the more compact the clusters appear in the graph. We also indicate the distance from the centroid of the cluster. This technique is implemented to provide an easy-to-understand, high-level interface for non-technical users such as musicians and teachers to study the similarity of different bow strokes. As well as comparing strokes to previous strokes in the same performance, it is also possible to load analysis data sets from other recordings to facilitate the comparison between two performances.
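The cluster-matching step above can be sketched as follows. This is a minimal illustration, not the i-Maestro implementation (which uses the mav.stats and mav.graph objects of the Sound and Gesture Lab); the feature names and values are invented for the example.

```python
from math import dist  # Euclidean distance, Python 3.8+

# Training data per bowing technique: feature vectors of
# (bow velocity, bow section, stroke length). Values are illustrative.
training = {
    "tenuto":   [(0.20, 0.50, 0.90), (0.22, 0.48, 0.85), (0.19, 0.52, 0.88)],
    "staccato": [(0.80, 0.30, 0.20), (0.78, 0.33, 0.22), (0.82, 0.28, 0.18)],
    "spiccato": [(0.90, 0.10, 0.15), (0.88, 0.12, 0.17), (0.92, 0.09, 0.14)],
}

def centroid(vectors):
    """Mean of a set of feature vectors, i.e. the cluster centre."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

# One cluster centroid per technique P_1 ... P_N.
centroids = {name: centroid(vs) for name, vs in training.items()}

def match_stroke(features):
    """Return (technique, epsilon): the cluster whose centroid is closest
    to the input feature vector, and the distance epsilon to it."""
    distances = {name: dist(features, c) for name, c in centroids.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]

# A captured stroke from a practising student:
technique, eps = match_stroke((0.79, 0.31, 0.21))
print(technique)  # the bowing style whose cluster best matches the capture
```

The same distances can drive the 3D display described above: each feature maps to one axis, and the minimum-ε cluster is the one to highlight when a new stroke arrives.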
The SDIF format is being used for the storage of 3D motion and other sensor data, allowing data exchange between the i-Maestro gesture analysis tools [20]. Figure 8 shows SDIF data rendered using the Sound and Gesture Lab, with an example of the bow-stroke direction and angle data from the AMIR together with the recorded sound.

Cooperative Environment
Cooperative work is another key area of music education. The i-Maestro Cooperative Environment allows different components of the i-Maestro framework to be used across a network.

Exercise Generator
The i-Maestro Exercise Generator supports (semi-)automated creation of exercises and variations of music material using an extensible set of algorithms and templates.
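As a minimal sketch of template-based exercise generation (the motif, template set and interval choices here are illustrative assumptions, not the i-Maestro algorithms):

```python
import random

# A simple motif as MIDI note numbers (C4 D4 E4 G4); purely illustrative.
MOTIF = [60, 62, 64, 67]

def transpose(notes, semitones):
    """Transposition template: shift every note by a fixed interval."""
    return [n + semitones for n in notes]

def retrograde(notes):
    """Retrograde template: play the material backwards."""
    return list(reversed(notes))

def generate_exercise(notes, seed=None):
    """Apply a randomly chosen variation template to the music material.
    An extensible template set would simply add callables to the choice."""
    rng = random.Random(seed)
    if rng.random() < 0.5:
        return transpose(notes, rng.choice([-5, -2, 2, 5, 7]))
    return retrograde(notes)

print(generate_exercise(MOTIF, seed=1))  # one generated variation
```

A real generator would of course emit notation (e.g. SMR) rather than pitch lists, and would constrain the templates to the pedagogical goal of the lesson.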

School Server
The i-Maestro School Server offers online access to stored lesson material.

Integration
The combination of tools leads to new functionality, e.g. the automatic annotation of a score with bowing symbols in real time while a musician is playing, which is achieved by combining the score follower, motion capture and SMR support. An application (called i-Maestro Start) has been created to offer students and teachers a single point from which to launch all the tools offered by i-Maestro. With the tools now available, initial validation has started with teachers in music schools and conservatoires.

CONCLUSION
The project continued its work on pedagogical aspects, enabling technologies, software components, integration and validation activities. An overarching pedagogical approach and model [16] for technology-enhanced teaching and learning has been developed. On this basis, a set of detailed pedagogical scenarios related to the use of the i-Maestro tools has been created. This paper presented a brief overview of the i-Maestro project: the overall framework design and several tools to support music learning and teaching, including MPEG SMR for theory training and gesture analysis for performance training, with a particular focus on 3D gesture and posture support using the 3D Augmented Mirror.
The final results consist of a framework for technology-enhanced music training that combines proven and novel pedagogical models with technological tools such as collaborative work support, symbolic music processing, audio processing, and gesture interfaces. Offering accessible tools for music performance and theory training will encourage wide participation.
Prototype tools are now available and are expected to be incorporated into various new products and services, which will be made available to both the general public and educational establishments. These are being validated and refined, and the project is inviting music teachers and students to take part in the validation phase. We are particularly interested in testing the system in real pedagogical situations to see how teachers and students interact with the technology. At ICSRiM, University of Leeds (UK), open lab sessions are being organised for people to come and try out the i-Maestro 3D Augmented Mirror system with a 12-camera motion capture system.

Figure 1. An overview of the i-Maestro architecture.

Figure 3. The Score Follower listens to the player and provides automated "page turning" and accompaniment.

Figure 4. AMIR for 3D visualisation and sonification of a bowing exercise.

Figure 9. Cooperative interface for ear training.

Figure 10. A view of the i-Maestro Braille editor for music.

Figure 11. A screenshot of the i-Maestro Exercise Generator.

Figure 12. i-Maestro School Server.