Please note that slight changes may still be made (last updated: 20th June 2012).
You can download the complete Programme Guide here (.pdf).
The final CMMR 2012 Proceedings are available for download here (.pdf).
Tuesday 19th June 2012
9:00-10:00 | Registration |
10:00-12:30 | Tutorial/Workshop 1: Pure Data and Sound Design (Andy Farnell) |
10:00-12:30 | Tutorial 2: Musicology and Music Information Retrieval Tools (Daniel Leech-Wilkinson and Dan Tidhar) |
10:00-12:30 | CMMR 2012 Music Concert and C4DM Recording and Performance Space Tour |
11:00-12:00 | Coffee Break |
12:30-13:30 | Lunch |
13:30-17:00 | Cross-Disciplinary Perspectives on Expressive Performance Workshop, supported by the Arts and Humanities Research Council (AHRC) |
15:00-17:00 | Tour of British Library Sound Studios |
15:00-16:00 | Coffee Break |
18:30-19:30 | Welcome Reception (Balconies of Wilton's Grand Music Hall) |
20:00-22:00 | New Resonances Festival at Wilton's Music Hall (Concert 1) |
Wednesday 20th June 2012
9:00-9:30 | Registration |
9:30-9:45 | Welcome and Announcements |
9:45 - 10:45 | Keynote Talk 1: "Hearing with our hearts: Psychological perspectives on music and emotions" (Prof. Patrik N. Juslin) |
10:45 - 11:00 | Coffee break |
11:00 - 12:20 | Oral session 1: Music Emotion Analysis (4 papers) |
12:20 - 12:40 | Yamaha Talk |
12:40 - 14:00 | Lunch |
14:00 - 15:00 | Poster Session 1: Music Emotion: Analysis, Retrieval, and Multimodal Approaches, Synthesis, Symbolic Music-IR, Spatial Audio, Performance, Semantic Web |
15:00 - 16:40 | Oral Session 2: 3D Audio and Sound Synthesis (5 papers) |
16:40 - 17:00 | Coffee break |
17:00 - 18:00 | Panel 1: "Production Music: Mood and Metadata" (Dr. Mathieu Barthet, David Marston, Andy Hill, Alex Black, Martyn Davies, Marco Perry) |
20:00 - 22:00 | New Resonances Festival at Wilton's Music Hall (Concert 2) |
Thursday 21st June 2012
9:00-9:30 | Registration |
9:30 - 10:30 | Keynote Talk 2: "The why, how, and what of sparse representations for audio and acoustics" (Prof. Laurent Daudet) |
10:30 - 11:00 | Coffee break |
11:00 - 12:20 | Oral Session 3: Computer Models of Music Perception and Cognition: Applications and Implications for MIR (4 papers) |
12:20 - 12:40 | myfii Talk |
12:40 - 13:40 | Lunch |
13:40 - 15:00 | Poster session 2: Computer Models of Music Perception and Cognition, Music Information Retrieval, Music Similarity and Recommendation, Computational musicology, Intelligent Music Tuition Systems |
15:00 - 16:40 | Oral session 4: Music Emotion Recognition (5 papers) |
16:40 - 17:00 | Coffee break |
17:00 - 18:30 | Panel 2: "The Future of Music Information Research" (Prof. Geraint A. Wiggins, Prof. Joydeep Bhattacharya, Prof. Tim Crawford, Dr. Alan Marsden, Prof. John Sloboda), supported by the EU-FP7 project "Roadmap for Music Information ReSearch" (MIReS) |
20:00 - 00:00 | Gala Dinner at Under The Bridge (Chelsea Football Club), followed by BBT Concert and Open Jam Session |
Friday 22nd June 2012
9:00-9:30 | Registration |
9:30 - 10:30 | Keynote Talk 3: "Music In Cinema: How Soundtrack Composers Act On The Way People Feel" (Simon Boswell) |
10:30 - 11:00 | Coffee break |
11:00 - 12:40 | Oral Session 5: Music Information Retrieval (5 papers) |
12:40 - 13:40 | Lunch |
13:40 - 14:40 | Demo Session |
13:40 - 14:40 | Yamaha Showcase |
14:40 - 15:40 | Oral Session 6: Film Soundtrack and Music Recommendation (3 papers) |
15:40 - 16:00 | Coffee break |
16:00 - 17:20 | Oral Session 7: Computational Musicology and Music Education (4 papers) |
19:00 - 22:00 | New Resonances Festival at Wilton's Music Hall (Concert 3) |
Workshops Schedule
Cross-Disciplinary Perspectives on Expressive Performance Workshop
Tuesday 19th June 2012, 13:30-17:00
- J. P. Ito, "Focal Impulses and Expressive Performance"
- D. Leech-Wilkinson, "Rubato and Melodic Direction"
- F. Gualda and R. Yamaguchi, "(Re)Shaping Musical Gesture - an interface for studying performers' expressive cues"
- A. Kirke, E. Miranda and S. Nasuto, "Learning to Make Feelings: Expressive Performance as a part of a machine learning tool for sound-based emotion therapy and control"
- A. Tanaka, "Intention, Effort, and Restraint in Expressive Biosensor Musical Performance"
- E. Chew and C. Callender, "Absolute Tempo vs. Log(Tempo), Score Time vs. Performance Time"
- R. Timmers and S. Tucker, "Applying performance analysis techniques to assist practicing expression: two extensions of low-level visual feedback on expressive drum performances"
- S. Flossmann and G. Widmer, "Towards an Evaluation Scheme for Expressive Performance Renderings"
- K. Jensen and S. Frimodt-Møller, "MoCap and Audio Feature Analysis of Performances Portraying Different Emotions"
- A. McPherson and A. Stark, "Modelling Expressive Piano Technique using Capacitive Touch Sensing on the Keyboard"
Oral, Poster, and Demo Sessions Schedule
Oral session 1: Music Emotion Analysis
Wednesday 20th June 2012, 11:00-12:20
- 11:00 - 11:20 Expressive Dimensions In Music (Tom Cochrane and Olivier Rosset)
- 11:20 - 11:40 Emotion in Motion: A Study of Music and Affective Response (Javier Jaimovich, Niall Coghlan and R. Benjamin Knapp)
- 11:40 - 12:00 Psychophysiological measures of emotional response to Romantic orchestral music and their musical and acoustic correlates (Konstantinos Trochidis, David Sears, Dieu-Ly Tran and Stephen McAdams)
- 12:00 - 12:20 CCA and Multi-way Extension for Investigating Common Components Between Audio, Lyrics and Tags (Matt McVicar and Tijl De Bie)
Oral session 2: 3D Audio and Sound Synthesis
Wednesday 20th June 2012, 15:00-16:40
- 15:00 - 15:20 A 2D Variable-Order, Variable-Decoder, Ambisonics based Music Composition and Production Tool for an Octagonal Speaker Layout (Martin Morrell and Joshua Reiss)
- 15:20 - 15:40 Perceptual characteristic and compression research in 3D audio technology (Ruimin Hu, Shi Dong, Heng Wang, Maosheng Zhang, Song Wang and Dengshi Li)
- 15:40 - 16:00 Rolling Sound Synthesis: Work In Progress (Simon Conan, Mitsuko Aramaki, Richard Kronland-Martinet and Sølvi Ystad)
- 16:00 - 16:20 EarGram: an Application for Interactive Exploration of Large Databases of Audio Snippets for Creative Purposes (Gilberto Bernardes, Carlos Guedes and Bruce Pennycook)
- 16:20 - 16:40 From Shape to Sound: Sonification of Two Dimensional Curves By Reenaction of Biological Movements (Etienne Thoret, Mitsuko Aramaki, Richard Kronland-Martinet, Sølvi Ystad and Jean-Luc Velay)
Poster session 1: Music Emotion: Analysis, Retrieval and Multimodal Approaches, Synthesis, Symbolic Music-IR, Spatial audio, Performance, Semantic Web
Wednesday 20th June 2012, 13:00-15:00
- Music Emotion Regression Based on Multi-modal Features (Di Guan, Xiaoou Chen and Deshun Yang)
- Application of Free Choice Profiling for the Evaluation of Emotions Elicited by Music (Judith Liebetrau, Sebastian Schneider and Roman Jezierski)
- SUM: From Image-Based Sonification to Computer-Aided Composition (Sara Adhitya and Mika Kuuskankare)
- Automatic Interpretation of Chinese Traditional Musical Notation Using Conditional Random Field (Rongfeng Li, Yelei Ding, Wenxin Li and Minghui Bi)
- Music Dramaturgy and Human Reactions: Music as a Means for Communication (Javier Alejandro Garavaglia)
- ENP-Regex - a Regular Expression Matcher Prototype for the Expressive Notation Package (Mika Kuuskankare)
- Sonic Choreography for Surround Sound Environments (Tommaso Perego)
- An Investigation of Music Genres and Their Perceived Expression Based on Melodic and Rhythmic Motifs (Débora C. Corrêa, F. J. Perez-Reche and Luciano Da F. Costa)
- A Synthetic Approach to the Study of Musically-induced Emotions (Sylvain Le Groux and Paul Verschure)
- Timing Synchronization in String Quartet Performance: a Preliminary study (Marco Marchini, Panos Papiotis and Esteban Maestre)
- Predicting Time-Varying Musical Emotion Distributions from Multi-Track Audio (Jeffrey Scott, Erik Schmidt, Matthew Prockup, Brandon Morton and Youngmoo Kim)
- Codebook Design Using Simulated Annealing Algorithm for Vector Quantization of Line Spectrum Pairs (Fatiha Merazka)
- Pulsar Synthesis Revisited: Considerations for a MIDI Controlled Synthesiser (Thomas Wilmering, Thomas Rehaag and André Dupke)
- Knowledge Management On The Semantic Web: A Comparison of Neuro-Fuzzy and Multi-Layer Perceptron Methods For Automatic Music Tagging (Sefki Kolozali, Mathieu Barthet and Mark Sandler)
Oral session 3: Computer Models of Music Perception and Cognition: Applications and Implications for Music Information Retrieval
Thursday 21st June 2012, 11:00-12:20
- 11:00 - 11:20 The Role of Time in Music Emotion Recognition (Marcelo Caetano and Frans Wiering)
- 11:20 - 11:40 The Intervalgram: An Audio Feature for Large-scale Melody Recognition (Thomas C. Walters, David Ross and Richard F. Lyon)
- 11:40 - 12:00 Perceptual dimensions of short audio clips and corresponding timbre features (Jason Jiří Musil, Budr Al-Nasiri and Daniel Müllensiefen)
- 12:00 - 12:20 Towards Computational Auditory Scene Analysis: Melody Extraction from Polyphonic Music (Karin Dressler)
Oral session 4: Music Emotion Recognition
Thursday 21st June 2012, 15:00-16:40
- 15:00 - 15:20 Multidisciplinary Perspectives on Music Emotion Recognition: Implications for Content and Context-Based Models (Mathieu Barthet, Gyorgy Fazekas and Mark Sandler)
- 15:20 - 15:40 A Feature Survey for Emotion Classification of Western Popular Music (Scott Beveridge and Don Knox)
- 15:40 - 16:00 Support Vector Machine Active Learning for Music Mood Tagging (Alvaro Sarasua, Cyril Laurier and Perfecto Herrera)
- 16:00 - 16:20 Modeling Expressed Emotions in Music using Pairwise Comparisons (Jens Madsen, Jens Brehm Nielsen, Bjørn Sand Jensen and Jan Larsen)
- 16:20 - 16:40 Relating Perceptual and Feature Space Invariances in Music Emotion Recognition (Erik Schmidt, Matthew Prockup, Jeffrey Scott, Brian Dolhansky, Brandon Morton and Youngmoo Kim)
Poster session 2: Computer Models of Music Perception and Cognition*, Music Information Retrieval, Music Similarity and Recommendation, Computational musicology, Intelligent Music Tuition Systems
Thursday 21st June 2012, 12:40-15:00
(Posters for special session on Computer Models of Music Perception and Cognition are indicated with *)
- Predicting Emotion from Music Audio Features Using Neural Networks* (Naresh Vempala and Frank Russo)
- Multiple Viewpoint Modeling of North Indian Classical Vocal Compositions* (Ajay Srinivasamurthy and Parag Chordia)
- Comparing Feature-Based Models of Harmony* (Martin A. Rohrmeier and Thore Graepel)
- Music Listening as Information Processing* (Eliot Handelman and Andie Sigler)
- On Automatic Music Genre Recognition by Sparse Representation Classification using Auditory Temporal Modulations (Bob Sturm and Pardis Noorzad)
- A Survey of Music Recommendation Systems and Future Perspectives (Yading Song, Simon Dixon and Marcus Pearce)
- A Spectral Clustering Method for Musical Motifs Classification (Alberto Pinto)
- Songs2See: Towards a New Generation of Music Performance Games (Estefania Cano, Sascha Grollmisch and Christian Dittmar)
- A Music Similarity Function Based on the Fisher Kernels (Jin S. Seo, Nocheol Park and Seungjae Lee)
- Automatic Performance of Black and White n.2: The Influence of Emotions Over Aleatoric Music (Luca Andrea Ludovico, Adriano Baratè and Stefano Baldan)
- The Visual SDIF interface in PWGL (Mika Kuuskankare)
- Application of Pulsed Melodic Affective Processing to Stock Market Algorithmic Trading and Analysis (Alexis Kirke and Eduardo Miranda)
- A Graph-Based Method for Playlist Generation (Débora C. Corrêa, Alexandre L. M. Levada and Luciano Da F. Costa)
- Compression-Based Clustering of Chromagram Data: New Method and Representations (Teppo Ahonen)
- GimmeDaBlues: An Intelligent Jazz/Blues Player And Comping Generator for iOS devices (Rui Dias, Telmo Marques, George Sioros and Carlos Guedes)
Oral session 5: Music Information Retrieval
Friday 22nd June 2012, 11:00-12:40
- 11:00 - 11:20 Automatic Identification of Samples in Hip Hop Music (Jan Van Balen, Martín Haro and Joan Serrà)
- 11:20 - 11:40 Novel use of the variogram for MFCCs modeling (Simone Sammartino, Lorenzo J. Tardon and Isabel Barbancho)
- 11:40 - 12:00 Automatic String Detection for Bass Guitar and Electric Guitar (Jakob Abesser)
- 12:00 - 12:20 Improving Beat Tracking in the Presence of Highly Predominant Vocals Using Source Separation Techniques: Preliminary Study (Jose Zapata and Emilia Gomez)
- 12:20 - 12:40 Oracle Analysis of Sparse Automatic Music Transcription (Ken O'Hanlon, Hidehisa Nagano and Mark Plumbley)
Oral session 6: Film Soundtrack and Music Recommendation
Friday 22nd June 2012, 14:40-15:40
- 14:40 - 15:00 The influence of music on the emotional interpretation of visual contexts - Designing Interactive Multimedia Tools for Psychological Research (Fernando Bravo)
- 15:00 - 15:20 The Perception of Auditory-visual Looming in Film (Sonia Wilkie and Tony Stockman)
- 15:20 - 15:40 Taking Advantage of Editorial Metadata to Recommend Music (Dmitry Bogdanov and Perfecto Herrera)
Oral session 7: Computational Musicology and Music Education
Friday 22nd June 2012, 16:00-17:20
- 16:00 - 16:20 Bayesian MAP estimation of piecewise arcs in tempo time-series (Dan Stowell and Elaine Chew)
- 16:20 - 16:40 Structural Similarity Based on Time-span Tree (Satoshi Tojo and Keiji Hirata)
- 16:40 - 17:00 Subject and Counter-subject Detection for Analysis of the Well-Tempered Clavier Fugues (Mathieu Giraud, Richard Groult and Florence Levé)
- 17:00 - 17:20 Enabling Participants to Play Rhythmic Solos Within a Group via Auctions (Arjun Chandra, Kristian Nymoen, Arve Voldsund, Alexander Refsum Jensenius, Kyrre Glette and Jim Torresen)
Demo session
Friday 22nd June 2012, 12:40-14:40
- Development of a Test to Objectively Assess Skills (Lily Law and Marcel Zentner)
- Soi Moi... (n + n Corsino and Jacques Diennet)