Exposing Parameters of a Trained Dynamic Model for Interactive Music Creation
- Dan Morris,
- Ian Simon,
- Sumit Basu
AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence, Volume 2
Published by AAAI Press
As machine learning (ML) systems emerge in end-user applications, learning algorithms and classifiers will need to be robust to an increasingly unpredictable operating environment. In many cases, the parameters governing a learning system cannot be optimized for every user scenario, nor can users typically manipulate parameters defined in the space and terminology of ML. Conventional approaches to user-oriented ML systems have typically hidden this complexity from users by automating parameter adjustment. We propose a new paradigm in which model and algorithm parameters are exposed directly to end users with intuitive labels, suitable for applications where parameters cannot be automatically optimized or where there is additional motivation, such as creative flexibility, to expose, rather than fix or automatically adapt, learning parameters. In our CHI 2008 paper, we introduced and evaluated MySong, a system that uses a Hidden Markov Model to generate chords to accompany a vocal melody. The present paper formally describes the learning underlying MySong and discusses the mechanisms by which MySong's learning parameters are exposed to users, as a case study in making ML systems user-configurable. We discuss the generalizability of this approach and propose that intuitively exposing ML parameters is a key challenge for the ML and human-computer interaction communities.
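To make the idea of an exposed model parameter concrete, the sketch below shows a standard Viterbi decode over chord states for a chord-level Hidden Markov Model, with a single user-facing `observation_weight` knob that trades off how strongly the decoded chords track the sung melody versus common chord progressions from training data. This is a minimal illustration under assumed inputs; the `observation_weight` name and its exact role are hypothetical here and are not MySong's actual parameterization, which the paper describes in detail.

```python
import numpy as np

def decode_chords(emission_logp, transition_logp, observation_weight=1.0):
    """Viterbi decoding over chord states for a melody.

    emission_logp:      (T, C) log-probability that each of T measures' melody
                        notes were sung over each of C candidate chords.
    transition_logp:    (C, C) log-probability of moving from chord i to chord j.
    observation_weight: hypothetical user-facing knob; larger values make the
                        chords follow the melody more closely, smaller values
                        favor frequent chord transitions from the training data.
    """
    T, C = emission_logp.shape
    score = np.full((T, C), -np.inf)
    backptr = np.zeros((T, C), dtype=int)

    score[0] = observation_weight * emission_logp[0]
    for t in range(1, T):
        # For each current chord, pick the best previous chord to come from.
        candidates = score[t - 1][:, None] + transition_logp
        backptr[t] = np.argmax(candidates, axis=0)
        score[t] = (candidates[backptr[t], np.arange(C)]
                    + observation_weight * emission_logp[t])

    # Trace back the highest-scoring chord sequence.
    chords = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        chords.append(int(backptr[t, chords[-1]]))
    return chords[::-1]
```

In this framing, the weighting could be surfaced to users as a labeled slider rather than as a log-probability scale factor, which is the kind of mapping from an ML-internal parameter to an intuitive control that the paper takes as its case study.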