The Journal of Creative Music Systems


The Journal of Creative Music Systems (JCMS) is a peer-reviewed online open-access journal focused on theoretical and empirical aspects of computational creative systems in the domain of music. JCMS is intended to draw together scholars in a number of disciplines in the new field of the Computer Simulation of Musical Creativity (CSMC) and to serve as a forum for scholarly dialogue regarding the most important issues in the field. Issues are published biannually and include articles, research reports, reviews and tutorials.

Issue 2 of the Journal of Creative Music Systems

http://jcms.org.uk/issues/Vol1Issue2/toc.html
The journal is open access and published by the University of Huddersfield Press.

Contents include:

Articles

* Formalizing Fado – A Contribution to Automatic Song Making
Tiago Gonzaga Videira, Bruce Pennycook, Jorge Martins Rosa

This article is a contribution to the formalisation of the process of song composition, focusing on a case study: Portuguese fado (a performance practice). Based on previous musicological studies, we describe a theoretical model which is parametrised using empirical data retrieved from a representative symbolic corpus. Starting from the model, we present the formalisation of a symbolic artificial intelligence implemented as a generative system. This system is able to automatically generate instrumental music based on the music and vocal sounds typically associated with fado practice. The output of the system is evaluated using both a supervised classification system and a quasi-Turing test. We conclude that the output of our system approximates that of the corpus and is perceived as pleasant by human listeners.
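
The abstract above mentions a classification-based evaluation of the generated output against the corpus. As a rough illustration only, the hedged Python sketch below trains a classifier on simple pitch-class-histogram features to recognise a "corpus" style and then checks how generated pieces are labelled; the features, classifier, and synthetic data are illustrative assumptions, not the article's actual method.

```python
# Hedged sketch of a classification-based evaluation: can a classifier
# trained to recognise the corpus style also "accept" the generated pieces?
import numpy as np
from sklearn.linear_model import LogisticRegression

def pitch_class_histogram(midi_pitches):
    """Normalised 12-bin pitch-class histogram of a sequence of MIDI pitches."""
    hist = np.zeros(12)
    for p in midi_pitches:
        hist[p % 12] += 1
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
# Stand-in data: "corpus" pieces biased toward one diatonic scale, "other" pieces uniform.
corpus = [rng.choice([0, 2, 4, 5, 7, 9, 11], size=120) + 60 for _ in range(40)]
other = [rng.integers(48, 84, size=120) for _ in range(40)]

X = np.array([pitch_class_histogram(p) for p in corpus + other])
y = np.array([1] * len(corpus) + [0] * len(other))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pieces from a hypothetical generative system, drawn from the same scale here.
generated = [rng.choice([0, 2, 4, 5, 7, 9, 11], size=120) + 60 for _ in range(10)]
Xg = np.array([pitch_class_histogram(p) for p in generated])
print("fraction of generated pieces classified as corpus-style:", clf.predict(Xg).mean())
```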

* Computer-Generated Stylistic Compositions With Long-term Repetitive and Phrasal Structure
Tom Collins, Robin Laney

This article describes and evaluates an algorithm called Racchmaninof-Jun2015, referred to hereafter as Racchmaninof, which generates passages of music in a specifiable style. For generating all four parts of a Bach hymn (one of two target styles evaluated as part of a listening study – the other being Chopin mazurkas), we found that only five out of 25 participants performed significantly better than chance at distinguishing Racchmaninof’s output from original human compositions. These participants had a mean of 8.56 years of formal musical training and a modal “daily/weekly” regularity of playing an instrument or singing. In the context of relatively high levels of musical expertise, this difficulty of distinguishing Racchmaninof’s output from original human compositions underlines the promise of our approach. Current trends and issues in the area of automatic stylistic composition are introduced and discussed, and we consider the potential for applying our algorithm to additional composers and/or genres of music.
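
For readers unfamiliar with the "significantly better than chance" criterion used in the listening study above, here is a minimal sketch of how such a per-participant test could be run as a one-sided binomial test. The trial count and number of correct responses are hypothetical; the abstract does not report per-participant trial counts or the exact test used.

```python
# Hedged sketch: one-sided binomial test of a single participant's
# human-vs-computer judgements against chance-level (p = 0.5) guessing.
from scipy.stats import binomtest

n_trials = 20     # assumed number of discrimination trials per participant (hypothetical)
n_correct = 16    # hypothetical number of correct identifications

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"one-sided p-value vs. chance: {result.pvalue:.4f}")
# A participant would count as "significantly better than chance" when this
# p-value falls below the chosen significance level (e.g. 0.05).
```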

* Generating Time: Rhythmic Perception, Prediction and Production with Recurrent Neural Networks
Andrew J. Elmsley, Tillman Weyde, Newton Armstrong

Keywords

Music perception, rhythm generation, machine learning, neural networks, expressive timing

Abstract

In the quest for a convincing musical agent that performs in real time alongside human performers, the issues surrounding expressively timed rhythm must be addressed. Current beat tracking methods are not sufficient to follow rhythms automatically when dealing with varying tempo and expressive timing. In the generation of rhythm, some existing interactive systems ignore the pulse entirely, or fix a tempo after some time spent listening to input. Since music unfolds in time, we take the view that musical timing needs to be at the core of a music generation system.

Our research explores a connectionist machine learning approach to expressive rhythm generation, based on cognitive and neurological models. Two neural network models are combined within one integrated system. A Gradient Frequency Neural Network (GFNN) models the perception of periodicities by resonating nonlinearly with the musical input, creating a hierarchy of strong and weak oscillations that relate to the metrical structure. A Long Short-term Memory Recurrent Neural Network (LSTM) models longer-term temporal relations based on the GFNN output.

The output of the system is a prediction of when in time the next rhythmic event is likely to occur. These predictions can be used to produce new rhythms, forming a generative model.

We have trained the system on a dataset of expressively performed piano solos and evaluated its ability to accurately predict rhythmic events. Based on the encouraging results, we conclude that the GFNN-LSTM model has great potential to add the ability to follow and generate expressive rhythmic structures to real-time interactive systems.
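
To make the oscillator-bank idea above a little more concrete, the following is a much-simplified sketch: a bank of linear damped resonators at log-spaced frequencies is driven by an onset impulse train, and the resonators whose frequencies match periodicities in the rhythm accumulate the most energy. The paper's GFNN uses Large's canonical nonlinear oscillator model and feeds its output to an LSTM; the linear resonators, frequency range, and parameters below are illustrative assumptions only.

```python
import numpy as np

# Simplified, *linear* stand-in for a gradient frequency oscillator bank:
# each resonator decays and rotates at its own frequency and receives a
# unit impulse at every rhythmic onset.
fs = 100.0                       # analysis rate in Hz (assumed)
dt = 1.0 / fs
freqs = np.logspace(np.log10(0.5), np.log10(8.0), 48)   # 0.5-8 Hz, roughly the metrical range
damping = 1.0                    # per-second decay of each resonator

# Toy rhythm: onsets roughly every 0.5 s with a slight "swing" on the off-beats.
onset_times = [0.0, 0.55, 1.0, 1.55, 2.0, 2.55, 3.0, 3.55]
T = int(4.0 * fs)
drive = np.zeros(T)
for t in onset_times:
    drive[int(t * fs)] = 1.0

z = np.zeros(len(freqs), dtype=complex)          # resonator states
amplitude = np.zeros((T, len(freqs)))            # |z| over time, per frequency
decay = np.exp((-damping + 2j * np.pi * freqs) * dt)
for n in range(T):
    z = z * decay + drive[n]                     # rotate/decay, then inject onset energy
    amplitude[n] = np.abs(z)

# The final amplitude profile should peak near the ~2 Hz beat of the toy
# rhythm; in the paper, a multi-frequency resonance pattern of this kind is
# what the LSTM consumes to predict the timing of the next rhythmic event.
print("strongest resonance (Hz):", round(float(freqs[np.argmax(amplitude[-1])]), 2))
```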

* Generative Live Music-making Using Autoregressive Time Series Models: Melodies and Beats
Roger Dean

Keywords

Computational creativity, generative music, time series analysis, joint modelling, BANG

Abstract

Autoregressive Time Series Analysis (TSA) of music can model aspects of its acoustic features, structural sequencing, and listeners’ consequent perceptions. This article concerns the generation of keyboard music by repeatedly simulating from both uni- and multivariate TSA models of live-performed event pitch, key velocity (which influences loudness), duration and inter-onset interval (specifying rhythmic structure). The MAX coding platform receives performed, random or preformed note sequences and transfers them via a computer socket to the statistical platform R, in which time series models of long segments of the data streams are obtained. Unlike many predecessors, the system exploits both univariate (e.g., pitch alone) and multivariate (pitch, velocity, note duration, and inter-onset intervals taken jointly) modelling. Simulations from the models are played by MAX on a MIDI instrument. Data retention (memory) allows delayed or immediate sounding of newly generated melodic material, amplifying surprise. The resultant “Beat and Note Generator” (BANG) can function in collaboration with a MIDI-instrument performer, who can also use the BANG interface, or autonomously. It can generate relatively large-scale structures (commonly chunks of 200 events) or shorter structures such as the beats and glitches of electronic dance music.
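
As a rough companion to the abstract above, here is a minimal Python sketch of the joint (multivariate) autoregressive idea: a VAR(1) model is fitted by least squares to a toy event stream with columns (pitch, key velocity, duration, inter-onset interval) and then simulated from to produce new events. BANG itself works through MAX and R; the toy data, lag order, and Gaussian residual assumption here are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# Toy multivariate event stream: pitch, key velocity, duration (s), inter-onset interval (s).
X = np.column_stack([
    60 + np.cumsum(rng.integers(-2, 3, T)),        # pitch: random walk around middle C
    np.clip(64 + rng.normal(0, 10, T), 1, 127),    # key velocity
    np.abs(rng.normal(0.4, 0.1, T)),               # duration
    np.abs(rng.normal(0.5, 0.15, T)),              # inter-onset interval
]).astype(float)

# Fit x_t = c + A x_{t-1} + e_t by ordinary least squares.
Y = X[1:]
Z = np.hstack([np.ones((T - 1, 1)), X[:-1]])
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)          # row 0: intercept c, rows 1-4: A (transposed)
cov = np.cov((Y - Z @ B).T)                        # residual covariance drives simulation noise

def simulate(n_events, x0):
    """Generate n_events new (pitch, velocity, duration, IOI) rows from the fitted model."""
    events = [np.asarray(x0, dtype=float)]
    for _ in range(n_events - 1):
        mean = B[0] + events[-1] @ B[1:]
        events.append(rng.multivariate_normal(mean, cov))
    return np.array(events)

print(simulate(8, X[-1]).round(2))
```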

* Aiding Soundtrack Composer Creativity through Automated Film Script-profiled Algorithmic Composition
Alexis Kirke, Eduardo Miranda

Keywords

Movie script, soundtrack, music and emotion, algorithmic composition

Abstract

The creation of soundtracks for feature-length films is a major undertaking. One method involves watching the film for sections that inspire some emotional response, and then improvising live. The composing of a score is often a combination of bottom-up and top-down processes: for example, small segments of score are composed first, while in parallel a broader structure is planned based on the narrative. We present a system that attempts to aid the composer in this creative process. The system focuses on the script rather than the moving image of the film. This has two advantages: (a) a composer can begin their work earlier in the process of film production; (b) the automated analysis of text is simpler than the automated analysis of video and speech. The system performs an automated analysis that approximates the script structure. It also attempts, using word analysis, to give insight into the emotional trajectory of the story and the characters. These are then used as part of an algorithmic creativity system that suggests musical sketches to the composer for parts of the script.
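
As a loose illustration of the word-analysis step described above, the sketch below scores each scene of a script with a tiny hand-made valence lexicon and smooths the per-scene scores into an emotional trajectory. The lexicon, scene segmentation, and smoothing are illustrative assumptions, not the system's actual text-analysis method.

```python
import re

VALENCE = {  # hypothetical word valences in [-1, 1]
    "love": 0.8, "smile": 0.6, "laugh": 0.7, "win": 0.5,
    "fear": -0.7, "dark": -0.4, "dead": -0.9, "cry": -0.6, "fight": -0.5,
}

def scene_valence(text: str) -> float:
    """Mean valence of the lexicon words found in one scene (0.0 if none found)."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = [VALENCE[w] for w in words if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

def trajectory(scenes, window=3):
    """Per-scene valences smoothed with a trailing moving average."""
    raw = [scene_valence(s) for s in scenes]
    return [sum(raw[max(0, i - window + 1):i + 1]) /
            len(raw[max(0, i - window + 1):i + 1]) for i in range(len(raw))]

scenes = [
    "INT. HOUSE - NIGHT. The room is dark; there is fear in her eyes.",
    "EXT. BEACH - DAY. They laugh and smile in the sun.",
    "INT. ALLEY - NIGHT. A fight breaks out.",
]
print(trajectory(scenes))
```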


* The Design of UrMus as a Meta-Environment for Mobile Music
Georg Essl, Sang Won Lee

Keywords

Mobile music, live-coding, patching, collaboration, networking

Abstract

UrMus is a meta-environment to support mobile music. By aiming to facilitate a design philosophy of openness and flexibility, it provides wide-ranging technical support that can be leveraged for diverse forms of musical expression on mobile devices. UrMus has evolved over a period of six years to support wide-ranging music performance practices such as collaborative mobile live coding, networked mobile performances, and the use of machine learning in mobile music. By taking a multi-paradigmatic approach, UrMus allows the user to investigate representations for mobile music programs directly on the mobile device.

* On the Ontological Category of Computer-Generated Music Scores
Nemesio García-Carril Puy

This article is devoted to examining the ontological foundations of computer-generated music scores. Specifically, we focus on the categorial question, i.e., the inquiry that aims to determine the kind of ontological category that musical works belong to. This task involves considerations concerning the existence and persistence conditions for musical works, and it has consequences for the determination of what it is to compose a musical work. Our contention is that not all the possible answers to the categorial question in the ontology of music are equally compatible with the hypothesis that creative music systems compose musical works. The thesis defended here is that musical Platonism is the proposal that best accommodates this hypothesis. We claim that musical Platonism is the answer to the categorial question that offers the most straightforward explanation for the possibility of considering creative music systems as genuinely composing musical works. Moreover, we uphold that the notion of creative-evaluative discovery, which Platonism entails as the characterization of what it is to compose a musical work, is the simplest explanation of the process a computer develops in producing musical works. For this purpose, we take as empirical data the features of the Iamus computer, a system that produces musical works autonomously using evolutionary algorithms and following an evo-devo strategy. The works generated by this computer have been recorded by the London Symphony Orchestra and by renowned international soloists, and their impact on the literature has been notable (Ball, 2012; Coghlan, 2012; Berger, 2013).

Reviews

* Review of CSMC16: What is Computational “Creativity” and How can it be Evaluated in the Context of Music?
René Mogensen

The 1st Conference on Computer Simulation of Musical Creativity (CSMC16) was held June 17–19, 2016 at the University of Huddersfield. Several themes emerged in the conference discussions addressing some of the fundamental questions of what “creativity” is, or could be, as well as issues regarding methodologies for evaluating the potential “creativity” of computational systems. For this conference review I invoke Wiggins’s (2006) formalisation of creative systems, understood as searches in conceptual spaces, and I use this as a working understanding of creativity in order to suggest some questions related to the papers and discussions; these questions may provide fuel for themes in future conferences in this cross-disciplinary field.


Ongoing Call for Papers

JCMS is intended to focus on computer systems which generate, perform or analyse music, and which either demonstrate a distinct degree of creativity or shed light on the nature of creativity. Both empirical articles, which focus on the design and implementation of new techniques, and theoretical papers, which investigate the scientific and philosophical foundations of music-creative systems, are encouraged. In recognition of the inherent interdisciplinarity of the area, JCMS encourages the submission of articles at the intersection of different fields, such as music (theory, analysis, history), artificial intelligence, music information retrieval (MIR), cognitive science, evolutionary theory, mathematics and philosophy.

We invite submissions on subjects related to our scope and aims.
Types of Submissions

JCMS accepts articles, research reports, reviews and tutorials. Articles should make a major theoretical or empirical contribution to knowledge. Research reports should describe research which is in a preliminary phase. Reviews provide critical commentary on scholarly books, articles and events such as conferences relevant to the field. Tutorials are intended to illustrate new technologies relevant to CSMC. For more information on the various types of submissions, please refer to the guidelines for authors.
Submission Instructions

To submit your manuscript, please follow the instructions in the guidelines for authors.
Further Information

For any enquiries, please contact Valerio Velardo, Associate Editor, at associate-editor@jcms.org.uk.

JCMS is supported by the University of Huddersfield and published by the University of Huddersfield Press.