‘Machine folk’ music composed by AI shows technology’s creative side
Bob Sturm is a Lecturer in Digital Media at the School of Electronic Engineering and Computer Science, Queen Mary University of London, specialising in audio and music signal processing, machine listening, and evaluation. Oded Ben-Tal is a composer with complementary research interests at the intersection of music, cognition, and computing. His compositions range from instrumental works to interactive pieces combining live performers with electronics, and include multimedia collaborations with artists from other domains such as video, dance, and visual design.
Bob Sturm, Queen Mary University of London and Oded Ben-Tal, Kingston University
Folk music is part of a rich cultural context that stretches back into the past, encompassing the real and the mythical, bound to the traditions of the culture in which it arises. Artificial intelligence, on the other hand, has no culture and no traditions. But it has shown great ability: beating grandmasters at chess and Go, for example, or demonstrating uncanny wordplay skills when IBM Watson beat human competitors on Jeopardy. Could the power of AI be put to use to create music?
This is not entirely unprecedented: an artificial intelligence co-wrote a piece of musical theatre, from the storyline to the music and lyrics, which premiered in London in 2016. The advance of AI techniques, and the ever-larger collections of data available to train them, present broad opportunities for creative research. The AI behind that musical, for example, based its writing on an analysis of hundreds of other successful musicals. Other projects aim to give creators of art and music new artificial intelligence-based tools for their craft, such as Google’s Magenta project, Sony’s Flow Machines, or British start-up Jukedeck. And long before any of these came the Illiac Suite, a string quartet programmatically composed by a supercomputer in 1957.
Our research examines how state-of-the-art AI techniques can contribute to musical practice, specifically the Celtic folk tradition of “session music”. Enthusiasts transcribe versions of folk tunes in ABC, a reduced form of music notation developed by Chris Walshaw of the University of Greenwich that uses plain text characters as a rough guide for the musician. We trained our AI system on more than 23,000 ABC transcriptions of folk music, crowd-sourced from the excellent online resource thesession.org. And at our recent workshop at the Inside Out festival, accomplished folk musicians performed some of this “machine folk” music.
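To give a flavour of the format, here is a short fragment in the style of an ABC transcription (invented for illustration, not an actual tune from thesession.org). The header fields set the tune’s metre (M:), default note length (L:) and key (K:), while the body spells out the melody using letters for pitches and “|” for bar lines:

    X: 1
    T: Example Reel (invented for illustration)
    M: 4/4
    L: 1/8
    K: Gmaj
    |: GABd e2 dB | GABd e2 ge | dBGB A2 GA | BdAB G4 :|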
Artificial compositions, human melodies
Our AI is trained so that, given one ABC symbol, it can predict the next, which means it can generate new tunes that draw upon patterns and structures learned from the original tunes. We have generated more than 100,000 new machine folk tunes, and it’s interesting to see what the AI has and has not learned. Many tunes have the typical structure of this style: two repeated eight-bar parts that often complement each other musically. The AI also shows some ability to repeat and vary musical patterns in a way that is very characteristic of Celtic music. It was not programmed to do this with rules – it learned to do so because these patterns exist in the data we fed it.
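The underlying idea can be sketched in code. Below is a minimal, illustrative character-level recurrent network in PyTorch that learns to predict the next ABC character and then generates new text one character at a time. The architecture, hyperparameters and names here are our simplification for illustration, not the configuration of our actual system:

    # Illustrative sketch only: a character-level LSTM over ABC text.
    import torch
    import torch.nn as nn

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                                batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state

    # Training: predict character t+1 from characters 1..t
    # (cross-entropy between the shifted input and the output).
    def train_step(model, optimiser, batch):  # batch: LongTensor [B, T+1]
        inputs, targets = batch[:, :-1], batch[:, 1:]
        logits, _ = model(inputs)
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()

    # Generation: sample one character at a time, feeding each
    # sampled character back in as the next input.
    def sample(model, start_ids, length=400, temperature=1.0):
        model.eval()
        ids, state = list(start_ids), None
        x = torch.tensor([ids])
        with torch.no_grad():
            for _ in range(length):
                logits, state = model(x, state)
                probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
                next_id = torch.multinomial(probs, 1).item()
                ids.append(next_id)
                x = torch.tensor([[next_id]])
        return ids

Because each generated character is fed back in as the next input, local patterns learned from the transcriptions – repeated phrases, bar lengths, cadences – tend to propagate through a new tune.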
However, unlike a human, the system isn’t able to generalise these properties beyond their immediate context. Much of what we originally thought it had learned about basic musical features (how rhythm works, for example) it hadn’t really learned at all – it was simply reproducing the conventions found in its training data. Venture slightly outside those conventions and the system begins to act unusually. This is where things can get musically interesting.
To evaluate the AI’s compositions we consulted the experts: folk musicians. We asked for feedback on The Endless Traditional Music Session, and later on a volume of 3,000 tunes generated by our system. Feedback from members of the thesession.org forums shows divided opinions: some found the idea intriguing and identified “machine folk” tunes they liked and could work with. Others were dead against the entire notion of computer-generated music.
One obstacle was that not only was this music composed by computers, it was also played by computer synthesis, and so lacked the interpretation and expressivity of the human musicians who bring each tune to life – elements not captured in the data the AI was trained on. So we recruited professional folk musicians and asked them to look at our volume of 3,000 tunes. One musician observed that about one in five of the tunes is actually fairly good.
By their nature, folk tunes are not fixed works: they are treated as a frame upon which to elaborate, with performers developing their own versions and changing elements in performance. The musicians found interesting features in the machine folk tunes, including patterns that are unusual but work well within the style. Perhaps there are regions of this musical space that humans have not yet discovered – and that can be reached with the help of a machine.
Much discussion around AI focuses on computers as competitors to humans. We seek instead to harness the same technology as a creative tool to enrich human music-making, not replace it.
A concert, “Partnerships”, on May 23, 2017, will feature music co-created by humans and computers.
Bob Sturm, Lecturer in Digital Media, Queen Mary University of London and Oded Ben-Tal, Senior Lecturer in Music Technology, Kingston University
This article was originally published on The Conversation. Read the original article.