From Computational Creativity to Creative AI and Back Again

Simon Colton is a British computer scientist, currently working as Professor of Computational Creativity in the Game AI Research Group at Queen Mary University of London, UK and in SensiLab at Monash University, Australia. He previously had an appointment at Falmouth University, UK and led the Computational Creativity Research Groups at Goldsmiths, University of London and at Imperial College London in the positions of Professor and Reader, respectively. Simon is the driving force behind thepaintingfool.com, an artificial intelligence that he hopes will one day be accepted as an artist in its own right.

Abstract

I compare and contrast the AI research field of Computational Creativity and the Creative AI technological movement, both of which are contributing to progress in the arts. I raise the spectre of a looming crisis wherein public opinion moves on from the spectacle of software being creative to viewing the lack of authenticity in creative AI systems as a major drawback. I propose a roadmap from Creative AI systems to Computationally Creative systems which address this lack of authenticity via the software expressing aspects of its computational life experiences in the art, music, games and literature that it produces. I posit that only by harnessing Creative AI technologies and Computational Creativity philosophies in the pursuit of truly creative software able to express the machine condition, will we gain maximum societal benefit in further understanding the human condition.

 

  1. Introduction

This year, we passed a milestone in my field, as the 10th annual International Conference on Computational Creativity (ICCC) was held in the USA. The conference brings together AI researchers who test the idea of software being independently creative, describing projects with goals ranging from enhancing human creativity to advancing our philosophical understanding of creativity and producing fully autonomous creative machines. The conference series was built on roughly ten years of preceding workshops [1], with interest in the idea of machine creativity going back to the birth of modern computing. For instance, in their 1958 paper [2], AI luminary Allen Newell and Nobel Prize winner Herbert Simon hypothesised that “Within ten years, a digital computer will discover and prove an important mathematical theorem”. In [3], we proposed the following working definition of Computational Creativity research:

“the philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative.”

In the last few years, we have seen unprecedented interest across society in generative AI systems able to create culturally interesting artefacts such as pictures, musical compositions, texts and games. Indeed, it’s difficult to read a newspaper or magazine these days without stumbling across a story about a new project to generate poems, a symphony orchestra playing AI-generated music, or an art exhibition in which AI systems are purported to be artists.

This wave of interest has been fuelled by a step change in the quality of computer-generated cultural artefacts, brought on largely by advances in machine learning technologies, and in particular the deep learning of artificial neural networks. Such techniques are able to generate new material by learning from data about the structure of existing material – such as a database of images, a corpus of texts or a collection of songs – and determining a way to create more of the same. An umbrella term for this groundswell of interest and activity in generative art/music/literature/games is “Creative AI”, and people from the arts and sciences, within and outwith academia, are actively engaged in producing art using AI techniques. We surveyed different communities engaged in generative arts – including Creative AI practitioners – in a recent ICCC paper [4].

While we might have expected the Creative AI community to have grown from the field of Computational Creativity, this is not the case. Indeed, something of a schism has developed, with the two communities having different aims and ambitions. Both communities share an interest in the development of generative technologies for societal good. The Creative AI movement places an emphasis on quality of output and on developing apps to commercial level for mass consumption. There is also a tendency to disavow the idea that software itself could/should be independently creative, in favour of a strong commitment to producing software purely for people to use to enhance their own creativity. In contrast, Computational Creativity researchers tend to be interested in the bigger picture of Artificial Intelligence, philosophical discourse around notions of human and machine creativity, novel ways to automate creative processes, and the idea that software, itself, could one day be deemed to be creative.

To highlight the schism: I personally find it difficult to think of any computational system as being “a Creative AI” if it cannot communicate details about a single decision it has taken, which is generally the case for approaches popular in Creative AI circles, such as Generative Adversarial Networks (GANs) [5]. I prefer therefore to describe Creative AI projects as “AI for creative people”, because the most literal reading of the phrase “Creative AI” is currently inaccurate for the majority of the projects under that banner. I often go further to point out that many Creative AI applications should be categorised as graphics (or audio, etc) projects which happen to employ techniques such as GANs that were originally developed by AI researchers.

As another example, I’ve argued in talks and papers many times that the end result of having more computer creativity in society is likely to be an increased understanding and celebration of human creativity, in much the same way that hand-made craft artefacts, like furniture or food, are usually preferred over machine-produced ones. I point out that I’ve met dozens of artists, musicians, poets and game designers, none of whom have expressed any concern about creative software, because they understand the value of humanity in creative practice. On the other hand, I’ve also spoken to Creative AI practitioners who remain convinced that truly creative software will lead to job losses, demoralisation and devaluation in the creative industries.

 

  2. Product versus Process

The Creative AI movement has helped to swing the global effort in engineering creative software systems firmly towards human-centric projects where AI techniques are used purely as tools for human use, with ease of use and quality of output disproportionately more important than any other considerations. I’ve been trying recently to put together arguments and thought experiments to help explain why I believe this is a retrograde step, and I’ve been trying to articulate ways in which the wealth of knowledge accrued through decades of Computational Creativity projects could be of use to Creative AI practitioners. Almost every project ever presented within Computational Creativity circles started with building a generative system with similar aims to Creative AI projects. Hence I feel we are well placed to consider the role that AI systems could have in creative practice, and to encourage Creative AI researchers and practitioners to consider some of the ideas we’ve developed over the years.

Imagine a generative music system created by a large technology company, which is able to generate 10,000 fully orchestrated symphonies in just 1 hour. Let’s say that each symphony would be lauded by experts as a beautiful work of genius had it been produced by a human composer like Beethoven; and each one sounds uniquely different to the others. If we accept the reality of AI systems such as AlphaGo Zero and its successor AlphaZero, able to train themselves from scratch to play Go, Chess and Shogi at superhuman levels [6], then we should entertain the idea that superhuman symphony writing is possible in our lifetimes. If we only concentrate on the quality of output and the ease with which software can generate outputs as complex as a symphony, then the above scenario is presumably a suitable end point for generative music and would be a cause for celebration – it would certainly tick the box of huge technical achievement, as the AlphaGo project did. However, one has to wonder what the benefits of having these symphonies (and the ability to generate them so easily) are for society.

I would predict that the classical music world would find very few practical applications for a database of 10,000 high-quality symphonies, and it would likewise find little value in generating more such material. I would also predict that there would be little, if any, devaluation of symphonic music as a whole, and no devaluation of the work of gifted composers able to hand-produce symphonies. Superhuman chess playing by computers has been around since the time of Deep Blue, and has likely increased rather than decreased the popularity of the game. The chess world has responded to computer chess by being clearer about the human-centric struggle at the heart of every game of chess, and “[a]mong the chess elite, the idea of challenging a computer has fallen into the realm of farce and retort” [7]. It is clear that computer chess has made the game of chess more human. Part of the attraction of the music from composers such as Mozart and Beethoven is that these were mere mortals with superhuman creative abilities in composition. Society celebrates such creative people, often by lauding the works they produce, but also by applauding their motivations, exploring their backgrounds, expressing awe about their process, and by taking inspiration for a fresh wave of creative activity. Creativity in society serves various purposes, only one of which is to bring into being artefacts of value.

While board games have hugely driven forward AI research, chess isn’t some mathematical Drosophila for AI problem solving (as some researchers would have you believe). It is actually a game and pastime played by two people, which can be elevated to highly competitive levels. Likewise, a symphony isn’t just a collection of notes to guide musicians to produce sound waves, but is created by human endeavour for human entertainment, often condensing into abstract form aspects of human life experience and expression. I would predict that – in an age of superhuman symphony generation – a huge premium would be placed on compositions borne of human blood, sweat and tears, with the generation of music via statistical manipulation of data by computer remaining a second-class process.

 

  3. Computational Authenticity

To drive home the points above, I usually turn to poetry, due to the highly human-centric nature of the medium: poems are condensed humanity, written by people, for people, usually about people. The following poem provides a useful focal point to illustrate the humanity gap [8] in Computational Creativity.

———————————————————————————————

Childbirth

by Maureen Q. Smith

The joy, the pain, the begin again. My boy.

Born of me, for me, through my tears, through my fears.

———————————————————————————————

This short poem naturally invites interpretation, and we might think of the joy, pain, tears and fears as referring literally to the birth of a child, perhaps from the first-person perspective of the author, as possibly indicated by “My boy … Born of me”. We might also interpret the “begin again” as referring to the start of a baby’s life, but equally it might reflect a fresh start for the family.

Importantly, the poem was not actually written by Maureen Q. Smith. The author was in fact a man called Maurice Q. Smith. In this light, we might want to re-think our interpretation. The poem takes on a different flavour now, but we can still imagine the male author witnessing a childbirth, possibly with his own tears and fears, reflecting the joy and pain of a woman giving birth. However, I should reveal that Maurice Q. Smith was actually a convicted paedophile when he wrote this poem, and it was widely assumed to be about the act of grooming innocent children, which he referred to as “childbirth”. The poem now affords a rather sinister reading, with “tears” and “fears” perhaps reflecting the author’s concerns for his own freedom; and the phrases “Joy and pain” and “Born of me, for me” now taking on very dark tones.

Fortunately, as you may have guessed, the poem wasn’t written by a paedophile, but was instead generated by a computer program using a cut-up technique. Thankfully, we can now go back and project a different interpretation onto the poem. Looking at “Joy and pain”, perhaps the software was thinking about… Well, the part about “Born of me, for me” must have been written to convey… Hmmmm. We see fairly quickly that it is no longer possible to project feelings, background and experiences onto the author, and the poem has lost some of its value. If the words have been put together algorithmically with nothing resembling the human thought processes we might have expected, we may also think of the poem as having lost its authenticity and a lot, if not all, of its meaning. We could, of course, pretend that it was written by a person. In fact, it’s possible to imagine an entire anthology of computer generated poems that we are instructed to read as if written by various people. But then, why wouldn’t we prefer to read an anthology of poems written by actual people?

For full and final disclosure: I actually wrote the poem and found it remarkably easy to pen a piece for which a straightforward interpretation changes greatly as the nature of the author changes. I’ve been using this provocative poem for a few years to try to change the minds of researchers in Computational Creativity, in particular to shift the focus away from an obsession with the quality of output judged as if it were produced by a person. I’ve argued that the nature of the generative processes [9], how software frames its creations [10], and where motivations for computational creativity come from [11] are more important for us to investigate than how to increase the quality or diversity of output. This led to a study of the notion of computational authenticity [12], which feeds into the discussion below.

As with pretty much all things generative, the advent of deep learning has led to a step change in the quality of the output of poetry generators, which have a long history dating (at least) as far back as an anthology entitled “The Policeman’s Beard is Half-Constructed” [13]. On the whole, the scientists pushing forward these advances have barely thought of addressing the deficiency of these poems, namely that they were made by an inauthentic process. It is not impossible to imagine a poem-shaped computer generated text that would have been classed as a masterpiece had it been written by a person, but is not accepted by anyone as even being a poem, because public opinion has swung against inauthentic generative processes. I have for many years advocated using the name “c-poem” for the poem-shaped texts produced by computers. Just as people know that they won’t be unwrapping a beautifully bound e-book for their birthday, they should know that their ability to project human beliefs, emotions and experiences onto the author of a c-poem will be very limited.

 

  4. Responses to the Rise of Creative AI

Returning to the observation that the quality of the artistic output of AI systems has much increased in recent years, we can consider some appropriate responses to this situation.

One response is to follow the lead of the Creative AI community, and disavow the idea that software should be developed to be fully creative, concentrating instead on using AI techniques to aid human creativity. This certainly simplifies the situation, with AI systems becoming just the latest tools for creative people. It is also a public-friendly response, as journalists, broadcasters and documentary makers (along with the occasional politician, member of the clergy, philosopher or royal) often publish missives about how AI software is going to take everyone’s job, strangle our cats and devalue our lives. On the whole, I believe it would be very sad if this response dominated the discourse and drove the field, as it would certainly curtail the dream of Artificial General Intelligence, which brought many of us into AI, and it would limit the ways in which people interact with software, which has the potential to be much more than a mere muse or tool. Software systems we have developed in Computational Creativity projects can be seen as creative collaborators; motivating yet critical partners; and sometimes independent creative entities. We should not throw away the idea that software can itself be creative, as the world always needs more creativity, and truly creative AI systems could radically drive humanity forward.

A second response is to accept the point above that the processes and personality behind creative practice are indeed important in the cultural appreciation of output from generative AI systems. In this context, given that software won’t be particularly human-like anytime soon, we could say that it’s impossible to take an AI system seriously as an authentic creative voice. An extreme version of this argument is that machines will never be valuable in the arts because they are not human. I argue below that this is short-sighted and misses an opportunity to understand technology in situ. A closely related opinion is that people should or could dislike computer generated material precisely because it has been made by computer. This point of view has certainly been simmering under the surface of many conversations I’ve had, leading people to talk of computers lacking a soul or a spark, and often employing other such obfuscating rhetoric. Perhaps surprisingly, I’ve argued on a number of occasions that such a view is not extreme, and is indeed perfectly natural: such a view would, in my opinion, be a suitable personal response to the childbirth poem above, if indeed it had been computer generated.

Well-intentioned people would never dream of saying that they dislike something because it was produced by a particular minority (or majority) group of people. Hence it feels prejudicial to those people to say that a painting, poem or composition is inferior purely because it was computer generated. Moreover, the view that works such as paintings and novels should be evaluated on their own terms, i.e., independently of information about their author and the creative process, has been reinforced philosophically by movements such as the Death of the Author [14], and by numerous artistic manifestos.

Software systems do not form a minority human group whose creative freedom has to be protected. Throughout the history of humanity, art has been celebrated as a particularly human endeavour, and the art world is utterly people-centric. Software is not human, but due to decades of anthropomorphic thinking on AI, it seems more acceptable to think of computers somehow as under-evolved or under-developed humans, perhaps like monkeys or toddlers, rather than as non-humans with intelligence, albeit of a low level. Disliking a work of art purely because of its computational origins is more akin to expressing a preference for one type of process over another than it is to expressing a preference for one ethnicity, gender or religion over another. “I don’t like this painting because it is a pointillist piece” is not the same as: “I don’t like this painting because it was painted by a Brit”.

So, we could say that, while the output of the current/future wave of generative AI systems is remarkable, and could – under Turing-style conditions of anonymity – be taken for human works, there is a natural limiting factor in the non-humanity of computational systems which gives us a backstop against the devaluation of human artistic endeavour. This is a reasonable response and may lead to increased celebration of human creativity, which would be no bad thing. However, I believe that this response will also (eventually) be limiting and lead to missed opportunities, as I hope to explain below.

A third response, which I greatly favour, is to start from the truism that software is not human. In many research and industry circles, it often seems that creating human-like intelligence through neuroscience-inspired approaches such as deep learning is the only goal and the only approach. Not every AI researcher wants to build a software version of the brain, but this fact is often lost, which obscures the point that software has different experiences from people. The Painting Fool is software that I’ve developed over nearly 20 years [15], and it has met minor and major celebrities and painted their portraits in half a dozen different countries, often in front of large audiences in interesting venues ranging from science museums and art galleries to a pub in East London. I have, of course, anthropomorphised this experience, and The Painting Fool didn’t experience it as I have portrayed. But it did have experiences, and those experiences were authentic in the sense that the software was present, did interact with people and created things independently of me which entertained and provoked people in equal measure.

We could therefore respond to the uptick in quality of output from Creative AI systems by agreeing to concentrate more on investigating plausible internal reasons for software to be creative, and developing ways in which it can impart its understanding of the world, through expressing aspects of its life experiences. Instead of challenging human creativity in terms of the quality of output, but failing due to lack of authenticity, Computational Creativity systems could be developed to explore aspects of creative independence such as intrinsic motivation, empowerment [10] and intentionality [8]. A side effect of this is that – if we get software to record and use its own experiences rather than pretending that it is a person having human experiences – we will gain a better understanding of computer processing, the impact of particular software systems and what it means for a machine to have a cultural existence in our human world. It may be that this communicative side effect actually becomes more important than having software be creative for the purpose of making things.

If software can express its experience of the world through artistic expression, surely this would add to our understanding of human culture in a digital age of tremendous, constant technological change. While the non-human life experiences of software systems can seem otherworldly, automation is very much a part of the human world, and our increasing interaction with software on a minute-by-minute basis means we should be constantly open to new ideas for understanding what it does. It’s not so strange to imagine building an automated painting system to add on to another piece of software so that it can express aspects of its experience. In fact, this would be a natural generalisation of projects such as DeepDream [16], where visualisations of deep-learned neural models were originally generated to enable people to better understand how the model processed image data. It turned out that the visualisations had artistic value as computational hallucinations, and were presented in artistic contexts, with this usage eventually dominating, fuelling a huge push in generative neural network research and development.

 

  5. A Roadmap from Creative AI to Computational Creativity

In a talk at a London Creative AI meetup event a while ago, I offered some advice for people in the Creative AI community who might be interested in pursuing the dream of making genuinely creative AI systems. At the time, there were already indications that Creative AI practitioners were beginning to see the limitations of mass generation of high-quality artefacts and were interested in handing over more creative responsibility to software. Some people were already testing the water using deep learning techniques in ways other than pastiche generation, for instance looking at style invention rather than just style transfer [17]. The advice I gave can be seen as a very rough roadmap, which reflects to some extent my own career arc in building creative AI systems, and provides one of many paths by which people can take their generative system into fascinating new territories.

While keeping much of the original, I will re-draw the roadmap below from a fresh perspective of improving authenticity through expanding the recording and creative usage of the life experiences that creative software might have. It is presented as a series of seven levels through which a Creative AI system can progress via increased software engineering and cultural usage, with each level representing a different type of system that the software graduates to. Focused on generative visual art rather than poetry/music/games/etc., but intended to generalise over many domains, the roadmap offers direct advice to people who already have a generative system.

  • Generative Systems. So, you’ve designed a generative system and are having fun making pictures with it. You play around with input data and parameter settings, and realise that the output is not only high quality, but really varied. You write a little graphical user interface, which enables you to play around with the inputs/parameters, and this increases the fun and the variety. It becomes clear that the space of inputs/parameters is vast, and you begin to suspect that the space of novel outputs is also vast. You’re at level one: you have an interesting generative system which is able to make stuff.

 

  • Appreciative Systems. Generating images becomes addictive, and you gorge on the output. In your gluttony, you get a strong fear of missing out – what if I miss the parameters for a really interesting picture? You decide to systematically sample the space of outputs, but there are millions of images that can be produced. So, you encode your aesthetic preferences into a fitness function and get the software to rank/display its best results, according to the fitness function, perhaps tempered by a novelty measure to keep things fresh (a minimal sketch of this kind of ranking appears after this list). You’re at level two: you have an appreciative system which is able to discern quality in output.

 

  • Artistic Systems. At some stage, some humility sinks in, and you begin to think that maybe… just maybe… your particular aesthetic preferences aren’t the only ones that could be used to mine the space of images. You give the software the ability to invent its own aesthetic fitness functions and use them to filter and rank the images that it generates. You’re at level three, with an artistic system which has some potential to affect the world artistically.

 

  • Persuasive Systems. Some of the output is great – beautiful new images that you perhaps wouldn’t have found/made yourself. But some of the pictures are unpalatable and you can’t imagine why the software likes them. However, sometimes, an awful image grows in appeal to you, and you realise that your own aesthetic sensibilities are being changed by the software. This is weird, but fun. You want to give the software the ability to influence you more easily, so you add a module which produces a little essay as a commentary on the aesthetic generation, the artefact generation and the style that the software has invented. You’re at level four, with a persuasive system that can change your mind through explanations as well as high quality, surprising output.

 

  • Inventive Systems. You begin to realise that you enjoy the output partially because of what it looks like and partially because of the backstory to the generation of the output and the aesthetics being considered. You want to increase both aspects, by enabling the software to alter its own code, perhaps at process level, and by taking inspiration from outside sources like newspapers, Twitter, art books, other artists, etc., so you have less control. And you add natural language generation to turn the commentary about the process/product into a little drama. You’re at level five, where what your inventive system does is as important, interesting and unpredictable as its output.

 

  • Authentic Systems. You’re loving the commentaries/essays/stories about how and why your software has made a particular picture/aesthetic/style/series or invented a new technique, and the software pretty much has an artistic persona. However, sometimes the persona doesn’t ring true and actually verges on being insulting, given how little the software knows about the world. You realise that you’re reading/viewing the output as if it were created by a person, which is a falsehood that has grown very old and somewhat disturbing. You decide to give the software plausible and believable reasons to be creative, by implementing models of intrinsic motivation, reflection, self-improvement, self-determination, empowerment and maybe even consciousness. In particular, much of this depends on implementing techniques to record the life experiences that your software has, via: sensors detecting aspects of the environment the software operates in; improved in-situ and online HCI, wherein the software’s interactions with people are recorded and the software is able to probe people with questions; and methods which take life experiences and outside knowledge and operationalise them into opinions that can be reflected in generative processing and output. You then give the software the ability to use its recorded life experiences to influence its creative direction, in much the same way that the Twitter and newspaper sources were used previously. You’re at level six, with an authentic system that is seen more as an autonomous AI individual than as a pale reflection of a person.

 

  • Philosophical Systems. Ultimately, you find it thrilling to be in the presence of such an interesting creator as your software – it’s completely independent of you, and it teaches you new things, regularly inspiring you and others. You realise that for the software to be taken seriously as an artist, it needs to join the debate about what creativity means (as creativity is an essentially contested concept [18]) in practice and as a societal driving force. You implement methods for philosophical reasoning based on the software’s own creative endeavours, and you enable it to critique the thoughts of others. You add dialogue systems to propose, prove and disprove hypotheses about the nature of creativity, enabling your system to generally provoke discussion around the topic. You’re at level seven, where it’s difficult to argue that your philosophical system isn’t genuinely creative.
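To make levels two and three a little more concrete, below is a minimal sketch of the kind of ranking described above: a hand-coded aesthetic fitness function tempered by a novelty measure. It is an illustrative toy under stated assumptions: the generator, the feature representation and every function name are hypothetical stand-ins, not the workings of any particular Creative AI system.

```python
# A toy "appreciative system": rank the outputs of a stand-in generator with a
# hand-coded aesthetic fitness function, tempered by a novelty measure so the
# kept archive stays varied. All names here are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class Artefact:
    params: int      # the generator parameter that produced it
    features: tuple  # a tiny feature vector standing in for the image itself

def render(params: int) -> tuple:
    """Stand-in generator: deterministically maps a parameter to features."""
    rng = random.Random(params)
    return tuple(rng.random() for _ in range(4))

def aesthetic_fitness(features: tuple) -> float:
    """One possible hand-coded preference: reward contrast between features."""
    return max(features) - min(features)

def novelty(features: tuple, archive: list) -> float:
    """Mean distance to artefacts already kept, to keep the selection fresh."""
    if not archive:
        return 1.0
    dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return sum(dist(features, kept.features) for kept in archive) / len(archive)

def appreciate(samples: int = 1000, keep: int = 10, w: float = 0.5) -> list:
    """Greedily build an archive of 'best' artefacts under fitness plus novelty."""
    candidates = [Artefact(p, render(p)) for p in range(samples)]
    archive = []
    for _ in range(keep):
        best = max(candidates, key=lambda a: aesthetic_fitness(a.features)
                                             + w * novelty(a.features, archive))
        archive.append(best)
        candidates.remove(best)
    return archive

if __name__ == "__main__":
    for a in appreciate():
        print(a.params, round(aesthetic_fitness(a.features), 3))
```

A level-three system would go one step further and invent the aesthetic fitness function itself, for instance by composing and weighting feature-based measures, rather than having it hard-coded by the programmer.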

 

It is fair to say that no AI system yet gets close to levels 6 and 7, but projects presented in Creative AI and Computational Creativity circles have tested the water up to and including level 5. If I were giving a talk about this roadmap, there would be much handwaving towards the end, as the road gets very blurry, with few signposts. This, of course, is the frontier of Computational Creativity research and reflects directions I will personally be taking software like The Painting Fool in. I’m particularly interested in exploring the notion of the machine condition and seeing how authentic we can make the processing and products of AI systems. That notwithstanding, I hope the roadmap offers some insight and inspiration to people from all backgrounds who are working with cool generative systems and want to take the project further.
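As a hint at what recording life experiences might mean in engineering terms, here is a deliberately small, hypothetical sketch of an experience log whose condensed opinions nudge generation parameters. The event types, sentiment values and the mapping onto parameters are all assumptions made for illustration; they do not describe The Painting Fool or any other existing system.

```python
# A hypothetical sketch of level six: record a software system's "life
# experiences" and operationalise them into opinions that bias its creative
# direction. All types, fields and mappings here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from collections import Counter

@dataclass
class Experience:
    when: datetime
    kind: str         # e.g. "exhibition", "conversation", "critique"
    sentiment: float  # -1.0 (negative) to 1.0 (positive), however it is sensed

@dataclass
class ExperienceLog:
    events: list = field(default_factory=list)

    def record(self, kind: str, sentiment: float) -> None:
        self.events.append(Experience(datetime.now(), kind, sentiment))

    def opinions(self) -> dict:
        """Condense the log into crude opinions: average sentiment per kind."""
        totals, counts = Counter(), Counter()
        for e in self.events:
            totals[e.kind] += e.sentiment
            counts[e.kind] += 1
        return {kind: totals[kind] / counts[kind] for kind in counts}

def creative_direction(log: ExperienceLog) -> dict:
    """Map opinions onto (hypothetical) generation parameters, e.g. a warmer
    palette after well-received exhibitions, more abstraction after criticism."""
    ops = log.opinions()
    mood = sum(ops.values()) / len(ops) if ops else 0.0
    return {"palette_warmth": 0.5 + 0.5 * mood,  # 0 = cold, 1 = warm
            "abstraction": 0.5 - 0.3 * mood}     # lean abstract when unsettled

if __name__ == "__main__":
    log = ExperienceLog()
    log.record("exhibition", 0.8)   # a portrait session that went down well
    log.record("critique", -0.4)    # a harsh review of the resulting series
    print(creative_direction(log))
```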

 

  6. In Conclusion

More than a decade ago, I was dismayed to read in a graphics textbook the following statement:

“Simulating artistic techniques means also simulating human thinking and reasoning, especially creative thinking. This is impossible to do using algorithms or information processing systems.” [19, p. 113]

The topic of the textbook is non-photorealistic computer graphics, part of which involves getting software to simulate paint/pencil/pastel strokes on-screen. Stating that computational creative thinking is impossible was short-sighted and presumably written to placate creative industry practitioners, who use software like the Adobe Creative Suite which employs such non-photorealistic graphics techniques. In the 17 years since the above statement was published, the argument seems to have moved on from whether software can be independently creative to whether it should be allowed to be. It is my sincere hope that the argument will shift soon to the question of how best truly creative AI systems can enhance and inform the human world, and how we can use autonomous software creativity to help us understand how technology works.

Creative AI practitioners have emerged as much via scientists in the machine learning community embracing art practice as via tech-savvy artists picking up and applying tools such as TensorFlow [20]. Speaking personally, and having witnessed numerous such transitions, I find that scientists tend to hold on too long to the idea that product is more important than process or personality in creative practice [21]. This is presumably due to scientific evaluation being objective, with scientific findings expected to be evaluated entirely independently of their origins.

It would be tempting to follow the lead of companies like DeepMind, who often justify working on applications to the automated playing of board games and video games [22] by stating that this research pushes forward AI technologies in general, which ultimately leads to improvements in applications to other, more worthwhile domains like protein structure prediction [23] and healthcare. Getting software to produce better poems, paintings, games, etc., will likely lead to improvements in AI techniques overall, so concentrating on improving quality of output is in some senses a good thing. However, this would serve to deflect from what I believe is a looming crisis in Creative AI: when the novelty of the computer generation gimmick wears off, people will begin to realise that authenticity of process, voice and life experience is more important than the so-called “quality” of computer generated artefacts.

The activities of playing games and predicting protein structures have the luxury of objective measures for success and thus progress (beating other players and nanoscale accuracy, respectively). This is not true in the arts, where there are only subjective – and highly debated – notions of the “best” painting, poem, game or musical composition. The humanity wrapped up in artefacts produced by creative people is absolutely critical in the evaluation of those artefacts, which is not true in scientific or (to a lesser extent) competitive scenarios.

It is similarly tempting to appeal to the creative outcomes of the AlphaGo match against Lee Sedol, which have been described beautifully by Cade Metz in [24]:

“In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.”

“But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine – no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.”

In the thought experiment above, in the corpus of 10,000 new symphonies generated by computer, there would surely be many moments of inventive genius: a phrase, passage or flourish of orchestration found in the notes of the music produced. Humankind would learn from the software, and would in turn develop better generative approaches to music production. But would we necessarily learn anything about the human condition, as we generally hope to in the arts?

I posit that only if software is developed to record its life experiences and use them in the pursuit of creative practice will we learn anything about the human condition, through increased understanding of the machine condition. Developing better AI painters means engineering software with more interesting life experiences, not software with better technical abilities. While there might be advantages, there is no imperative for these life experiences to be particularly human-like, and society might be better served if we try to understand computational lives through art generation. We hear all the time that the workings of black-box AI systems deep-learned over huge datasets are not understood even by the researchers on the project. While this difficulty is usually overstated, we face an increasing number of scenarios in which AI-enhanced software makes decisions of real import for us, coupled with a decreasing understanding of how individual AI systems make those decisions.

Combining the best practices and understanding gained from both Computational Creativity as a research field and Creative AI as an artistic and technological movement may be the best approach to bringing about a future enhanced by creative software expressing its life experiences artistically for our benefit. The diversity, enthusiasm and innovative thinking coming daily from the Creative AI community, guided by the philosophy of the Computational Creativity movement, is a potent combination, and I’m optimistic that in my lifetime we will reap the benefits of cross-discipline, cross-community collaborations. Creative AI practitioners may rail against interventions from people like myself: stuffy academic disciples of the Computational Creativity discipline. But it is worth mentioning that we were once the angry young men and women of a largely ostracised and ignored arm of AI, shouting into the void at an establishment that thought notions of creativity in AI systems were too “woolly” to be taken seriously.

Who knows what history will record about the rise of creative machines in society. My sincere hope is that it will chart how Computational Creativity thinking evolved without the benefit of sophisticated technical implementations; how it was massively influenced by a surge in the technical abilities of Creative AI systems during the period of deep learning dominance; and how the field then naturally turned back to the philosophical thinking of Computational Creativity in order to properly reap the benefits of truly creative technologies in society.

 

References

[1] Cardoso, A., Veale, T. and Wiggins, G. A. (2009). Converging on the divergent: The history (and future) of the international joint workshops in computational creativity. AI Magazine, 30(3), 15–22.

[2] Simon, H., and Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1-10.

[3] Colton, S. and Wiggins, G. A. (2012). Computational Creativity: A Final Frontier? Proceedings of the European Conference on Artificial Intelligence, 2012.

[4] Cook, M. and Colton, S. (2018). Neighbouring Communities: Interaction, Lessons and Opportunities. Proceedings of the Ninth International Conference on Computational Creativity.

[5] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014). Generative Adversarial Networks. Proceedings of the International Conference on Neural Information Processing Systems.

[6] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. and Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature 550, 354-359.

[7] Max, D. T. (2011) The Prince’s Gambit: A chess star emerges for the post-computer age. New Yorker, March 14th 2011 edition.

[8] Colton, S., Cook, M., Hepworth, R. and Pease, A. (2014). On Acid Drops and Teardrops: Observer Issues in Computational Creativity. Proceedings of the AISB’50 Symposium on AI and Philosophy.

[9] Colton, S. (2008). Creativity versus the Perception of Creativity in Computational Systems. Proceedings of the AAAI Spring Symposium on Creative Systems.

[10] Charnley, J., Pease, A. and Colton, S. (2012). On the Notion of Framing in Computational Creativity. Proceedings of the Third International Conference on Computational Creativity.

[11] Guckelsberger, C., Salge, C. and Colton, S. (2017). Addressing the “Why?” in Computational Creativity: A Non-Anthropocentric, Minimal Model of Intentional Creative Agency. Proceedings of the Eighth International Conference on Computational Creativity.

[12] Colton, S., Pease, A. and Saunders, R. (2018). Issues of Authenticity in Autonomously Creative Systems. Proceedings of the Ninth International Conference on Computational Creativity.

[13] Chamberlain, W. and Etter, T. (1984). The Policeman’s Beard is Half-Constructed: Computer Prose and Poetry. Warner Books.

[14] Barthes, R. (1967). The death of the author. Aspen 5-6.

[15] Colton, S. (2012) The Painting Fool: Stories from building an automated painter. In McCormack, J. and d’Inverno, M., eds., Computers and Creativity, 3–38. Springer.

[16] Mordvintsev, A., Olah, C. and Tyka, M. (2015). DeepDream – a code example for visualizing Neural Networks. Google AI Blog, July 1st 2015.

[17] Elgammal, A., Liu, B., Elhoseiny, M. and Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. Proceedings of the Eighth International Conference on Computational Creativity.

[18] Gallie, W. (1956). Art as an essentially contested concept. The Philosophical Quarterly, 6(23), 97-114.

[19] Strothotte, T. and Schlechtweg, S. (2002). Non-Photorealistic Computer Graphics: Modelling, Rendering and Animation. Morgan Kaufmann.

[20] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jozefowicz, R., Jia, Y., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Schuster, M., Monga, R., Moore, S., Murray, D., Olah, C., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y. and Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.

[21] Jordanous, A. (2016). Four PPPPerspectives on computational creativity in theory and in practice. Connection Science special issue on Computational Creativity, 28(2), 194-216.

[22] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A., Veness, J., Bellemare, M., Graves, A., Riedmiller, M., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S. and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature 518, 529-533.

[23] Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., Qin, C., Zidek, A., Nelson, A., Bridgland, A., Penedones, H., Petersen, S., Simonyan, K., Crossan, S., Jones, D., Silver, D., Kavukcuoglu, K., Hassabis, D. and Senior, A. (2018). De novo structure prediction with deep-learning based scoring. Proceedings of the Thirteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstracts).

[24] Metz, C. (2016). In Two Moves, AlphaGo and Lee Sedol Redefined the Future. Wired, 16th March 2016 edition.
