Richard Bright: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity, (and maybe difficult to answer?) but how would you define intelligence? Can creativity also be defined and, if so, how would you define what creativity is?
Arthur Miller: Let’s take a pass on animal intelligence, interesting and complex though it is.
As for human intelligence, I would define it as an organising principle: one that gathers information from our responses to, and actions in, the world we live in, and acts on it through logic, learning, emotions and planning. How we gather this information is of the essence. Creativity is the key.
As for creativity, let’s begin with a working definition: It is the production of new knowledge from already existing knowledge, i.e., creativity is problem solving. But how do people solve problems, both on the emotional and logical levels?
Throughout our daily lives we regularly come up against situations which we deal with by turning them into problems, from drawing up a shopping list to studying the Middle East quandary to exploring string theory. I distinguish between everyday creativity like taking a different route to work – little “c” creativity – and the big domain-breaking feats of creativity, such as discovering a theory of relativity – big “C” creativity.
In my work I focus on big “C” creativity and genius-level thinking – people such as Einstein, Picasso, Virginia Woolf, Steve Jobs and Ada Lovelace, among others, whose extraordinary powers distinguish them from almost everyone else and which cannot be attributed simply to hard work. Studying how these people think can give us insights into ordinary everyday creativity.
Genius-level thinking includes the ability to discover new problems and to realise connections between disciplines that seem to everyone else unconnected. It may also involve taking someone else’s idea and pushing it in bold new directions; and not being afraid to make mistakes.
In my forthcoming book on AI and creativity in art, literature and music, I’ve conducted extensive interviews with leaders in these fields, framed against the background of their research. I will suggest that machines too can have the characteristics of creativity.
RB: Until a machine can originate an idea that it wasn’t designed to, can it be considered intelligent in the same way humans are? In other words, should creativity be the benchmark of humanlike intelligence?
AM: As I discussed in my response to the first question, and expand on elsewhere in my replies: in my opinion, creativity is a key benchmark of humanlike intelligence.
Surprise – the unexpected – is one characteristic of a creative result. Surprise and the novelty of the new result are subtle concepts, because some great discoveries took years to be accepted. Einstein’s annus mirabilis of 1905 was an annus mirabilis only in retrospect. In that year he discovered the special theory of relativity and the quantum nature of light, but carried on working as a patent clerk in the Patent Office in Bern for four more years. It was not until 1911 that scientists realised that Einstein had come up with a whole new theory of space and time.
I do not believe that only humans can be creative.
When the first computer art appeared, critics argued that it was not creative because the art was created by an algorithm input by a human being. But today’s machines are no longer the slaves of pre-programmed algorithms. Artificial neural networks are loosely modelled on the neuronal structure of the human brain and are capable of much more sophisticated behaviour. Other sorts of modern machines are also capable of learning through statistical means. Both are capable of going beyond the input data, just as humans are.
RB: The Lovelace 2.0 Test claims to be a better measure of artificial intelligence than the Turing Test, particularly in regard to assessing creativity. To pass the Lovelace Test a machine has to create something original that it was not programmed to do. What is your view on this?
AM: Like the Turing Test, the Lovelace 2.0 Test has a machine interrogated by a human, though in this case the interrogator knows they are dealing with a computer. The human might ask the machine to draw a picture of a man holding an elephant. If it succeeds, it is then asked to write a story about a monkey who likes to ride horses. The interrogation goes on until the machine fails, or until it produces an artefact that the human considers to demonstrate genuine machine intelligence. Thus far the Lovelace 2.0 Test is merely a thought experiment.
Computational Creativity is the study of machine creativity. It is an interdisciplinary subfield of AI that explores how computational systems can exhibit behaviour that unbiased observers would deem to be creative. In contrast, to pass either the Turing Test or the Lovelace 2.0 Test, the machine does not need to do anything creative. It simply responds to questions.
The key words in this definition of computational creativity are “unbiased observers.” In Ada Lovelace’s original description of what became known as the Lovelace Test, she specified that a machine could be described as exhibiting creativity when it produced an artefact which the programmer could not explain on the basis of an input algorithm. Today’s machines incorporate random processes capable of changing codes in ways that cannot be predicted by the human programmer and thus go beyond Lovelace’s original suggestion. This led the computer scientist Mark Riedl to invent Lovelace 2.0.
Computers work according to statistics, just like we do. We navigate the world by taking in data, weighing various suggestions (hypotheses) for dealing with it, and then deciding which has the greatest chance of success. Should I cross the street against the light in heavy traffic? I look around and see a gap in the traffic, or perhaps the traffic seems to be slowing up, and I decide to go for it. We, too, are statistical engines; this essentially probabilistic way of making decisions is too often forgotten.
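This hypothesis-weighing can be sketched in a few lines. The example below is purely illustrative – the action names and probability estimates are invented – but it shows the basic move of picking whichever hypothesis has the greatest estimated chance of success:

```python
# Sketch of weighing hypotheses and choosing the most promising one.
# The actions and their success probabilities are invented for illustration.
def best_action(actions):
    """actions: {name: estimated probability of success}."""
    return max(actions, key=actions.get)

# Hypothetical estimates formed from looking at the traffic
estimates = {"cross_in_gap": 0.9, "wait_for_light": 0.99, "dash_now": 0.4}
print(best_action(estimates))  # picks the action with the best odds
```

A real decision also weighs costs and payoffs, not just probabilities, but the statistical core is the same.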
To rate an artefact produced by a computer as creative means that we can’t tell whether it was created by a human or by a machine. Many researchers in machine creativity assert that we should say up front that the artefact was made by a machine and ask, ‘Would you buy it?’ Presenting the artefact for what it is would go far towards removing the bias against machine creativity.
As humans we only know human creativity. A first step towards exploring machine creativity will be to produce machines that can be creative like us. But we need to bear in mind that machines are silicon life forms and ultimately their creativity will be different from ours, perhaps even exceeding ours.
Machines are already showing glimpses of creativity. In 2016 Google’s AlphaGo defeated Lee Sedol, a world-class Go player. As an artificial neural network, AlphaGo learned the rules of Go by studying thousands of matches played by Go masters and by playing against itself millions of times. Its 37th move in the second game was considered a poor choice according to accepted wisdom dating back centuries. In fact the machine had reacted to the data at hand – the Go board’s configuration – and decided on a move that a human would never consider making. Lee was ecstatic about the “beautiful” move. But the intricate internal details of how AlphaGo made that move are still unknown, as there is still incomplete knowledge of what goes on in the network’s hidden layers.
Similarly, when IBM’s Deep Blue played Garry Kasparov in 1997, on the 44th move of the first game Deep Blue made a move so strange and unexpected that Kasparov stormed out of the room claiming that IBM had cheated – there was a human backstage, he insisted. He was eventually convinced otherwise, but never recovered his equilibrium and lost the match. It turned out that the machine had hit a glitch owing to a bug in its software. Nevertheless it soldiered on and chose a move with a good likelihood of success. Later Kasparov said, “Suddenly [Deep Blue] played like a god for one moment.” It satisfied the original Lovelace Test because the move could not be traced back to an input algorithm.
In my opinion these two machines pass the Lovelace Test – they both exhibit glimpses of creativity.
I believe that machines will one day produce art, literature and music of a sort we cannot even imagine. Google’s DeepDream, a deep neural network, has produced previously unimaginable images. Machines are beginning to write software capable of producing literature that explores linguistic space in totally new ways. In music we have the case of a machine in Google’s Project Magenta which wrote a melody (albeit a very simple one) by itself, without having any rules input for how to write music. It learned the rules for writing music from studying a database of 4,500 pop tunes, in the same way that we have a database of music in our brains. Project Magenta’s machine was seeded with four notes, in the same way that we can sit at a piano and play a few notes to get started on creating a song based on our knowledge of music.
Then there is “Beyond the Fence,” a musical whose plot, music and lyrics were suggested by machines. It ran for six weeks in the West End.
In my opinion, however, autonomous machines with volition, as opposed to creativity, are still on the far horizon. I will address this contentious topic in my book.
RB: Algorithms have been created that help us to better understand the mind of a Bach. More recently, technologists have created an AI called ‘Aiva’ (Artificial Intelligence Virtual Artist) and taught it how to compose classical music – Aiva also became the first AI ever to officially acquire the worldwide status of Composer. After having listened to a large amount of music and learned its own models of music theory, Aiva composes its very own sheet music. Do you think computers can be genuinely creative rather than capturing concepts of music theory by the acquisition of existing musical works?
AM: In my lectures on AI and creativity, I sometimes play what I call the “Bach game.” I play an excerpt from a piece by Bach and another generated by the musician and computer scientist David Cope, whose software takes apart hundreds of Bach pieces note by note, tagging each note in specific ways as to how it appears in a score, and then reassembles them in a statistical manner. I ask the audience which excerpt is the Bach. There is usually a 50/50 split. What really bothers my audiences is that the machine has shown glimmers of creativity, an attribute many people still believe is reserved for humans alone.
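Cope’s actual software is far more elaborate, but the take-apart-and-reassemble idea can be sketched with a toy first-order model: record which note follows which in a corpus, then generate new material by sampling those transitions. (The corpus below is an invented stand-in, not real Bach data.)

```python
import random
from collections import defaultdict

# Toy corpus standing in for tagged Bach scores (invented note names)
corpus = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C", "E", "G", "C"]

# Build a first-order transition table: note -> notes that followed it
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def recombine(start, length, seed=None):
    """Reassemble notes statistically: each next note is drawn from
    the notes that followed the current note somewhere in the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(recombine("C", 8, seed=1))  # a new sequence built from old transitions
```

Every phrase it produces is stitched entirely from transitions that occur in the corpus, which is why the output can sound stylistically plausible without copying any passage outright.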
Recently a potentially lucrative industry has emerged selling music generated by artificial neural networks. Clients specify moods, rhythms and even ethnicity for the output music. The products are meant as background music for advertisements, Indie productions, YouTube videos and games. The selling point is that there are no copyright fees.
Jukedeck was among the first in this genre. Such companies are growing rapidly and each has its own hype: Aiva’s claim is that it is the first registered composer; Jukedeck’s, that it has churned out over a million songs. The Holy Grail for such companies is to be bought out by someone like Google. Their intent is not research but purely commercial.
I have already mentioned Project Magenta’s step towards creating a machine which produces music on its own. For now their intent is to provide a means for musicians and artists to work with machines.
François Pachet, a musician and computer scientist, leads a group at Spotify, in Paris, that explores ways for musicians and machines to work together to create pop music in several different genres. They have designed an impressive next generation of AI-based composition tools. Their recent album, “Hello World,” goes beyond the level of a demo, and contains sounds generated by their machine that mesmerise, though some might find them hard on the ears. But was that not also the case for Erik Satie’s scores and Stravinsky’s “Rite of Spring”? These have since become mainstream.
As to whether machines can be ‘genuinely creative’ without being fed existing musical works: we humans, too, grow up on a diet of existing music, which forms the basis for the new creations of a Bach, a Beethoven or a John Lennon.
RB: Will AI-composed music ever be indistinguishable from the work of human musicians?
AM: This has already occurred, as I discussed in my description of my “Bach game.” It is the first step toward AI producing its own music, which will be a sort that today we cannot even imagine.
RB: Following on from the previous question, scientists have created an artificially intelligent system (GAN – generative adversarial network) that is capable of producing paintings in which deep neural networks are taught to replicate a number of existing painting styles. The new, modified version, Creative Adversarial Networks (CAN), is designed to generate work that does not fit the known artistic styles. After the paintings were produced, the scientists conducted a survey with members of the public in which they mixed the AI works with paintings produced by human artists. They found that the public preferred the works by AI, and thought they were more novel, complex, and inspiring. What is your view on this?
AM: A GAN is made up of two networks. One is like a detective and the other a forger. The detective is, say, an expert on Picasso and the forger tries to fool the detective with fake Picassos it has generated. In the parlance of GANs, the detective’s system is known as the ‘discriminator’, and the forger’s network as the ‘generator’. The detective keeps rejecting images from the generator. The forger tries to improve its products based on what has been rejected. In the course of this adversarial situation the forger produces interesting variations on Picasso.
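This detective-and-forger loop can be sketched in miniature. The toy Python below is not any production GAN – the ‘artworks’ are just numbers drawn from a distribution, the detective is a one-variable logistic regression, and the forger learns only a single shift – but the adversarial update pattern is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 0.0, 0.0      # the detective (discriminator): D(x) = sigmoid(w*x + b)
theta = 0.0          # the forger (generator): g(z) = z + theta, far from the data

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)           # the 'genuine Picassos'
    fake = rng.normal(0.0, 1.0, batch) + theta   # the forger's current attempts

    # Detective's step: push D(real) towards 1 and D(fake) towards 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Forger's step: adjust theta so the detective calls the fakes real
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean((d_fake - 1.0) * w)
```

After training, the forger’s shift has drifted towards the mean of the real data: each side’s improvement forces the other to improve, which is the engine driving the “interesting variations” described above.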
In a CAN, products of the generator are assessed by the discriminator both as to whether they are art and as to whether they are in the style of the paintings on which the discriminator network has been trained – the WikiArt dataset. This huge online collection of artworks is made up of 81,449 fine-art paintings, in 27 different styles and 45 different genres (including interior, landscape and sculpture), from 1,119 artists spanning the years 1400 to 2000.
One interesting result of this work is that, with no human intervention, CAN decided on abstraction as a solution to the problem posed to it, which was to seek a style that differed from those in the training set. Ahmed Elgammal, the computer scientist who created CAN, considered two possibilities: either there was a bias of some sort in the data towards abstraction, “or the machine has captured the trajectory of art history, which is towards abstraction.” He opted for the second. Becoming more abstract was natural for both the human and the machine artist.
Elgammal was surprised that human subjects preferred the AI-produced art to that produced by human artists, which included artworks from Art Basel 2016, at the pinnacle of the art world.
But could the products of CAN be considered art? Elgammal asked human subjects how they felt when they interacted with all the sets of paintings. For the CAN products they sensed intentionality, saw an emergent structure, felt that the painting communicated something to them, and felt inspired by it. In all these categories the CAN products rated higher than the art produced by human artists. This can be taken to mean that human subjects deemed the CAN products to be art.
As for creativity, art history students judged the CAN products to be novel and aesthetically pleasing.
Elgammal compared these results with a widely used set of criteria for creativity proposed by the British computer scientist and artist Simon Colton, among them novelty and the ability of a system to assess its own creations. Elgammal sees CAN as satisfying both requirements. He attributed this to the interaction between the adversarial networks in CAN, which forced the system to explore creative space in order to deviate from established styles in the data while still producing products that the discriminator judged to be art.
RB: The late Harold Cohen was a pioneer in computer art, algorithmic art and generative art, who abandoned his distinguished career in abstract painting in the 1960s in order to “collaborate” with AARON, a computer program that he had designed to produce its own artistic images. AARON is one of the longest-running, continually maintained AI systems in history. Do you have any examples of artists engaging, or ‘collaborating’, with AI now?
AM: I have already mentioned Google’s Project Magenta, which has artists on site who are computer scientist and artist rolled into one. This new breed of artist populates the new avant-garde, as I have discussed in my recent book, Colliding Worlds: How Cutting-Edge Science Has Redefined Contemporary Art. Today cutting-edge art is a product of science and technology, particularly AI. Project Magenta also supports artists in residence and is in close contact with others.
Among them are Mario Klingemann, Memo Akten, Kyle McDonald, Anna Ridler, Refik Anadol, Gene Kogan, Jake Elwes, Theresa Reimann-Dubbers, Terence Broad and Michael Erlers. Their work pushes the boundaries of recent developments in AI such as DeepDream, GANs, Pix2Pix and CycleGAN. The computer scientists who created these processes have a keen interest in art and appreciate input from artists in order to improve their systems. Many of these systems are open-source, meaning they can be downloaded from the web free of charge and used ‘out of the box’ – a help for artists whose coding ability may be minimal.
RB: Can AI learn to be creative? Can AI be taught how to create without guidance and develop its own sense of creativity?
AM: Yes, I believe that in the future machines will be capable of reading the web and, for example, developing an interest in the arts – “Well, that looks cool. I think I’ll try art.” They will build on what they have learned, just like children learn to draw, by imitation.
RB: A lot of the processes behind creative thinking are still unknown. Do you think AI has a big role to play here in helping the understanding about our own creative methodology?
AM: Absolutely – by examining, for example, the workings of the so-called hidden layers in a convolutional neural network. This sort of network is loosely modelled on the human visual system, that is, the way in which we work out that the object in front of us is a cat or a dog or a car. Doing ‘brain surgery’ on an artificial neural network is already under way in DeepDream, GANs and CANs, in ways that include generating art.
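A minimal illustration of what such ‘brain surgery’ inspects: below, a single hand-set convolutional filter is applied to a tiny synthetic image, and we look directly at the resulting activation map. (Real networks stack many learned filters; this one is fixed, and the image is invented, purely for clarity.)

```python
import numpy as np

def conv2d(image, kernel):
    """Apply one convolutional filter and a ReLU - a toy 'hidden layer'."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return np.maximum(out, 0.0)  # ReLU, as in a real hidden layer

image = np.zeros((6, 6))
image[:, 3:] = 1.0               # a synthetic image with a vertical edge
sobel = np.array([[-1, 0, 1],    # a vertical-edge-detecting filter
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=float)

activation = conv2d(image, sobel)  # the hidden feature map we inspect
print(activation.max())            # responds strongly where the edge is
```

Inspecting activation maps like this one, layer by layer, is the basic move behind visualising what a network has learned to ‘see’.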
New architectures will come online, such as quantum computers and organic structures, as in Eduardo Miranda’s work. He uses a slime mould as an interface with a piano. In response to music from the piano, the slime mould generates sounds of a sort never heard before. Here, too, surprise and novelty enter – two characteristics of what it is to be creative.
Emotions are essential for creativity, and this subject is being explored in a relatively new area of AI, Affective Computing, which seeks to place a machine in the world so that it can function in it and respond to it. At first machines will be designed to express emotions like ours, and then they will go on to evolve their own.
The big question is whether a machine can create art and music on its own – that is, be an artist or musician in its own right and produce compelling works. I say, yes, in time.
It is a mistake to try to understand machines on the basis of how we understand humans. Machines are silicon-based life forms and so are alien to us. Their products will entertain us as well as their brethren, as we continue to merge with them.
As to whether machines can help us understand our own creative processes – by working in collaboration with human beings they can help us understand our creative abilities better, aid us in analysing our own thinking, how we solve problems, and how we draw connections between apparently disparate concepts.
RB: How far can AI go, or should go, in the creative process?
AM: As Frank Sinatra put it: All the way.
In the meantime, stay tuned for my book.