Richard Bright: Can we begin by you saying something about your background?
Mario Klingemann: I am an autodidact and attended neither university nor art school. The reason is that whilst I was interested in technology and programming on one side and visual art and photography on the other, at the time when I had to decide which path to take neither academia nor art school offered anything that seemed to even remotely combine my various interests. We are talking about the late 80s here, when there was no internet yet, so I stayed ignorant of the few opportunities worldwide where I might have been able to combine programming with making art. So I started off with some internships in ad agencies, became a freelance graphic designer in the techno music scene, worked a few years in motion graphics, co-founded a web design collective, tried to get rich quick working for a few start-ups and eventually realized that I had been an artist all along – but that unless you explicitly tell everybody, the very same work you do is either a nerdy experiment or something that ends up on the walls of a gallery.
The thread that runs through my personal development is that I was always highly interested in creative uses of technology, and thus my interests evolved in parallel with it. As a consequence I often happened to be an early adopter, be it working with digital imagery, making websites, generative art or neural art – in phases where those technologies were still uncharted, so you had to find your own way around.
RB: What is the underlying focus and vocabulary of your work?
MK: My work happens on various levels of detail. At the very coarse level is my interest in systems of any kind. This includes actual physical systems like machines or humans, complex systems like society or bureaucracy, or belief systems like art and religion. I try to understand what makes these systems tick and how they can be manipulated or augmented. Art as a system is particularly interesting to me since it involves several subsystems, like human creativity and perception on the creator’s side and the social, commercial or cultural mechanisms on the audience’s side. Some questions I try to find answers to in this context are, for example, “What makes one image be perceived as art and another not?” or “What is an artist?” These questions then bring me into areas of very fine detail, like “how do you make a bunch of pixels look like an eye?” or “how can a machine create the 100,001st image and make sure that it looks different from all the previous ones and is interesting to a human spectator?”
Ultimately I do all this to satisfy my own curiosity, but since I have to live and eat and since art is not a one-way channel I try to make work that is relevant, interesting, entertaining or touching to other human beings as well.
My work with neural networks introduced quite a few new terms and concepts into my practice. The most important one is the concept of the latent space. This is the internal abstract representation of what a neural network has learned. It is a multidimensional space in which every concept or object has its defined place, which in turn allows you to make measurements, create order or traverse it. The fascinating aspect is that for this representation it does not matter what a model was trained on – it could be images, sounds or words. This allows us to build universal translators that can, for example, translate a text into an image or an image into a sound. A real-world example of this is Google Translate, where a text first gets translated into a latent representation or abstract meta-language and then this representation gets transformed back into another language.
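To make the idea of measuring and traversing a latent space concrete, here is a minimal sketch (not the code of any particular model – the random vectors below are hypothetical stand-ins for the embeddings a trained network would produce). It shows that once two concepts are points in a vector space, you can measure the distance between them and walk the straight line connecting them:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 128  # hypothetical latent dimensionality

# Stand-in embeddings for two concepts a trained model might have learned.
cat = rng.normal(size=dim)
dog = rng.normal(size=dim)

# Measurement: how "far apart" the two concepts are in the representation.
distance = np.linalg.norm(cat - dog)

# Traversal: linear interpolation yields intermediate latent points; decoding
# each one would produce a gradual morph from one concept to the other.
steps = [cat + t * (dog - cat) for t in np.linspace(0.0, 1.0, 5)]
```

Decoding each of the five interpolated points through the model’s generator is what produces the smooth morphing sequences familiar from neural art.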
One term I coined for my visual work in these latent spaces is “Neurography”, which is short for “Neuro-Photography”. Like a photographer goes out into the physical world, selects a motif and frames it, I travel into the latent spaces of the models I trained and bring back the images I find there. The difference from the real world is that with every model I train I can create an entirely new universe with different rules and behaviours.
RB: When and how did you begin using computers, coding and neural networks to make art?
MK: My earliest contact with computers was when I was still a child in the 1970s. My father is an engineer, and the company he worked at had these old Wang mainframes and huge pen plotters. So whenever I visited him in the office I played moon landing on an old green terminal and watched those drawing machines with awe. My dad also often brought home the latest technical gadgets like electronic games, chess computers or programmable pocket calculators. On one of those I started learning BASIC. The next one he bought even had a tiny pen plotter, which allowed you to output graphics that nowadays we might call generative art. When I was 14 I had finally saved enough money to buy my first “home computer”, a Commodore C64, which allowed me to explore the world of bitmaps and raster graphics.

Working with bitmaps I had my first “epiphany” – I realized that a bitmap theoretically allows you to make any image imaginable (within the limits of the available resolution and color depth). A bitmap can show you any image that there ever was and any image that has never been seen before. On the other hand, inside the computer a bitmap is just a number, admittedly a very big number, but if you just tried out all the possible numbers eventually you would see every image. So I wrote a program that did exactly that – going through every possible bit combination (something that John F. Simon, Jr. did 10 years later in “Every Icon”). I was 15 years old and we had not had combinatorics in school yet, so I learned the hard way that this approach would not really work: even in a small bitmap the number of possible images is much bigger than the number of atoms in the universe, so trying to brute-force this would probably not show me a single interesting image in my lifetime.
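The combinatorics that defeated the teenage experiment are easy to check. The sketch below (an illustration of the idea, not the original C64 program) counts the possible 1-bit bitmaps of a given size and shows the “Every Icon”-style enumeration itself:

```python
# A 1-bit bitmap of width w and height h is just a number with w*h bits,
# so there are 2**(w*h) possible images. Even a tiny 32x32 icon dwarfs the
# commonly cited ~10**80 atoms in the observable universe, which is why the
# brute-force enumeration can never get anywhere.
w, h = 32, 32
total_images = 2 ** (w * h)      # 2^1024 possible 32x32 one-bit icons
atoms_in_universe = 10 ** 80     # rough common estimate

# The enumeration itself is trivial to write:
def bitmaps(w, h, limit):
    """Yield the first `limit` bitmaps as lists of rows of 0/1 pixels."""
    for n in range(limit):
        bits = [(n >> i) & 1 for i in range(w * h)]
        yield [bits[r * w:(r + 1) * w] for r in range(h)]

first = next(bitmaps(2, 2, 4))   # the all-zero (all-black or all-white) 2x2 bitmap
```

At one image per millisecond, exhausting even the 2x2 case is instant, but 2^1024 images would outlast the universe many times over.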
Nevertheless, the idea that a computer is a machine that theoretically allows you to generate interesting images from scratch stuck with me, and over the past 35 years I have just kept improving my methods for finding the signals in the noise – starting with writing filters, then moving on to algorithm-based generative art and eventually using neural networks.
RB: Your project Alternative Face v1.1 features a video of the musician Francoise Hardy speaking with the voice of Kellyanne Conway. Can you say something about this project? What are its aims?
MK: The infamous “alternative facts” quote by Kellyanne Conway happened to coincide with my interest in creating believable imagery. The realization that Orwellian Newspeak has become a vital part of the enemy’s weaponry, extrapolated by the mass-psychological potential that “Newsee” will have once these new technologies become easily abusable, just asked for a coarse warning signal. Whilst words have a more subtle and longer-term impact on the way we think and feel, images have a much more immediate effect, and one flaw in the way our perception works is the “I believe it when I see it” factor. The possibilities to lie with technology unfortunately develop faster than our ability to adapt our lie detectors, so one motivation to put this piece out there was to provide some basic inoculation to prepare our immune systems for the actual believability-deficiency syndrome that is spreading quickly.
RB: Alternative Face v1.1 seems particularly relevant in the current age of ‘fake news’. How do you see your role as an artist in such times?
MK: I like the picture of the artist as a canary in the coal mine. Of course every human being observes the world and how it changes and makes assumptions about how it might affect their life, but maybe artists have a particular way of seeing that enables them to detect the impact or potential dynamics of certain “disturbances in the force” earlier than others. And since we live in an age where technology has become a vital part of society, an artist like me who works intensively with the latest technologies happens to be there at the right time and place. Alternative Face is a good example, especially seen in hindsight: now, a year later, we are flooded with “Fake Porn” and “Deep Forging” videos after the required software and skills have become so commodified that everyone can do it.
RB: Can you say something about your residency at Google Arts and Culture and its resulting X Degrees of Separation installation?
MK: My ongoing residency at the Google Arts and Culture Lab is a unique opportunity for me to work artistically with all kinds of cultural data, the latest deep learning technologies and Google’s technical infrastructure, which is required to handle data at this scale. Arts and Culture is a non-profit institution by Google that helps museums and collections digitize their cultural artifacts and thus make them available for free to everyone around the world. In the lab, we are a group of artists and engineers exploring how this treasure trove of human cultural heritage can be made accessible, presented or analyzed in ways that go beyond just showing the images or their metadata. One of the issues you have when you are facing millions of potentially interesting or important artifacts is: where do you start? What are you looking for? What are you hoping to discover?
X Degrees of Separation is one way to address this. It tries to build visual connections between the artifacts in the collections. Typically, when you are faced with hundreds of thousands of potential choices, you might start with two artworks you already know, maybe a painting by van Gogh and a sculpture by Degas. X Degrees of Separation will then employ the algorithm that also drives Google’s image search and use the visual feature vectors of those artworks to find other artworks or artifacts, so that step by step you get a gradient through forms, shades, textures or colors that leads you from the painting to the sculpture. What I find interesting is that along this route you can discover artworks by artists perhaps unknown to you, and you might find genres or art forms that are outside your filter bubble. The other aspect that appeals to me is that X Degrees of Separation entirely ignores the metadata of the artifacts, so it is free of human preconceptions of what is “high” art, “primitive” art or “outsider” art.
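The core idea can be sketched in a few lines. This is a hedged illustration of the principle, not Google's actual implementation: the random matrix stands in for the feature vectors a vision model would extract, and a simple greedy search replaces whatever production algorithm is really used. Given two endpoint artworks, it slides a point along the line between their feature vectors and at each step picks the real artwork in the collection closest to that point:

```python
import numpy as np

rng = np.random.default_rng(1)
collection = rng.normal(size=(1000, 64))   # stand-in feature vectors, one per artifact
start, end = collection[0], collection[1]  # the two chosen endpoint artworks

def visual_path(collection, start, end, steps=6):
    """Return indices of artworks forming a visual gradient from start to end."""
    path = []
    for t in np.linspace(0.0, 1.0, steps):
        target = (1 - t) * start + t * end               # point on the straight line
        dists = np.linalg.norm(collection - target, axis=1)
        path.append(int(np.argmin(dists)))               # nearest real artwork
    return path

route = visual_path(collection, start, end)
```

Because the path is built purely from feature-space distances, nothing about artist, period or genre enters the choice – which is exactly the metadata-blindness described above.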
RB: Given the implications that AI and machine learning already have on our society, what role do you think artists have to play?
MK: In an ideal world artists do not have to do research for an expected outcome or commercially viable products. This allows us to explore areas that are not interesting for scientists or businesses. But as it turns out these neglected areas often contain potentially useful ideas. It is a bit like gentrification in idea space – artists move into vacated spaces left behind by scientists or engineers, make those spaces inhabitable and interesting just so a while later entities like the entertainment industry or start-ups see the commercial potential and move in. As an artist you then have to decide if you want to profit from the increased interest and go mainstream or if you rather move on to the next vacant lot.
RB: Can AI learn to be creative? Can AI be taught how to create without guidance and develop its own sense of creativity?
MK: The current deep learning techniques we use are definitely not creative in the same sense that humans are creative. On the other hand, they are already able to produce a huge range of audio-visual or textual output that most people would classify as “creative”, since what we get to see, hear or read there is often novel, unseen or unheard of, as the machines are able to generate new combinations of the material they have been trained on in ways that humans might not have. This means that machines are already able to cover that aspect of creativity – our ability to make new connections between concepts we already know. Some people might say that machines will never develop “imagination”, but I don’t think so. Actually I don’t believe in imagination if you interpret it as the immaculate conception of ideas. Any idea that seems to spring to your mind out of thin air is always triggered by some external impression, something we have seen, heard or felt, which then gets processed by our subconscious mind just to surface to our attention when the right conditions are met.
What machines are still missing is the ability to understand what they are making or to develop their own motivation to do so. Also, unlike humans, they are not yet able to evolve their “interests” – one model is trained for one purpose, and it will not escape from the space of possibilities it has been trained for. I believe that the key that will open this locked door is the ability to generate and understand stories. Once machines can make up believable and interesting stories they will also be able to tell themselves their own stories, which in turn can direct their creativity or give us as an audience explanations as to why they have chosen to generate something.
RB: A lot of the processes behind creative thinking are still unknown. Do you think AI has a big role to play here in helping the understanding about our own creative methodology?
MK: As you can see from my previous answer, I naively believe that I pretty much know how my creativity works. At least so far this simple model has served me well. For me it is actually more interesting to find out if machines can help us find ways to think creatively beyond the beaten path – similar to how AlphaGo demonstrated the limitations of human imagination when it found new ways of playing Go that humans had not tried in 2,000 years of playing the game. I see the biggest potential in augmenting the fuzzy, sometimes unpredictable way in which our brains work (and whose exact inner workings we will probably not be able to fully understand for a long time) with the precision, specialization and massive computing power and memory of machines.
RB: How far can AI go, or should go, in the creative process?
MK: How far can we go? Ultimately the limit in what machines can do creatively is our ability to recognize a creation as such and not perceive it as noise or random gibberish just because we lack the ability to understand or read it. If you do not speak Chinese you will never be able to appreciate the beauty of a poem written by a Chinese poet. The same analogy applies to what machines might be able to create – as they are enabling us to explore uncharted creative territories we humans will have to keep adapting, learning and developing our perception and comprehension.
Regarding the question of “should”, I am not an advocate of artificial limitations or taboos, at least when it comes to creativity or art. If it should happen that machines at some point become better at creativity than humans I’d happily like to be a member of the welcoming committee. May the better one win.