Interview with Mario Klingemann - ArtMarketGuru


Mario Klingemann is a German artist, recognised as a pioneer in the field of AI art. Using artificial intelligence and algorithms, Klingemann creates and investigates systems, and he is particularly interested in human perceptions of art and creativity. He is an artist in residence at the Google Arts & Culture Lab, winner of the Lumen Prize Gold 2018, and has also been awarded the British Library Labs Creative Award.

Thank you for talking to us, Mario.


Can you tell us a bit about yourself, do you have a tech background?

I guess one could say that. I was born in 1970, which means that my childhood coincided with the advent of handheld computer games and arcade machines like Pac-Man or Space Invaders, and my teenage years fell in the time when the first programmable pocket calculators and home computers arrived at our doorsteps. I was always drawn to technology, learned to program in BASIC at 12, experimented with what you might now call “generative art” on my Commodore C64, and when I finished school at the end of the 1980s I wanted to do something that allowed me to combine computers and image-making. The problem was that at that time I could not find anything that fit that description – art schools still seemed to be about manual, analog art-making, and computer science appeared way too dry to me.

Mario Klingemann, “The Butcher's Son”

So, I started out with some internships in advertising agencies and became a copywriter. In the early nineties Techno music emerged; I decided to go freelance and made a living designing flyers and posters for club nights and raves. This was a great opportunity, since I could write my own experimental Photoshop filters to make visuals, and as long as it looked “Techno” enough the organizers were happy with whatever I gave them. Around 1994 the World Wide Web arrived and I figured out how to create websites. A bit later someone approached me and asked whether I could also make the images I created move, so I learned After Effects and made motion graphics for TV for a few years.

In 1997 I co-founded a company called coma]², a collective of media artists, and for seven years we built pretty creative websites – it was the era of Flash and the infamous “skip intro”. During those years I also started showing my generative art and algorithmic experiments online, which got me my first award nomination at the Flash Forward Film Festival 2002 (I did not win, unfortunately), but I still like the trailer that I made for the awards show (watch on Vimeo).

At some point I realized that what I really wanted was to make more of these artistic explorations and less commercial work, and after a few more years of trying to balance the need to make a living with my urge to create art I finally grew the confidence to call myself an artist – and luckily found more and more people who believed me. Fortunately, the breakthroughs in deep learning happened at the same time, and the possibilities of AI that I had been waiting for were finally there, so I dived right in, learned the new skills necessary, and, well, here I am.


It is exciting seeing artists using AI as a medium and you are one of the pioneers. You have dubbed the movement “neurography” – what do you mean by this?

Neurography is short for “neural photography”. It is my way of working with neurally generated images, since it is as much a process of search and discovery as it is one of creation. Like a photographer who goes into the world, discovers an interesting motif and then looks for the best ways to frame and capture it, I go into the multidimensional latent spaces of my GANs on the hunt for interestingness. The difference from classical photography is that by training my own GANs I can create and explore new and different worlds every time and am not limited to the single one we are all living in.
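The kind of latent-space exploration Klingemann describes can be sketched in a few lines. This is an illustrative simplification, not his actual tooling: the latent dimension, the straight-line interpolation and the (unshown) generator are all assumptions; a trained GAN generator would turn each latent vector into an image.

```python
import numpy as np

def sample_latents(n, dim=512, seed=0):
    """Sample n points from a GAN's usual latent prior (standard normal)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, dim))

def latent_path(z_a, z_b, steps=8):
    """A straight-line walk between two latent points -- each intermediate
    vector, fed to a trained generator, would yield an in-between image."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_a + t * z_b for t in ts])

z0, z1 = sample_latents(2)
path = latent_path(z0, z1)
print(path.shape)  # → (8, 512)
# Each row would be rendered as: image = generator(path[i])  (generator not shown)
```

Hunting for “interestingness” then amounts to wandering such paths and stopping wherever the rendered frames surprise you.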


What draws you to AI? Can you tell us a bit about how you use the technology?

What fascinates me about working with AI is that it gives me control while at the same time leaving open the chance for accidents to happen. When training a neural network I can get the expected, namely images that look similar to what I chose as the training material, as well as the unexpected, which is what the model gives me if I feed it with the “wrong” inputs or have it extrapolate outside of what it knows.

I am very thankful to live in a time where scientists and researchers share their work and knowledge freely because with everything I am doing I am standing firmly on their shoulders. I usually start working with models and architectures that a researcher published and then make my own changes, take it apart or try to break it and generally attempt to make it do things that were not its original purpose.


Can you tell us about your project with Google?

My residency at Google Arts & Culture was a wonderful experience, since it allowed me to experiment with cultural data and the powerful technologies that Google has, and to collaborate with people who are a lot smarter than me. In my time there I worked on a lot of prototypes, which then became part of other projects. One work that I created during this time was “X Degrees of Separation”, which tries to find visual connections between cultural artifacts, in a similar way to how “Six Degrees of Separation” works with human relations. I could not have done it without the help of some brilliant Google engineers and the huge cultural database of Google Arts & Culture, which contains millions of artifacts from museums and collections worldwide.
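The core idea behind “X Degrees of Separation” – chaining two artifacts through a sequence of visually similar ones – can be approximated as a shortest path over a nearest-neighbour graph of feature vectors. Everything here is a toy assumption (one-dimensional stand-in features, the function name, the neighbour count); the real project works on learned visual embeddings of millions of museum images, not this data.

```python
import numpy as np
from heapq import heappop, heappush

# Toy stand-in: each "artifact" is a feature vector. A real system would use
# embeddings from a vision model; these 10 points on a line are illustrative.
artifacts = np.arange(10, dtype=float).reshape(-1, 1)

def visual_path(a, b, k=2):
    """Chain of visually similar artifacts from a to b: Dijkstra over a
    k-nearest-neighbour graph, with edge cost = visual distance."""
    d = np.linalg.norm(artifacts[:, None] - artifacts[None, :], axis=-1)
    neighbours = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at column 0
    dist, prev, heap = {a: 0.0}, {}, [(0.0, a)]
    while heap:
        cost, node = heappop(heap)
        if node == b:
            break
        for n in map(int, neighbours[node]):
            new = cost + d[node, n]
            if new < dist.get(n, float("inf")):
                dist[n], prev[n] = new, node
                heappush(heap, (new, n))
    path = [b]
    while path[-1] != a:
        path.append(prev[path[-1]])
    return path[::-1]

print(visual_path(0, 9))  # a chain of small visual hops from artifact 0 to 9
```

Restricting edges to each artifact's nearest neighbours is what forces the path through gradual visual transitions rather than one direct jump.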


How does AI lend itself to portraiture?

The urge to make portraits seems to be deeply embedded in our human nature. We have used every new medium that has been invented to explore portraiture from different angles. Painting, sculpture, photography and film were all able to capture different aspects of human nature, and I feel that the hyperdimensional aspect of the latent spaces inside neural networks might be something that we did not have in earlier media.

Mario Klingemann


What effect do you strive to achieve in your work or do you prefer to leave it to the machine to decide?

I am trying to find interestingness which is a search that never ends since it is in its nature to melt away like a snowflake in your hand once you have captured it. Interestingness hides among the unfamiliar, the uncommon and the uncanny, but once you’ve discovered it and dragged it out into the spotlight the longer you look at it, the more it becomes familiar or normal and eventually loses its interestingness.

I am using machines as detectors to help me in the search through the ever-growing pile of information that is disposed into our world at an accelerating speed. At the same time, I am adding to that pile myself by using machines to generate ordered patterns of information faster than I could do it left to my own devices. Sometimes in this process it is me who decides and sometimes I leave it to the machine.


Your work Memories of Passersby (2018) will be sold at a Sotheby’s auction on March 6. What has the reaction been like to having your work recognised by one of the major auction houses?

I was very surprised and happy to hear that Sotheby’s were interested in my art. Of course, given the growing interest in AI it was not entirely unexpected that works which used the medium would start to be sold at auction. The fact that my work will appear at auction amongst all the other artists who are excelling in this field, makes me a little proud and nervous at the same time.


The first work sold at auction that used AI as a medium was Edmond de Belamy by Obvious Collective – how does the technology you used to create Memories of Passersby I differ?

The main difference between Memories of Passersby I and the AI artwork which has appeared at auctions in the past is that it is not a finite artwork or a print, but consists of the ‘brain’ itself. It is self-contained and self-generating. One aspect that all machine-generated artworks have in common is that at some point in the process a human has to make a curatorial decision about which of the infinite variations of images the AI is able to generate will be selected to be shown. This can happen either at the end of the process, where you pick single images as the finished piece, or, as in the case of Memories of Passersby I, when I define the system that makes the images and give the machine a certain amount of freedom to roam within a given space of possibilities.


Are there any other artists that you can tell us about who are in your opinion using technology in an exciting way at the moment?

Yes, there are. At the moment the movement is still comparatively small, but it is steadily growing. Making a list of names is always a tricky business, since I will inadvertently forget someone, so I apologize in advance to those I left out and limit my list to artists who are using AI to make visual work, omitting others who use AI in different ways: Harshit Agrawal, Memo Akten, Sougwen Chung, Sophie Crespo, Cyril Diagne, Jake Elwes, Gael Hugo, Anna Ridler, Helena Sarin, Chris Sugrue, Hannu Töyrylä, Mike Tyka, Tom White.


What are your plans for the coming years? We hear your work will be displayed in the Barbican’s new exhibition “AI: More Than Human” this May.

At the moment I am completely focused on making a new installation called “Circuit Training” for the show at the Barbican. It is definitely my biggest project to date. What I am trying to create is a machine that makes the art people want (or deserve, depending on how you read it). Being confined to an exhibition space, the only material it can work with is the visitors themselves. It will work by continuously collecting images of people and using them as training data for models from which it will try to generate “art”. This art will then be judged by the audience, and from the feedback it gets it will try to learn which aesthetics, motifs, styles or compositions people prefer to see and adjust its training and classification processes accordingly. So, you could say that the whole system is a big GAN, with humans being both the data and the discriminators.
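The audience-as-discriminator loop described here can be caricatured as a tiny preference-learning sketch. Everything in it is an assumption for illustration – the five “style” categories, the hard-coded feedback function, the update rule – and it stands in very loosely for the real installation's training process.

```python
import numpy as np

N_STYLES = 5                               # hypothetical aesthetic categories
weights = np.ones(N_STYLES) / N_STYLES     # the system's belief about taste

def audience_feedback(style):
    """Stand-in for real visitor votes: this toy audience prefers style 3."""
    return 1.0 if style == 3 else 0.1

# Feedback loop: show work in every style, collect scores, and shift the
# belief toward whatever the audience rewards -- loosely the role a
# discriminator's signal plays in steering a generator.
for _ in range(100):
    for style in range(N_STYLES):
        weights[style] *= 1.0 + 0.05 * audience_feedback(style)
    weights /= weights.sum()

print(weights.argmax())  # → 3: the system has converged on the preferred style
```

The multiplicative update compounds over iterations, so even a modest per-round advantage for the rewarded style eventually dominates the normalized belief.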

I am also working on a new series of works that is inspired by the “Three Body Problem” – the original physics-based one, not the book – in which I translate this concept from three-dimensional space into the hyperdimensional latent space of a GAN. The first one in this series, called “Hyperdimensional Attractions: Bestiary”, has already sold and been shown in the library of one of my collectors. I have a few more in the works, and those will be available through my gallery, Onkaos, in Madrid.