Can machines be creative? This week we think about creativity, explore how AI/ML is creating new things and look at some funky bananas.
What is creativity?
We start this week by asking a very theoretical question, which does not have a clear answer. If you go the boring route and ask the Oxford dictionary what creativity is, you get this: “The use of imagination or original ideas to create something; inventiveness.”
After searching high and low through many definitions, it’s clear that creativity means a lot of things to a lot of people. Still, two key trends emerge from this exercise. First, creativity involves something “new.” Whether it is creating, combining, or brainstorming, creativity generally involves an original idea. Second, defining creativity tends to involve romanticizing it, adding mystery and deep meaning, and leading the reader to believe we will never really know what it is. David Ogilvy’s quote on the creative process certainly resembles this sort of thinking:
“The creative process requires more than reason. Most original thinking isn’t even verbal. It requires a groping experimentation with ideas, governed by intuitive hunches and inspired by the unconscious.”
Can machines be creative?
You could argue machine learning is not creative if you believe creativity must involve something “new.” Machines are very good at finding patterns and sorting your data into groups you already know. If you give a computer 10,000 clearly labeled pictures of cats and dogs, it will only ever classify a new image as a cat or a dog. It won’t invent new groups of cats and dogs.
Could machines be creative?
While there have been limited attempts in the past to use AI in the creative process, researchers are starting to tinker with the idea of AI creativity. Researchers at Google introduced an algorithm called “Deep Dream,” which produces hallucinatory images by modifying how image recognition algorithms work.
Deep Dream is an interesting example of researchers asking a computer to “come up with” what something looks like, instead of asking what something is. Instead of giving an image recognition algorithm a picture of a banana and asking it to tell us what it is, Google fed their algorithm a plain white-noise image and told it to create an image that looks most like a banana. While this may not qualify as machines being creative, the process yields some funky bananas.
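To make the trick concrete, here is a toy sketch of the underlying idea: hold a classifier fixed and run gradient ascent on the *pixels* of a noise image so its “banana score” keeps climbing. The real Deep Dream does this through a deep convolutional network; everything below (the linear “classifier,” the weights, the score) is an invented stand-in for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64
w_banana = rng.normal(size=n_pixels)  # stand-in for the banana-class weights

def banana_score(x):
    # the "logit" our toy linear classifier assigns to the banana class
    return float(w_banana @ x)

x = rng.normal(scale=0.1, size=n_pixels)  # start from white noise
start_score = banana_score(x)

for _ in range(100):
    grad = w_banana       # d(score)/dx for a linear model
    x += 0.1 * grad       # ascend: make the image ever more "banana-like"

# x now scores far higher than the noise we started from -- the optimization
# ran on the input image, not on the model, which is the Deep Dream inversion.
```

The key design point is that the model's weights never change; only the image does, which is why the output reflects the machine's internal idea of a banana rather than any real banana.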
What is going on here?
An important thing to realize about artificial intelligence is that machines don’t understand the world the way we do. The Google team used an approach called deep learning, which essentially breaks an image down into layers of features and then rebuilds it, trying to learn what things are along the way. While I don’t want to oversimplify by saying things get “lost in translation,” a computer’s idea of what an ideal banana looks like is clearly different from what one actually looks like.
These researchers continued to tinker with their algorithm and started to manipulate the original white-noise images into crazy things. When they asked it to create images with trees and pagodas, they got this image:
Machines creating art
While the previous example wasn’t specifically designed to create art, researchers from Rutgers University took these ideas one step further and are now using deep neural networks to create entirely new pieces of art. They relied on a relatively new method called Generative Adversarial Networks, or GANs, to create their new images.
WTF is GAN?
A GAN is a method created by Ian Goodfellow in 2014 that essentially pits one algorithm against another to create new things. The way Goodfellow explains it is to imagine a money counterfeiter (one algorithm) and a police officer (another algorithm). The counterfeiter is tasked with creating fake money, with limited knowledge of what actual money looks like. The counterfeiter starts producing fake money and asks the police officer, who knows what real money looks like, whether it is real. The police officer replies “fake” or “real,” and the counterfeiter takes this feedback and tries again. After enough attempts at creating fake money, the counterfeiter should be able to fool the police officer.
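The counterfeiter/police game can be sketched in one dimension. In this made-up example (every number and name below is ours, not Goodfellow’s implementation), the “real money” is numbers drawn near 4, the counterfeiter only learns an offset to add to noise, and the police officer is a simple logistic classifier. Each side updates its parameters using the other’s feedback.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

theta = 0.0       # counterfeiter (generator) parameter: fake = noise + theta
a, b = 0.0, 0.0   # police officer (discriminator): D(x) = sigmoid(a*x + b)

lr_d, lr_g, batch = 0.05, 0.05, 64
for _ in range(3000):
    real = rng.normal(4.0, 0.5, batch)          # "real money" lives near 4
    fake = rng.normal(0.0, 1.0, batch) + theta  # counterfeiter's attempts

    # Police step: maximize log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Counterfeiter step: maximize log D(fake) -- try to fool the police
    d_fake = sigmoid(a * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * a)

# After training, theta has drifted toward 4: the counterfeiter's output
# is now centred where the real data lives.
```

The gradients above are written out by hand because both models are one-liners; in practice each side is a deep network and a framework computes the same updates automatically.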
In an art setting, one algorithm (the counterfeiter) generates a new piece of art, which is then passed to the second algorithm (the police officer) to rule on whether it is fake. The authors did exactly this, and here are some examples that “fooled” the police officer. As you can see, just like our bananas, they are a little wonky.
(the wonkiness is a whole different story)
Despite these being entirely generated images, you could argue that they simply imitate artworks of different time periods. To counter this point, the authors did something interesting: they gave the counterfeiter an impossible task. They asked the counterfeiter to fool the police algorithm into thinking its output was art while, at the same time, making that art difficult for the police officer to classify into a time period. If the police officer categorized an image as art but could not assign it to a time period, that was deemed a success for the counterfeiter. The results from this scenario are interesting.
These completely generated examples come from the GAN the researchers built. Some of them look like art, but trying to define which art movement they fall into isn’t so easy.
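One way to picture the “hard to classify into a time period” part is to score the police officer’s style prediction by its cross-entropy against a uniform distribution over movements: the score bottoms out (at log K) exactly when the prediction is maximally uncertain, which is what the counterfeiter is rewarded for. The function and numbers below are our own illustration, not the authors’ code.

```python
import numpy as np

def style_ambiguity(style_probs):
    # Cross-entropy between a uniform target and the judge's predicted
    # distribution over K movements; smallest (log K) when the prediction
    # is perfectly uniform, i.e. the piece fits no single movement.
    return -np.mean(np.log(style_probs))

confident = np.array([0.97, 0.01, 0.01, 0.01])  # clearly one movement
uncertain = np.array([0.25, 0.25, 0.25, 0.25])  # hard to pin down

# The counterfeiter prefers images that push the judge toward `uncertain`:
# style_ambiguity(uncertain) < style_ambiguity(confident)
```

Combining this score with the usual “is it art?” feedback gives the counterfeiter two pulls at once: look like art, but not like any particular era of art.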
Of course, there is a mathematical explanation
In reality, there is concrete mathematics behind how GANs work. The counterfeiter is trying to produce a probability distribution that matches the one the real data comes from (specifically, with an ideal police officer, this amounts to minimizing the Jensen-Shannon divergence between the two distributions, but who’s counting). It just so happens that the probability distribution it is matching involves artworks. The machine doesn’t know what artwork is; it’s just trying to optimize a mathematical function.
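To see what “distance between distributions” means numerically, here is a tiny made-up example over three buckets (the probabilities are ours): the Jensen-Shannon divergence is zero only when the counterfeiter’s distribution exactly matches the data’s, and positive otherwise.

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence between discrete distributions p and q
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    # Jensen-Shannon divergence: symmetrized KL against the midpoint mixture
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])  # "real art" distribution (made up)
q = np.array([0.5, 0.3, 0.2])  # counterfeiter's current distribution (made up)

# js(p, q) > 0 here; it shrinks to 0 as q is trained toward p.
```

Unlike plain KL, the Jensen-Shannon divergence is symmetric and always finite, which is part of why it falls out of the GAN objective so naturally.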
Whatever creativity is, and whatever machines can do, leave them out
Despite advances in AI creativity, marketers want nothing to do with it. When the idea of Marcel was introduced, people at Publicis instantly said it would “kill creativity.” Quotes from the CampaignLive article include: “The elements of Marcel that have worried Publicis staff, as identified by Serrano, are the idea of introducing AI into the creative process…”
Should this be the right response?
I think we need to be prepared to accept new ideas created by machines, whether that means improving the game of Go or making a banner ad more effective. We don’t need to open a museum dedicated to their work, but we do need to accept that sometimes a machine can come to a better solution than we can, and sometimes that solution may even be creative.