(These are excerpts from my book "Intelligence is not Artificial")
After Machine Intelligence: Machine Creativity - Can Machines do Art?
The question "Can machines think?" is rapidly becoming obsolete. I have no way of knowing whether you "think". We cannot enter someone else's brain and find out if that person has feelings, emotions, thoughts, etc. All we know of other people's inner lives is that they generate behavior very similar to our own, and therefore we conclude that other people too must have the same kind of inner life that we have (feelings, emotions, thoughts, etc). Since we cannot even determine with absolute certainty the consciousness of other people, it sounds a bit useless to discuss whether machines can be conscious. Can machines think? Maybe, but we'll never find out for sure, just like we'll never find out for sure if all humans think.
The question "Can machines be creative?" is much more interesting. Humans have always thought of themselves as creative beings, but have always failed to explain what that really means. The humble spider can make a very beautiful spider-web. Some birds create spectacular nests. Bees perform intricate dances. Most humans don't think that the individual spider or the individual bird is a "creative being". Humans assume that something in its genes made it do what it did, no matter how complex and brilliant. But what exactly is different about Shakespeare, Michelangelo and Beethoven?
If you think that the history of human civilization is mainly the history of a uniquely creative species, it is surprising how little the human race has investigated the topic of creativity. The first major philosopher to use the term in the title of a book was probably John Dewey in "Creative Intelligence" (1917), but he was referring to the "being in the world" that, in his opinion, characterizes human intelligence. British social psychologist Graham Wallas in "The Art of Thought" (1926) outlined a four-stage model of the creative process, and others explored the topic in a superficial manner. The turning point came in 1950, when Joy Paul Guilford of the University of Southern California gave his presidential address to the American Psychological Association (later published as "Creativity"), defining creativity as the ability to generate novel ideas. In 1951 Guilford launched the Aptitudes Research Project at his university (the results of that project were later compiled in the book "The Nature of Human Intelligence", 1967). However, someone in California had already started a scholarly project on creativity.
In the 1930s Donald MacKinnon had studied in Henry Murray's psychological clinic at Harvard University, and during World War II he had operated a secret laboratory at a remote Maryland farmhouse whose task was to select spies to infiltrate Europe (on behalf of the Office of Strategic Services). In 1949 MacKinnon, now a professor at UC Berkeley, founded the Institute of Personality Assessment and Research (IPAR). IPAR interviewed and tested "creative" thinkers from various disciplines (such as writers, architects, scientists and mathematicians). MacKinnon concluded that engineering students were not creative ("Fostering Creativity in Students of Engineering", 1961) and outlined his own model of human intelligence ("The Nature and Nurture of Creative Talent", 1962). IPAR's most prominent researcher, Frank Barron, published the seminal book "Creativity and Psychological Health" (1963) and in 1969 moved to UC Santa Cruz, where he taught an influential course on creativity.
Meanwhile, Alex Osborn had been refining his "brainstorming" technique to stimulate creativity, first published in the book "Applied Imagination" (1953) but already applied successfully to both military and corporate organizations. In 1955 he founded the Creative Problem-Solving Institute at the University of Buffalo, which hosted a yearly conference. Also important were the Utah Creativity Research Conferences, inaugurated in 1955, and the symposia at Michigan State University in 1957 and 1958. In 1960 Paul Torrance at the University of Minnesota launched the Minnesota Tests of Creative Thinking (MTCT), today better known as the Torrance Tests of Creative Thinking (TTCT). The debate among these psychologists was about whether intelligence and creativity were the same mental process or two different processes. Within a few years "creativity" had become a popular buzzword. Creativity was discussed in popular books, from Arthur Koestler's "The Act of Creation" (1964) to Howard Gardner's "Frames of Mind" (1983), and, in more sensationalistic terms, Margaret Boden's "The Creative Mind" (1990), in which she updated Guilford's definition of creativity to "the ability to generate novel and valuable ideas" (but also fell for some quasi-scams of fake creative programs).
This research paralleled promises and progress in Artificial Intelligence. While Hubert Dreyfus maintained that machines cannot be creative ("What Computers Still Can't Do", 1992), Douglas Hofstadter wrote a book about building such machines ("Fluid Concepts and Creative Analogies", 1995). Margaret Boden spoke about "Creativity and Computers" at Stanford's 1993 AAAI Spring Symposium on "Artificial Intelligence and Creativity", and in the same year an international symposium on Creativity and Cognition was held at Loughborough University. Around the same time Gilles Fauconnier at UC San Diego in California introduced "conceptual blending" ("Conceptual Integration Networks", 1994).
Humans use tools to make art (if nothing else, a pen). But the border between artist and tool has gotten blurred since Harold Cohen conceived AARON in 1973, a painting machine. Cohen asked: "What are the minimum conditions under which a set of marks functions as an image?" I would rephrase it as "What are the minimum conditions under which a set of signs functions as art?" Even Marcel Duchamp's "Fountain" (1917), which is simply a urinal, is considered "art" by the majority of art critics. Abstract art is mostly about abstract signs. Why are Piet Mondrian's or Wassily Kandinsky's simple lines considered art? Most paintings by Vincent Van Gogh and Pablo Picasso are just "wrong" representations of the subject: why are they art, and, in fact, great art?
Enter the machine.
In 1963 Stanley Gill, who had been in the 1940s one of the world's earliest software engineers, wrote a program to compose music for the BBC. In 1968 the same Gill, presiding over the congress of the International Federation for Information Processing at Edinburgh, announced a contest for computer-composed music that was won by Iannis Xenakis' string quartet "ST-4" (i would rather call it a "computer-assisted" composition).
The Cybernetic Serendipity exhibition that ran in London from August to October 1968 featured computer-generated images (including one by Norbert Wiener), a live drawing computer, several computer-generated poems, Peter Zinovieff's music computer that could improvise a song based on a melody whistled by the user, and interactive robotic sculptures such as Bruce Lacey's ROSA Bosom (1965), which, incidentally, had been "best man" at his wedding (ROSA stood for "Radio Operated Simulated Actress").
The filmmaker Malcolm LeGrice created "Typo Drama" (1969), a system that generated the text and the actions for the actors of a theatrical play; it premiered in London at the Event One art exhibition in April 1969 (the software was written by Alan Sutcliffe, founder of the Computer Arts Society).
It is not difficult to write a program that will write a book. Already in 1967 a program designed by the Fluxus artist Alison Knowles and the composer James Tenney to randomly assemble stanzas produced "The House of Dust", a computer-generated poem.
It all depends on how random you want the sentences to be. In 1983 New York freelance writers and programmers William Chamberlain and Thomas Etter published "The Policeman's Beard Is Half Constructed", subtitled "the first book ever written by a computer", a collection of poems allegedly written by their program Racter, a program, remarkably, written in Basic on a personal computer with 64 kilobytes of memory. In 1993 Scott French used a program to compose "Just This Once", a romance novel in the style of Jacqueline Susann (and one of the most stereotypical novels ever published), but French's manual contribution was probably massive.
In 1992 the Polish artist Wojciech Bruszewski wrote a computer program that generated sonnets in a nonexistent (but pronounceable) language. These sonnets were published in eight volumes. (This is my favorite: if the machine has to be creative, let it invent its own language).
In 1996 Naoko Tosa at ATR in Japan built an art installation called "Interactive Poem" that consisted of a verbal collaboration between a person and a computer to write poems.
David Cope at UC Santa Cruz had experimented since 1981 with automatic music composition (his expert system EMI and its various descendants). Peter Todd at Stanford employed a recurrent neural network to compose melodies: his network was trained to predict the note following the current note ("A Connectionist Approach to Algorithmic Composition", 1989); but this note-by-note approach was clearly limited. LSTM networks were more appropriate for generating music: their co-inventor, Juergen Schmidhuber, collaborated with Douglas Eck at IDSIA to learn the characteristics of blues music ("Learning the Long-term Structure of the Blues", 2002) and then to compose music ("A First Look at Music Composition using LSTM Recurrent Neural Networks", 2002).
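Todd's note-by-note approach can be sketched in a few lines: learn which note tends to follow which, then generate a melody one note at a time. (Simple transition counts stand in here for his recurrent network; the training melody is invented for the example.)

```python
import random

def learn_transitions(melody):
    """Count which notes were observed to follow each note."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def compose(table, start, length, seed=0):
    """Generate a melody one note at a time from the learned table."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        followers = table.get(notes[-1], [start])  # dead end: restart from the seed note
        notes.append(rng.choice(followers))
    return notes

training = ["C", "E", "G", "E", "C", "E", "G", "C"]
table = learn_transitions(training)
print(compose(table, "C", 8))
```

The limitation mentioned above is visible even in this toy: each note depends only on the previous one, so the melody has no long-term structure, which is precisely the problem that LSTM networks were later brought in to address.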
Hod Lipson and Jordan Pollack, then at Brandeis University, used evolutionary algorithms to automatically design robots ("Automatic Design and Manufacture of Robotic Lifeforms", 2000); Lipson's group later built EndlessForms, a website based on encodings derived from Ken Stanley's NEAT algorithm that keeps generating three-dimensional designs.
During the 1990s and 2000s several experiments further blurred that line: Ken Goldberg's painting machine "Power and Water" at the University of Southern California (1992); Matthew Stein's PumaPaint at Wilkes University (1998), an online robot that allows Internet users to create original artwork;
Jurg Lehni's graffiti-spraying machine Hektor in Switzerland (2002);
David Cope's program Emily Howell for music composition, that was conceived in 2003 and went on to release the albums "From Darkness Light" (2009) and "Breathless" (2012);
the painting robots developed since 2006 by Washington-based software engineer Pindar Van Arman; and Vangobot (2008) (pronounced "Van Gogh bot"), a robot built by Nebraska-based artists Luke Kelly and Doug Marx that renders images according to preprogrammed artistic styles.
The pundits, such as Juergen Schmidhuber at IDSIA in Switzerland ("Curious Model-building Control Systems", 1991, later expanded into a "Formal Theory of Creativity, Fun, and Intrinsic Motivation") and Geraint Wiggins at City University of London ("Towards a more Precise Characterisation of Creativity in A.I.", 2001), keep debating how to make machines creative.
After a Kickstarter campaign in 2010, Chicago-based artist Harvey Moon built drawing machines, set their "aesthetic" rules, and let them do the actual drawing. In 2013 Oliver Deussen's team at the University of Konstanz in Germany demonstrated e-David (Drawing Apparatus for Vivid Interactive Display), a robot capable of painting with real colors on a real canvas. In 2013 the Galerie Oberkampf in Paris showed paintings produced over a number of years by a computer program, "The Painting Fool", designed by Simon Colton at Goldsmiths College in London. The Living Machines exhibition of 2013 at London's Natural History Museum and Science Museum featured "Paul", a creative robot capable of sketching a portrait, developed by French inventor Patrick Tresset since 2011, and BNJMN (pronounced "Benjamin"), a robot capable of generating images built for the occasion by Travis Purrington and Danilo Wanner from the Basel Academy of Art and Design.
In 2011 the computer Iamus at the Universidad de Malaga in Spain premiered a piece in its own style (whereas Cope's algorithms were imitating the masters). Four of Iamus' compositions for full orchestra were performed by the London Symphony Orchestra on the album "Iamus" (2012).
In 2012 neuroevolutionary veteran Ken Stanley (of NEAT fame) and his students at the University of Central Florida unveiled MaestroGenesis, a program that creates polyphonic music from simple monophonic melodies.
John Supko, a music scholar at Duke University, and digital media artist Bill Seaman created the software that composed the music released on the album "S_traits" (2014), voted by the critics of the New York Times one of the best recordings of the year (it wouldn't make my top 1000, but that's personal taste).
While each of these systems caused headlines in the press, none was autonomous and the "trick" was easy to detect.
Then deep learning happened. Deep learning consists of a multi-layer network that is trained to recognize an object; the training consists of showing the network many instances of that object (say, many cats). Andrew Zisserman's team at Oxford University was probably the first to think of asking a neural network to show what it was learning during this training ("Deep Inside Convolutional Networks", 2014): basically, they used the neural network to generate the image of the object being learned (say, what the neural network has learned a cat to look like).
In May 2015 a Russian engineer at Google's Swiss labs, Alexander Mordvintsev, used that idea to make a neural network produce psychedelic images. One month later he posted a paper titled "Inceptionism" (jointly with Christopher Olah, an intern at Jeff Dean's Google Brain team in Silicon Valley, and with Mike Tyka, an artist working for Google in Seattle) that sort of coined a new art movement. Neural nets trained to recognize images can be run in reverse so that they instead generate images. More importantly, the networks can be asked to identify objects that don't actually exist, like when you see a face in a cloud. By feeding this "optical illusion" back into the network over and over again, the network eventually displays a detailed image, which is basically the machine's equivalent of a human hallucination. For example, a neural network trained to recognize animals will identify nonexistent animals in a cloudy sky.
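The feedback loop can be illustrated with a toy sketch: instead of adjusting the network's weights to fit an image, the image itself is adjusted, step by step, so that the network fires more strongly. (A single linear "detector" whose weights encode an invented diagonal-stripe pattern stands in for a real trained network.)

```python
import numpy as np

# The "detector": one linear unit whose weights encode a diagonal stripe
# (a stand-in for a feature learned by a real trained network).
w = np.eye(8).flatten()

rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, 64)   # start from noise ("a cloudy sky")

activations = []
for _ in range(50):
    activations.append(w @ image)  # how strongly the unit fires
    # Gradient ascent on the activation: for a linear unit, the gradient
    # of its activation with respect to the image is the weight vector.
    image += 0.1 * w

final = image.reshape(8, 8)
# The diagonal stripe now dominates the image: the machine "sees"
# a pattern that was never in the input.
```

Each pass around the loop makes the detector fire a little harder, which is the "feeding the illusion back into the network" described above, reduced to its simplest possible form.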
In August 2015 two students (Leon Gatys and Alexander Ecker) of Matthias Bethge's lab at the University of Tübingen in Germany taught a neural network to capture an artistic style and then apply that style to any picture ("A Neural Algorithm of Artistic Style", 2015): a neural network trained to recognize objects tends to separate content from style, and the "style" side can be applied to other images, turning any photograph into a painting in the style of whatever maestro the network previously learned.
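What "style" means here can be made concrete: in Gatys' method, style is summarized by the correlations between feature channels (the Gram matrix), which records how features co-occur while discarding where they occur. A toy sketch, with random numbers standing in for real network activations:

```python
import numpy as np

def gram(features):
    """features: a (channels, positions) matrix of activations."""
    return features @ features.T / features.shape[1]

rng = np.random.default_rng(1)
style_features = rng.normal(size=(4, 100))   # 4 channels, 100 positions

# Shuffling the positions rearranges the "content" completely,
# but leaves the channel correlations -- the "style" -- untouched.
shuffled = style_features[:, rng.permutation(100)]

same_style = np.allclose(gram(style_features), gram(shuffled))
print(same_style)   # True: the Gram matrix ignores spatial layout
```

This is why the method can transfer a painter's brushwork onto a photograph: the optimization matches the Gram matrices of the style image while separately matching the raw activations (the content) of the photograph.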
In September 2015, at the International Computer Music Conference, Donya Quick, a composer working at Paul Hudak's lab at Yale University, presented a computer program called Kulitta for automated music composition. In February 2016 she published on Soundcloud a playlist of Kulitta-made pieces.
In February 2016 Google staged an auction of 29 paintings made by its artificial intelligence at the Grand Theater in San Francisco in collaboration with the Gray Area Foundation for the Arts ("DeepDream: The Art of Neural Networks").
In March 2016, a 20-year-old Princeton University student, Ji-Sung Kim, and his friend Evan Chow created a neural network that can improvise like a jazz musician on Pat Metheny's "And Then I Knew" (1995).
In April 2016 a new Rembrandt portrait was unveiled in Amsterdam, 347 years after the painter's death: Joris Dik at Delft University of Technology created this 3D-printed fake Rembrandt consisting of more than 148 million pixels based on 168,263 fragments from 346 of Rembrandt's paintings. (To be fair, a similar feat had been achieved in 2014 by Jeroen van der Most whose computer program had generated a "lost Van Gogh" after analyzing statistically 129 real paintings of the master).
In May 2016 Daniel Rockmore at Dartmouth College organized the first Neukom Institute Prizes in Computational Arts (soon nicknamed the "Turing Tests in the Creative Arts"), that included three contests to build computer programs that can create respectively a short story, a sonnet, and a DJ set. Spanish students Jaume Parera and Pritish Chandna won the prize for the DJ set, while three students of Kevin Knight's lab at the University of Southern California won the prize for the sonnet ("And from the other side of my apartment/ An empty room behind the inner wall/ A thousand pictures on the kitchen floor/ Talked about a hundred years or more").
Combining a convolutional neural network that learns a person's favorite style of fashion with a generative adversarial network, in 2017 Julian McAuley's student Wang-Cheng Kang at UC San Diego, in collaboration with Adobe, created a system that can generate personalized clothing.
In May 2016 the TED crowd got to hear a talk by Blaise Aguera y Arcas, principal scientist at Google, titled "We're on the edge of a new frontier in art and creativity — and it's not human".
In July 2016 a Bay Area software engineer, Karmel Allison, launched CuratedAI, an online magazine of poems and prose written by A.I. programs.
In September 2016 Google published a paper on WaveNet, a neural network that generates raw audio and can produce music as well as speech.
Mario Klingemann, a 2016 artist in residence at the Google Cultural Institute in Paris, learned how to use generative adversarial networks and became perhaps the first professional artist to specialize in A.I.-based artworks. In 2017 (at the peak of the "fake news" debate) he became famous for "artworks" that consisted of artificially-generated audio and video of non-existing events that felt real.
In 2016 an LSTM created by Ross Goodwin of New York University and named Benjamin, and trained with sci-fi screenplays from the 1980s and 1990s, scripted the sci-fi movie "Sunspring" that was directed by Oscar Sharp and was presented at the Sci-Fi London annual film festival.
At the end of 2016 Maya Ackerman of San Jose State University and David Loker debuted Alysia, a computer program that generates a melody based on a text. A few months later Ackerman, also an opera singer, performed songs whose melodies had been composed by Alysia based on lyrics written by another computer program, Mable, which was in turn based on Rafael Perez y Perez's Mexica. (Vanity: this performance took place at a Leonardo Art Science Evening Rendezvous, a series that i founded in 2008). The problem with these music-composing programs has been and still is that the music they produce is incredibly boring. If (like me) you think that pop music is mostly garbage, you're in for a real nightmare, because these machines can only make pop music that is even worse than your least favorite pop star's songs. Pure torture for my ears. I am not sure if this is what Douglas Eck (now at Google) meant in 2016 when he announced the Magenta project to make art with deep learning techniques and mentioned "the completely, frankly, astonishing improvements in the state of the art".
A poetry book written by Microsoft's chatbot Xiaoice was published in China in May 2017.
A 2017 blog entry on the Magenta.as website by William Anderson of Huge Inc explained how to achieve pretty much the same kind of "creativity" as inceptionism by simply using old-fashioned Markov chains (the title of the online paper is unfortunately "Using Machine Learning to Make Art", 2017).
In 2017 Chris Donahue's team at UC San Diego trained a neural network with a dance videogame whose users have created dances for many popular songs. The neural network, named Dance Dance Convolution, can generate a dance for any new song.
In 2017 Ahmed Elgammal's group at Rutgers University, in collaboration with art historian Marian Mazzone of the College of Charleston in South Carolina, used a generative adversarial network (GAN) to create a system that learns about art styles and then "deviates" from the norms to create its own style ("Creative Adversarial Networks", 2017). Perhaps more importantly, in 2015 the same group had written an algorithm that could identify the artist, genre and style of an artwork, and find correlations between styles, which is the job of the art historian.
In 2017 Ian Simon and Sageev Oore of Google's Magenta project developed Performance RNN, an LSTM-based recurrent neural network, and trained it on the Yamaha e-Piano Competition dataset, which contains MIDI captures of 1,400 performances by skilled pianists, so that the network outputs polyphonic music.
In July 2017 San Francisco's McLoughlin Gallery hosted the exhibition "Artificial Intelligence: The End of Art as We Know It". It showed "portraits" by mural artist (and former Silicon Valley entrepreneur) Matty Mo, who since 2014 signs his artworks as "The Most Famous Artist", and who is mostly famous for stealing ideas from other artists. His portraits were jointly produced with an A.I. program created by a group of hackers.
The standard objection to machine art is that the artwork was not produced by the machine: a human being designed the machine and programmed it to do what it did, hence the machine should get no credit for its "artwork". Because of their nonlinearity, neural networks distance the programmer from the working of the program, but ultimately the same objection holds.
However, if you are painting, it means that a complex architecture of neural processes in your brain made you paint, and those processes are due to the joint work of a genetic program and of environmental forces. Why should you get credit for your artwork?
If what a human brain does is art, then what a machine does is also art.
A skeptic friend, who is a distinguished art scholar at UC Berkeley, told me: "I haven't seen anything I'd take seriously as art". But that's a weak argument: many people cannot take seriously as art the objects exhibited in museums of contemporary art, not to mention performance art, body art and dissonant music. How does humankind decide what qualifies as art?
The Turing Test of art is simple. We are biased when they tell us "this was done by a computer". But what if they show us the art piece and tell us it was done by an Indonesian artist named Namur Saldakan? I bet there will be at least one influential art critic ready to write a lengthy analysis of how Saldakan's art reflects the traditions of Indonesia in the context of globalization etc etc.
In fact, the way that a neural network can be "hijacked" to do art may help understand the brain of the artist. It could lead to a conceptual breakthrough by neuroscientists. After all, nobody ever came up with a decent scientific theory of creativity. Maybe those who thought of playing the neural net in reverse told us something important about what "creativity" is.
This machine art poses other interesting questions for the art world.
What did the art collectors buy at the Google auction? The output of a neural network is a digital file, which can be copied in a split second: why would you pay for something of which an unlimited number of copies can be made? In order to guarantee that no other copies will ever be made, we need to physically destroy the machine or to re-train the neural network so it will never generate those images again.
What is missing to declare this art? Nothing: it is art. There is no doubt in my mind that machines can make art. Just like animals can make art: i have seen amazing spiderwebs and amazing bird nests. The Earth has created art that millions of tourists visit every year, from Iguazu Falls to the Namib desert.
What is missing is not the art, but, rather, the art critic. Art is a conversation between a producer and a consumer, and it's a conversation that often lasts a lifetime; in fact, it lasts centuries and millennia, from generation to generation. Art critics and art historians write books on how to appreciate art. The public visits museums to try and understand what the art critic saw in the art, and this becomes also a conversation between the art critic and the public. An algorithm can produce in a millisecond a positive/negative response based on some criteria, but art cultivates patience. There is no final answer. My reaction to a painting can (and in most cases will) change over my lifetime. I used to be moved by music that i now find tedious, and, vice versa, i have discovered unspeakable meanings (meanings that cannot be expressed in words) in music that i used to ignore. Different people have different reactions because they have different brains, different stories, different contexts. What is missing from machine art is not the art: what is missing is the ability to appreciate the art. Sure: a machine can suggest to me, based on my preferences, what music to listen to next (which is usually a really tedious suggestion), but that is precisely "not" what a music critic does: the music critic tells me to listen to music that i never even dreamed existed, and the music critic can explain to me how it relates to a virtually infinite number of cultural, social, etc elements, including other music. A neural network can learn patterns (e.g., habits), but what a music, literary or art critic does is precisely to break those patterns and provide some kind of rationale for why it matters that the pattern is broken.
In a sense, i disagree with Charles Darwin, who in "The Descent of Man" (1871) wrote: "The Imagination is one of the highest prerogatives of man. By this faculty he unites former images and ideas, independently of the will, and thus creates brilliant and novel results". In my opinion, all of this doesn't happen in the mind of the artist but rather in the mind of the critic or historian, who deliberately and very rationally makes those "brilliant and novel results" brilliant and novel. It doesn't happen "independently of the will" but, on the contrary, very much deliberately. I disagree with Darwin on who is the "creator". All musicians can improvise, but not all musicians will be considered great improvisers by jazz critics. The meme spreads not because of the musician but because of the critic/historian. The critic/historian may well be a fellow musician, who will endorse a previous musician and elevate him to a classic. Painting and singing probably preceded language, but it is only when language emerged that we can talk of Art, because it is only then that we can... talk.
To paraphrase the philosopher Stephen Asma, author of "The Evolution of Imagination" (2017), humans were graphically literate before we were verbally literate. In fact, emotional communication (that now we normally classify as "art") was probably serving a very simple evolutionary function: to emphasize important facts about the environment to fellow humans and possibly trigger rapid reactions in those fellow humans.
Artists may not like to hear this, but art and music have always been overrated. The difficult part is to decide what is art and what is not, what deserves to be saved for future generations, and to explain why. Art is a meaning generator, but the generator is not quite the artist: the meaning is generated when the art is placed in a context and relations with that context are revealed. Both animals and machines can "make" things. Humans are uniquely equipped to criticize what is being made and to put it in a historical context.
The real breakthrough would be a machine that can do the same: place machine (and human) art in a historical and social context, judge it, analyze it, criticize it. Upon reading this statement, our beloved software engineer will rush to design a neural network that can say something meaningful about art (whether made by humans or machines).
But, of course, if you create such a neural network, you already told me who the real critic was: you who handpicked the dataset to train the neural network, and who crafted the architecture of the neural network. And then i will write a book in which i will mention the historical fact that art was made by machines and that you even created a machine to value the art. What is (still) uniquely human is the ability to write a history of what is being made (and how, why and by whom), not necessarily the ability to "make it". Art tells us a lot about the viewer, and music about the listener, while they tell us very little about the creators.
In 1961 the Italian artist Piero Manzoni displayed 90 tin cans labeled "Artist's Shit" in an art gallery with a price fixed to the fluctuation of gold. This is what the Tate Gallery has to say in 2017 about Manzoni's 1961 shit: "Manzoni's critical and metaphorical reification of the artist's body, its processes and products, pointed the way towards an understanding of the persona of the artist and the product of the artist's body as a consumable object. The "Merda d'Artista", the artist's shit, dried naturally and canned 'with no added preservatives', was the perfect metaphor for the bodied and disembodied nature of artistic labour: the work of art as fully incorporated raw material, and its violent expulsion as commodity. Manzoni understood the creative act as part of the cycle of consumption: as a constant reprocessing, packaging, marketing, consuming, reprocessing, packaging, ad infinitum." I am certainly not an art expert, but it is obvious to me that Manzoni's "work of art" has indeed the value of shit (whether made by the artist or by someone else) and it becomes "art" only because the art critic has decided so and has written that elaborate interpretation that suddenly enlightens all of us savages to the exhilarating, life-changing meanings of Manzoni's shit. As game designer Ian Bogost of the Georgia Institute of Technology wrote in 2017: "Before art was culture it was ritual". I am sure that machines, and animals, can make art... but it becomes Art only when it becomes part of a human ritual. Without the human observer, it is not Art just like an electron is neither here nor there until the observer observes it.