Topical Cream is a New York-based non-profit, supporting the work of women and GNC individuals in contemporary art through public programming and digital publishing.

Last week at SCOPE Art Fair in New York City, the work Link Between Heaven and Earth (2017) sat among paintings and prints, seemingly at home in the chaos of the Metropolitan Pavilion in Chelsea. But instead of the Bushwick loft or Berlin basement where most of the fair's works were made, this piece originates from a computer lab in the quiet town of Piscataway, New Jersey. The piece was created without acrylics, brushes, canvases, or clay, because the artist doesn't need them.

Link Between Heaven and Earth is not an intelligent painting. It's artificially intelligent. It is the product of the aesthetics of eighty thousand works of art, and of a command: make something unlike any of those works, yet able to reside among them. The artist is AICAN, an algorithm created by the Art and Artificial Intelligence Laboratory in the Department of Computer Science at Rutgers University.

An art viewer at Scope Miami Beach takes in “Alternative Facts: The Multi Faces of Untruth,” 2018, created by AICAN.

While the roots of artificial intelligence (AI) can be traced back to the thinking statues of ancient Greece and Egypt, the buzzword was coined at the Dartmouth Summer Research Project on Artificial Intelligence, a 1956 conference at an elite, then men's-only college. Mathematics professor John McCarthy organized six weeks of brainstorming for eleven mathematicians and scientists. Their hypothesis: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." Programs were developed, and soon computers were learning English, beating chess champions, and evaluating information. Today, AI provides ubiquitous services to the public, like Google's search engine. The future of AI lies in applying machine learning to different spheres and different human abilities. That includes one of the most sacred and sought-after abilities: creativity.

The AICAN machine uses generative adversarial networks (GANs) to learn the aesthetics of every work of art that it is fed. From this, the machine can create a nearly infinite collection of originals. Its programmers can ask it to bring some of these originals back from its unseeable studio via a retrieval code or query, similar to the keywords that you might type into a search engine; given the volume of its collection, it would be nearly impossible to sift through all the data at once. AICAN is not a drawing machine, a robotic approach that has been put to a number of artistic ends (in one case, to render the artist in the nude using his own bodily fluids via IV hookup).
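The generate-then-retrieve workflow described above can be sketched schematically. The toy below is purely illustrative and not the Rutgers lab's actual code: it stands in for a trained GAN generator with a fixed random linear map, treats latent vectors as the "retrieval codes," and the `generate` and `retrieve` functions are hypothetical names chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained GAN generator: a fixed random
# linear map from a 16-dimensional latent "retrieval code" to a flat
# 8x8 grayscale "image". In a real GAN this map would be a deep network
# whose weights were learned adversarially against a discriminator.
LATENT_DIM, IMG_PIXELS = 16, 64
W = rng.normal(size=(LATENT_DIM, IMG_PIXELS))

def generate(z):
    """Decode a latent code z into a flat 8x8 image."""
    return np.tanh(z @ W)  # squash pixel values into [-1, 1]

# The machine's "unseeable studio": a large pool of latent codes,
# each one addressing a distinct potential artwork.
codes = rng.normal(size=(10_000, LATENT_DIM))

def retrieve(query_code, k=3):
    """Return the k generated images whose latent codes best match the
    query, mimicking the keyword-like retrieval described in the text."""
    dists = np.linalg.norm(codes - query_code, axis=1)
    nearest = np.argsort(dists)[:k]
    return [generate(codes[i]) for i in nearest]

query = rng.normal(size=LATENT_DIM)
images = retrieve(query)
print(len(images), images[0].shape)  # 3 retrieved images, 64 pixels each
```

The point of the sketch is only the shape of the pipeline: the collection of possible outputs exists as addressable codes, and a query pulls a handful of them back into view rather than enumerating everything at once.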

AICAN, however, is less interested in flesh and blood. What the machine produces is a composite of numbers, not materials, and the art can only exist physically if printed out. I imagine that field of numbers like a night sky filled with paintings instead of stars, their frames glinting as a tentacle arm cuts through space, plucking pieces to bring back to the motherboard. I see the machine as sentient.

Unlike other forms of robotics, like the drawing machine depicted here, AICAN is not concerned with anthropomorphizing the machine. The Rutgers team makes no move to apply their code creation to gendered robotics.

“Most discussions about AI are centered around anthropomorphizing metaphors,” says Emily L. Spratt, art historian, art technician, and former curator of the AICAN Art Collection at the Rutgers Art & AI Lab. “That’s giving too much power and authority—and a sense of volition, even—to the machine. … It is becoming increasingly evident that specificity needs to be brought to these conversations, in regards to the underlying processes of these algorithms and what they purport to do.”

Emily L. Spratt presents the keynote lecture for The Frick Collection symposium, “Searching Through Seeing: Optimizing Computer Vision Technology for the Arts” (April 12, 2018). Courtesy of The Frick Collection. Photograph by George Koelle.

The truth, dangerous or not, is that these machines contain unknowable things. To make the field digestible to most people, it has been distilled into a series of concocted phrases and metaphors. But "machine learning" is just a subfield of its parent, artificial intelligence. AI has become entangled with robotics; robots are marketed as mechanical humans.

Robotics experts use the language of embodiment precisely because it naturalizes what we can’t understand. We can conceptualize Amazon’s Alexa because she has a human name, thus we are given permission to imagine her as something tangible, fleshy. It is confusing for us to imagine conversing with numbers and letters and brackets. It is far bleaker to imagine giving orders to those numbers and letters and brackets. 

Amazon clearly felt it was critical to gender Alexa as female. Doing so allows an artificially intelligent assistant to perform acts of care and labor while the consumer feels at ease. Beyond comfort, being waited on by this female-presenting data set is meant to improve the user experience.

“When AI takes on a human shape, that’s where we see biases,” says Spratt. “We should not forget that this technology can take on any form we choose for it, and I’d personally prefer that its incarnations not take place on the surface of the female body.” Imagine companies that employ only men, paid to argue over the dimensions of ideal femininity, so that they can make it out of plastic. In the United States, only thirteen percent of engineers are women. It is estimated that women make up twelve percent of U.S. professionals in machine learning research. Minority women comprise fewer than one in ten employed scientists and engineers.

A graph created by AICAN assessing the relative creativity of paintings, while also taking into account the paintings’ contexts within the scope of art history. Its creators claim that it is able to identify masterpieces in line with art-historical precedent.

AI takes on a shape to be marketable, and what gets commodified as a result is the female/femme body. Between programmers with god complexes and pressure to employ gender-based conventions to make sales, it's a boys' club.

AICAN is different because the Rutgers team makes no move to apply their code creation to gendered robotics. Its laboratory home is also different from the typical AI workspace: the current team of four men and five women includes two fine arts scholars working alongside computer scientists. Spratt believes that there are far more women working in the AI field than the media or academic literature might convey.

These kinds of omissions have also long been a part of the art world, and AICAN literally draws from that legacy. The point of AICAN was to achieve artistic creation without a human hand, while still being recognizable as “art”—art as it is a painting on a wall, art as it carries the lineage of the (old, white, male) Western-art historical canon. This is perhaps why AICAN’s pieces appear so formally dull.

While the AICAN team was unable to say how many works by women or non-binary artists were included in the eighty-thousand-work WikiArt data set, we do know that the data came from Western museum collections. A recent survey of the permanent collections of eighteen prominent art museums in the U.S. found that of the more than ten thousand artists represented, eighty-seven percent were male and eighty-five percent were white.

“I would love to see what would happen if we could put in just women artists,” says Marian Mazzone, a member of the AICAN team and a professor of art history at the College of Charleston. “But we simply don’t have the numbers [and] the machine needs an incredible amount of information. What we want is variety, but what we’ve used is American and European art because we have access to the data.”

While not always visually intriguing, the AICAN artworks are novel, which means that they sell. The machine has entered the contemporary art market, giving it a stake in that market. 

Dr. Ahmed Elgammal is the founder and director of the Rutgers Art & AI Lab. He is a computer scientist-cum-art dealer, having transitioned the exhibition of AICAN works from university galleries to Christie’s and Scope. St. George Killing the Dragon was sold for $16,000 at a New York auction in November 2017. Most of the proceeds went to fund research at Rutgers and the Institut des Hautes Études Scientifiques in France, says Elgammal.

Making AI in the art world both genderless and faceless is one of the most important ethical challenges of AICAN.

Printing the artwork and selling it at Art Basel is a way to take back control, to fit these works into the mold of how we are used to looking at things. On a wall, in a room: you can touch it, you can buy it, you can own it.

But when AICAN works are sold for tens of thousands at a Miami art fair, the machine risks losing its purity. "I participate in capitalism, therefore I am." It is even harder, now, to reject the anthropomorphization of the machine. He's an art bro again, losing the genderless quality that was probably a myth to begin with (a "universalist perspectiveless viewpoint" still preserves the "male-as-norm" ideal, explains British researcher Alison Adam).

The attempt to translate generative art to the capitalist auction block is awkward. Marian Mazzone of AICAN remarked that while the imagery is not yet interesting enough to be totally accepted by the art world, she is confident that it will get there. But maybe AI art shouldn’t be treated like traditional art-making at all. Spratt predicts that “the next frontier of AI art will exploit the digital medium from which it derives.” 

In a contemporary world of human-machine coexistence, I still want to matter. One way to assert the relevance of our fallible yet romantic humanity is to give AICAN a mortal curator. 

As Spratt pored over images that the AICAN algorithm had produced, in preparation for Unhuman: Art in the Age of AI, the exhibition she curated (which premiered in Fall 2017 at STATE Studio in Los Angeles, with a second installation at the Frankfurt Book Fair), she found one that particularly resonated. It seemed like a portrait with a story.

"[I saw] a noble face that had been somehow tarnished, a face that had been effaced, yet also a sanctified portrait. I intentionally called this image Malala, and it was an homage to the Pakistani human rights activist, Malala Yousafzai. I wanted to provoke a conversation," says Spratt. "A conversation about the possibilities of AI art to take on meanings unrelated to the conditions of its production." The exhibition probed the borders of human creation and interpretation by asking what is not human. Can generative art, where there is no explicit intentionality to create a portrait of a face, still take on a political meaning?

We will always look for the human. But the art market has spoken, and it wants to see what the un-human can do. AICAN has no idea that Malala Yousafzai was shot in the head because she wanted girls to receive equal education in Pakistan. AICAN doesn’t know what sex is, but it does know that naming a work Orgy is on-brand for an abstract painting.

The machine is a vast, imperceptible depth of data gleaned from human creativity, measured aesthetically by finest brushstroke and exact Pantone shade. What its algorithms do not account for is socio-political relevance. As the technical capability, and thus the commercial demand, of machine-made art grows, we must still prioritize those artists who make documents of lived experience. The fact that, as of 2019, art by women sells for forty-two percent less than art by men suggests a market uninterested in difference. If we cannot resist anthropomorphizing the machine, we must consider the implications of maintaining the art and tech worlds' current hegemony.


Greta Rainbow is a Brooklyn-based writer and multimedia artist, with a focus on poetic, political subversion through art practice.

Bibliography of contributing texts:
1. Kaplan, Andreas, and Michael Haenlein. "Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence." Business Horizons 62, no. 1 (2018): 15–25.
2. McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955). http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf.
3. U.S. Bureau of Labor Statistics. "Employed Persons by Detailed Occupation, Sex, Race, and Hispanic or Latino Ethnicity." https://www.bls.gov/cps/cpsaat11.htm.
4. Wired. "Artificial Intelligence Researchers' Gender Imbalance." https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/.
5. National Girls Collaborative Project. "Statistics." https://ngcproject.org/statistics.
6. Adam, Alison. Artificial Knowing: Gender and the Thinking Machine. London and New York: Routledge, 2006.
7. Haraway, Donna. "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century." In Simians, Cyborgs, and Women: The Reinvention of Nature. London and New York: Routledge, 1991.
8. NPR. "Researchers Explore Gender Disparities in the Art World." September 18, 2018. https://www.npr.org/2018/09/18/649060336/researchers-explore-gender-disparities-in-the-art-world.