More than 3,000 black-and-white mugshots stare out from a wall-size canvas. They are faces of people who’ve been accused of crimes, and in some cases, incarcerated. They are also the faces of people whose likenesses were used, without their consent, to train facial-recognition systems before social media became a primary source of visual data for algorithm training.
This is artist Trevor Paglen‘s haunting installation “They Took the Faces From the Accused and the Dead,” on display at San Francisco’s deYoung Museum starting Saturday. It’s part of a provocative new exhibit that explores, through the lens of international artists, the ever-expanding space where humans and artificial intelligence meet.
The exhibit’s title, “Uncanny Valley: Being Human in the Age of AI,” suggests viewers might be in for some revenge-seeking Westworld-style robots, but the only bot on display is social robot head Bina48, chatting on video with artist Stephanie Dinkins in an exploration of the human-robot divide. Like Paglen’s piece, most other works focus on invisible forms of AI, like algorithmic data mining and machine learning, that are reshaping our reality.
If it’s hard to picture the data economy made into a compelling visual experience, think an AI-generated Taylor Swift and a CGI lizard that spouts poetry generated by a neural network trained on recordings of Doors frontman Jim Morrison.
There’s a spiky red digital serpentine creature named Bob who morphs in appearance, behavior and personality according to his online interactions with visitors, like a Tamagotchi digital pet of yore. And an artist’s rendition of a transport system based on an unrealized Amazon patent for a cage that could ferry workers atop a robotic trolley.
Heady stuff, for sure. But it’s intriguing to see the conversation about AI’s promises and pitfalls extend past academia into psychedelic video projections and interactive avatars. In an era when machines are becoming increasingly effective at mimicking human behavior and understanding, the deYoung says the exhibit is the first in the US to explore through art the impact of AI on the human experience. Art, of course, is one of many arenas where artificial intelligence is becoming a frequent collaborator.
“Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety,” says Thomas P. Campbell, director and CEO of the Fine Arts Museums of San Francisco.
Paglen’s giant grid of faces, culled from the American National Standards Institute‘s archives, is eerie. Making it even eerier is an aesthetic the artist says intentionally evokes 19th-century experiments like one by a professor who believed physical appearance could reveal criminal tendencies. Could the photos we share online every day be used to create algorithms that lead to profiling and put people in danger?
In a nearby room, a short film by Lynn Hershman Leeson touches on related questions. In it, actor Tessa Thompson (star of the futuristic Westworld) describes PredPol software, which uses analytics based on current and historical crime data to help law enforcement predict the likely times and locations of future crimes. The company says the software has dramatically reduced crime, but it’s also raised concerns about bias.
“What about algorithmic mistakes, faulty logic?” Thompson asks in a foreboding voice, looking directly into the camera. “Predictive behavior and algorithms can actually construct and alter real-life behavior.”
Pictured inside a red digital square like the one PredPol puts on maps to indicate a likely crime zone, Thompson warns about complacency when it comes to data collection and online privacy. “The red square puts us inside of a coded prison,” she says. “The Red Square has also been a place of revolution. We decide which we will become: prisoners or revolutionaries.”
For more uneasiness, stand in front of Hershman Leeson’s interactive installation “Shadow Stalker” and you’ll see a projection of your body-shaped shadow overlaid on a Google map showing the area around the museum.
Input your email address, and personal details retrieved from internet databases immediately start to pop up — your age, old home addresses, the names of relatives. (Don’t worry, the museum’s legal department has made sure no bank account or other such information will show.) Still, the personal information that flashes for all to see is a sobering reminder of how readily and widely available data collected on us, some without our knowledge, has become.
But with AI increasingly powering advances in transportation, retail and health care, not all works touch on its potentially threatening aspects. The exhibit also spotlights Forensic Architecture, an independent research agency based at the University of London. It uses machine-learning methods to analyze citizen-gathered evidence like phone photos and footage in open-source investigations of civil and human rights violations, like suspected chemical weapons attacks in Syria.
One of the goals of the exhibit, curator Claudia Schmuckli tells me, is “to present a more nuanced picture of how AI operates in the world, to move from this polarizing conversation that pitches technophobes and technophiles against each other, and to really allow us to take a step back and look at how does AI operate in the world today? Where are the opportunities? Where are the current problems?”
Uncanny Valley: Being Human in the Age of AI runs through Oct. 25, expanding through the first floor of the museum into its sculpture garden. There, visitors will find a sculpture of a crouching, nude woman with a live bee colony for a head. Curated materials from the exhibit suggest that Pierre Huyghe’s creation is supposed to be a metaphor for neural networks modeled on the human brain. But it could just as easily represent how many people feel trying to make sense of the increasing complexities of being human in an AI-driven world.