AI Artist Spotlight: Trevor Paglen

Trevor Paglen is a geographer, photographer, and author who makes the invisible visible by documenting the 21st-century American surveillance state. A versatile artist whose skill set spans image-making, sculpture, investigative journalism, writing, and engineering, Paglen has now turned his attention to the invisible, yet fast-developing, technologies of computer vision.

He believes that, through artificial intelligence (AI), machines have learned to see us without us, producing through machine-to-machine image-making what he calls “invisible images.” His work in AI centres on unravelling those invisible images, and his approach is simultaneously subtle and incisive.

Paglen achieves this by putting AI to work on problems whose results, particularly results about humans, reveal the technology’s deficiencies. For example, he instructed a system trained on ImageNet to label people and show what it sees; the results were mostly inaccurate.

This form of automated self-incrimination helped Paglen uncover flaws and areas where AI can improve. A major discovery of his was that the errors in the algorithms behind AI software often mirror the prejudices and poor judgement of the people who train it. With machine-made images, such as those from text-to-art generators, shaping ever more basic elements of our lives, Paglen aims to help us see and understand AI differently.

In 2016, Paglen received the Deutsche Börse Photography Foundation Prize and the Cultural Award from the German Society for Photography. He has also received a MacArthur Fellowship, placing him among peers in the AI art world such as Joy Buolamwini and Kyle McDonald.

Selected AI Projects

Here are a few of Paglen’s better-known works:

ImageNet Roulette

Paglen co-created ImageNet Roulette with Kate Crawford. ImageNet Roulette is an app that lets people upload their photos, which AI then labels with one of the 2,833 subcategories of people within ImageNet.

The application was trained on the most widely used image-recognition database and lets people see what AI has been learning about people, which is often offensively inaccurate.

With AI taking on increasingly important roles in our everyday lives, Paglen argues that “the project is meant to call attention to the real harms that machine learning systems can perpetuate” and to the dangers of categorising people. Paglen has also discussed how the outcomes of AI-automated processes can tilt in favour of those who have the power to build these systems.

Bloom

Bloom is a series of large-scale photographs of flower formations, made entirely by computer algorithms that analysed real-life photographs. The project also featured The Standard Head, a large sculpture reconstructing CIA-funded researcher Woody Bledsoe’s 1960s equations for the typical human head. The project highlighted flaws built into AI systems and showed how they are anything but neutral or objective.

However, Paglen said during the premiere of Bloom that he was simply creating “visual art, not arguments advocating for different forms of regulation.”

Octopus

Octopus is an interactive live stream that invites dialogue on the digital world in relation to the Covid-19 global pandemic. Through the project, Paglen explored the central themes of AI, the politics of images, facial-recognition technologies, and alternative futures.

Employing a variety of disciplines, Paglen used Octopus to provide insight into how corporations and governments use machine-learning algorithms to monitor people, extract value from them, and influence their lives. In addition to exploring the flaws embedded in AI programming, Paglen explores the remarkable tasks that AI can accomplish.

Looking Into the Future

As technology advances, threats to personal privacy will persist. It is a safe bet that Paglen will continue to challenge how we think about surveillance and data collection.