This neural network has not been trained on anything. It starts off completely blank. It is literally opening its eyes for the first time and trying to understand what it sees.
In this project, Akten uses neural networks to explore how we (and the technology we use) understand the world. Akten elucidates how artificial intelligence algorithms are trained on specific datasets, and how that training shapes the way neural networks make sense of live camera footage. In the words of the author, “It can see only what it already knows, just like us.”
“Hello, World!” is an interactive video installation. Akten shows how an untrained neural network makes sense of a live camera feed. Audience members become part of the neural network’s input and visual learning process through a live surveillance camera. The neural network finds and optimizes its recognition patterns, predicting future inputs and, over time, forgetting past inputs it doesn’t encounter again. With the increasing prevalence of recognition software and the ubiquity of cameras, Akten’s installation raises questions about surveillance and about what neural networks are trained to look for.
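The learn-and-forget dynamic described above can be illustrated with a toy sketch. This is not Akten’s actual model (the installation uses a far richer neural network on live video); it is a minimal, assumed illustration in which a handful of stored “patterns” are pulled toward whatever is currently seen, while patterns that stop recurring gradually fade. All names and parameters here (`n_prototypes`, `lr`, `decay`) are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_prototypes = 4      # assumed number of stored patterns
dim = 8               # assumed input dimensionality (a tiny "frame")
lr = 0.5              # learning rate: how fast a pattern adapts to the input
decay = 0.9           # per-step forgetting applied to memory strengths

prototypes = rng.normal(size=(n_prototypes, dim))
strengths = np.ones(n_prototypes)  # how strongly each pattern is remembered

def observe(frame):
    """Match the frame to its nearest prototype, adapt that prototype
    toward the frame, and let every pattern's memory fade slightly --
    except the one that just recurred."""
    best = int(np.argmin(np.linalg.norm(prototypes - frame, axis=1)))
    prototypes[best] += lr * (frame - prototypes[best])  # learn what is seen
    strengths[:] *= decay                                # forget a little
    strengths[best] += 1.0                               # reinforce what recurred
    return best

# Show the learner the same "scene" repeatedly: one pattern comes to
# dominate its memory while the unused patterns fade away.
scene = rng.normal(size=dim)
for _ in range(20):
    winner = observe(scene)

print("dominant prototype:", winner)
print("relative strengths:", np.round(strengths / strengths.sum(), 3))
```

Running the sketch shows one prototype absorbing nearly all of the memory strength, a crude analogue of the installation only “seeing” what its recent input history has taught it to expect.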
In other visual studies available online, Akten trains the neural network on classic paintings, images of the ocean, images of clouds, images of flowers, and more. He then feeds that neural network live webcam input. The resulting videos of these studies show the live webcam input and the neural network’s interpretation of it side by side. These visual studies grant audiences insight into machine learning, as well as an opportunity to question what it means to perceive and to recognize an image, as opposed to understanding it.