This artistic research is a critique of the techno-solutionism and mystification of AI, focusing on generative neural networks and questioning the value that this technological advancement can bring outside of commercial applications. TroublingGAN derived subtle underlying features from the news photographs in its training dataset and projected their affective value onto the generated visual outputs. On the one hand, it offered an alternative perspective on the images used in the training dataset, pointing to the more subtle knowledge previously hidden behind the concrete details, context, and meaning of each image.
On the other hand, through the provocative use of TroublingGAN outputs in place of news photography, we pointed out the unexpected relevance of ‘imperfect’ outcomes in AI-driven visual synthesis. The visual ambiguity characteristic of TroublingGAN could be achieved only by deviating from established industry trends in AI and by contravening several guidelines for successful StyleGAN model training. This form of digital détournement challenges the assumption that synthetic visual media must inherently strive for photorealism; instead, it yields images that test our cognitive reflexes of recognition and categorisation.
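As a hedged illustration of what contravening the usual StyleGAN training guidelines can look like in practice, the minimal sketch below samples from a deliberately under-trained snapshot, interpolates between latent vectors, and pushes the truncation parameter beyond its conventional range. It is not the pipeline used in this research: the reliance on NVIDIA's stylegan2-ada-pytorch code, the snapshot filename, and the parameter values are all assumptions introduced purely for illustration.

```python
# Minimal illustrative sketch (not the authors' actual pipeline). Assumes
# NVIDIA's stylegan2-ada-pytorch repository is on PYTHONPATH and that an early,
# under-trained snapshot 'network-snapshot-early.pkl' exists (hypothetical name).
import torch
from PIL import Image

import dnnlib   # provided by stylegan2-ada-pytorch
import legacy   # provided by stylegan2-ada-pytorch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load an early training snapshot instead of a fully converged model.
with dnnlib.util.open_url('network-snapshot-early.pkl') as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)  # EMA generator weights

# Interpolate between two random latents; intermediate points tend to blend
# features of the training images into indeterminate, semi-abstract forms.
z0 = torch.randn([1, G.z_dim], device=device)
z1 = torch.randn([1, G.z_dim], device=device)

steps = 8
for i in range(steps):
    t = i / (steps - 1)
    z = (1 - t) * z0 + t * z1
    # truncation_psi > 1 pushes samples away from the latent mean, trading
    # fidelity for variation -- the opposite of the usual photorealism recipe.
    img = G(z, None, truncation_psi=1.3, noise_mode='const')
    img = (img.clamp(-1, 1) + 1) * 127.5                 # map [-1, 1] to [0, 255]
    img = img[0].permute(1, 2, 0).to(torch.uint8).cpu().numpy()
    Image.fromarray(img).save(f'ambiguous_{i:02d}.png')
```

Each of these moves runs counter to the standard advice for producing photorealistic StyleGAN outputs, which is one plausible route to the kind of indeterminate imagery discussed above.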
The practical experimentation with GANs during the artistic process of developing the TroublingGAN tool enriched our comprehension of the internal mechanics of generative neural networks and honed our insights into the nature of synthetic images. A considerable portion of the knowledge that surfaced during this process originated from the neural network’s own training activity. Thus, we employed GANs as tools for observation and knowledge production within artistic research. By fostering curiosity and openness towards non-human sense-making, we embraced the sympoietic connection between human and algorithmic ways of knowing. The knowledge produced by AI is set to become increasingly dominant in our society, posing challenges to prevailing ontological and epistemological paradigms. Informed and empathetic artistic interventions into the opaque domain of deep learning can yield unexpected new perspectives and help reorient the trajectory of AI research.
Synthetic visual media, whether photorealistic or visually ambiguous, forms a completely new category of visual material, and its place in visual culture has yet to be determined. Its spectacularity seems to be a temporary effect of its novelty; the anxiety provoked by its indefiniteness and its affective quality, however, are features of its AI-generated origin and need to be accounted for when working with these visuals. TroublingGAN images provoke substantive discussions about the boundary between representation and abstraction, as well as about the potential applications of AI-generated semi-abstract photographic material. They invite critical evaluation of the illustrative use of photojournalism and catalyse numerous speculative thought experiments questioning the role of the image in the twenty-first century.
Through TroublingGAN, we showed that GANs can serve as a metaphor for our society, highlighting critical issues and opening ethical discussions. These ethical issues range from the integral problem of biased datasets to the use of photojournalism and copyright licences. We have attempted to describe the entanglements within the process of working with GANs, namely those that lead to the projection of multiple biases and make it impossible to avoid them all.
We recognise that non-representative datasets and the bias they introduce are highly problematic in themselves, a troubling aspect that casts a pessimistic light on any hopeful expectations for the future use of AI generative tools. A tremendous amount of work on unbiased, high-quality datasets would be needed right now for AI to become a non-discriminatory tool, yet no such effort appears to be on the horizon. If future datasets are to rely on the information that is widely available on the Internet, then it is the responsibility of every one of us to act ethically with regard to the online content we create and upload.
This research raised critical questions about the visual representation of troubling events in the news, as well as the significance of the equal visual representation of underprivileged groups and of issues that are hard to visualise. Returning to our initial thoughts about this project: if we do not change the way we think about this world and the way we design for it, we shall keep repeating the same mistakes and generating new versions of the same problems.
This exposition presents multiple unresolved issues and provocations for future work that we hope will inspire researchers within and beyond the field of artistic research. An interdisciplinary approach is necessary to tackle the complexity of AI creative tools, as is an acknowledgement that artistic practice is a relevant research method, one that offers valuable insights into an otherwise technical AI discourse.