DAY 1 SUMMARY

 

We welcomed two guest speakers, both Gray's School of Art graduates and AI pioneers.

 

First was Leila Kleineidam, a painter who is also an AI ethics policy advisor for the Ministry of Defence.

 

The afternoon session was led by Catherine M. Weir, who makes the limitations of AI systems visible through her art. She also taught us how to train our own AI using Google Teachable Machine.

 

Fine Art lecturer Jim Hamlyn and Kyle Martin, Robert Gordon University's new interdisciplinary research lead for Living in a Digital World, were also instrumental in kicking off AI Week, joining Leila Kleineidam in a panel discussion and open conversation, the results of which are charted in the diagram to the right.

Jim Hamlyn's opening spiel:

 

Intelligence is situated.

 

Think of the inseparable relationship between our intellectual powers and the environment, the ecosystem in which those powers have both evolved, at the level of our species, and developed at the level of us as individuals. Intelligence isn't a self-contained entity locked within the mind, but rather it relies upon a dynamic interplay with the surroundings in which we operate: our ecological niche, our ecosystem. The importance of an ecosystem to the existence and exercise of intelligence cannot be overstated. Let me give you a stark example. If you entirely remove any intelligent individual from their ecosystem, then they will simply die, because literally everything that supports and enables their survival will have vanished. Imagine how utterly vulnerable you would be, and how futile your intellectual powers, if you suddenly found yourself transported to the pitch-dark vacuum of space.

 

In the modern age we are increasingly aware of just how fragile our ecosystem actually is, and how even subtle changes can sometimes have radical consequences, not just for our own survival but for the survival of other, often less adaptable, species. Intelligence is undoubtedly, unarguably an adaptation to changeable and sometimes threatening circumstances. But we often forget just how incredibly narrow the limits are within which our survival is constrained.

 

If any form of superintelligence is to emerge, it will inevitably have to do so within the very same intricate and fragile ecosystem that we currently occupy and to which we significantly contribute, especially as regards Artificial Intelligence. It doesn’t take much intelligence to see that we are teetering on the verge of a very different precipice than that embodied by superintelligence.

Catherine M. Weir – RGU AI Week Talk (Edited Extract)

 

I finished my PhD in 2018, and it was around this time that the first artworks created using Generative Adversarial Networks (GANs) were starting to appear. GANs have improved a lot since 2018, but back then a lot of the images these systems produced were very strange looking. They were blurry, distorted, and would rarely fool anyone into thinking they were real photographs. But, at the same time, I was interested in what this new technology might mean for photography, because – for me, at any rate – the ability promised by GANs to create photographic images from words represents a far more significant shift than that of digitisation, which was the cause of so much theoretical angst in the 90s.

 

Aside from the strangeness of the images themselves, one of the things that really interested me about GANs was the training sets used to build them. As Kate Crawford and Trevor Paglen put it in their 2019 essay, ‘Excavating AI’, ‘training sets […] are the foundation on which contemporary machine-learning systems are built’, and ‘despite the common mythos that AI and the data it draws on are objectively and scientifically classifying the world, everywhere there is politics, ideology, prejudices, and all the subjective stuff of history.’[1]

 

Questions of the politics and biases of training sets were very much at the forefront of my mind when I first started to play around with machine learning. I wanted to try and find ways of making the limitations, and assumptions, built into these systems visible. Light Leaks started during lockdown, when I decided to have a go at training my own GAN. I did this using a piece of software called RunwayML, which is made by a New York-based start-up focused on developing AI tools for creators. Technically speaking, what Runway enables you to do is a form of transfer learning: instead of training a GAN from scratch, you re-train an existing model using your own dataset. Even so, getting the best results requires between five hundred and one thousand images. It was important to me that I used my own photographs to train the GAN, so I decided to use a collection of bird photographs – around four hundred and fifty in total – to re-train a StyleGAN model originally trained to generate images of flowers.
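
Runway hides the code behind a visual interface, but the underlying idea of transfer learning can be sketched in a few lines. Below is a minimal, hypothetical illustration using the featureExtractor API from ml5.js (v0.x), the library used in the afternoon workshop: a pre-trained MobileNet is kept as a frozen feature extractor, and only a small new classification head is trained on your own images. This is an analogue of re-training StyleGAN on a new dataset in spirit only; the image paths and labels are placeholders.

```javascript
// Minimal sketch of transfer learning with ml5.js v0.x (not Runway's actual
// code): MobileNet's pre-trained layers act as a frozen feature extractor,
// and only a small new classification head is trained on custom images.
// The image paths and labels below are hypothetical placeholders.

let featureExtractor;
let classifier;

function setup() {
  noCanvas();
  // Load MobileNet with its pre-trained weights.
  featureExtractor = ml5.featureExtractor('MobileNet', modelReady);
  // Create a new, trainable classifier on top of the extracted features.
  classifier = featureExtractor.classification();
}

function modelReady() {
  // A few hundred labelled examples suffice, far fewer than the millions
  // of images needed to train a network from scratch.
  addExamples('bird', 450);
  addExamples('background', 450);
}

function addExamples(label, count) {
  for (let i = 0; i < count; i++) {
    // Wait for each image to load before handing it to the classifier.
    const img = createImg(`images/${label}_${i}.jpg`, label, '', () => {
      img.hide();
      classifier.addImage(img, label);
    });
  }
}

function keyPressed() {
  // Press 't' once the images are added to train the new head.
  if (key === 't') {
    classifier.train((loss) => {
      if (loss === null) console.log('Transfer learning complete');
      else console.log('loss:', loss);
    });
  }
}
```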

 

The results of this first training experiment were promising, but what fascinated me most about these images were the strange, almost glowing, artefacts that started to appear around the edges. There was nothing like them in my original images, and they seemed to have been generated during the training process itself. I think, in fact, they stem from the dataset used to pre-train the original model, because if you look at that dataset it quickly becomes apparent that there are a lot of things in these photographs besides flowers. There are insects, droplets of water, lens flares: instances in which the ‘messiness’ of the real world was impinging on the AI.

 

Photography’s capacity for life-like reproduction has never been its biggest draw for me; in fact, some of the artworks I find most poetic are those that do not directly depict anything at all. These kinds of photographs, which the philosopher Vilém Flusser says are ‘mistakenly’ called abstract,[2] can further invite us to practise what Lyle Rexer has called ‘looking with’, as opposed to looking at, photographs: a mode of seeing which does not privilege the subject of the photograph, but allows us to reflect on the apparatus of photography.[3] When I saw these ‘light leaks’ in my AI-generated images, it struck me that they might similarly act as a means to facilitate critical reflection on machine learning systems in operation without recourse to iconic images.

 

So, I started to isolate them, sitting for hours in Photoshop carefully erasing everything in the image except for these little ‘light leaks’. When they were ready, I used a custom software program to classify them using the im2txt image captioning model, which is based on the Common Objects in Context (COCO) dataset. Through this process, strange pairings began to emerge. im2txt would often ‘see’ traffic lights, cell phones, laptops, and scissors, albeit with very low confidence. Finally, I used another model called AttnGAN to generate an image based on the caption returned by im2txt, which is the third panel you see here.
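
The talk does not describe the custom program itself, so the following is only a hypothetical sketch of that pipeline in JavaScript (Node 18+): an isolated light leak is sent to an im2txt captioning model, and the returned caption becomes the text prompt for AttnGAN. The endpoint URLs and JSON shapes are invented placeholders standing in for however the two models were actually hosted.

```javascript
// Hypothetical sketch (Node 18+) of the Light Leaks pipeline. The first
// stage, isolating the leak in Photoshop, happens by hand; this script
// covers the two model stages. Endpoints and JSON shapes are placeholders,
// not the artist's actual custom software.
const fs = require('fs');

async function lightLeaksPipeline(leakImagePath) {
  const imageB64 = fs.readFileSync(leakImagePath).toString('base64');

  // Stage 2: caption the abstract blur with im2txt (trained on COCO).
  // Low-confidence labels such as 'traffic light' or 'cell phone' are
  // expected, since nothing like a light leak exists in COCO's categories.
  const captionRes = await fetch('http://localhost:8000/im2txt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: imageB64 }),
  });
  const { caption } = await captionRes.json();

  // Stage 3: hand the machine's (mis)reading to AttnGAN as a text prompt
  // and save the generated image, the 'third panel' of each work.
  const imageRes = await fetch('http://localhost:8001/attngan', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: caption }),
  });
  const { image } = await imageRes.json();
  fs.writeFileSync('generated_panel.png', Buffer.from(image, 'base64'));

  return caption;
}
```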

 

For me, Light Leaks was an interesting experiment: I really enjoyed some of the strange combinations it produced and, importantly, I think it does a good job of bringing attention to some of the limitations of machine learning datasets in a poetic fashion. In some ways, I find these strange, blurry GAN images more intriguing to look at than some of those produced by newer AI systems. In my research, I have come to think of them as an example of what Vilém Flusser terms an ‘informative’ image: an image that has never been seen before, or which conveys new information.[4] At this early point in the development of Generative AI, of GANs, the images seemed to offer a glimpse into the inner workings of the machine in a way that those more closely resembling photographs do not.



[1] Kate Crawford and Trevor Paglen, ‘Excavating AI’ (2019) <https://excavating.ai/> [accessed 14 October 2023].

[2] Vilém Flusser, Towards a Philosophy of Photography (London: Reaktion Books, 2000).

[3] Lyle Rexer, The Edge of Vision: The Rise of Abstraction in Photography (New York: Aperture, 2009).

[4] Vilém Flusser, Into the Universe of Technical Images, trans. Nancy Ann Roth (Minneapolis: University of Minnesota Press, 2011).

Workshop 1 - How to Train Your AI

 

Guest Speaker and Gray's alumna Catherine Weir - how I use AI in my work (including training your own AI with Google Teachable Machine and P5.js)

 

Catherine discussed how she employs creative machine learning and generative artificial intelligence in her digital photographic practice and teaching. Her 2021 work Light Leaks uses a series of abstract blurs resembling light leaks from a damaged camera as input for an image recognition system, in an effort to draw critical attention to the limitations and biases of contemporary machine learning systems. Her current project Frankenstein’s Camera consists of a series of landscapes created by blending her own photographs with AI-generated imagery using text prompts based on the writing of Frankenstein author Mary Shelley. In addition to demonstrating how she goes about creating these images using Runway, Catherine reflected on some of the conceptual, theoretical, and ethical challenges that working with generative AI has posed for her as a photographic practitioner.

 

This talk was followed by a practical workshop in which attendees trained their own AI image recognition system and created a simple interactive web app using Google Teachable Machine and P5.js.
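
As a flavour of what that looked like, here is a minimal p5.js + ml5.js (v0.x) sketch of the kind built in the workshop: it loads an image model exported from Teachable Machine and classifies the live webcam feed in the browser. The model URL is a placeholder; Teachable Machine gives you your own link when you export a model.

```javascript
// Classify the webcam feed with a model exported from Google Teachable
// Machine, using p5.js and ml5.js v0.x. Replace YOUR_MODEL_ID with the
// link Teachable Machine provides on export.
let classifier;
let video;
let label = 'waiting...';

const modelURL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

function preload() {
  // ml5 expects the path to the exported model.json file.
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(640, 520);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // Results are sorted by confidence; show the top label, then loop.
  label = results[0].label;
  classifyVideo();
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(24);
  textAlign(CENTER);
  text(label, width / 2, height - 10);
}
```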

 

CONTRIBUTORS

Helen Scarlett O'Neill

Jim Hamlyn

Catherine M. Weir

 

Presenter Bio:


Catherine M. Weir is a visual artist and researcher, working primarily with photography, data, code, and creative machine learning. Her practice-based research frequently draws on elements of the natural world – animals, land, and stars – to reflect on our relationship to the nonhuman, whilst simultaneously exploring the implications of new and converging digital media forms for contemporary photographic practice and theory. Her research has been presented at conferences and symposia nationally and internationally, and her work has been exhibited at galleries including Street Level Photoworks (Glasgow), The Royal Scottish Academy (Edinburgh), and the Victoria and Albert Museum (London). Catherine holds a PhD in Fine Art from Glasgow School of Art (2018), an MFA in Computational Studio Arts from Goldsmiths, University of London (2013), and a BA(Hons) in Photographic and Electronic Media from Robert Gordon University (2010). She currently lectures on the BA(Hons) Interaction Design and Design History and Theory programmes at Glasgow School of Art.

 

Website: www.cmweir.com

 

Workshop Resources: www.cmweir.com/rgu-ai-week

 

Google Teachable Machine: https://teachablemachine.withgoogle.com/

 

Runway ML: https://runwayml.com/

 

P5.js: https://p5js.org/

 

ML5.js: https://ml5js.org/

 

Processing: https://processing.org/

 

Arduino: https://www.arduino.cc/

 

Daniel Shiffman’s Coding Train: https://thecodingtrain.com/

 

Open Processing: https://openprocessing.org/

 

 

Presentation slides: 

Links & References:

 

Data Feminism by Catherine D’Ignazio and Lauren F. Klein: https://data-feminism.mitpress.mit.edu/

 

Design Justice by Sasha Costanza-Chock: https://designjustice.mitpress.mit.edu/

 

Algorithms of Oppression by Safiya Noble: https://nyupress.org/9781479837243/algorithms-of-oppression/

 

Atlas of AI by Kate Crawford: https://yalebooks.co.uk/book/9780300264630/atlas-of-ai/

 

AI Art by Joanna Zylinska: http://www.openhumanitiespress.org/books/titles/ai-art/

 

Your Computer is on Fire, eds. Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip: https://mitpress.mit.edu/9780262539739/your-computer-is-on-fire/

 

Into the Universe of Technical Images by Vilém Flusser: https://www.upress.umn.edu/book-division/books/into-the-universe-of-technical-images

 

Caterina Moruzzi DI Webinar: https://vimeo.com/874733743

 

Algorithmic Justice League: https://www.ajl.org/

Slido survey conducted by Leila Kleineidam: