The Technical: Finding Your Footprint

After the development phase, our own Instagram accounts became essential to the functioning of Scrape Elegy. The algorithm runs off nine verified Instagram accounts, hosted by each collaborator and by the curatorial team at the Science Gallery, and it is through these that the Scrape Elegy API (application programming interface) operates. These accounts exist like ordinary accounts and are rotated with each visitor request. As such, Scrape Elegy has become not just a physical installation but also an artwork with its own online presence: our team have posted photos and stories from these accounts to a separate Instagram account, sharing the experience of Scrape Elegy.
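The rotation can be pictured as a simple round-robin over the pool of host accounts. The sketch below is illustrative only; the account names, the HOST_ACCOUNTS list and the next_host_account helper are assumptions for this sketch rather than part of the installation's code base.

```python
from itertools import cycle
from threading import Lock

# Hypothetical pool of host accounts; the installation uses nine verified
# accounts held by the collaborators and the curatorial team.
HOST_ACCOUNTS = [f"scrape_elegy_host_{i}" for i in range(1, 10)]

_rotation = cycle(HOST_ACCOUNTS)
_lock = Lock()


def next_host_account() -> str:
    """Return the next host account, rotating on every visitor request.

    Rotating through the pool spreads requests out so that no single
    account carries all of the traffic.
    """
    with _lock:
        return next(_rotation)
```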


The work makes use of two iPads: the first displays a ‘Vacant’/‘Occupied’ sign on the exterior wall of the work, while the second, attached to its inner circle, is the first port of call for visitor participation. This second iPad presents a series of prompts that ask for consent and explain what the process will entail.


The visitor is asked to input their Instagram handle. While the work contains a dummy scrape that allows visitors without Instagram accounts to listen to a sample of the audio journey, the full depth and participatory nature of the work come from the input of a visitor’s handle. The algorithm is built so that visitors with private accounts receive follow requests from our Scrape Elegy Instagram accounts; the participant is prompted on the iPad to accept the request. The visitor is then invited into the work through prompts on the screen.
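A minimal sketch of this intake step, under stated assumptions, might look like the following. The InstagramClient protocol and its methods are placeholders for whatever client the installation actually uses, not a real library API, and begin_intake is a hypothetical helper.

```python
from dataclasses import dataclass
from typing import Protocol


class InstagramClient(Protocol):
    """Placeholder interface for the installation's Instagram client."""

    def is_private(self, handle: str) -> bool: ...
    def send_follow_request(self, handle: str) -> None: ...
    def follow_request_accepted(self, handle: str) -> bool: ...


@dataclass
class IntakeResult:
    handle: str
    awaiting_acceptance: bool  # True while a private account has not yet accepted


def begin_intake(client: InstagramClient, handle: str) -> IntakeResult:
    """Start a visitor's journey from the handle typed on the iPad.

    Public profiles can proceed immediately; private profiles first receive
    a follow request from the current host account, and the iPad prompts the
    visitor to accept it before the scrape continues.
    """
    if client.is_private(handle) and not client.follow_request_accepted(handle):
        client.send_follow_request(handle)
        return IntakeResult(handle=handle, awaiting_acceptance=True)
    return IntakeResult(handle=handle, awaiting_acceptance=False)
```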

The work uses multiple accounts owing to Instagram’s anti-bot software, which detects algorithms running on the platform. The work adheres to Instagram’s community guidelines and does not retain visitors’ data.


The following is a technical description of the back end by the Scrape Elegy developer, Misha Mikho.


Scrape Elegy has five containers:


  1. The front end (which builds static files with webpack during 'docker-compose build', shoots them off into a volume to be picked up by Nginx, and exits immediately during 'docker-compose up');
  2. The back end (which runs Daphne, an ASGI (Asynchronous Server Gateway Interface) Django server, with support for Channels to facilitate the use of websockets);
  3. The task queue (Huey);
  4. Redis (an in-memory database used both by Django Channels to facilitate websocket connections and by the task queue Huey); and
  5. The web server (Nginx), which serves all the static files, namely:

     a. the optimized front-end (React) production build;

     b. our back-end static files, e.g., for the Django admin site; and

     c. the audio clips, which are generated by the Huey task queue and passed on to Nginx (see the sketch after this list).[2]
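To make the division of labour concrete, here is a minimal sketch of how the back end might hand work to Huey and report back over a websocket via Django Channels. The queue name, Redis host, output path, group naming and the generate_clip stand-in are assumptions for illustration, not the installation's actual code.

```python
from pathlib import Path

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from huey import RedisHuey

# Huey uses the same Redis instance that Django Channels relies on.
# Queue name and Redis host are assumptions for this sketch.
huey = RedisHuey("scrape-elegy", host="redis")

# A volume shared with the Nginx container, which serves the clips as static files.
AUDIO_DIR = Path("/static/audio")


def generate_clip(text: str) -> bytes:
    """Placeholder for the work's real audio pipeline; returns stand-in bytes."""
    return text.encode("utf-8")


@huey.task()
def build_audio_clip(session_id: str, text: str) -> None:
    """Generate a visitor's audio clip and tell the front end it is ready."""
    clip_path = AUDIO_DIR / f"{session_id}.mp3"
    clip_path.write_bytes(generate_clip(text))

    # Notify the visitor's websocket group that Nginx is now serving the clip.
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        f"session_{session_id}",
        {"type": "clip.ready", "url": f"/static/audio/{clip_path.name}"},
    )
```

Running clip generation in Huey keeps the slow scraping and audio work off the request path, while Redis doubles as both the task broker and the Channels layer, matching the container list above.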