Interactive digital media has a relatively short history, with its early development described thoroughly in Janet Murray’s book “Hamlet on the Holodeck” (Murray 1997). This text will not attempt to present a comparably comprehensive overview, but will instead include the works that have made an impression on me, together with my personal analysis of the last decade’s development of technology and affordances.
Read about my personal digital background in the text “My interactive journey” in the office.
Concepts and technological evolution
The late nineteen-eighties was the time of the first personal computers and the spread of the first game consoles, and it was also when I started exploring digital media. It was the period when the first commercial equipment for Virtual Reality was launched (Barnard 2022). However, since neither screen technology nor graphics processors were mature enough, the excitement of the eighties and nineties seemed to die in a wave of nauseating VR experiments (4Gears 2019).
For the next twenty years, the idea of mass market VR would lie relatively dormant while the world experienced the spread of personal computing, mobile phones, digital networks and social media. The first wave of computer games from the eighties and nineties, often based on 'point and click' stories with limited re-playability, was largely replaced by role-playing games and more open game worlds that invited hours and hours of exploration. Casual games found an eager audience on increasingly advanced mobile phones, and the eventual inclusion of network functionality supported the growth of large gaming communities and online gaming.
During the first two decades of the new century, large game titles developed into branded universes that offered a continuous stream of new content and challenges, analyzing user data and using what they learned to avoid losing the gamers’ attention to competing games (Yang et al. 2019). Next to these behemoths, smaller game developing companies and indie developers created stand-alone titles that tried to get noticed by offering original content and gameplay (Warren 2020).
No other interactive storytelling formats have been able to compete with the immense success of computer games. However, these decades also produced experiments with web-based interactive experiences [1]. Transmedia productions emerged that mixed television series with information on websites or social media platforms, like the series “Ruby Skye P. I.” (2010) and Norway’s “Skam” (2015). The wave of application development in the 2010s also led to more interactive experiences. Mobile phone apps gave better opportunities for interaction than web-based experiences, and mobile phone users proved to be happy to download and use apps. I saw smaller experiences pop up as part of marketing campaigns. Interactivity seeped into educational material and apps that mixed gameplay with learning became popular for iPads (Barseghian 2012). However, larger non-game interactive experiences were few and far between.
Mature technology
In 2012, the world had already had smartphones with small, high-resolution colour screens for over five years, and a huge global market had led to the creation of powerful graphics cards and cheap storage. The technology that had been missing in the eighties had matured, and an innovator named Palmer Luckey realized that the time had come to revisit the idea of the commercially available VR headset. His 2012 Kickstarter campaign to create the first Oculus headset is by many seen as the starting point of the second wave of VR (Kickstarter 2016). The availability of user-friendly VR headsets is seen as the key to building a commercially viable, global market for the medium.
VR headsets were also marketed to the tech-interested gaming community and held the promise to be a platform for exciting gaming experiences. In my view, this market interest helped to create more excitement around VR than other interactive experiences, and the solid revenue figures from the game industry motivated large corporations like Meta, Microsoft, Apple and many others to invest heavily in VR and its sister technology AR (Augmented Reality) (Vk 2021).
While Luckey worked to create commercially available VR headsets, VR pioneer Nonny de la Peña was instead focused on exploring the storytelling affordances of VR (Peña 2015). She built her own headsets because she needed the equipment to explore the potential of what she called “immersive journalism”.
Discovering de la Peña’s work was what really inspired me to dive into VR. Before then, I had thought that computer games, and especially mobile games, were the best starting point for creating interactive experiences for non-fiction. It seemed easier to reach an audience with interactive non-fiction by inserting it into what would appear to be a game, thereby making use of the established game market and its distribution methods. However, the way VR includes the participant’s body and senses offers a completely new way of thinking about communication.
Nonny de la Peña’s “Project Syria” (2013) is an impressive and important early VR experience that demonstrates how VR can augment journalism and the physical effect it can have on the participant. It explores the effect of placing a real-life video recording into an animation of the location where it was recorded. The participant sees a video from a street corner in Aleppo. A girl is singing in the video, before being interrupted by a big explosion. The animation takes over and shows the rest of the scene that the recording did not capture. In this way, the 3D animation gives additional information about the event, while the authentic video recording shows the participant what actually happened. The experience places the participant next to the girl in the scene and in the midst of the noise and panic that follows. It was an innovative way to place an eyewitness’ partial documentation into a wider, more explanatory context in order to better demonstrate its significance.
In the animated “Hunger in Los Angeles” (2012), de la Peña shows a queue outside a food bank. An animated male figure queuing in front of the participant collapses in a diabetic shock. Focus group testing showed that participants would then physically crouch down to check on the animated person, clearly having a physical, bodily reaction to the event (drawing).
The realization that VR could be experienced as real enough to actually trigger instinctive bodily reactions opened up the whole topic of immersion and presence.
A lot of 360° at first
De la Peña’s work stood out among other contemporary VR experiences because most of them were films shot in a 360° view. This was the most accessible technology at the beginning, and there were several experimental 360° documentary films, many of which were basically letting a participant experience a physical place they would not otherwise be able to visit. These ranged from the viewing point by the Golden Gate Bridge to standing in the Antarctic looking at penguins or visiting a flat that was for sale. This idea of transporting the participant to an otherwise not readily accessible place also became a central storytelling aspect of many later animated VR experiences. One example is the experience “D.M.Z: Memories of a no man’s land” (2015). An animated presentation of the border between North and South Korea communicates the claustrophobic and surreal atmosphere at the checkpoint in the heavily guarded de-militarized zone.
In “Strangers with Patrick Watson” (2014), this idea is explored in a different way. This time, the goal is not to present a physical place, but rather the possibility to place the participant in an intimate setting. The experience consists of a room filmed in 3D with professional pianist Patrick Watson playing the piano 'only for me'. This was the earliest piece I saw in which I, as a participant, was clearly present in the experience, not merely an invisible visitor to places and people unaware of my gaze. The intimacy of the space was also new to me. A lot of the earliest 360° pieces reveled in large spaces, maybe fascinated by the idea that a participant could put on a headset in a small room and be virtually transported to a huge, open place. In its seeming simplicity, this experience really impressed me by its then innovative focus on intimacy.
More animation in the mix
Starting in 2014, I began to systematically attend documentary film festivals in order to experience these early VR innovations, as these festivals seemed to be the best place to see the VR pieces that were experimenting with storytelling. Other venues were in my view more focused on the VR technology and its commercial potential, on more utilitarian VR.
“Assent” (2013) was one of the earliest examples of an animated VR documentary that I experienced. While walking up a virtual hill, I hear Oscar Raby narrating about his missing relatives and assassinations in Chile, until I reach the top of the hill and find hanged men. While other VR pieces at the time struggled to hold the participant’s attention, this piece’s combination of voiceover and journey kept my focus on and in the story, making the discovery of the men at the end a sad and meaningful experience.
“The unknown photographer” (2015) is a different take on an interactive documentary. It is based on thousands of photographs from World War I, some of which are shown in an animated landscape resembling a battlefield. The way the experience invited exploration and movement was new to me. The participant is invited to move about in this landscape and study the pictures while listening to a voiceover from the putative photographer.
“Drawing room” (2015) was one of the first animated VR pieces I saw that played with scope. The Dutch artist Jan Rothuizen tells the story of the time he spent in a small room on a roof in Amsterdam, and while he is doing this he draws the scenes around the participant with black lines against an entirely white background. The participant is seated and passive, but Rothuizen changes the perspectives and participant’s placement, creating a dynamic experience where the participant is positioned in different parts of the room. At the end, the walls fall away and the participant floats out over a detailed drawing of Amsterdam.
The VR film “Collisions” (2015) is a seated VR piece that is a beautiful example of a successful combination of 360° filming and animation. The film tells the story of nuclear bomb testing on Aboriginal land in Australia and contains a deeply touching animated sequence showing the earth burning and animals fleeing.
Trust and empathy
By 2015, it had become clear that audiences experienced 360 films as very trustworthy – 'as if they had been there themselves' – which motivated many news organizations to initiate 360 filming projects. As Thomas Seymat, 360 director and VR editor at Euronews put it, “Since there’s no longer any ‘off camera’, accusations of biased framing hiding part of the reality of the event can not be levelled anymore” (Jarosz 2018). “Seeking home” (2015), a 360 documentary from a refugee camp in Calais known as ‘the jungle’, is a good example of such news coverage. The 360 view and filmed walks in the camp seem to convince the participants that the scene had not been constructed.
That same year also saw the release of the UN-funded 360 film “Clouds over Sidra” (2015) that followed the life of a 12-year-old Syrian refugee. Response to this production led co-creator Chris Milk to call VR “the ultimate empathy machine”, a phrase that has since become famous (Milk 2015). This comment sparked both research on the emotional impact of VR experiences and a lively debate about empathy and VR that is still ongoing.
Read more about empathy in VR in the text “Empathy and simulation” in the library.
Enter the game engines
Up until this point, most VR experiences placed the participant in filmed or animated environments and let them witness a linear narrative without offering interactivity. However, 2015 was also the year when the game engine developer Epic changed the licensing model for the Unreal game engine, making it free to use (Epic 2015). Epic now offered 'blueprints', which made development much easier, and launched strong functionality for creating 3D environments and importing various data sources.
While 360 filmed VR experiences were much easier and cheaper to produce and had dominated the field in most festivals I visited up until 2015, animated experiences created with game engines gradually became the norm. At 2022’s IDFA festival in Amsterdam, which has curated and shown VR experiences consistently over the past decade, there was not a single 360 film, even though setting up VR cinemas with 360 films is much easier and cheaper than showing interactive pieces. Some 360 filming was included in the experiences, but mostly as an element mixed with animation and interaction (IDFA 2022).
The watershed year of 2016
2016 was one of the most experimental years for VR experiences that I have witnessed, with many works incorporating the new possibilities that game engines provided. “Bear 71 VR” (2016) was one of the first experiences I saw that utilized external data sets, creating a visualization of the movements captured from the GPS-tracking of a bear numbered 71 and other wildlife.
The British newspaper The Guardian invested in a very active VR-program, and released two interesting productions that year. “Underworld” (2016) was the first VR piece I experienced that introduced an element of gameplay. I found myself in the London sewer tunnels, holding a torch. I was then told that the battery was about to run out and that I had to hurry to get out in time. This element of danger gave me a sense of urgency and excitement. While I searched for the way out, a voiceover told me the story of the London sewer system, rats ran around, and information was shown as writing on the walls.
The Guardian also launched the experience “6x9” (2016) the same year, which shows an animation of a solitary confinement prison cell (drawing). The participant listens to actual noises recorded from a prison block and is presented with an animation visualizing how long-term isolation can lead to hallucinations. Relevant facts about the effect of isolation on prisoners are shown in writing on the walls.
The most impressive VR piece that premiered that year was “Notes on Blindness” (2016), which is still frequently referenced in the VR field today. It utilizes the VR space to show a different sensory experience than the one that most people have. It visualizes the hearing and sensory reading of the surroundings of a blind man through whirling particles of light showing traces of people and movement against a black background (drawing, colours reversed). The experience also includes a rich, spatial sound design that guides the attention of the participant. It is both beautiful and demonstrates a sensory experience that would probably not have been possible in a different medium. What especially impressed me was the exclusion of reality that the VR headset can offer: here it was used to remove information and distractions, instead of trying to show an otherwise unavailable place.
One year later, the Guardian continued its exploration of sensory input with “First impressions” (2017). The participant finds themselves placed in a baby’s playpen and gets to experience how a baby’s senses gradually evolve until the child can see and recognize its own body and mother. Sadly, shortly after this, the Guardian scaled down its animated experiments, and other mainstream newspapers also started limiting their VR use to 360° videos. Despite the renewed enthusiasm for VR that followed the launch of the new VR headsets, a new mass market or viable business cases for VR had yet to be developed.
Focus and motivation
The challenge of keeping a participant’s focus had been eagerly debated for years at this point, mainly in the context of the challenge of editing 360° videos. A linear narration in 360° needs to ensure that the participant is looking in the right place at all times so that the necessary information is communicated and the intensity of the storytelling isn’t lost. The term 'points of interest' (POI) was introduced to identify elements in the scenes that could help steer the participant’s gaze. Doors were widely used as navigational POIs. 360° filmmakers also struggled to remove distracting objects when shooting. In 2017, magicians were even studied for their ability to keep an audience’s focus on a desired location (Experiments 2017). Google did much research on this topic, and Jessica Brillhart developed a new editing tool for 360° experiences (Brillhart 2018).
With the introduction of animation and game engines, the discussion about keeping a participant’s focus died down. Animation now provided the opportunity to play with scale, senses and proportions in totally new ways and became the new focal point for experimental storytelling.
“Accused #2 Walter Sisulu” (2018) is an animated experience in black and white, based on the recorded audio from the court case of Nelson Mandela’s colleague. It impressed me with its use of scale. The participant stands in the middle of a courtroom and has to lift their head to look up towards the judge, who is placed 'impossibly high' above the accused (drawing).
At the end of the experience, we see people queuing for their first, free election in South Africa. The participant then floats upwards into bright light and open air. The last scene gave me an immense, uplifted feeling of freedom after having felt oppressed and small by the dark, towering surroundings during most of the film. It is a good example of using proportions and scale to convey a message.
One of my favourite VR experiences to this day is “Allumette” (2016), which is another early example of play with scale. It shows the participant a charming city in the sky that resembles a globe. The city has three levels. The participant is placed in front of it, with the globe floating at chest level. The city is populated by small figures, and the participant can move around it and peek wherever they want. Hans Christian Andersen’s story of the girl with the matches starts playing when the participant has moved to look at figures standing under a bridge in the town. Because the participant can choose where to look and how to experience the story, they can be said to take on the role of their own camera person. When an airship sails towards the city, the participant can choose to push their face through the ship’s side and watch the story progress inside it (drawing).
For me, “Notes on Blindness” and “Allumette” were the two first VR pieces that made me forget myself – sinking totally into the experience. They made everything that came before them seem like demos and mere experiments with technology.
Nauseating movement
It is probably not a coincidence that the experiences described so far have been quite slow-paced. In VR, the brain struggles to interpret visual input that signals quick movements that are not felt by the physical body. The result is nausea.
This challenge led many of the animated VR experiences launched in 2017 and 2018 to be 'on rails', with the participant sitting in a boat floating through a landscape or being moved forward automatically at an even pace (Geffen 2016). This controlled movement made the experiences feel less static, and the forward momentum helped focus the attention of the participant.
The other movement that did not induce nausea and was therefore frequently used was teleportation, where the participant points to a location in the experience and is lifted there instantly. This unrealistic way to move is, however, likely to work against the creation of any sense of presence, and means that the participant has to spend time re-orientating themselves. These interruptions can dampen any suspense that might have been created.
“A thin black line” (2017) is in my opinion one of the more successful experiences that is partly 'on rails'. In one scene, the participant is placed on the back of a moving lorry while being deported from their home. The scene works because the presence of a lorry fits in with the theme of the journey and makes sense to the story. It is also just one of several scenes in the experience, avoiding the 'conveyor belt' sensation one can get, as though one is floating through a whole experience in a canoe.
Photographic details in VR
The increased use of game engines for both photorealistic experiences and animation created the need to be able to capture 3D pictures of actual objects. Photogrammetry could combine 2D photographs into 3D objects which then could be manipulated. The experience “RecoVR: Mosul, a Collective Reconstruction” (2015) utilized this process, collecting photographs from tourists to recreate 3D models of historic artifacts that had been destroyed by ISIS. Scanning equipment made it possible to capture whole landscapes or rooms. Similarly, the VR experience “The Cave” (2017) consisted of a detailed 3D reproduction of the stunningly painted Mogao caves in China. However, these processes were restricted to recreating static objects.
While photogrammetry and scanning could only recreate static objects, the sensors in Microsoft’s Kinect camera (George 2017), originally released as an add-on to the Xbox console, could be combined with a regular camera to film 3D surfaces in an affordable and relatively simple way. These sensors were used in VR as early as 2014, with the experience “Clouds” (2014) showing volumetrically captured interviews with pixelated renditions of the talking persons. “Home after War” (2019) combines both techniques with more modern equipment, scanning a complete house in 3D and interviewing the house’s owner with volumetric capture. The result is a VR experience that lets a participant move around in a visual, 3D rendition of a house. The captured interview of the owner is placed in the house, where he seems to be present, explaining the challenge of returning to a house that a retreating enemy has booby-trapped.
Since these experiences, the resolution of captured 3D images has been continuously improved, delivering more and more realistic 3D filming. The captured images can then be animated and manipulated with game engines.
Everything, all at once
'Affordance' is a term used to describe the potential use of an object or technology. As described above, new technological innovations have created a stream of new possibilities for creators to use VR. These different affordances are being continuously explored and identified to understand how they can be used to create engaging and interesting experiences.
In my view, 2017 was the year when these explorations of affordances really took off. The VR experiences were more playful and less similar to each other. The game engines had made it possible to combine so many things in an experience: several different data sources, new user interfaces, more interaction, physical feedback systems, photogrammetry, volumetric capture, spatial sound and more. VR experiences were created that included actors that would give unique responses to the participant’s actions. In “Draw me close” (2017), the participant even gets a physical hug from an actor – experiencing the sensation of real-life physical contact while seeing an avatar hugging them in the VR experience (Solsman 2017).
Others used physical objects equipped with the newly launched HTC Vive tracker in the area explored by the participant. When the participant reached to pick up virtual objects shown in the VR experience, there would actually be physical objects placed in the room that they could feel and study in their hands.
To me, it seemed as if we had finally reached a technological maturity that meant that we could focus more on the effect we wished to create than figuring out what the available technology would allow us to create.
Sadly, these advances happened immediately before an economic slowdown, and subsequently the covid-19 pandemic, with its lockdowns, restrictions, and concerns about public hygiene. My impression is that the lack of informal networking and workshops dampened the playful experimentation that is often the starting point for non-commercial projects. Furthermore, presenting VR experiences was next to impossible, since gathering in a physical venue and having many people share a headset was difficult while the overriding worry was preventing a viral infection from spreading.
Nevertheless, many of the VR experiences I have seen in the last couple of years have been the most imaginative and interesting. My impression is that more thought is being put into the emotional experience of the participant and that there are more meaningful ways to interact with the material.
One example is “Goliath: Playing with Reality” (2021), a VR piece presenting the history of a schizophrenic man who finds meaning in online gaming. One scene presents recorded interviews with the many therapists and doctors that he has visited as several columns of raining light. I can move into these showers of sound and listen to descriptions of Goliath’s mental health. Seeing the visual representation of all these doctors’ comments gives me an overwhelming sense of drowning in too much information, but also gives the feeling that I do not need to listen to everything. The scene is not trying to give me actual information, but is trying to present the sense of becoming lost and confused by the sheer volume. Maybe this is what Goliath felt, having all these voices drown him?
“The Man Who Couldn’t Leave” (2022) presents a fascinating combination of volumetric capture and animation in a 34 minute VR piece that has given me the most film-like experience so far. It begins with a scene in which the protagonist stands talking among animated, static prisoners in bunk beds; the scene then changes to the same bunks being filled with actors captured volumetrically. While the first scene ensured that little distracted from the man’s narration, the second scene moves my focus to the fellow prisoners and brings them alive. This scene paints a richer and more intimate picture of his time in prison, demonstrating the prisoners’ camaraderie.
The experience “In Pursuit of Repetitive Beats” (2022) has taken a completely different route. It uses vibrating backpacks and air fans as part of its setup for a VR documentary about rave culture in Britain in the late eighties and early nineties. While driving towards a rave, the participant sticks their head out of the car’s sunroof. Feeling the wind created by the fan heightens the sensation of driving. While at the rave, the vibrating backpack reminds the participant of the physical pressure felt from large loudspeakers. Interviews are presented as postcards, pictures and posters, with the images starting to move and talk when they are picked up or looked at.
The experience “Plastisapiens” (2022) starts with the participant floating under water. A voiceover talks about microplastics in the oceans and asks the participant to breathe with their hands. Their hands appear with long, weird fingers. Moving them in a breathing motion makes the whole underwater scene move in sync. The feeling is one of breathing together with nature, communicating physically that humans and nature are biologically linked. To me, this is an experience that has moved more towards a user journey – letting the participant feel the story – and partly away from the participant being a mere passive recipient of the message in the voiceover.
These examples use very different technologies to evoke emotions instead of communicating facts in a way that cannot be replicated in other media. For example, in the Goliath VR, I think that feeling the overwhelming emotion of drowning in the voices of too many experts communicates Goliath’s situation much more effectively than if I had been told the same information. In the piece in which the participant breathes underwater, I think that being made to think that I am physically breathing together with nature makes me reflect on the fact that I am also part of the world’s biology instead of separate from it.
However, the potential for strong emotional reactions to VR experiences can also backfire. The experience “MLK: Now is the Time” (2023) presents Martin Luther King’s famous “I believe” speech. I can look up towards a statue of King hovering above me, listening to his words. I then see a symbol of a raised fist and eventually understand, through trial and error, that I need to raise my fist in the same way to progress in the experience. Although I am in awe of King’s speech and its historical significance, I still resented being positioned so far beneath him. It felt as if I had been made to inhabit the creators’ own adulatory reverence for King. If I am to place anyone on a pedestal, it needs to be my free choice to do so. It also does not help that it reminded me of the vertical positioning of the judge in “Accused #2 Walter Sisulu”, where the effect was used to signal domination. I had a similar reaction to the raised fist – a strong symbol echoing the Black Power salute from the 1968 Olympics (Nittle 2021). Again, I think that the symbolic gesture is highly meaningful for the creators. However, I wish they had offered me an alternative choice, which would have given me the opportunity to freely choose to commit. Without the free choice, the gesture felt forced instead of inspirational.
Meanwhile, in the game industry
The development I have described so far covers experiences that try to communicate non-fiction or experimental storytelling in VR, and that were mostly produced as non-commercial explorations. Parallel to this development, other experiments with VR and interactive storytelling have found places in the game development industry.
This industry has had a different approach, mainly exploring VR in order to create new, interesting gaming experiences for their customers. This goal has proven to be challenging, mainly because motivation is such an important factor for a good gaming experience. As mentioned, holding a participant’s attention is challenging in VR.
The first VR game that is considered to have achieved this goal is “Beat Saber” (‘Beat Saber - VR Rhythm Game’ 2018), where the participant hits elements with laser swords to the beat of high-intensity music. The challenge and the flying elements capture the participant’s attention completely, and there are no other details in the scene to distract. The game is still a best-seller among VR games, with USD 225 million in revenue as of April 2023 (Stockdale 2023).
The second commercial VR success is “Half-Life: Alyx” (2020), which is the first VR game to have reached the top-10 list of games sold that year (Reeves 2022). The developers play-tested every single scene to identify where people looked in order to remove distractions and make sure that the correct elements for game progression stood out (Morton 2020). The launch of “Half-Life: Alyx” created a small breakthrough in the sale of VR headsets to gamers, which alleviated one of the main challenges for all VR developers – the limited market due to low penetration of VR headsets in target audiences. It is thought that the market for VR games will triple from 2020 to 2025, with an estimated revenue in 2025 of 6.9 billion USD (Clement 2023).
Although I have yet to see examples of VR games that attempt to communicate more complex emotions than is traditional in computer games, there is a shift towards games that introduce more complex storylines. The continued appearance of smaller indie titles tackling challenging topics like cancer, bullying, abuse, suicide, and loneliness demonstrates that gamers can be interested in emotions other than enjoyment and excitement.
The larger AAA games seem to take inspiration from this development, integrating more gameplay that motivates more varied emotions. One example I have seen is from “God of War Ragnarok” (2022), a large, expensive AAA-title from Santa Monica Studio (drawing).
When Kratos’ son Atreus kills his first human, the father tries to get his son to talk about the event. The player has to press the green square on the PlayStation controller three times before the child opens up. The pressing of the button is significant because the player’s action does not immediately create a result: the player has to re-evaluate their normal agency as the controller stops offering control. This is a contrast to the way a controller normally works, and it effectively communicates the son’s refusal to respond. The seemingly simple but surprising effect makes the action of pressing the button repeatedly emotionally meaningful for the player – the way repeatedly assuring another person that you are there for them might make them open up.
Another example of an AAA-game offering innovative gameplay evoking emotions is “Death Stranding” (2019). In the dystopian setting of an America plagued with acid rain, human couriers cross the continent carrying huge backpacks. After playing for around 50 hours, the game gives the player the opportunity to donate equipment that can help other players. The ropes or ladders left by players playing alone in their homes show up in the game environment of other solo players, free to use. Players finding donated equipment can click a like-button to show their appreciation (drawing).
This mechanic became very popular, with many players appreciating the opportunity to selflessly help others. Being allowed to be kind created a meaningful experience for the gamers.
And many more
This description of recent developments in interactive experiences has focused primarily on examples in VR or computer games, presenting the development of new technologies and new affordances. These are the examples that I find most relevant for the VR exploration that is part of this artistic research project. I would, however, like to note that there are also examples of interesting interactive experiences in other formats, like digital art, augmented reality, web-based projects and more. One noteworthy project is Netflix’s ambitious and complex interactive production “Bandersnatch” (Slade 2018), offering a complex multiple-choice story with high production value on an otherwise linear streaming platform.
We can also clearly see that new technologies are influencing experimental creators. The Zizi Project (Elwes 2023) uses artificial intelligence to process a new dataset consisting of volumetrically captured and animated drag queens, challenging how most AI datasets are based on web texts that censor many terms related to queerness.
Another technology that creates worry is “deep fake” video. MIT has already used deep-fake technology to create “In Event of Moon Disaster” (2020), a video with a digitally created rendition of President Richard Nixon giving the speech that had been written (but was ultimately not given) in case something had gone badly wrong during the first moon landing.
I think it is fair to say that there is a great need for thorough exploration of the affordances these technologies offer so that we can use their potential to create meaningful experiences and get a better understanding of how to avoid manipulative and abusive use.
Read about the artistic exploration for this project in the Kitchen Journal,
and study the concept sketches in the kitchen.