Beyond the “Zoom Play” — :robot_face: Creates a New Kind of Mediated Performance

With :robot_face:, Seattle’s Dacha Theatre manages to move beyond the Zoom play, creating a truly immersive and interactive screen-based experience that fully leverages the affordances of its platform.

Disclosures: 1) :robot_face: co-creator Nick O’Leary is a member of the Playable Theatre Board; 2) I’ve been collaborating with Nick and Dacha on an in-person playable theatre piece, which had to be postponed due to COVID-19. However, I genuinely liked this piece; otherwise I would not have written about it!

As live performance — which had all but shut down during quarantine — has developed a new life online, I’ve been craving an experience that uses the affordances of real-time mediation in a creative and innovative way. :robot_face:, created by Nick O’Leary and Nathan Whitehouse with Dacha Theatre — a small company in Seattle devoted to devised and playful work — is the first thing I’ve seen that really fulfills this promise. What’s more, it does so without the benefit of a big budget, by creatively leveraging commonplace tools (including game technology) in an interesting way. My insight here is informed by my perspective as a game designer who has recently begun working at the intersection of games and theatre, with a particular bias towards works that have well-designed audience agency. Here, I’ll be discussing :robot_face: through some ideas that I’ve been developing and plan to write more about on this blog in the future. In addition, I’m looking forward to publishing a post-mortem by the creators once the show has closed!

:robot_face: is framed as an online game show in which four teams compete to program the most human-like AI. The game (and it is a game in every sense) is played via Zoom, with a spectator (“studio”) audience observing and voting via a Twitch game-streaming channel, which allows them to see the broadcast, talk about it in a text backchannel, and vote. In each round, each team works with (i.e., instructs) their “programming assistant” to “code” their AI using a simple computer-esque interface displayed in the Zoom window. After the teams have completed their programming, a “celebrity” host interviews each AI to assess which is the most human-like. The AIs are deftly improvised by actors, each dressed in and set against a background of their team’s color (in our case, Cobalt), and each with their own distinctive, uncanny-valleyesque set of default gestures. When it’s time for the interview, the AI/actor must improvise their answers based on the parameters “programmed” by their team, to hilarious effect. The judges then comment on each, and votes are counted from the Twitch audience. Your team is also visited periodically by the company’s CEO, who very much fits the stereotype — a guy who’s far too young to know as much as he purports to, with just the right amount of technobabble inserted to add veracity without obfuscation. Via the CEO, a personal conflict of the programming assistant’s was introduced early in the game, giving us an additional sub-goal: helping a “real” person solve their problem.

Overall this was among the best live interactive experiences I’ve had — in-person or mediated — and that’s not just because my team won! I was on a three-person team with someone with whom I’d role-played before, and we immediately fell into a groove of building our AI’s personality through different personality and mood parameters and sentence fragments. The programming assistant served as an in-character moderator, improvising to help us build our AI’s personality and inputting our selections. The AI actors then improvised brilliantly, saying exactly what we told them to while layering in the underlying “algorithms” from which they were presumably working, pausing before each answer to “process” their response (so we were told). At the end, the spectator audience voted via Twitch and the winners were declared. Go Cobalt!

Zooming out (no pun intended), I’d like to unpack some of the reasons I think this show was so successful, beginning with the fact that it avoided the trap of being a “Zoomed play,” wherein a traditional stage performance is simply delivered over video chat, not unlike the way some early cinema directors made movies that were essentially filmed plays. Instead, it took advantage of three interactive platforms (and probably more, depending on the behind-the-scenes tools that were used) to create a new kind of live experience.

Let’s begin with the premise: “A game show about creating the most human-like AI.” This conceit uses two methods that I think are useful in guiding audience members as to their roles and the extent of their agency. Both of these are examples of “keying,” a term drawn from social scientist Erving Goffman that describes signals people transmit to one another to indicate appropriate behavior for a given social context.1 The first method I call “scaffolding,” which involves using familiar social settings to let players know what is expected of them. The second is “indexing,” as used primarily by media scholars, which is the referencing of common cultural tropes or narratives. By framing :robot_face: as a game show and players as contestants and game show audience members, participants know exactly who they are and what they are supposed to do from the get-go. Furthermore, the role of “game show contestant” is by definition that of an amateur performer, which takes the onus off of players to function as “actors,” as they might in a live action roleplaying game. This solves the perceived problem of poor acting by participants. Here, your actions are consequential and pivotal — you are in a sense the stars of the show — without having to be “professional.” With a very lightweight amount of warmup — comparable to what real game show contestants might be asked to do — you are thrown headfirst into your role without any additional training or prep work.

The AI theme has the indexing function of using cultural tropes with which most people are familiar. The eponymous Turing Test — create an AI that is indistinguishable from a human being — provides the core gameplay premise but also acts as a jumping-off point for social satire on IT culture. (Is it a coincidence Dacha is located in Seattle? I think not!) That human actors are employed to play AIs creates a humorous inversion of the Turing Test, posing the question: Can humans play believable AIs?

The core gameplay — the “programming,” if you will — is done via a simple console displayed in Zoom’s “share screen” mode, but which is, importantly, controlled not by players but by an actor/moderator. Although I think the visual design of the interface could be improved to make the programming process a little more legible to players, it allowed us to use simple language to essentially “compose” character traits and dialogue modules that the actors could then combine with their own — perhaps pre-scripted, perhaps fully improvised — lines. The genius here, however, was that we didn’t have to interact in any direct way with the interface itself. Instead, the “programmer” had to interpret our ideas and translate them into “code.” This is a challenging role to play — one that demands the performer be at once virtual “dungeon master” and improvisational actor. Importantly, the programmer provides human mediation of player actions. This assures that the outcome is reasonably coherent, provides a way for the performer to fill in blanks left by the players, and mitigates any kind of griefing behavior. (A player-controlled interface would inevitably lead to an AI that just says “penis penis penis” all the time.)

At the end of each round, the host (also brilliantly played, and key to managing the pace) introduces the judges (also actors), each of whom is assigned to interview an AI, using questions designed to bring out the AIs’ “personalities.” Whether this is scripted, improvised, or a little bit of both is not entirely clear; however, it feels authentic, which is key. After conducting the interviews, the judges provide commentary (which appears to be mostly improvised) on their impressions of each AI. This is followed by audience/spectator voting via Twitch. Though the agency given to Twitch viewers is fairly nominal — simple four-way voting — it is exceptionally consequential: They actually get to decide the winner. The audience is also a necessary part of the social scaffolding because it gives the players someone to perform to, the absence of which would make the experience a great deal less satisfying. Even though, as a contestant, you really can’t see or interact with them, the Twitch spectators hold the “performative” role of both audience and final judge. I’m really fascinated by the use of voting mechanics in live experiences because, although they are relatively simple in terms of actual mechanics, they can be vastly different in terms of satisfying agency. I plan to spend a little time on this in a subsequent article.

Finally, I’d like to talk a little bit about diegetics. I am using the term diegetic here as applied to games, to describe elements that exist within the game world, as opposed to extradiegetic elements, which exist outside of it. Creating an online experience that is diegetic has become a bit of a sticky wicket for immersive designers. :robot_face: makes the brilliant move of beginning with a premise in which screens are inherently diegetic, using the screen framing to represent both a TV show and a computer. That the AIs exist within the screen is key; and the splitting off of the teams into their respective Zoom breakout groups during the programming phases also fits the diegetics of the world.

This segues nicely to my final point. :robot_face: demonstrates how to push an instrumental medium beyond its intended use in the service of art. Although both Zoom and Twitch are now commonplace, the show uses their particular affordances in a clever, seamless way to weave together a complete, diegetic — and yes, immersive — experience. Zoom, of course, has become the de facto proscenium of the COVID era, for better or worse. In :robot_face:, the screen-as-proscenium, far from an awkward artifice, is used to great effect in “framing” this as a screen-based world that exists at once within the television and the computer, as well as being mediated by a collaboration platform. It also highlights the way “performance” is invoked at different levels — from the performance of the actor-AIs, to the programmers, to the MC, to the guest hosts, to the contestants, to the spectators, all of whom “perform” a distinct and integral role. Removing any of these elements would cause a collapse. I have been waiting and hoping for theatre folks to engage with Twitch, as some already have. Its text backchannel and voting affordances have a lot to offer in terms of a direct feedback loop between performers and audience, while also being scalable (the spectator audience for this show could conceivably be any size). All that said, although :robot_face: very much leverages the platforms it uses, I can also easily see a variation of this show that combines live and screen-based elements, following the diegetics of real-world game shows.

All in all, :robot_face: is well worth experiencing. At the very least, it will be the most fun you’ve had in a Zoom room to date.

:robot_face: is playing online through September 13. For tickets and information visit —

:robot_face: logo design by Zach Zamchick


  1. Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Harvard University Press.
