
Yokō Tarō on “World-Building in an AI-Integrated Society from the Perspective of Japanese Game Creators”: Keynote at the Replaying Japan 2024 Conference

Screenshot from NieR: Automata of the character Emil's head.

Beware! This article contains spoilers for NieR: Automata.

This August, I had the pleasure of attending and presenting at Replaying Japan, an academic conference bringing together Japanese and non-Japanese scholars to share research related to Japanese games and the culture surrounding them. This year’s conference took place at the University at Buffalo, SUNY (North Campus) in the United States. Undergraduate students, graduate students, and professors from around the world gathered to present thoughts and findings from diverse, multi-disciplinary perspectives. Discussions ranged from the cross-cultural influence of Japanese games and their unique contributions to design and aesthetics to broader topics exploring their connections with philosophy, sociology, and history.

One of the most exciting pieces of news leading up to the conference was that Yokō Tarō, the celebrated director behind multiple games in the Drag-On Dragoon (2003 – 2013) and NieR (2010 – ) series, was going to present a keynote. I’ve written about video games in both academic and non-academic publications with the goal of bridging the divide between these spheres so they can support and inform each other. When I learned that Tarō, one of my favorite creative minds in the industry, would be speaking at the conference, I knew this was a valuable opportunity to showcase the thought process of a thematically mindful director in an academic context.

The game Tarō mainly discussed in his talk is his most famous, NieR: Automata (2017). NieR: Automata’s story centers on two types of robotic beings created by two different organic civilizations. Humankind created the game’s playable android characters, while the aliens they were at war against created the opposing machine lifeforms. Both these robotic groups have been engaged in a perpetual conflict on Earth for millennia according to the will of their creators, yet it is revealed that both humanity and the aliens have long been extinct. On the surface, then, NieR: Automata appears to ask questions about the existence and autonomy of artificially intelligent beings. Yet, Tarō clarified this wasn’t the point. The real theme of the game, he insisted, is not about AI at all. It is what he calls “Decomposed Humanity,” a distilled and exaggerated vision of humankind expressed through the non-human beings we see and control in the game.

Despite a clear interest in reflecting philosophical ideas in his games, Tarō noted that his games are first and foremost entertainment. For entertainment to be effective, he says that it should be made up of 90% familiar elements. Only 10% of its content should push forward novel ideas. This makes sense when thinking about many of the most well-regarded games ever made, as most of them are grounded in clear genre conventions and narrative influences. Yet, what makes the classics stand out is how creative developers get with that remaining 10%, and how it establishes the game’s unique identity.

To show this theme of Decomposed Humanity in NieR: Automata, Tarō explained the way he writes his scenarios through multiple layers of meaning while referencing a few examples. By the end of the talk, he tied this process into his overarching thoughts on humanity’s relationship to AI, and the impacts of our society’s exponential development of AI technology.

Decomposed Humanity in NieR: Automata (2017)

Tarō explained how he constructs the meaning of his games’ scenarios across three layers. Layer 1 is apparent at face value. We can see it in what is overtly told or represented. Layer 2 looks beyond the immediate representation and asks us to think about its underlying meaning. Tarō noted that his job is to convey these two layers to his players, but also that a game creator does not simply tell their audience what to think. Layer 3 is a meaning that goes beyond the intentions of a developer. It is something players discover for themselves as they interpret their own experiences interacting with the game. While layer 3 is therefore not something Tarō can tell us to feel, he proceeded to illustrate the first two layers with a few examples from NieR: Automata.

The first example he gave was of the character Pascal, a pacifist machine who is also the leader of a friendly machine village. Under Pascal’s guidance, the machines in the village cut themselves off from the network linking the game’s violent enemy machines and turned to the history of human thought in promoting peace and conceiving of themselves as individuals. Pascal is ethically opposed to war, but after an attack by hostile machines he’s driven to fight to protect his fellow villagers. This contradiction between his philosophy of peace and his feelings of hate-driven violence is layer 1. Layer 2 asks: is the real Pascal peaceful or aggressive? Or is what truly makes Pascal human-like this very contradiction?

Since Tarō could not discuss layer 3, perhaps I—as a player of the game—can. I remember first discovering Pascal’s Village and speaking to each of its friendly machines. They were written in a way that felt artificially human, yet I began to perceive each of them as unique individuals despite their uncanny dialogue and non-human appearance. I could see why Pascal wanted to protect these machines. When the game put Pascal under my control as he fought off a wave of enemies, I felt justified in the violence I inflicted through his desire to defend those he cares about. I projected my own human reasoning as a player onto Pascal in a way that completed the character. In this moment, I regarded Pascal with the empathy I’d have towards a human struggling to adhere to their own ideals as they wish to protect what they hold dear.

Another example concerned the machine Simone, a boss the player must kill. Layer 1 of Simone’s storyline is one of unrequited love for another machine and the dangers of obsession. Driven mad by this obsession, Simone began to decorate herself with the parts of androids she killed in an attempt to make herself more beautiful. Although the scenario is of course exaggerated and fictional, layer 2 here is about the horrifically inexplicable actions of people driven to madness. Do people who commit terrible actions—perhaps due to past trauma—deserve to be regarded humanely? Layer 3, then, might involve players reflecting on whether they feel this was a good reason to kill Simone, or whether they sympathize with her despite the horrors she committed.

I will briefly summarize a few more examples. One optional questline follows a machine (referred to as Wise Machine) that never says anything when the player interacts with it and eventually commits suicide. It finally vocalizes a scream as it plummets to its death. Is the scream meant to express regret? This highlights the superficiality of language and the disconnect between what we think and say on a conscious level and feel on a subconscious one. Another example focuses on the main protagonists, the androids 2B and 9S, who develop a strong bond over the course of the game despite androids’ supposed lack of emotion. Yet 2B is programmed to kill 9S when he inevitably uncovers the truths behind the androids’ existence, and she does so. This is meant to prompt consideration about whether human connection can ever truly be pure, or whether there is always a practical reason for any two people growing closer.

The sum of all these multi-layered storylines is a question about the value of the machines’ and androids’ existences. Because their creators have long since perished, and their purpose with them, they become trapped in existential anxiety. Tarō again clarified that this is about the modern human condition rather than AI. The spiritual guidance of religion as a force that unites people is no longer prominent in many contemporary societies, including Japan. This leaves us only with authorities such as parents and governments, which are human and therefore fallible in the same way we are individually. NieR: Automata is a response to this.

“We Who Unravel from AI”

The final part of the talk had Tarō segue from his analysis of NieR: Automata’s creation to reflections on the current state and future impacts of AI in our society. As the representation of the game’s android characters shows us, the current essence of AI is imitation. AI mirrors humans through the data it pulls from us. Yet, Tarō asked, is this model of existence so different from our own? After all, humans also grow by learning about things external to us and imitating them.

Moreover, humans often act in ways that are artificial or socially programmed. Tarō asked the audience to consider the words “thank you” in three different contexts: an AI saying thank you as an automated response, a cashier saying thank you based on instruction, and a friend saying thank you as a response to something we do for them. To what degree are any of these responses genuine? Even in the case of a friend, how can we truly know in that instance whether the phrase comes from the heart or if they had another reason for saying it?

Tarō even went so far as to compare himself to an AI, imitating the writers and game developers who came before him. Just like them, he writes and uses computer graphics to tell stories with the purpose of selling games. Many of his ideas come from other creative minds, and most of his job involves imitating practices that have already been done. It’s an interesting point to consider from a game director whose works stand out from the vast majority of what gets made by virtue of their originality. While it’s a humbling and provocative thought, the reality is that AI could likely not imitate the spark of interest that separates games like NieR from others that fail to leave the same kind of lasting impression on players. At least not yet.

Theoretically, AI will be able to learn how to discern and create originality based on the data it collects from us. On this note, Tarō suggested, originality should not be an end in itself. It must be valuable. Yet value is subjective, and maybe this is the important point. In the face of AI that can create with skill and originality, the saving grace of human creation is the ambition to do what hasn’t been done before and the passion motivating it. The most human response to any existential threat is to do something, anything, regardless of whether we win or lose. It is this same determination that makes the androids of NieR: Automata feel human.

These reflections on AI were spoken in a room full of academics who (myself included) tend to regard the originality of their research as a marker of self-worth. While I can’t speak for everyone in attendance, the end of the talk put me in a bittersweet, reflective state similar to the one the endings of the NieR games have left me in before. I felt humbled as a human who owes my being and ideas to those of others I’ve absorbed. I also felt defiant in the face of this existential predicament to continue seeking the value of my own creativity. What else can I feel? We’re only human.

Aleks Franiczek

Aleks is a Features writer and apparently likes videogames enough to be pursuing a PhD focused on narrative design and the philosophy of player experience. When not overthinking games he also enjoys playing them, and his favorite genre is “it’s got some issues, but it’s interesting!”