Leveling the Playing Field: Toward Equivalently Accessible Video Game Worlds

Image Description: A screenshot from a video game demoing the NavStick system. A character with red hair aims a weapon toward enemies in the distance. At bottom right, an inset shows a pair of hands holding a game controller and pushing the right stick forward in the direction of one of the enemies. Text below this inset reads “Roaming Chomper 2,” referring to the enemy being targeted.

This article is based on a talk Vishnu Nair gave at A11yNYC.

Video games are a pastime and an escape for many. They allow people to explore and experience complex, fantastical worlds on their own terms. Take “The Legend of Zelda: Tears of the Kingdom,” an action-adventure game: players scan their environment, make split-second decisions depending on what’s around them, and explore the world.

But this game, like many other mainstream video games and experiences, is largely inaccessible to blind and low-vision (BLV) players. This doesn’t preclude blind players from gaming entirely, as there are audio games: the audio-centric equivalent of video games. They often feature no graphics and center on sound effects and other announcements to provide a gaming experience. However, these audio games often aren’t equivalent in terms of how fun they are for players.

For example, Terraformers is a classic 2003 audio game with rudimentary 3D graphics. The game plays like those old-school text-based adventure games, such as Zork. Despite Terraformers being one of the best audio games out there, the gameplay felt linear, less dynamic, and less interesting. Blind gamers express the same sentiment in conversations.

“We don’t really get a lot of choice in audio games,” said one blind gamer. “They tend to be really simple and not a whole lot of fun, so I just play whatever is out there that I can actually stand.”

So audio games aren’t a suitable substitute, and many blind players desire the ability to play mainstream games.

Last year, a great article in the Montreal Gazette focused on gaming for blind gamers. The article quotes one of the blind gamers. “The main thing for so many of us is to see more mainstream games accessible, versus solely having audio games with zero graphics,” they said. “It makes it difficult to talk to sighted friends about games. Some of my friends don’t comprehend that while I enjoy gaming, I don’t play anything remotely similar to most, short of Hearthstone.”

For those unfamiliar, Hearthstone is an online card game for PC and mobile, and the PC version has an accessibility mod that makes the game screen reader accessible. But how is it possible to go beyond simple card games?

Think about it. Video games contain a lot of visual content, and video carries orders of magnitude more information than audio. The first question that comes to mind is how to communicate all this information, these rich worlds, through sound.

What Makes a Game Fun?

But this may not be the correct question to ask. Here’s another question. What makes a game fun? Many game designers over the years have attempted to define and formalize this notion of fun.

One such formalization is called the eight kinds of fun. It posits that games, and experiences within those games, embody some combination of eight categories to facilitate a sense of fun in players: sensation, fantasy, narrative, challenge, fellowship, discovery, expression, and submission.

What’s important to note is that only one of these eight kinds of fun deals with the sensory side of playing a game: sensation, which views games as sense-pleasure. Everything else focuses on what a player can do, what they experience, and what they feel within a game.

Vishnu proposes that the approach to more equivalent accessibility in gaming is to focus on the experiences granted to players. The information to communicate will follow. For example, when a sighted person plays a mainstream 3D game, like Legend of Zelda, what kinds of experiences are they having within the game world? And how can those experiences, those abilities be translated to blind players, so they can play those games and feel like they’re having fun and feel fulfilled?

Scanning Surroundings in Games

Vishnu’s PhD work has taken steps toward this through three projects. For the first project, consider “The Legend of Zelda” again: the player controls the main character, Link, as he moves through a small town.

One of the basic operations this player performs is scanning their surroundings. What’s in their line of sight? Where are those things located? At one point, there’s a statue to Link’s right. In the distance, a woman is sweeping the ground. Further out, there’s a workshop in a large building. And if the player goes near the workshop, they’ll see a rail car in the distance.

This thought process is facilitated by a player’s ability to “look around.” This is the ability to scan the environment in any direction, find items, detect enemies, and interact with the world. The first project, called NavStick, focuses on this ability.

A fundamental component of the exploration experience is the ability to look around. Through this fundamental act, players develop a plan of action and attempt to execute it. It allows them to exercise their agency and control within the world. They can survey their surroundings on demand, just as a sighted player would use the in-game camera to do the same thing.

Testing the NavStick

NavStick is an audio-based tool that allows blind players to look around within the game world. The concept behind NavStick is simple: the project team took the right stick on a game controller and repurposed it so that whichever direction the player points the stick, the game announces what lies in that direction along the line of sight.

In a prototype, the player points the right stick in the 1:00 direction. The game performs a raycast in that direction from the player. In this case, the raycast hits an enemy called the Chomper, and the game announces it.
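To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of the idea behind such a stick-driven raycast query. This is an illustrative assumption, not NavStick’s actual implementation: the function name, the flat 2D world of circular entities, and the dead-zone threshold are all invented for the example.

```python
import math

def navstick_query(player_pos, stick_x, stick_y, entities, max_range=50.0, step=0.5):
    """Return the first entity in the stick's direction via a coarse raycast.

    player_pos: (x, y) position of the player.
    stick_x, stick_y: thumbstick deflection, each in [-1, 1].
    entities: list of (name, (x, y), radius) tuples.
    """
    length = math.hypot(stick_x, stick_y)
    if length < 0.2:           # dead zone: stick at rest, nothing to announce
        return None
    dx, dy = stick_x / length, stick_y / length
    px, py = player_pos
    t = 0.0
    while t < max_range:       # march the ray outward in small steps
        rx, ry = px + dx * t, py + dy * t
        for name, (ex, ey), radius in entities:
            if math.hypot(rx - ex, ry - ey) <= radius:
                return name    # first hit wins: the nearest object along the line of sight
        t += step
    return None

# Pointing the stick straight "up" toward a Chomper ten units away:
enemies = [("Chomper", (0.0, 10.0), 1.0), ("Crate", (10.0, 0.0), 1.0)]
print(navstick_query((0.0, 0.0), 0.0, 1.0, enemies))  # → Chomper
```

In a real engine, the marching loop would be replaced by the physics system’s built-in raycast, and the returned name would be spoken through the game’s spatial audio system rather than printed.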

These announcements are made through 3D or spatial sound. The NavStick is simple. However, the team wanted to test how well it worked in different situations. As such, they did two studies with NavStick.

For the first study, they compared NavStick with the status quo means of surveying an environment: reducing the world down to a menu of items. They called this menu-based tool NavMenu.

They compared NavStick and this menu-based system with each other across different tasks within a virtual grocery store they created. The goal was to explore each tool’s advantages and disadvantages across six tasks.

The players used NavStick and NavMenu to survey the items within a grocery aisle and answered the project team’s questions across the first five tasks. The tasks fell into different categories. The team wanted to see how well the tools performed in the context of wayfinding. How well, for example, did NavStick allow participants to gain a cognitive map of the aisle versus the menu system? And for which tasks did participants prefer each tool?

For the sixth task, they added a simple video game level: a recreation of a room in Terraformers, to look at how participants perceived NavStick within that environment. It turns out participants felt they built better mental maps with NavStick than with NavMenu, and they felt they had more fun with NavStick in that basic game level from Terraformers.

Yet the team found that players preferred the menu for what they call non-directional tasks, such as finding whether an item is present in the room at all. As you can imagine, scrolling through a menu is easier than using NavStick to look around. The research paper has actual numbers to support all of these findings and more.

While it’s nice to objectively look at how well people understand the world using NavStick, it’s more interesting to put NavStick in an actual 3D video game, where it belongs, and see how it performs there. That’s what the team did for the second study.

They custom-built a 3D video game called The Explorer, derived from Unity’s 3D Game Kit, and integrated NavStick into it. Using this game as the study’s test bed, they had seven participants use NavStick to traverse the 3D game world, avoiding obstructions and defeating enemies with a laser gun as they saw fit.

This game world consisted of eight segments or shorter levels. They designed them to possess different environmental characteristics in 3D games that could pose a challenge to NavStick.

Every blind participant completed every level within this game. The NavStick gave players an enhanced sense of agency and control. Looking at the paths participants took, the researchers saw how participants used different strategies to get through the level.

One of the biggest frustrations was occlusion. This makes sense because NavStick only works through line of sight. This drives home the need to allow players to explore behind those occlusions.

Despite this frustration, participants said that NavStick made the experience of playing the game enjoyable. Here’s a quote from a blind gamer. “If you stop to think about it, in most games, the whole point is that you can move the camera. And to me, NavStick was the equivalent of a camera. You’re able to see objects around you. You’re able to know where they are.”

The lab is working to convert NavStick into a plugin for the Unreal Engine. This will allow developers using Unreal to integrate it into their games to make them more accessible. The team’s work with NavStick led to a broader question, which was the subject of the next project.

Game Spatial Awareness

An important part of traversing a game world involves understanding your state within that world. In addition to what’s around you, there are questions like: Where are you? What direction are you facing? What is the size of the area? What is the shape of the area? These questions fall under the realm of spatial awareness.

Spatial awareness is a person’s awareness of their surrounding environment and how they are situated in it. Sighted players have the use of vision, as well as tools like mini-maps.

But what would be an equivalent tool that would grant that experience of being spatially aware of the environment for blind and low-vision players? Or more fundamentally, how well can different tools communicate a sense of spatial awareness to blind players, and what do players desire in the information they want to get?

Spatial awareness tools (SATs) became the subject of the second project. One of the simplest techniques for communicating spatial awareness is through environmental cues. For example, if a player hears a waterfall in some direction, they know there’s a waterfall around them. But relying on environmental cues alone can become overwhelming in more complex environments with more audio cues and more going on.

Other more explicit approaches to communicating spatial awareness exist. These include the use of tactile maps and displays that show the player’s current area.

The question is how well each of these SATs facilitates spatial awareness. The researchers also didn’t know which aspects of spatial awareness matter most to blind players. Given that game designers and developers can’t implement everything, which of these tools should they pick for their game?

The researchers performed another study and created another video game to answer these questions. They implemented four SAT approaches in a 3D game world. They represented tactile maps and displays with a smartphone-based mini-map system: as the player moves through the game world, the map pans and rotates along with them.

They represented echolocation with a whole-room shock wave technique inspired by the enhanced listen mode in “The Last of Us Part II.” NavStick handled directional scanning, and menus were represented by a simple audio menu listing the room’s contents, similar to NavMenu in the NavStick study. The game consisted of four levels within a dungeon-like environment. Nine blind players played these levels using the four tools the team created.

For each level, the players could use one of the four tools. They used that tool to traverse the level from a starting point, going through multiple rooms to collect a key hidden somewhere, using that key to open a lock hidden somewhere else, and then going through that lock to reach a finish checkpoint just beyond it.

Every level followed this same sequence, but they all had unique layouts. The researchers wanted to determine the importance of different spatial awareness types to blind players. They looked at six types of spatial awareness mentioned as important to people with vision disabilities, across a wide body of prior academic work.

These included the scale of an area, the shape of an area, a user’s position and orientation in that area, the presence of items within that area, the arrangement of items within that area, and even an awareness of areas adjacent to the player’s current area.

Then, they interviewed players to learn which types were important. They found that participants believed that knowledge of their position and orientation was the most important. It was always helpful to know their situation within a room to plan out future actions.

Tied for second were item presence, item arrangement, and awareness of adjacent areas. Participants thought that information was helpful, but communicating too much of it could make the game less fun by removing the element of surprise. And at the very bottom were area scale and area shape awareness, which participants felt they didn’t need. These results matter because you would expect a tool for communicating spatial awareness to communicate position and orientation well.

None of the four tools excelled at communicating position and orientation. Participants didn’t think those tools allowed them to know where they were in the level. The researchers believe there’s an opportunity for building tools that better communicate position and orientation within game worlds. They also explored the concept of combining tools. Many participants wanted to combine tools. They wanted multiple ways of surveying their environment.

One combination participants wanted was the directional scanner, AKA NavStick, alongside the simple audio menu. Combining these gave players the greatest spatial awareness out of all the tools. Participants wanted to be able to mix and match tools. Hence, customizability is important. Players have preferences on what kinds of information they want to hear.

There are still open opportunities to better communicate spatial awareness, but there are clear paths forward.

Surveyor

The work with NavStick and spatial awareness tools led to a broader question, which became the subject of the next project. In a typical game, players look around and head toward something that interests them. This is connected to the concept of discovery, which frames games as uncharted territory to be explored and uncovered. Players feel excitement moving through a game world. How can games provide that experience of discovery to blind players? That was the topic of the third project, called Surveyor.

The main issue is that existing approaches for communicating game worlds to blind players don’t provide this experience of discovery. These approaches fall into two main categories. The first involves simplification techniques, where the game world is reduced to a list of items in the environment. This reduces the game to a simple point-and-click task.

The other approach involves spatialization, which gives players some sense of the objects around them through 3D sounds. “The Last of Us” games are known for this. They use sound pings to give players a rough sense of what’s around them. But players can’t do much with those pings, and they’re forced to follow a predefined path to the next objective. That sense of discovery isn’t there.

The experience that blind players get isn’t equivalent to what sighted players get within that same game. The purpose of the project is to create a different way of approaching exploration. To do that, they needed to figure out what kind of scaffolding could achieve this.

The researchers talked to two veteran blind gamers to learn about the abilities they want to effectively explore game environments and experience discovery. The interviews yielded three of these abilities. For the first one, both mentioned they wanted to move through and look around environments, rather than rely solely on menus and maps to discover items.

Exploring Surroundings

Rather than being told where everything is, these participants wanted to go through the actual process of exploring. They also wanted to keep an exploration log to know where they’ve explored and in which direction they should go to uncover more of the world.

Some video games, such as Red Dead Redemption 2, obscure unexplored parts of the world map so players know what they’ve already explored and in which direction they should go to uncover more of the world. And finally, the participants wanted the ability to fast travel to locations they’ve already discovered, an ability many games, like Skyrim, provide.

The next thing to figure out is how to manifest these three abilities within a tool that blind gamers can use. The team developed a new system called Surveyor. It’s an exploration assistance tool for virtual worlds, designed to facilitate a greater sense of discovery.

The team treated those abilities as design goals so Surveyor would respond directly to them. The first is the ability to look around, which NavStick provides: the player points in a direction using the right thumb stick, and the game announces what lies in that direction. In Surveyor’s case, areas swept by NavStick are marked as “seen.”

Everything beyond the player’s current line of sight remains unexplored. This directly hooks into the second component, the exploration log, which is achieved through an exploration tracking system the team built into Surveyor.

For example, in one room, the player’s goal is to find and collect a key card located behind the crates. The best way for a player to find that key card is to go to the edge of the area they’ve explored. That edge is effectively the cusp of the unknown.

So, Surveyor’s primary objective is to highlight these edges, which it does through a specialized menu system the team created. This menu system communicates the unexplored edges that potentially hold items and other things beyond them. It also lists landmarks the player has seen, along with rooms in the player’s current knowledge base that they might not have explored, or might not have fully explored.

Players can select anything in the menu to be guided there through audio beacons. In the case of the key card, the player selects the first edge within the menu, and a series of audio beacons leads them to it. When they’re at that edge, they use NavStick to survey around them, at which point they’ll find the key card.
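The “seen area” bookkeeping and edge-finding described above can be sketched as a toy grid model. This is an illustrative assumption of how such tracking could work, not Surveyor’s actual implementation: the helper names and the cell-grid representation of the level are invented for the example.

```python
def mark_seen(seen, cells):
    """Record cells swept by a NavStick scan as explored."""
    seen.update(cells)

def frontier_edges(seen, walkable):
    """Seen cells bordering unexplored walkable space: the cusp of the unknown.

    seen: set of (x, y) cells the player has surveyed.
    walkable: set of (x, y) cells that exist in the level.
    """
    edges = set()
    for (x, y) in seen:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in walkable and (nx, ny) not in seen:
                edges.add((x, y))   # this cell touches unexplored territory
                break
    return edges

# A 1x5 corridor; the player has surveyed only the two leftmost cells.
corridor = {(x, 0) for x in range(5)}
seen = set()
mark_seen(seen, {(0, 0), (1, 0)})
print(sorted(frontier_edges(seen, corridor)))  # → [(1, 0)]
```

A menu built on top of this would list each frontier cell (or a cluster of them) as a destination, and selecting one would spawn an audio beacon guiding the player there.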

A Comparison of the Tools

How well does Surveyor work? The team created another video game to test how much it helps discovery and how the experience differs from other current in-game tools. The team had nine blind participants in a fully remote study make their way through three of these game levels, using Surveyor and two other tools, representing the status quo.

Participants thought they had a stronger sense of agency in exploring the world with Surveyor, and they felt encouraged to explore the levels more. Participants who used the menu could select where they wanted to go and head straight there. The shock wave, by contrast, didn’t provide any information about side rooms.

Participants found the simple audio menu easier to use than Surveyor. This aspect of the menu led to a more relaxing experience where they could easily traverse the level, which explains why a couple of participants preferred the menu over Surveyor. But some of them also noted that they would need more time to get used to Surveyor.

This study session using the three tools on these levels lasted 90 minutes. Most participants found the menu less interesting to use. One participant described their experience with the menu as “sterile and soulless.” It reduced the game to a simple point-and-click task. And they just had to go from point to point to finish the level.

And finally, every participant felt they could complete the level quickly with the shock wave, which facilitated the fastest level completion times of all three tools. However, most participants disliked the lack of exploration and disapproved of the enhanced-listen-mode-style approach.

All three of these projects, NavStick, the SAT project, and Surveyor, are efforts to follow that approach of looking at the experiences granted to users and building around them, instead of approaching the accessibility of game worlds mechanically, through the lens of information alone.

Translating These Concepts to Physical Experiences

This is just the beginning. There are many areas in which to continue this work, in games and beyond. And so again, the fundamental question is: what kinds of experiences can be translated to bring that sense of joy and fulfillment to users?

One of the main goals of equivalent accessibility is to allow blind players to play the same mainstream games with just as much fun and fulfillment as sighted players. There’s another layer to equivalent accessibility. And that revolves around the concept of multiplayer gaming. Or what academic literature calls mixed-ability multiplayer.

The Holy Grail is to create experiences that are so equivalent that both blind and sighted players can play together on the same field. This could open a new class of games and social experiences.

A fourth major project of Vishnu’s PhD deals with digital image accessibility. It focuses on creating tools that make it easier for blind users to explore images with their fingers on a touch screen, such as on a smartphone. The project team created several techniques; one example was a menu and beacon system that lists the components of an image and guides users toward those points.

They used a neural net to highlight areas that might be perceived as important. They also created a zoom-like technique that lets users blow up parts of an image so they can more easily explore its minute details.
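The idea behind such a zoom technique can be illustrated with a toy nearest-neighbor magnifier. This is a sketch under stated assumptions, not the project’s actual code: the function name is invented, the image is a plain 2D list of pixel values, and a real implementation would use an imaging library and feed the enlarged region back to the touch-exploration interface.

```python
def zoom_region(image, top, left, size, factor):
    """Blow up a square region of a grayscale image via nearest-neighbor scaling.

    image: 2D list of pixel values; top/left/size select the region; factor >= 1.
    """
    out = []
    for r in range(size * factor):
        row = []
        for c in range(size * factor):
            # each source pixel is repeated factor x factor times
            row.append(image[top + r // factor][left + c // factor])
        out.append(row)
    return out

img = [[1, 2],
       [3, 4]]
print(zoom_region(img, 0, 0, 2, 2))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Enlarging a region this way gives a finger more surface area per pixel, which is what makes small details easier to explore by touch.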

The default way of describing an image is via alt text. But that involves a sighted person imposing a description of the image on the blind user. Perceiving images involves understanding what the image is showing as well as how parts of the image relate to one another.

It also involves figuring out which parts of the image are more and less important. Eventually, the goal is to allow users to form their own understanding of an image, without someone else imposing a description. There are meaningful applications for this as well.

The final area is the physical world and navigating it. Six years ago, Vishnu tested a turn-by-turn indoor navigation system that he created. The system used different techniques, like Bluetooth beacons and other computer vision-like techniques to accurately pinpoint where a person was in a building.

Turn-by-turn navigation gets you from point A to point B and gets the job done. But it is confining. A blind user doesn’t have much freedom to do what they want with a system like this; they must follow every instruction. So, what experiences and abilities would make navigating the physical world richer and more interesting?

Perhaps it’s the same things the researchers found in video games. For example, learning about a space by looking around. Perhaps it involves quickly gaining an awareness of one’s orientation and position and where items are: in other words, spatial awareness.

Sometimes navigation isn’t about getting from point A to point B. It’s about experiencing the world and discovering new and interesting things along the way. These ideas can translate to parks, shopping centers, buildings, or museums.

These experiences might look different in the real world, but they still apply. Because at the end of the day, achieving equivalent accessibility means realizing the experiences, however simple they may be, that bring joy and fulfillment to users. Following this line of thinking can open up many new opportunities for users, whether personal, social, or professional.

Going forward, think about how everyone can move toward a future where technology can allow everyone to share all these core experiences, regardless of their abilities.

Video Highlights

Watch the Presentation

Resources

Leveling the Playing Field resources

Bio

Vishnu Nair is a Ph.D. Candidate in Computer Science at Columbia University, focusing on designing, building, and testing interactive systems to enhance how people experience the world. His work primarily focuses on accessibility in various domains, from indoor navigation systems during his undergrad, to video games and digital media in his Ph.D., to virtual collaboration during a recent stint at Microsoft.

His work has been published at top-tier human-computer interaction (HCI) conferences, and his NavStick project is the subject of a pending patent. A lifelong New Yorker, Vishnu enjoys reading, volunteering, listening to live music, and roaming around NYC’s many museums. Visit Vishnu’s website.

