In November, the Oculus Developer Blog published a post called Introducing the Accessibility VRCs. These nine Virtual Reality Checks (VRCs) are a set of technical recommendations designed to help developers create more accessible software for the Quest and Rift platforms. It is a good sign when a prominent tech company like Facebook begins to take accessibility seriously. These guidelines are a step in the right direction, and we applaud their efforts.
We recommend updating the guidelines to include audio description. Blind people rely on audio descriptions to access the information contained in multimedia such as VR. Additionally, existing guidelines under WCAG 2.0 already require audio description for prerecorded multimedia (Success Criteria 1.2.3 and 1.2.5).
The Oculus guidelines do address the need for audio cues and augmentation. At a high level, one could argue that audio descriptions are covered under VRC.Quest.Accessibility.3:
“When possible, the application should provide clarity and direction to the user through a combination of visual, audio, and/or haptic feedback instead of relying on one form of feedback.”
And yet, it would be easy for a developer to read this VRC and miss the need for audio descriptions. The examples of audio cues that Oculus provides on their developer blog are things like text-to-speech scenarios, optical character recognition, and labeling elements so that they are announced when encountered.
All of these things are important, and so is the thorough documentation Oculus has provided on good sound design and audio mixing considerations. But on their own, they do not fully answer the fundamental questions that people with visual disabilities so often face in VR:
- Where am I?
- What is happening to my left, to my right?
- Is my environment static or in motion?
This is why audio descriptions are essential.
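To make the distinction concrete, here is a minimal sketch of what scripted audio description could look like in a browser-based WebXR experience. It is written in TypeScript, and the zone names, clip URLs, and onEnterZone hook are all hypothetical; the idea is simply that the environment announces where the user is and what surrounds them, rather than leaving orientation to incidental sound cues.

```typescript
// A minimal sketch of scripted audio description for a VR scene.
// The zone names, clip URLs, and the onEnterZone hook are hypothetical.

type ZoneDescription = {
  clipUrl: string;      // pre-recorded narration, e.g. voiced by a human describer
  fallbackText: string; // text spoken via speech synthesis if the clip cannot play
};

const zoneDescriptions: Record<string, ZoneDescription> = {
  lobby: {
    clipUrl: "/audio/desc-lobby.mp3",
    fallbackText:
      "You are in the museum lobby. A staircase rises on your left; " +
      "the gallery entrance is ahead and slightly to your right.",
  },
};

async function describeZone(zoneId: string): Promise<void> {
  const desc = zoneDescriptions[zoneId];
  if (!desc) return;

  const clip = new Audio(desc.clipUrl);
  try {
    // Prefer the recorded description: it answers "Where am I?" and
    // "What is around me?" directly, in the describer's own words.
    await clip.play();
  } catch {
    // If playback is blocked or the asset is missing, fall back to
    // speech synthesis so the description is not lost entirely.
    speechSynthesis.speak(new SpeechSynthesisUtterance(desc.fallbackText));
  }
}

// Hypothetical hook from the VR framework: called when the user
// teleports or walks into a new zone.
// onEnterZone((zoneId) => describeZone(zoneId));
```

The specific API matters less than the design choice: the description is delivered proactively, at the moment the user's surroundings change, instead of asking a Blind user to infer the layout of a scene from ambient sound alone.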
Our recently published article Equal Entry Guidelines for Describing 360-Degree Video details our research based on feedback from Blind and low-vision participants. Take a look and let us know what you think.