Building a More Accessible Social Virtual Reality World


Imagine trying to navigate a virtual space without being able to see the objects in it. If a host tells everyone to move to the outside space and you can’t tell which part of the virtual world is outside and which is inside, how do you do that?

If the host says, "Please line up in front of the microphone to ask your question," how do you do that if you can’t see the microphone? If there are 50 people in a virtual environment and you want to find the one person you know in that space, how do you do that?

Solving these problems requires a new way to explore the virtual space and navigate between the objects in it.

Equal Entry has been hosting events in virtual reality as part of our Accessibility Virtual Reality Meetup for two years. The team wanted to innovate and create a more accessible social virtual reality world, so we worked with developer intern Owen Wang to build new functionality that makes Mozilla Hubs easier to use for people who are blind or have low vision.

In case you didn’t know, Mozilla Hubs is open source. Because of that, we could create a separate environment to use as a sandbox for enhancing the accessibility of the social virtual reality world.

You can add objects to the scene, and each one comes with a name and a role. To add an object, select "Place" from the menu and then "3D Model."

Screenshot of Hubs in a long room with stairs to a stage and a whiteboard on the right, with "Place" selected from the menu.

The interface opens Sketchfab. Enter the object you want; in this example, we search for "cat" and place two different cats.

Searching for "cat" on Sketchfab shows results with several different cats and one oddball character that's not a cat.
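One of the enhancements covered later in this recap is pulling label information for imported objects automatically from Sketchfab, which stores a name and description for every model. As a rough sketch of the idea only, the TypeScript below fetches that metadata through Sketchfab's public Data API; the error handling, fallback strings, and the hypothetical model UID are assumptions for illustration, not Hubs' actual import code.

```typescript
// Hedged sketch: pull a model's name and description from Sketchfab so an imported
// object can be labeled automatically. Illustration only, not Hubs' implementation.
interface SketchfabLabel {
  name: string;
  description: string;
}

async function fetchSketchfabLabel(modelUid: string): Promise<SketchfabLabel> {
  // Assumes Sketchfab's public Data API endpoint for model metadata.
  const response = await fetch(`https://api.sketchfab.com/v3/models/${modelUid}`);
  if (!response.ok) {
    throw new Error(`Sketchfab request failed: ${response.status}`);
  }
  const data = await response.json();
  return {
    name: data.name ?? "Unnamed model",
    description: data.description ?? "",
  };
}

// Hypothetical usage: label the imported object with the model's own metadata.
// fetchSketchfabLabel("<model-uid>").then((label) => console.log(label.name));
```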

Select the "Objects" menu and you'll see both cats have the same name and role: "Cat, 3D Model." Cat is the name and 3D Model is the role. That name is not very helpful to someone using a screen reader.

Screenshot of Hubs room showing two different cats and the Objects menu where both cats are named "Cat, 3D Model."

Select "View Description" to open the details about each cat. The Object Description window shows the name, role, and object description. The role stays the same, but we change the name to "Fluffy cat" and the description to "Fluffy khaki cat on a stand."

The "Rename this object" popup hides the cat in question; the box contains "Fluffy cat." On the right is the Object Description window showing the name "Fluffy cat," the role "3D Model," and the description "Fluffy khaki cat on a stand."

But there's a problem when we make the change: both the rename and description boxes pop up over the cat, so we can't see the cat we're trying to describe. We make a note to fix this in a future update. Now that both cats have new names, you can tell which one is which.

Hubs room with both cats in the scene. The Objects menu now shows three items, including "Fluffy Cat, 3D Model" and "Cat Warrior, 3D Model." Select "View Description" for their details.
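Conceptually, each object now carries three pieces of accessible metadata: a name, a role, and a description. The TypeScript sketch below only illustrates that structure and the string a screen reader might announce; the interface and helper function are invented for this example and are not Hubs' internal API.

```typescript
// Illustrative only: the accessible metadata exposed in the Object Description window.
// The interface and helper below are assumptions for this example, not Hubs' actual API.
interface AccessibleObjectInfo {
  name: string;        // e.g. "Fluffy cat" (editable)
  role: string;        // e.g. "3D Model" (stays the same)
  description: string; // e.g. "Fluffy khaki cat on a stand" (editable)
}

// Build the string a screen reader could announce for an object.
function describeObject(info: AccessibleObjectInfo): string {
  return `${info.name}, ${info.role}. ${info.description}`;
}

const fluffyCat: AccessibleObjectInfo = {
  name: "Fluffy cat",
  role: "3D Model",
  description: "Fluffy khaki cat on a stand",
};

console.log(describeObject(fluffyCat));
// "Fluffy cat, 3D Model. Fluffy khaki cat on a stand"
```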

Open the chat box and enter “/fov,” which stands for field of view.
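Behind a command like "/fov," something has to work out which objects currently fall inside the camera's view frustum. Since Hubs is built on three.js, here is a minimal sketch of that check using a three.js frustum test; the function is an illustration of the general technique, not the project's actual implementation.

```typescript
import * as THREE from "three";

// Illustrative only: list the names of meshes inside the camera's view frustum,
// the kind of result a "/fov" command could post to chat.
function objectsInFieldOfView(camera: THREE.PerspectiveCamera, scene: THREE.Scene): string[] {
  camera.updateMatrixWorld(); // make sure matrixWorldInverse is current
  const frustum = new THREE.Frustum();
  const viewProjection = new THREE.Matrix4().multiplyMatrices(
    camera.projectionMatrix,
    camera.matrixWorldInverse
  );
  frustum.setFromProjectionMatrix(viewProjection);

  const names: string[] = [];
  scene.traverse((object) => {
    const mesh = object as THREE.Mesh;
    if (!mesh.isMesh || !mesh.geometry) return;
    if (!mesh.geometry.boundingSphere) mesh.geometry.computeBoundingSphere();
    // Test the object's world-space bounding sphere against the frustum.
    const sphere = mesh.geometry.boundingSphere!.clone().applyMatrix4(mesh.matrixWorld);
    if (frustum.intersectsSphere(sphere)) {
      names.push(mesh.name || "unnamed object");
    }
  });
  return names;
}
```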

Screenshot of Hubs showing the chat on the right and the NVDA Speech Viewer on the left. Both show the three objects in the field of view.

The NVDA screen reader announces the two objects and one avatar, which is something to celebrate. In a future release, we'll make the chat content more readable; the same goes for the avatar name, which is barely legible on the light background.
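For NVDA to pick that text up at all, it has to land somewhere the screen reader watches. A standard web technique for this is an ARIA live region: a visually hidden element whose text changes are announced automatically. The sketch below shows that general technique; the element id and function name are invented for this example and do not reflect Hubs' actual markup.

```typescript
// General web technique (illustration only): announce text through an ARIA live region
// so screen readers such as NVDA speak it. The "sr-announcer" id is a made-up example.
function announceToScreenReader(message: string): void {
  let region = document.getElementById("sr-announcer");
  if (!region) {
    region = document.createElement("div");
    region.id = "sr-announcer";
    region.setAttribute("role", "status");
    region.setAttribute("aria-live", "polite");
    // Visually hide the region while keeping it available to assistive technology.
    region.style.position = "absolute";
    region.style.width = "1px";
    region.style.height = "1px";
    region.style.overflow = "hidden";
    region.style.clipPath = "inset(50%)";
    document.body.appendChild(region);
  }
  // Updating the text content prompts the screen reader to announce it.
  region.textContent = message;
}

// e.g. announceToScreenReader("In view: Fluffy cat, Cat Warrior, and 1 avatar");
```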

Watch this recap to learn how we:

  • Added the ability to describe avatars.
  • Automatically included label information from Sketchfab for objects imported into the world.
  • Enabled custom functionality in the chat interface to work with screen readers.
  • Added user interface buttons to label and describe any spatial object.
  • Added synthesized speech output of descriptions (a minimal sketch of the idea follows this list).
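For the synthesized speech output mentioned in the last item, the browser's built-in Web Speech API is one way to speak a description aloud even without a screen reader running. The following is a minimal sketch of that general approach, not necessarily how the Hubs enhancement is implemented.

```typescript
// Minimal sketch using the standard Web Speech API (illustration only).
function speakDescription(description: string): void {
  if (!("speechSynthesis" in window)) return; // speech synthesis not supported
  const utterance = new SpeechSynthesisUtterance(description);
  utterance.rate = 1.0;
  window.speechSynthesis.cancel(); // stop any announcement already in progress
  window.speechSynthesis.speak(utterance);
}

// e.g. speak the description entered in the Object Description window:
speakDescription("Fluffy khaki cat on a stand");
```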

We’ve pulled together the resources and highlights from this meetup. Explore them below and watch the video.

Speaker Bio

Thomas Logan has spent the past 19 years helping organizations create technology solutions that work for people with disabilities. Over his career, Thomas has delivered projects for numerous federal, state, and local government agencies as well as private-sector organizations ranging from startups to Fortune 500 companies.

He is the owner of Equal Entry, whose mission is “contributing to a more accessible world.” Equal Entry helps companies that build digital technologies achieve this through training, education, and accessibility auditing of websites, desktop apps, mobile apps, games, and virtual reality.

He is also the organizer of AccessibilityVR and co-organizer of AccessibilityNYC, monthly Meetups for people interested in topics related to accessibility and people with disabilities. Thomas lives in Tokyo, Japan.

Resources

Highlights

Virtual Reality Help and Services

Our years of experience working with virtual reality, as well as writing and speaking on the topic, have given us a unique perspective when it comes to consulting on VR projects. If you’d like to innovate in the accessibility of XR, please contact us to discuss how we can help.

