Designing for Bodies: Accessibility in Virtual Reality

Image Description: Richard Hoagland, a bald white man with facial hair, alongside the A11yVR logo.

Designing for Bodies: Accessibility in Virtual Reality is about the challenges developers and designers face in building accessible XR experiences, given that XR is an inherently embodied medium and many standard methods of interaction and interface require complex bodily movements.

As the VRTogether team came to understand the core design pillars of their platform, they realized accessibility had to be top of mind in every design conversation; otherwise, they would alienate a significant portion of users within their target groups. The challenge for the VRTogether team has been understanding how to build a novel social VR world, rich in social and physical interaction, that is accessible to as large a user group as possible.

About Dot to Dot

In this presentation, Richard Hoagland works through a case study of the team's first attempt to implement key content for VRTogether: an activity called Dot to Dot, in which two people foster social connection by solving a casual puzzle together. The challenge was making the experience fun and engaging both for users drawing with handheld controllers and for those using gaze-based input, and, importantly for the VRTogether team, supporting both interaction modalities at the same time so the two players can interact.

Richard shares these findings in the hope of helping others facing their own challenges in designing for accessibility in VR. The team interviewed almost 200 people in senior care facilities. Made up of virtual reality developers with a background in games, the team's goal on this project is to reduce loneliness and isolation for their target audience. This market has significant accessibility needs, which is why accessibility is one of the product's four pillars.

Since much of their target audience is unfamiliar with virtual reality, VR games, and controllers, the team set out to create a simple, enjoyable game called Dot to Dot. You draw lines between numbered dots in order: 1, 2, 3, and so on.

Once you draw all the lines, the completed drawing becomes a 3D object placed in the virtual world. Users have two options for connecting the dots: drawing lines with the controller or using eye gaze. You can tell the two apart by whether a virtual hand is drawing the line (controller) or the line appears to draw itself (eye gaze).

Lessons Learned

The team learned that supporting multiple input methods increases development time, especially under the tight deadlines and limited resources of a startup. They struggled to include the voices of end users in the process. To compensate and keep the project on track, they worked to cut unneeded work from their sprints. This led to repeated decisions not to support gaze-based input for new features.

They realized they were finding reasons to push things off, and that is how features end up excluded, even when accessibility is one of your pillars. As a result, they had to sacrifice guiding the user and giving them feedback.

Three lessons learned:

  1. Bake accessibility into your development DNA. Stay on top of your core pillars.
  2. Most challenges faced were not accessibility-related.
  3. Be diligent about including the voices of end users and experts.

Bio

Richard Hoagland is the founder and CEO of VRTogether. The team is developing an evidence-based virtual reality platform to reduce social isolation and loneliness in vulnerable populations. Richard has worked for 10 years in virtual and augmented reality.

His accomplishments include launching Daydream Blue, the first social VR game for mobile VR; winning the Gold Prize in the 2015 Oculus Mobile VR Jam; winning 1st Prize in the Meta 2021 VR Hackathon; and being awarded a National Science Foundation SBIR for VRTogether, on which Richard serves as Principal Investigator. Richard is passionate about finding the affordances of new technology that support unique solutions with a meaningful impact on people's lives.

Audio Description

First audio description

Video at the 7:40 mark.

“VRTogether” appears on screen. Two kids in separate locations put on VR headsets. A middle panel shows them together on a beach.

Onscreen: “Agency: children and their families choose where to go and what to do.”

Avatars fist bumping each other and laughing.

Onscreen: “Accessibility: Everyone is able to participate due to modifiable accessibility settings.”

A woman wearing a headset says VR is more fun than talking on the phone.

Back in the nature scene, two characters hold hands. An older man wearing a headset speaks, with the caption “Do you want to dance?” Two avatars dance together.

Onscreen: “Administration: Staff controls privacy and support through an Android tablet on a closed network.” It includes admin controls for the environment, activity settings, avatar appearance, and safety and privacy.

They’re building a Paris cafe experience based on feedback from senior care facility pilots. The scene shows two people sitting as virtual avatars at a Paris cafe with the city as a backdrop. Above the avatars hang two works of art by Parisian artists, and on the table sit two sculptures.

Second audio description

Video at the 16:30 mark.

The first video shows a forest scene with a small stream. Using a controller, the player moves a virtual hand to connect the dots in numbered order from 1 to 20; at the end, the completed drawing reveals a 3D tent. The second video shows the same forest scene, but the line connecting the dots from 1 to 20 is drawn with gaze-based input, again revealing a 3D tent at the end.
