Q&A with Cameron Cundiff, Software Engineering Leader

Cameron Cundiff is a software engineering leader with a focus on Voice UI and digital accessibility. His work in accessibility has been recognized by the White House, among others. When he’s not at Adobe, he works on AccessLint, an accessibility testing tool for CI, and creates drawings and paintings.

When did you first get started in accessibility?

I started working on accessibility at Adobe, where Andrew Kirkpatrick hired me as an intern on the accessibility team. I met Andrew at an A List Apart conference focused on web standards. At the time, I was deep into semantics, information architecture, and web standards. My interest in “clean” markup was strictly aesthetic, though, until I learned about accessibility and worked with Andrew at Adobe. I saw that standards and semantics are an urgent necessity for people using assistive tech.

You have done a lot of technical work for Etsy. You were an engineering intern in 2009, and eight years later you became a Senior Software Engineer on their Accessibility team. Did you notice any significant changes with respect to accessibility during your time there?

When I started my internship at Etsy, the company was about 60 people. Parts of the code still had leftovers from its Dreamweaver beginnings. It was a technically rocky period for Etsy. Still, Maria Thomas, CEO at the time, and Chad Dickerson, then the newly minted CTO, both lit up when they heard about my accessibility experience. I remember seeing Maria write down “accessibility” with double underlines in her notes.

I owe Chad a deep debt of gratitude for navigating that period and creating an engineering culture where, later on, the accessibility team I joined could work with high-level organizational support. We worked on sensitive production code, built the beginnings of a solid accessibility engineering culture, and set the stage for automation and testing. Leadership support was essential, since we were making big changes to important content. For example, we added a focus outline feature that applied to the entire buyer-facing website. It takes an enormous amount of trust and support to make that happen.
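
To give a sense of what a site-wide change like that involves, here is a minimal sketch in TypeScript, assuming a browser environment. It is not Etsy’s actual implementation; the selector, color, and offset are illustrative.

```typescript
// Minimal sketch (illustrative, not Etsy's implementation):
// inject a single CSS rule so every keyboard-focused element
// gets a visible outline across the whole site.
const focusStyle = document.createElement("style");
focusStyle.textContent = `
  :focus-visible {
    outline: 3px solid #005fcc; /* illustrative color */
    outline-offset: 2px;
  }
`;
document.head.appendChild(focusStyle);
```

Even a change this small touches every page, which is why the organizational trust he describes matters.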

How did you get into Voice UI at Adobe?

During my time at Etsy, I saw that Voice UI was a big change in the way people would interact with applications. We were doing a usability study with a blind participant, and he mentioned Alexa devices. He said something to the effect of, “I don’t usually use my phone for shopping, except when I heard about the Echo. I was so excited I had to buy it on the cab ride home.” A couple of months later I had the chance to join Sayspring, where we worked on design tools for Voice UI, and within a year we were acquired by Adobe. I feel so fortunate for the timing and the opportunity to be at Sayspring and now Adobe. It’s also fun to be back where it all started!

What is an accessibility barrier you would like to see solved?

I’ve been deep in the weeds of Voice UI, and there’s a very specific problem I see on the horizon. Voice UI is great for a broad range of people, but many people communicate non-verbally. We’ll still have to think carefully about how we build experiences that include voice controls for people who are Deaf or Hard of Hearing, in particular.

One approach is to improve the recognition of Deaf accents in speech recognition technology. This will improve somewhat over time as the training data and technology mature, but we should be conscious about including Deaf accents in speech model training data. I’ve read some research by HCI folks at CMU on this, and I hope that work continues. There are also some cursory experiments in sign language recognition with the Echo Show. I’m not as convinced by this line of thinking, since it tacks sign language support on instead of designing an inclusive UI from the start. The best approach, as far as I can tell, is making experiences multimodal, so people can interact with a screen, with voice, or with both. Now is the time to solve these problems, while foundational decisions are being made.
