Summary
Through his work, Cameron develops tools to make digital spaces more inclusive. He created the Accessibility Nerd project to explore the intersection of technology, software engineering, and AI in accessibility. One of his projects is Image Describer, which helps blind and low-vision users understand images online. Another is a11y-agent, which supports developers in making their code more accessible.
He emphasizes that AI isn’t a replacement for human expertise but a powerful assistant that helps developers identify and fix accessibility issues efficiently. With advancements like AI-powered descriptions and interactive developer assistants, accessibility is becoming more seamless and effective. It’s helping developers and users navigate the digital world with greater ease.

In this episode of A11y Insights, Thomas Logan and Ken Nakata welcome Cameron Cundiff, the creator of a11y-agent, an accessibility remediation tool that is unique in the way it combines traditional programming, AI prompting, and human judgment to accelerate the accessibility remediation process.
Accessibility Nerd
Thomas Logan: Hello, everyone. This is Thomas Logan from Equal Entry here with Ken Nakata of Converge Accessibility. And in this episode of A11y Insights, we’re talking about all kinds of cool technology progress and possibilities with artificial intelligence and other technology tools to make a more accessible world.
We’re really excited today to have Cameron Cundiff. For the sake of our audience, Cameron, could you please tell us a bit about your background and your role in accessibility?
Cameron Cundiff: Yeah. Thanks Thomas. I’ve been in accessibility engineering for pretty much my whole career. I started as an intern at Adobe’s accessibility team in 2007, and I’ve continued in the space until today.
Right now, I’m a tech lead manager at Asana, and very recently I’ve been focused on accessibility at Asana, helping build the accessibility program. My professional background has always had a theme of accessibility, not exclusively, but it’s a recurring thread. I also run a YouTube channel called Accessibility Nerd, and I build open-source software around accessibility, testing, automation, and AI.
Ken Nakata: That sounds cool. Can you tell us more about Accessibility Nerd and what the goals are?
Cameron Cundiff: Sure. Accessibility Nerd started as a way for me to share ideas, explorations, and new technologies that I thought would be interesting for the accessibility community. The intent was really to bridge the gap between traditional accessibility techniques, software engineering, and then more recently, AI and gen AI developments.
It’s not strictly wrapped around gen AI, but that is a big focus.
Thomas Logan: And I like what you’re doing, because a lot of the content you’re creating covers things other developers might have worked on but never talk about, so those conversations people could learn from never happen.
What you’re doing is unique. What prompted you to get started?
Cameron Cundiff: Well, the basis for my interest is mostly out of curiosity, at least as it relates to Accessibility Nerd. I have an itch around sharing my work. It’s why I work in open source. That’s why I do these videos.
I feel compelled to share the work that I’m doing. I get kind of a rush from that. I can’t help myself. Standing up a YouTube series was kind of an obvious next step.
Image Describer tool
Thomas Logan: One of the projects you’ve talked about on Accessibility Nerd is Image Describer. Could you give us a quick explanation of what that is and how you use it?
Cameron Cundiff: Image Describer is a Chrome extension that lets blind and low-vision people get text descriptions of images within web content. I created Image Describer as part of a hackathon for Google, and it uses Gemini’s gen AI APIs along with Chrome’s extension APIs to interact with web content and generate descriptions.
Thomas Logan: Awesome. Can we see a demo?
Cameron Cundiff: I am on the New York Times homepage, and I can right-click on an image and choose “Image Describer” > “Describe Image” from the context menu. That pulls up a side panel with a thumbnail of the image and a text description. In this case, it’s “An aerial shot of a port with stacks of colorful shipping containers. Various machines maneuver between the containers, and a lone person walks across the lot.”
I’m reading these descriptions aloud for the sake of your blind listeners. Then I can ask a follow-up question. In this case, it says there are various machines, so I ask, “What are the machines doing?” And it says, “The machines are maneuvering, lifting, and moving shipping containers around the port.”
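For readers curious about the plumbing behind a tool like this, here is a minimal, hypothetical sketch of how a Chrome extension could wire a “Describe Image” context-menu item to a Gemini image-description call. It is not Cameron’s actual implementation; the model name, prompt, and message passing to the side panel UI are assumptions.

```typescript
// Hypothetical background service worker for an Image Describer-style extension.
// Assumes the @google/generative-ai client plus Chrome's contextMenus and sidePanel APIs.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI("YOUR_API_KEY"); // placeholder key
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // assumed model

chrome.contextMenus.create({
  id: "describe-image",
  title: "Describe Image",
  contexts: ["image"], // only show the menu item when right-clicking an image
});

chrome.contextMenus.onClicked.addListener(async (info, tab) => {
  if (info.menuItemId !== "describe-image" || !info.srcUrl || !tab?.id) return;

  // Open the side panel right away, while the user gesture is still active.
  await chrome.sidePanel.open({ tabId: tab.id });

  // Fetch the image the user right-clicked and base64-encode it for the API.
  const bytes = new Uint8Array(await (await fetch(info.srcUrl)).arrayBuffer());
  const base64 = btoa(Array.from(bytes, (b) => String.fromCharCode(b)).join(""));

  const result = await model.generateContent([
    "Describe this image concisely for a blind or low-vision reader.",
    { inlineData: { data: base64, mimeType: "image/jpeg" } }, // assumes a JPEG source
  ]);

  // Hand the description to the side panel UI (which would render the thumbnail and text).
  await chrome.runtime.sendMessage({ type: "description", text: result.response.text() });
});
```

The follow-up questions Cameron demonstrates could be layered on top of this using the SDK’s multi-turn chat interface, with the earlier description and the user’s question carried along as prior turns.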
Ken Nakata: Yeah. I love the ability to ask follow-up questions.
Cameron Cundiff: Yeah. That was an insight I had based on feedback from a user group on Reddit called r/Blind. They also have a Discord channel, and I shared the extension there and got really good community feedback; that idea came out of that feedback.
Thomas Logan: That’s great to know that there’s a community you can go to, work with, and get feedback from.
Ken Nakata: Well, that is probably one of the coolest pieces of AI tech that I’ve seen for accessibility. But what else do you like in AI accessibility solutions? What other solutions excite you, or are you impressed by?
AI solutions that impress
Cameron Cundiff: I’ve been excited to see the developments around the Meta Ray-Ban glasses for accessibility use cases. If you’re not familiar, Ray-Ban and Meta have a collaboration: they release sunglasses with video cameras embedded in the frames, and presumably the most common use case is sighted people who want to take pictures or record video as they move around the world.
The use case for blind people, I think, is even more interesting. Meta has partnered with Be My Eyes more recently; they first did a video call integration with Be My Eyes, and now they actually have live video streaming that uses gen AI to describe what you’re looking at through the glasses.
And so that’s exciting to me. I think that’s a big opportunity in the assistive technology space. Super interesting.
Thomas Logan: Yeah, that’s inspiring. You also work quite a bit with software developers and software development, so could you talk about how AI can work in the space of the development workflow?
Cameron Cundiff: I’ve done some experiments with AI as it relates to accessibility engineering. The most obvious touch points, I guess, are around code generation and augmenting human judgment. AI on its own is demonstrably poor at creating robust, accessible UI without any sort of guided prompts. This can and will change, but for now, with a human in the loop, you get much better results.
And I think that part will always be true. That’s how I’m thinking about AI as it relates to accessibility: not as a replacement for subject matter expertise, but as an assist to software engineers who may not know a lot about accessibility, yet know enough to identify gaps and can use gen AI to support them in their work.
Accessibility Agent
Thomas Logan: You mentioned one of your projects is a11y-agent. Can you explain what that is and how that works?
Cameron Cundiff: One of the insights I’ve had around gen AI as it relates to software engineering is that it falls short out of the box for a lot of the development tasks you give it.
That’s especially true around accessibility. GitHub Copilot, for example, is great at giving developers a leg up on authoring content, but to date these tools have fallen short in places like accessibility. So I’ll show you my answer to this problem, which is a11y-agent. a11y-agent is a command-line tool that you give a React file, a JSX file. React is a popular web framework; I picked React because it’s ubiquitous. When you run the command, the agent checks for accessibility issues and then attempts to collaborate with you on fixing them, essentially working alongside you to support your work.
In this demo, I give a11y-agent a React template file, in this case a TSX file, and it runs some static analysis on the template, looking for a suite of possible errors. But instead of stopping there, which is what most linters do, or just giving you some high-level guidance, the agent attempts to apply changes that will fix each issue.
In this case, it’s not actually operating on the file yet, but it’s suggesting a fix for a missing lang attribute on the HTML element, and it’s adding in lang=”en”. I can accept the change, I can skip it, or I can ask it to explain the change, which is interesting. Here I hit “e” and it says, “Add a lang attribute to the HTML element.”
It then gives some guidance on how the attribute sets the primary language of the document and why that’s important for screen readers and assistive technologies to interpret and pronounce the content. And I’m thinking, “Okay, yeah, that sounds great, I’ve learned something,” presumably, if I didn’t already know that, in the process of making these changes.
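As a purely illustrative before-and-after (not the agent’s literal output), the change amounts to one attribute on the root element of a full-document template:

```tsx
// Before: no language declared, so assistive technology has to guess how to pronounce the content.
export const DocumentBefore = () => (
  <html>
    <body>Hello, world</body>
  </html>
);

// After: lang="en" tells screen readers which language profile to use for the page.
export const DocumentAfter = () => (
  <html lang="en">
    <body>Hello, world</body>
  </html>
);
```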
And then I’ll show you one more example as it continues; this is an interactive process. In this case, there is an issue around avoiding the words “image,” “picture,” and “photo” in image alt text. Instead of alt=”photo”, it’s providing me with a new value for the alt text. That new value is incorrect for now, because the agent doesn’t know what the photo looks like, but it’s directionally helping me understand that I need to put new alt text in. For now, I’m going to hit no, and I can continue through this interactive process of applying changes, explaining them, and having a conversation with the AI to learn something and fix some of these issues.
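As another illustrative before-and-after: screen readers already announce an img element as an image, so words like “photo” in alt text are redundant, and the agent’s suggested replacement only becomes correct once a human who can see the picture fills in what it actually shows.

```tsx
// Before: flagged by the rules -- "photo" adds no information, since screen readers
// already announce the element as an image.
export const Before = () => <img src="port.jpg" alt="photo" />;

// After: the human supplies the real content; the agent can only point in this direction.
export const After = () => (
  <img
    src="port.jpg"
    alt="Aerial view of a container port with machines moving colorful shipping containers"
  />
);
```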
Thomas Logan: Would the flow be that the developer is working on this file and runs it when they’re ready to check it in, or where in their process do you envision them running these checks?
Cameron Cundiff: I imagine this happening at multiple points in the development process. I think it would be most likely that developers would be doing some editing and running the agent as an intermediate step.
In the same way that you’d run your tests, potentially as you’re building features, you could run a11y-agent and make changes along the way.
Ken Nakata: It also seems as though it helps educate the developer in the process through this call-and-response loop you’ve got. I like that kind of system because it doesn’t just dump the results in the developer’s lap and say, “Here, go fix all this stuff.”
Cameron Cundiff: Yes, that’s intentional. I think that’s a failure mode for a lot of accessibility tooling; it just gives you a wall of issues that are sort of intractable and hard to sink your teeth into.
Thomas Logan: And are you doing any learning with your agent? What have you heard from people using it? I’m curious whether you’ve gotten user feedback on how that process works and any takeaways from it so far.
Cameron Cundiff: For me, the people who are most excited about this are people who are adjacent to accessibility. They know accessibility is a requirement. They’re invested and committed to making it happen, but they don’t have the subject matter expertise to go and remediate these files quickly.
The examples I showed would be pretty straightforward for a seasoned accessibility expert to just go through in the editor and fix. This is really geared more towards a mainstream software engineer who’s interested and committed to accessibility, but doesn’t have that much domain expertise.
Thomas Logan: I think that’s great, because those are the majority of the people we’re trying to influence, right? There’s a much smaller number of people who know accessibility backwards and forwards. That’s logical.
Does AI improve code?
Ken Nakata: Cameron, you mentioned hallucinations a little while ago, which kind of leads to the next question: how does the developer know that the recommendations they’re getting from AI are actually improving the code?
Cameron Cundiff: That’s a great question. When I approached this tool, I took a combination of procedural static analysis, so linting basically, and a gen AI approach. So first, when it said it found 33 issues, that’s not the AI detecting the issues; that’s the static analysis. That guarantees that a human, at some point, has deemed these issues a problem for accessibility based on a set of predefined rules. And that gives a lot of context to the AI, which helps prevent hallucination and fabricated changes. It doesn’t really replace human judgment, but it does augment human capacity, and it’s a good safeguard against hallucination when people are in the loop like this.
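To make that division of labor concrete, here is a minimal sketch of the “lint first, then ask the model, then ask the human” pattern Cameron describes. The ESLint usage, prompt wording, and askModel() stub are assumptions for illustration, not a11y-agent’s actual code.

```typescript
// Sketch of the pattern: deterministic rules find issues; the model proposes fixes;
// a human decides what lands. Not a11y-agent's real implementation.
import { ESLint } from "eslint";
import { readFile } from "node:fs/promises";
import { createInterface } from "node:readline/promises";

// Placeholder for whichever gen AI client is used; swap in a real model call.
async function askModel(prompt: string): Promise<string> {
  return `TODO: model response for a prompt of ${prompt.length} characters`;
}

async function remediate(filePath: string): Promise<void> {
  const source = await readFile(filePath, "utf8");

  // Step 1: deterministic detection. Human-authored rules (e.g. jsx-a11y) decide what
  // counts as an issue, so the model never invents problems on its own.
  const eslint = new ESLint(); // assumes the project's ESLint config enables accessibility rules
  const [result] = await eslint.lintText(source, { filePath });

  const rl = createInterface({ input: process.stdin, output: process.stdout });
  for (const issue of result.messages) {
    // Step 2: the model only proposes a fix for a known finding, with the file as context.
    const proposal = await askModel(
      `File contents:\n${source}\n\nLint finding (${issue.ruleId}, line ${issue.line}): ` +
        `${issue.message}\nPropose a minimal patch that fixes only this issue.`
    );
    console.log(proposal);

    // Step 3: the human stays in the loop -- accept, skip, or ask for an explanation.
    const answer = await rl.question("Apply this change? (y)es / (n)o / (e)xplain: ");
    if (answer === "e") {
      console.log(await askModel(`Explain briefly why this matters for accessibility: ${issue.message}`));
    }
    // Writing an accepted patch back to disk is omitted from this sketch.
  }
  rl.close();
}

remediate(process.argv[2] ?? "src/App.tsx").catch(console.error);
```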
Thomas Logan: And my last follow-up question: you mentioned React is what it’s optimized for, but are you thinking of going to other frameworks? What is the thought process for next steps on this project?
Cameron Cundiff: Yeah, I think React makes a lot of sense because it’s such a ubiquitous framework. I did do a spike on Vue, which is another front-end framework in JavaScript, but I think right now people are finding a lot of utility in the React approach. So, definitely leaning in on that.
Thomas Logan: Yeah, I would say from my consulting, that makes sense to me. It seems like so many of the projects are React-based these days.
So, logical there too.
Hey, Cameron. Anything exciting you in the AI agent or AI inspection world today?
Cameron Cundiff: Sure. I think the most exciting thing to me now is that Anthropic just released, well, maybe a few weeks ago now, a CLI tool called Claude Code. It allows a lot of the same, or at least conceptually similar, interactions to what I showed with a11y-agent, but it’s much more general purpose.
And I’m eager to see how I can use Claude Code to inspire, to augment, to potentially support some of these accessibility approaches that I’ve been taking so far.
Thomas Logan: Well, we can’t wait to see it. And, you know, thanks for being a leader in explaining these topics and showing what’s happening in the world so that we can all learn from it.
Thanks, Cameron.
Cameron Cundiff: Thank you, Thomas. Thanks, Ken.
Thomas Logan: Cameron, thanks so much for being with us today. If people listening want to get in touch with you or find out more about the projects you mentioned, how can they reach you?
Cameron Cundiff: For the open-source work that I’ve done, you can go to GitHub.com/accesslint, and if you want to find me on YouTube, you can find me at A11y Nerd where I walk through a lot of these projects on video.
Thomas Logan: Thank you all for being here and listening with us today. We’d love to hear from you, so let’s continue the conversation. Please add your comments either on this post or wherever you find it. We’re happy to keep the conversation going. See you next time.
Do you need help with accessibility compliance?
Our years of experience working with lawyers and being expert witnesses in lawsuits have given us a unique perspective when it comes to explaining and justifying our clients’ accessibility compliance.
If you are experiencing any legal issues related to the accessibility of your technology, please contact us to discuss how we can help.