Two New Artificial Intelligence Laws and Their Effects on Accessibility

Image Description: Illustration of Thomas and Ken at a desk with A11y Insights. Thomas has a laptop in front of him. A city skyline is in the distance behind them. The news window shows a large conference table with professionals around it, and the screen in the middle represents artificial intelligence.

For a while now, disability inclusion and accessibility professionals have expressed concerns about artificial intelligence bias against people with disabilities. Of course, this is on top of many people’s concerns about AI making up information and about data collection and privacy.

Lawmakers in the state of Colorado have been concerned about this, too. The Colorado Sun reports they’ve passed Colorado Senate Bill 205 in an attempt to regulate the exploding artificial intelligence industry.

Ken Nakata’s Senate Bill 205 post says its purpose is to reduce discrimination when people apply for jobs, loans, and other services where AI could influence decisions.

“The law addresses the increased risk of ‘algorithmic discrimination’ in these high-risk AI systems,” Ken writes. “It requires that anyone deploying one of these systems monitor their systems for possible algorithmic discrimination and take active steps to address it. The law also requires organizations to notify consumers about possible algorithmic discrimination and offer them a way to redress it when it occurs. The law requires deployers to describe how they expect to mitigate against algorithmic discrimination before it occurs.”

However, legislators have concerns about the law’s potential to thwart AI advancements. In fact, many thought Gov. Jared Polis would veto the bill. Polis opted to sign it, noting that it would not go into effect until February 2026. The governor believes that gives lawmakers enough time to tweak the bill so it is effective without slowing down advancements.

Shortly thereafter, the EU passed a regulation on artificial intelligence.

Thomas Logan: Hello, everyone. This is Thomas Logan from Equal Entry here with Ken Nakata of Converge Accessibility. In this episode of Accessibility Insights, we’re talking about the new Colorado SB 205 bill, which is about artificial intelligence and making sure that it’s inclusive and available to everyone. I think right now this is a very new idea, and not many people are talking about it.

What Are the Artificial Intelligence Laws?

Thomas Logan: So let’s start off. Ken, what are the laws? What’s happening in this space?

Ken Nakata: Thanks, Thomas. There are two laws that are popping up, and they happened almost simultaneously. The first one came on Friday, May 17th. That was Colorado SB 205, and it’s titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems.”

That one focuses mostly on what they call high-risk AI systems, which means AI systems where artificial intelligence can be a substantial factor in making a decision about certain categories of transactions, such as employment opportunities or education opportunities. It gives a short list of those things; it also includes things like insurance applications.

Pretty much the kinds of things in society that we apply for but don’t necessarily have any guarantee we’re going to receive. That law focuses on what they call potential algorithmic discrimination and puts some safeguards around preventing that kind of algorithmic discrimination.

It also requires that people in Colorado who are deploying these kinds of systems inform members of the public that they’re being subjected to one of these systems, because they might not know it. Just a few days later, the EU passed a new regulation around artificial intelligence.

And their system is quite a bit broader than Colorado’s because it doesn’t cover just high-risk AI systems. It covers basically any application of AI. But it then develops what they consider a risk-based analysis, or risk spectrum, that categorizes some things as high risk, some things as so high risk that they are simply prohibited, and some things as relatively low risk.

And in the EU model, the number of safeguards that you have to take to make sure that discrimination isn’t occurring is really dependent upon the level of risk. And in the EU system, generally, the high-risk systems would be pretty much the same kinds of things that are covered as high-risk in Colorado.

There are certain cases of prohibited use, which are at a super high risk. And I can’t think of all the examples, but those are things where you’re using AI to specifically figure out whether a person is a person with a disability or a person is a minority and then using that to target them for something else.

So, those are the two laws that popped up, and each one claims to be the first, although I think technically Colorado was first.

How the Laws Address the Problems with Artificial Intelligence

Thomas Logan: Let’s give them their due, and also let’s critique what they’re doing, right? So, in your explanation of what they’re trying to do with AI, obviously, we’re talking to all of you listening to our podcast as humans, and we’re not trying to be computers evaluating people.

But one of the problems that can happen is that AI can learn certain patterns or observations based on the data it’s trained on. One example we discussed prior to making this podcast: as people involved in the disability community, we know that Gallaudet and RIT (Rochester Institute of Technology) are both universities that support and have enrolled a lot of people with disabilities.

That could become a flag for an AI system, even if nobody meant to deterministically say this is a reason not to hire someone. It might happen just via the AI system. So, Ken, what do you think? These things can happen, right?

Just from observations in the data, what’s the end result of AI where it’s not a human passing judgment? It’s a computer concluding, “Oh yeah, the people who get hired from this university often have a disability and need accommodation,” for example.
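The pattern Thomas describes can be made concrete with a small, invented sketch (not from the episode): a screening model that never sees disability status can still learn to penalize a university whose graduates were under-hired in past, biased decisions. The data, the “Gallaudet-like U” name, and the scoring approach below are all hypothetical.

```python
# Hypothetical sketch: how a "facially neutral" feature, the university an
# applicant attended, can become a proxy for disability in a screening model
# trained on past hiring decisions. All data here is invented.
from collections import defaultdict

# Invented historical records: (university, hired). Disability status is never
# stored, but past human decisions under-hired applicants from "Gallaudet-like U",
# a stand-in for a school that enrolls many deaf and hard-of-hearing students.
history = [
    ("State U", True), ("State U", True), ("State U", False), ("State U", True),
    ("Gallaudet-like U", False), ("Gallaudet-like U", False),
    ("Gallaudet-like U", True), ("Gallaudet-like U", False),
]

# A naive "AI" screen: score each new applicant by the historical hire rate
# of their university. The model never sees disability, yet it reproduces
# the pattern baked into its training data.
hires = defaultdict(list)
for university, hired in history:
    hires[university].append(hired)

def screen_score(university):
    outcomes = hires.get(university, [])
    return sum(outcomes) / len(outcomes) if outcomes else 0.5  # neutral prior

for school in ("State U", "Gallaudet-like U"):
    print(school, round(screen_score(school), 2))
# State U scores 0.75, Gallaudet-like U scores 0.25: the university name has
# quietly become a proxy for disability, which is the risk SB 205 calls
# "algorithmic discrimination".
```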

Ken Nakata: Yeah, that’s a great question, Thomas. The great thing about AI is that it’s a black box, and it can find patterns that human beings can’t. The bad thing about AI is that it’s a black box, and so we can’t really see inside of it to figure out how it’s making its determinations.

An example that we were thinking about was the college essay, where AI might be evaluating a writing sample or essay that somebody puts into their college application. And it may be screening those people out before a human being ever gets to see it.

And that can obviously have a huge impact on someone for whom, say, English is not their first language, or it could tend to target people who use ASL as their first language. In both of those cases, it may have this insidious effect of discriminating against people with disabilities.

Thomas Logan: I also just want to add that it’s worth observing, even in 2024, that we live in the current world. People know ChatGPT exists. People probably use ChatGPT to produce their college essays. And so if you’re also using ChatGPT to reject people, you just really have to be aware that this is the world we live in.

People are using it to submit. People are using it to review. Let’s be real. And then let’s also keep in mind the discrimination we’ve discussed. I think AI might produce discrimination in a way that’s not even easily observable for most people using these systems. It just happens as a byproduct of using the technology. What do you think?

What Does Facially Neutral Mean?

Ken Nakata: Exactly! Thomas, that’s exactly right. Again, the beautiful thing about AI and the worst thing about AI is that it’s a black box, and we can’t attribute any kind of motive or intent to an AI system or to a user or someone who’s deploying one, because it may be discriminating against people and we won’t even know it.

I can’t help but think about another analogous example in the law, which is what we call facially neutral policies that have a discriminatory effect. And the example of using college writing samples, the one that we mentioned earlier, is a direct parallel to a Supreme Court case from 1976, Washington versus Davis, where some applicants to the District of Columbia Police Academy were being passed over based upon their writing skills on a written portion of the examination.

And at the time, and this is, again, back in the ’70s, there were still lingering effects of discrimination in schools. That system tended to screen out applicants who were black because they didn’t have the same educational opportunities that other applicants did.

So, it’s kind of funny that now we’re coming full circle and now we’re dealing with an AI system that could be having exactly that same effect.

Thomas Logan: And by the term “facially neutral,” do you mean that in that time period you would actually meet them face to face, you would actually see their face, you’d see they were black, and you would judge them?

Ken Nakata: No, it doesn’t quite mean that. What it means is that the policy is facially neutral. The policy is simply: what are your writing skills like on a test? And that’s a very loaded question in some regards, because when you’re just evaluating a person’s writing skills, well, that is obviously dependent upon the educational opportunities they had.

And those educational opportunities might be much more limited for somebody who was black in the 1970s than they would be for somebody who wasn’t. So, it tended to screen out candidates who were black because they didn’t have as many opportunities to develop their writing skills in grade school and high school. So, that’s what I mean by facially neutral.

Thomas Logan: And by facially neutral, do you mean that there was actually like a process to make sure that you never saw their face and you only observed them via text?

Ken Nakata: Facially neutral is not literally about a person’s face. Facially neutral in this context means something that on its surface doesn’t reveal any kind of discriminatory intent.

And that’s the simple writing test. It sounds like a neutral test; it sounds like it should be equal for anybody regardless of their race. But when you actually apply it, it tended to screen out people who were black. And so it was actually discriminatory.

And that is exactly the same thing I could imagine happening if you’re a college applicant who uses ASL for nine-tenths of your communication. So if you’re deaf, for instance, and a college is using your essay, and the writing skills evident in that essay, to figure out whether you’re going to the next level of the application process or you’re just going to get screened out right there, that seems to me almost like a direct parallel to Washington versus Davis.

Except that this time it would be the AI that’s doing the cut. And arguably, the AI, yes, it’s facially neutral because it’s just looking at whether the sentences have the right sentence structure, but ultimately the effect would be discriminatory.

What’s a Good Way to Use Artificial Intelligence?

Thomas Logan: So, just to think about the future, what’s a good way to use AI technology? Is there a way to take advantage of the benefits but also be aware of this? Do you have any ideas for how it could be used more effectively?

Ken Nakata: That’s also a great question, Thomas. And I think that that’s really what these laws are trying to address. I mean, AI gives us so many opportunities, and I think really advances the ball in science and so many other areas. But we still need to be cognizant of some of the risks that are involved. And especially when the AI systems are being used to make these big, important decisions that really do affect people’s lives.

So, I can tell you that both sets of legislation are pretty comprehensive. I’m not going to claim to be an expert on the EU law or the Colorado law, but especially not the EU law, because it’s 419 pages long. But I will say that both laws try to put some safeguards around the use of AI.

So, this means things like: if you notice that there is a discriminatory effect, or potentially a discriminatory effect, you have to take active steps to address it. If a deployer suspects that there’s potential for an AI system to produce this algorithmic discrimination, then you’re supposed to take certain safeguards to prevent it from happening in the first place.
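As a concrete illustration of the monitoring Ken describes, here is a minimal sketch of one way a deployer could compare selection rates across groups. Neither SB 205 nor the EU AI Act prescribes a specific statistical test; the four-fifths (80%) selection-rate heuristic and the invented outcome data below are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of monitoring an AI screen for disparate impact.
# The 0.8 threshold is the common US "four-fifths rule" heuristic, assumed here
# only as an example; the laws themselves do not mandate this test.

def selection_rate(outcomes):
    """Fraction of applicants in a group that the AI screen advanced."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_flag(group_a, group_b, threshold=0.8):
    """Flag the system if one group's selection rate falls below
    `threshold` times the other group's rate."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    return higher > 0 and (lower / higher) < threshold

# Invented outcomes: True = advanced past the AI screen.
applicants_without_disability = [True] * 80 + [False] * 20   # 80% advance
applicants_with_disability = [True] * 50 + [False] * 50      # 50% advance

if disparate_impact_flag(applicants_without_disability, applicants_with_disability):
    print("Possible algorithmic discrimination: investigate and take corrective steps.")
```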

And there are two interesting things that we haven’t talked about. One highlight of the Colorado law is that it requires deployers to let people know when an AI system is being used to make a decision about whether a person is going to continue to the next step. And one advantage of the EU system is that if a system has the potential to cause discrimination on the basis of disability, that elevates it to a higher level of risk.

So, just the fact that it may effectively screen out people with disabilities means it’s a higher-risk system. With that said, I also think that a lot of this is really, really difficult. And it may be inadequate, because it requires us to really look at discrimination after it may have already occurred.

And it makes it really, really hard to say, “Oh yeah, the reason why I was screened out was probably because of AI.” Again, because it’s just a big black box. Both laws fortunately do allow a person to contest a result or seek redress if they feel they’ve been screened out because the AI improperly excluded them.

But I’m a bit of a pessimist when it comes to that, as I just really can’t see a lot of people complaining about it, because they’re not going to be able to prove that the system discriminated against them. Again, because AI is a black box.

Thomas Logan: But you know, that’s why we’re doing this podcast, right? We’re trying to educate you all in our listener audience. This is something very important. It’s current, and as Ken mentioned, it’s not really being discussed in the broader community. I think it’s a huge issue. I think it’s something to have a perspective on.

And I think we’d love to hear from you. Let’s continue the conversation. If you feel something from what you’ve heard about what’s happening in this current legislation, please share your thoughts on it. We will listen to you, and we will also learn from you. So, thank you for your time, and we look forward to seeing you in our next episode.

Be Ready for DOJ Ruling

With the Department of Justice (DOJ) 2024 ruling about ADA regulations, we’ve had many requests for procurement and digital accessibility training. We now offer “DOJ Ruling: Accessibility Regulations vs Reality” training.

This training is a 60-minute introductory class intended for IT teams, sales teams, and product managers: anyone who buys or sells software or applications and needs to review VPATs.

The training focuses on ways the new DOJ rule for digital accessibility will affect your organization and employees’ role in addressing the requirements.

We are happy to consult with your procurement teams to ensure you understand the effects this new ruling will have on your organization and how to work with third-party vendors. Contact us to book a conversation.

Equal Entry
Accessibility technology company that offers services including accessibility audits, training, and expert witness work on cases related to digital accessibility.

2 comments:

  1. Great thoughts and perspectives, Thomas and Ken. Thank you for bringing additional light here.

    AI is tricky because I think this new frontier is going to be flirting with over-regulation. Understanding how AI is already penetrating aspects of our lives at a scale and density never seen before is a cause for concern. And ultimately, we are a capitalist society.

    So while the lure of time and cost savings is enormous, enabling non-humans to make decisions that influence another human being’s shelter, education, safety, health, and employment must be regulated heavily, especially when those people are in protected classes.

    1. Hi Tanner, I completely agree. For me, at least, every AI demo I receive has multiple items that need to be corrected. For the future, I believe there always has to be a human involved!
