How AI Is Changing Accessibility: Progress, Challenges, and the Path Ahead

Image: A woman speaks into her phone for voice-to-text dictation.



Accessible technology is essential for people to function in our increasingly digital society. For millions of people with disabilities, access to technology can mean the difference between participating fully in education, employment, healthcare, and civic life, and being excluded by accessibility barriers. Whether it’s reading a document, navigating public transportation, joining a meeting, or accessing government services, technology should be designed with access and equity in mind, so that everyone can use it as a tool for communication and independent living. As artificial intelligence (AI) becomes a larger part of the tools and systems we rely on, it is reshaping how we use technology, access the internet, get information, and communicate with each other. In these early stages, AI doesn’t always address accessibility well, yet it is already delivering significant improvements for people with disabilities. Below, we explore some AI-related innovations, consider what may lie ahead, and look at what’s still needed.

The Good News: Strides That Are Changing Lives

AI is advancing quickly, creating new possibilities for accessibility that simply didn’t exist before. The following examples highlight how AI is already improving services and technologies that support people with disabilities.

Seeing the world in detail, instantly.

The nonprofit Be My Eyes recently launched Be My AI, a virtual volunteer powered by AI that can describe images in rich detail for blind and low-vision users. From reading oven dials to identifying the expiration date on a milk carton, it’s a simple app that’s changing everyday independence. 

With this app, a user can simply snap a photo and receive a vivid, conversational description from an AI that “sees” the image. It doesn’t just identify “a can of soup”; it reads the label, lists the ingredients, and notes the expiration date. This means people with visual impairments can shop independently at King Soopers, double-check their prescription labels at home, or verify the settings on a household appliance without waiting for a friend, family member, or volunteer to be their eyes. That kind of access increases independence and confidence, and relieves family and friends of always having to be available.


Accessible travel in real time.

NaviLens is a high-contrast, color-coded tagging system designed to help people who are blind or have low vision navigate public spaces independently. The system uses a specially designed code, similar to a QR code but optimized for quick, long-distance scanning, that can be read by a free smartphone app.

Unlike traditional QR codes, which require users to line up their phone camera precisely and be close to the code, NaviLens tags can be detected from up to about 50–60 feet away and from nearly any angle. The app doesn’t require the user to take a photo or press a button; it continuously scans the environment and announces when a tag is in view.

Image: A colorful NaviLens code is seen on a post at a subway station in Boston.

Once detected, the tag delivers spoken information in the user’s preferred language, such as directions to a platform, arrival times for buses or trains, descriptions of building entrances, or details about nearby facilities. The system can also guide users step-by-step toward a destination, updating instructions as they move.

NaviLens is now being used in public transit systems, museums, universities, and city streets. In the U.S., it’s in use in parts of New York City’s subway and bus network, San Antonio’s VIA Metropolitan Transit system, and Boston’s MBTA commuter rail stations. Because it works without an internet connection once installed, it’s particularly useful in underground stations or other areas with poor reception.

Airports are also embracing AI-powered inclusion. Aira, a live visual interpreting service, connects travelers to trained agents via smartphone or smart glasses. The agents describe surroundings, read signs, and guide users through unfamiliar spaces. Many airports now offer Aira for free on-site, including Denver International Airport. For a blind traveler, that could mean independently navigating from the check-in counter, through security, and to the boarding gate without relying on airport staff escorts.

Captions and transcription that keep pace with conversation.

For people who are Deaf or Hard of Hearing, or those who process information better visually than auditorily, AI-driven captioning has become a game-changer. On Android devices, Live Caption instantly adds captions to any audio or video, from YouTube clips to live phone calls, without needing an internet connection. Live Transcribe takes it further by turning in-person conversations into a live, scrollable text feed. 

Apple and Microsoft have also expanded their built-in accessibility features, such as real-time screen narration, voice preservation tools, and improved captions for FaceTime and Teams. In practice, these tools mean a student can follow along in a college lecture without struggling to lip-read, or a job seeker can participate fully in a Zoom interview without needing a separate captioning service.

Voice recognition that adapts to atypical speech.

For years, voice-controlled technology worked best for people who spoke in a way the software expected, often leaving out those with speech affected by conditions like ALS, cerebral palsy, or after a stroke. Project Euphonia, led by Google Research, is working to change that by training speech recognition models on recordings from people with atypical speech patterns. The goal is simple but powerful: make voice commands, dictation, and auto-captions work for everyone.

Early results show significant accuracy improvements, meaning someone with slurred or slower speech can now use voice commands to send a text, search the web, or operate a smart home device without the frustration of repeated misinterpretations. 

Progress on making policies universal 

Technology doesn’t exist in a vacuum; it’s shaped by policy. In 2024, the U.S. Department of Justice finalized new digital accessibility rules under Title II of the Americans with Disabilities Act, requiring all state and local government websites and apps to meet clear accessibility standards. This means state agencies, county offices, and municipal websites must ensure that public documents, online forms, and virtual meetings are fully accessible.

Internationally, the European Accessibility Act, which takes effect in June 2025, sets similar requirements for a wide range of products and services, from e-commerce platforms to e-books. While it’s a European law, its ripple effects will likely reach U.S. companies that operate globally, pushing them toward more inclusive design. Together, these measures create both a legal framework and a cultural expectation: accessibility isn’t optional, and AI-driven tools must meet real, human needs.


The Challenges We Still Face


While the potential of AI is enormous, it’s not without growing pains, and in some cases these challenges risk creating new forms of exclusion. One of the most pressing concerns is the growing reliance on “accessibility overlays” for websites: automated tools that claim to make websites instantly compliant by adding AI-generated image descriptions, restructuring menus, or inserting keyboard navigation shortcuts.

The reality often falls short of those claims. AI can, and does, make mistakes: labeling a photo of a wheelchair ramp as “stairs,” for example, or rearranging navigation elements in ways that confuse screen readers. Far from improving access, poorly implemented overlays can sometimes make websites harder to use. And because many organizations install them instead of doing the hard work of structural accessibility, real issues can go unaddressed. When designers build a site with proper accessibility from the start, however, these tools can function as intended, enhancing access rather than replacing it.

Speech recognition bias is another major challenge. Although groundbreaking projects like Google’s Project Euphonia are making measurable improvements for people with atypical speech, many voice-driven systems still struggle to accurately understand a range of speech patterns, regional accents, and dialects. For someone with a speech difference, this can mean repeated failed attempts to give a simple command, turning a tool meant to empower into a source of frustration. 

Privacy concerns are also emerging. AI-powered services that process images, live video, or audio often capture more than the user realizes. A visual interpreting app might reveal not just the layout of your home but also personal documents on a table, family photos, or visible medications. Without clear, informed consent and robust privacy protections, people may unknowingly share information that could be stored, analyzed, or even used to train future AI models. And too often, privacy settings are buried in menus or written in legal language that’s difficult to parse.

Image: A woman speaks into her phone while also working on her laptop.

Then there’s the issue of automatic captions. While they’ve come a long way in recent years, especially for widely spoken, standardized English, they are still far from perfect. In noisy environments, or when speakers have strong accents or use specialized vocabulary, auto-captions can garble meaning entirely. Imagine a public meeting on environmental policy where a captioning error changes “water rights” to “water rides.” In a classroom, a science lecture on “neutrinos” could become “new trees.” These can fundamentally distort understanding. That’s why, for important contexts like public hearings, educational settings, and healthcare communication, human review of AI-generated captions is still necessary.

In short, AI can be an incredible ally for accessibility, but without careful oversight, inclusive design practices, and strong user protections, it can also replicate, and even amplify, the very barriers it seeks to remove. Fortunately, the technology is young, and developers are working hard to remedy these errors and steadily improve the accuracy of content descriptions, image labels, captions, and voice output.


What the Future Could Bring


The next wave of AI accessibility will likely be defined by two major shifts: more on-device processing and stronger enforcement of accessibility laws. On-device AI means features can run without sending data to the cloud, improving both speed and privacy. We’re also likely to see major improvements in speech recognition for diverse voices, thanks to the growing datasets being developed in collaboration with people with disabilities.

As technology continues to evolve, state agencies, local businesses, and community organizations must adopt a “build with us, not for us” mindset, integrating people with disabilities into the design and testing process from the start.

Why This Matters

AI can be the ramp that opens the door, but only if it’s built with the disability community in mind. The future of accessibility will depend on collaboration between technologists, policymakers, and the people who rely on these tools every day. 

CPWD is committed to ensuring that AI serves as a bridge to independence, not a new barrier. Through our Beyond Vision program, we offer our Assistive Technology Library, where consumers can come and try out new assistive technology devices and learn to use this technology with our Independent Living Advisors. Check out our Service Calendar to see when you can join one of the sessions and try new AI-driven tech, or email info@cpwd.org to connect with an Independent Living Advisor. 

Through advocacy, education, and partnerships, we will continue to champion technology that empowers people with disabilities in Colorado and beyond, so that as AI reshapes the world, no one is left behind.

Please let us know if you have had an experience with a newer AI technology that you found helpful or interesting. We’d love to hear from you at info@cpwd.org.
