Products are increasingly being designed to talk to consumers, explains Dr. Abraham Glasser, assistant professor in the School of Science, Technology, Accessibility, Mathematics, and Public Health. Cars invite drivers to ask for directions, and devices such as Alexa and Google Assistant are meant to chat about the weather, recipes, and other topics. “It is adding more and more barriers for people who don’t use speech,” says Glasser, who is deaf and uses American Sign Language (ASL). “There is no way for me to input sign into that.”

Glasser offered his perspective as part of the panel, “I’m Sorry, I Don’t Understand: How Voice AI Poses Barriers — and Some Solutions,” during the annual meeting of the American Association for the Advancement of Science (AAAS), February 15-17, in Denver, Colorado. Moderated by Dr. Shelley B. Brundage of George Washington University, who organized the session with Dr. Nan Bernstein Ratner of the University of Maryland, it also included researchers looking at the problems this technology poses for people who stutter, mumble, or have other kinds of speech impediments.

Dr. Abraham Glasser (far right) appeared on the panel at the AAAS meeting with other researchers considering how voice assistants can be made accessible for all. In the top photo (credit: Robb Cohen Photography & Video), he is presenting on the importance of sign recognition technology.

“The moderator was good at connecting all of our work into one cohesive story,” says Glasser, whose presentation was titled, “Voice AI, Deaf Speech, and the Search for Signed Language AI Interfaces.” “We want technology that works for all people, and all of us had common issues.” A major flaw with the technology is that it cannot recognize a range of different voices. “Some deaf people speak for themselves, but it doesn’t understand deaf accents and speech,” Glasser adds.

Glasser is looking into other methods of input, and collecting data on what deaf people want from this technology. “They need to find ways for me to sign everything,” he says. He believes that AI that understands ASL (and other sign languages) is possible, and that it is critical to be thinking about how to make that happen.

“Looking at the audience, I saw a lot of light bulbs coming on,” Glasser says. “Often companies don’t consider people with disabilities at all.”

Glasser also appeared March 16 at the IEEE VR 2024 conference in Orlando, Florida. His presentation, “XR Research Avenues for Deaf Users,” was part of a workshop on Inclusion, Diversity, Equity, Accessibility, Transparency, and Ethics in XR.
