ASL and AI tools
Visual description: Julie and Miako are standing in front of a gray background. They are signing in ASL.
Dr. Julie A. Hochgesang: This position statement is about AI, artificial intelligence. AI is a hot topic right now, with everyone talking about it. It’s everywhere. We need to understand what AI really is. Essentially, it is automation. “AI” is really a marketing term. It boils down to computers doing things for us, automating our processes. AI relies on language models. In essence, a language model is built from massive amounts of data, which algorithms mine to figure out what people typically say and how they would respond in a given situation, then produce a prediction and an output for whatever the intended purpose is. However, language models must have data sources from people who grew up using that language and use it daily. So much of the ASL content available online today is from people who learned ASL later in life and don’t use it as their primary language. New signers whose ASL teachers asked them to upload their homework assignments, and even interpreters for news briefings and the like, have flooded the online space. People developing AI extract this data with no regard for the purposes it was originally created for. We do not recommend this at all.
Dr. Miako Villanueva: This is true. I think there are two considerations when designing AI: the dataset, the language itself, and then the automated process of making predictions and outputs. With regard to the data, as you mentioned, we have to consider who is represented there. If the model uses ASL as its data, is there variety in ASL use? Is the diversity of the deaf communities represented? Is natural, everyday use represented? Then the videos in the dataset have to be labeled so that the computer can filter and read the data. With labeling, how do you decide which label to use for which sign, and how do you apply it consistently? All of those factors will impact the results. Additionally, the people involved in that process and making the relevant decisions must be involved in the deaf communities, have backgrounds as deaf signed language users, and be able to think through the ethical issues.
This also applies to the automation process itself. How do you decide which filters to apply, which decisions to make, and what the most common responses of that group are? These decisions must be made with the involvement of communities who are familiar with that context, not by some uninformed person who applies the wrong filters so that the resulting outputs end up used for the wrong purposes. That, too, presents an ethical conflict. We have to consider both of those factors, and we must have the deaf communities involved from inception to implementation to ensure that clear, ethical decisions are made about language use.
Dr. Julie A. Hochgesang: And related to our communities.
In short: the best language models are built on data from people who depend on and use those languages every day. Don’t train AI models on new signers or hearing interpreters. Signing deaf representation is important.
There is a lot of discussion out there about “AI”. Professor Emily Bender at the University of Washington points out that “AI” is a shiny marketing term. She proposes that it is more useful to discuss “AI” as automation. Computer programs can be used to automate many different tasks, successfully or not.
In linguistics, this automation can mean translation of information from one format to another, such as from speech to text or from text of one language to another. Automation of these tasks is accomplished with a set of training data, and with programs that extract frequent patterns in that training data. Automation systems apply the extracted patterns to generate an output.
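To make the pattern-extraction idea concrete, here is a minimal sketch of that statistical core: count which words follow which in the training data, then predict the most frequent continuation. This is an illustration, not any particular product’s method; real systems are vastly larger, but the principle is the same.

```python
from collections import Counter, defaultdict

# Toy training data: the patterns extracted below are only as good
# (and as representative) as the data that goes in.
corpus = "the model predicts the next word the model outputs text".split()

# Extract frequent patterns: count how often each word follows another.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Apply the extracted patterns: return the most frequent continuation."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "model", because "model" follows "the" most often
```

The statement’s point carries over directly: predict_next can only ever reproduce what was frequent in its training data, so a dataset dominated by new signers or interpreters will reproduce their patterns, not those of everyday deaf signers.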
Dr. Bender’s argument is that when we think about “AI” as automation, many of the ethical and legal systems that we already have in place can help us navigate how automation is used.
When it comes to “ASL and AI tools”, companies and researchers must continue to think about potential harms of automation systems that are trained on sign language data. Considering potential harms means considering 1) the training data: where did it come from, what does it look like? and 2) the automation systems: how can they potentially be used or misused?
To the first point, training data, our position is that companies and researchers must build their ASL training data using video data from deaf signers that represent the diversity of signing communities. Scraping an online ASL dictionary may seem like a quick fix. Working with a hearing person who is learning some ASL may seem convenient. Contracting an interpreter to read a list of prepared sentences may seem straightforward. But these decisions will lead to very low-quality output, and will bias your dataset. These decisions also devalue the communities of people who use ASL in a variety of contexts in their everyday lives.
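As one illustration of what taking training data seriously could look like in practice, the sketch below audits per-video signer metadata before training. The field names, categories, and threshold are invented for the example and do not describe any real pipeline.

```python
from collections import Counter

# Hypothetical metadata for a candidate ASL training set; the fields
# ("deaf_signer", "acquisition", "region") are invented for this sketch.
videos = [
    {"signer_id": "s01", "deaf_signer": True,  "acquisition": "native",   "region": "South"},
    {"signer_id": "s02", "deaf_signer": True,  "acquisition": "early",    "region": "Northeast"},
    {"signer_id": "s03", "deaf_signer": False, "acquisition": "L2_adult", "region": "West"},
]

def audit(videos):
    """Flag the provenance problems this position statement warns about."""
    issues = []
    non_deaf = [v for v in videos if not v["deaf_signer"]]
    if non_deaf:
        issues.append(f"{len(non_deaf)} video(s) not from deaf signers")
    l2_adult = [v for v in videos if v["acquisition"] == "L2_adult"]
    if l2_adult:
        issues.append(f"{len(l2_adult)} video(s) from signers who learned ASL as adults")
    regions = Counter(v["region"] for v in videos)
    if len(regions) < 4:  # arbitrary variety threshold, just for the sketch
        issues.append(f"low regional variety: {dict(regions)}")
    return issues

for problem in audit(videos):
    print("AUDIT:", problem)  # e.g. "AUDIT: 1 video(s) not from deaf signers"
```

An audit like this is no substitute for involving deaf signers in the decisions themselves, but it shows that “where did the data come from?” is a question a pipeline can be forced to answer before training begins.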
To the second point, automation systems, our position is that companies and researchers must build their ASL automation systems with deaf signers in mind. What use cases do deaf people envision for your tool? What feedback have you gotten from focus groups made up of diverse signers? Who among your team of researchers is a deaf signer themselves? If you cannot answer these questions (and even if you can), be cautious about the potential harms and misuse that your tool will cause, because you have developed it without the involvement of the linguistic community you are profiting from.
The best linguistic technologies, whether automated or not, are built with data that comes from people who use the target language every day, and are designed to suit the needs of those same people, as well.
References
Aashaka Desai, Maartje De Meulder, Julie A. Hochgesang, Annemarie Kocab, and Alex X. Lu. 2024. Systemic Biases in Sign Language AI Research: A Deaf-Led Call to Reevaluate Research Agendas. In Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources, pages 54–65, Torino, Italia. ELRA and ICCL. https://aclanthology.org/2024.signlang-1.6
Emily M. Bender. 2024. Resisting Dehumanization in the Age of “AI”. Current Directions in Psychological Science, 33(2), 114-120. https://doi.org/10.1177/09637214231217286
Maartje De Meulder. 2021. Is “good enough” good enough? Ethical and responsible development of sign language technologies. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 12–22, Virtual. Association for Machine Translation in the Americas. https://aclanthology.org/2021.mtsummit-at4ssl.2
Carl Börstell. 2023. Ableist Language Teching over Sign Language Research. In Proceedings of the Second Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2023), pages 1–10. https://aclanthology.org/2023.resourceful-1.1
Neil Fox, Bencie Woll, and Kearsy Cormier. 2023. Best practices for sign language technology research. Universal Access in the Information Society. https://doi.org/10.1007/s10209-023-01039-1
BSL summary: https://youtu.be/hcAOBsRreh8
Further reading
ChatGPT is having a really bad impact on the environment | TechRadar
ChatGPT is bullshit | Ethics and Information Technology
Opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” | by Emily M. Bender | Medium
A bottle of water per email: the hidden environmental costs of using AI chatbots
Americans increasingly using ChatGPT, but few trust its 2024 election information | Pew Research Center
Why A.I. Isn’t Going to Make Art | The New Yorker
How to cite this position statement
Gallaudet Linguistics Department. (2025). ASL and Artificial Intelligence (AI) Tools – Gallaudet Linguistics Department Position Statement. https://doi.org/10.6084/m9.figshare.28392227