Motion Light Lab (ML2)
SLCC 1103
(202) 651-5085
Email Us
Motion Light Lab (ML2) is an award-winning research and development lab at Gallaudet University in Washington, D.C., and part of the Visual Language and Visual Learning Center (VL2). Our research-driven projects and creative R&D include developing and distributing bilingual storybook apps, providing training that supports literacy development for deaf children, and creating advanced 3D signing avatars through motion capture technology. Our work brings creative literature and digital technology together with VL2 science to create new knowledge and benefit society.
Find us on the first floor of the Sorenson Language and Communication Center building at Gallaudet University. We offer students rich opportunities for training in computational and digital media innovation, and we regularly look for volunteers for research studies.
We are seeking partnerships with Deaf schools and Deaf programs to create Deaf-centered storybook apps. For more information, email us at motionlightlab@gallaudet.edu.
Learn more about the work we do here.
Director, Motion Light Lab
Associate Project Manager
Interactive Engineer
Associate 3D Animator
Creative Designer
Associate Professor
Program Specialist
Social Strategist & Community Engagement