Project Overview
Project Summary:
“Translate SL” revolutionizes Google Translate by integrating real-time sign language
translation. Using device cameras, the app detects sign language gestures and converts
them into text or speech, enabling seamless communication between deaf and hearing
individuals. Leveraging Google’s Gemini AI, it suggests context-aware responses and
generates sign language video demonstrations for users to share. This feature bridges
communication gaps in everyday interactions, from healthcare settings to educational
environments, fostering inclusivity. The system combines on-device computer vision for
gesture recognition, Natural Language Processing (NLP) for text conversion, and
generative AI for response suggestions. By embedding this functionality into Google’s
widely used platform, Translate SL ensures scalability and accessibility for the 70 million
deaf individuals globally. The project aligns with SDG 4 (Quality Education) by supporting
inclusive learning for deaf students, SDG 8 (Decent Work & Economic Growth) by reducing
workplace barriers, SDG 10 (Reduced Inequalities) by bridging the gap between deaf and
hearing communities, and SDG 11 (Sustainable Cities & Communities) by enhancing
accessibility in public spaces.
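At a high level, the flow from camera input to suggested reply can be pictured as three stages. The sketch below is illustrative only: every function is a placeholder stub standing in for the real vision, NLP, and Gemini components, not the actual Translate SL code.

    # Illustrative pipeline only; all functions are placeholder stubs.

    def recognize_gestures(frames):
        # Stage 1: on-device vision turns camera frames into sign glosses.
        return ["HELLO", "HOW", "YOU"]  # dummy output

    def gloss_to_text(glosses):
        # Stage 2: NLP converts the gloss sequence into a natural sentence.
        return "Hello, how are you?"

    def suggest_replies(sentence):
        # Stage 3: a generative model (Gemini in this project) proposes
        # context-aware responses the user can send back as sign video.
        return ["I'm doing well, thank you!", "Good, and you?"]

    frames = []  # frames would come from the device camera in practice
    sentence = gloss_to_text(recognize_gestures(frames))
    print(sentence, suggest_replies(sentence))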
Identifying the Challenge
The Social Problem:
Deaf individuals face significant communication barriers, with only 5% of the global
hearing population fluent in sign language. This isolation impacts education,
employment, and healthcare access. The existing Google Translate application does not
offer sign language translation. Translate SL addresses this by integrating into
Google Translate, a tool with 1 billion+ users, ensuring widespread adoption. In South
Korea, where 250,000+ people use Korean Sign Language (KSL), the lack of real-time
translation tools exacerbates social exclusion. The project tackles this gap by providing
an intuitive, AI-powered solution that requires no additional devices, making
communication effortless and equitable.
Innovation and Uniqueness
Why Our Project Stands Out:
Translate SL stands out as the first-to-market solution to integrate sign language
translation directly into Google Translate, harnessing its global infrastructure for
widespread accessibility. Powered by context-aware Gemini AI, it intelligently suggests
culturally appropriate responses, such as distinguishing between formal and informal
Korean Sign Language (KSL), and generates instructional videos for accurate signing.
Designed with accessibility at its core, Translate SL features a user-friendly interface
tailored for deaf users, incorporating visual feedback and vibration alerts to confirm
gesture detection. Additionally, it offers an offline mode that processes basic gestures
without internet access, making it especially valuable in regions with limited
connectivity.
Insights and Development
Learning Journey:
We initially struggled with gesture recognition accuracy due to regional sign variations.
By collaborating with deaf communities in Malaysia and South Korea, we refined our
dataset to include diverse signing styles. Switching from OpenCV to MediaPipe
improved real-time processing by 40%. Ethical considerations, such as avoiding AI bias
in gesture interpretation, led us to adopt federated learning for privacy-preserving
model training.
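To give a concrete sense of the change, below is a minimal sketch of the MediaPipe hand-landmark loop we moved to; the camera index, confidence thresholds, and the classify_gesture step are illustrative assumptions rather than our exact configuration.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    cap = cv2.VideoCapture(0)  # assumed default camera

    with mp_hands.Hands(max_num_hands=2,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # 21 normalized (x, y, z) landmarks per detected hand,
                    # flattened into a feature vector for classification.
                    features = [c for lm in hand.landmark
                                for c in (lm.x, lm.y, lm.z)]
                    # classify_gesture(features)  # hypothetical classifier
    cap.release()

The federated learning step can likewise be sketched as FedAvg-style weight averaging, in which only model updates, never raw signing videos, leave a contributor's device. This is a generic illustration of the technique, not our production training code.

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """FedAvg: average per-layer weights across clients, weighting each
        client by its local dataset size. Raw videos stay on-device."""
        total = sum(client_sizes)
        num_layers = len(client_weights[0])
        return [sum(w[i] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
                for i in range(num_layers)]

    # Toy usage: two clients, one-layer "model".
    a = [np.array([1.0, 2.0])]
    b = [np.array([3.0, 4.0])]
    print(federated_average([a, b], [100, 300]))  # -> [array([2.5, 3.5])]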
Development Process:
Using TensorFlow Lite for edge-device compatibility, we built a prototype in Python with
Flask for backend processing. Frontend testing revealed color contrast issues for
low-vision users, prompting UI redesigns. Future steps include partnering with Google for
API integration and expanding to 10+ sign languages.
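For reference, a minimal sketch of how a Flask endpoint can serve a TensorFlow Lite classifier on landmark features follows; the model file name, label set, and request schema are placeholders for illustration, not the prototype's actual API.

    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical model file and labels, for illustration only.
    interpreter = tf.lite.Interpreter(model_path="gesture_classifier.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    LABELS = ["hello", "thank_you", "yes", "no"]  # placeholder label set

    @app.route("/classify", methods=["POST"])
    def classify():
        # Expects JSON of the form {"landmarks": [...]}: a flattened
        # landmark feature vector produced on the client.
        x = np.asarray(request.json["landmarks"], dtype=np.float32)
        interpreter.set_tensor(inp["index"], x.reshape(inp["shape"]))
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        return jsonify({"gesture": LABELS[int(scores.argmax())],
                        "confidence": float(scores.max())})

    if __name__ == "__main__":
        app.run()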