
Communication often breaks down in the simplest moments: at a hospital desk, during a job interview, in a classroom discussion. For millions of deaf and hard-of-hearing people, these moments can require planning, waiting, or outside help. Talksign wants to reduce that friction.
The Nigeria- and UK-based AI company has launched Talksign-1, a sign language recognition model that translates American Sign Language (ASL) into speech and text in under 100 milliseconds. The system also works in reverse, converting spoken or typed words into sign language video through a web browser.
With this release, Talksign steps into a growing global conversation around accessibility, inclusion, and how AI can support real-world communication.
The Company and the Founders
Edidiong Ekong and Kazi Mahathir Rahman founded Talksign in November 2025. The company’s idea stems from real-life experience: growing up in Nigeria, Ekong had close friends who were deaf and learned sign language at a young age. He noticed that everyday systems, from customer service desks to digital platforms, often exclude people who rely on sign language.
That experience shaped Talksign’s direction. The company now operates across Nigeria and the United Kingdom, focusing on building AI tools that improve accessibility in practical settings.

During development, Talksign worked with deaf educators, native ASL users, and accessibility advocates. This collaboration helped guide the vocabulary choices, refine the model’s outputs, and test how the tool performs in everyday conditions.
Talksign says its goal is to make direct communication easier between deaf and hearing individuals. At the same time, the company is clear that the tool is not a replacement for professional interpreters, especially in high-stakes medical, legal, or safety situations.
Inside Talksign-1: The Technology

Talksign-1 recognises 250 commonly used ASL signs. It was trained on the WLASL2000 dataset, one of the most comprehensive public datasets for ASL research. On isolated sign recognition, the model achieves 84.7% accuracy.
Here’s how it works.
Using a standard webcam, the system captures hand, body, and facial movements. Instead of sending raw video to the cloud, it extracts 3D landmark data directly in the user’s browser. Only processed keypoint data is transmitted for analysis. This approach strengthens privacy and reduces the risk of sensitive footage being stored externally.
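
Talksign has not published its implementation, but the pattern it describes, extracting landmarks in the browser and transmitting only keypoints, is well established in web-based vision apps. Below is a minimal sketch using Google’s open-source MediaPipe HandLandmarker; this is an assumption, not a confirmation of Talksign’s stack, it tracks hands only (the real system also reads body and face), and the model path and WebSocket endpoint are hypothetical.

```ts
import { FilesetResolver, HandLandmarker } from "@mediapipe/tasks-vision";

// Hypothetical backend endpoint; Talksign's actual API is not public.
const socket = new WebSocket("wss://example.invalid/keypoints");

async function startCapture(video: HTMLVideoElement): Promise<void> {
  // Load the WASM runtime and landmark model in the browser,
  // so raw webcam frames never leave the page.
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  const landmarker = await HandLandmarker.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "hand_landmarker.task" }, // hypothetical path
    runningMode: "VIDEO",
    numHands: 2,
  });

  const onFrame = (): void => {
    // Extract 3D landmarks (x, y, z per joint) from the current frame.
    const result = landmarker.detectForVideo(video, performance.now());
    // Flatten to plain numbers: keypoints, not pixels, go over the wire.
    const keypoints = result.landmarks.flatMap((hand) =>
      hand.flatMap((p) => [p.x, p.y, p.z])
    );
    if (keypoints.length > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(new Float32Array(keypoints));
    }
    requestAnimationFrame(onFrame);
  };
  requestAnimationFrame(onFrame);
}
```

Each frame yields 21 landmarks per detected hand, so a two-hand frame is at most 126 floats, a tiny payload compared with a raw video frame.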
The model buffers about one second of signing before making a prediction. That short window helps it balance speed with recognition accuracy. Once analysed, the translated speech or text appears in under 100 milliseconds.
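
The windowing logic has not been published either, but a sliding buffer of keypoint frames is the natural way to implement it. A sketch, assuming roughly 30 frames per second (so 30 frames approximate the one-second window) and a 50% overlap between windows, both of which are guesses:

```ts
// Frames per window; ~30 fps makes this roughly the one-second
// buffer Talksign describes (the frame rate is an assumption).
const WINDOW_SIZE = 30;

class SignBuffer {
  private frames: Float32Array[] = [];

  // `predict` stands in for the recogniser that maps a window
  // of keypoint frames to one of the 250 supported signs.
  constructor(private predict: (window: Float32Array[]) => void) {}

  push(keypoints: Float32Array): void {
    this.frames.push(keypoints);
    if (this.frames.length >= WINDOW_SIZE) {
      // Hand a full copy of the window to the recogniser, then keep
      // the second half so a sign spanning the boundary is not lost.
      this.predict(this.frames.slice());
      this.frames = this.frames.slice(WINDOW_SIZE / 2);
    }
  }
}

// Usage: feed each frame's keypoints; `predict` fires once per window.
const buffer = new SignBuffer((window) => {
  console.log(`predicting over ${window.length} frames`);
});
```

The half-window overlap is a design choice rather than anything Talksign has described: it trades a little extra compute for not missing signs that straddle two windows.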
On the reverse side, spoken or typed input passes through a speech-to-sign engine, which returns a sequence of sign language video clips. This enables two-way interaction within the same browser interface.
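
The internals of the speech-to-sign engine are likewise unpublished. The simplest version of the behaviour described is a word-to-clip lookup played back-to-back, sketched below with hypothetical clip URLs; a production engine would more likely translate whole sentences into ASL gloss first, since ASL grammar does not follow English word order.

```ts
// Hypothetical library of pre-rendered sign clips keyed by English word.
const clipLibrary: Record<string, string> = {
  hello: "/clips/hello.mp4",
  thank: "/clips/thank.mp4",
  you: "/clips/you.mp4",
};

// Map input text (typed, or transcribed from speech) to known clips,
// silently skipping words outside the vocabulary.
function textToClips(text: string): string[] {
  return text
    .toLowerCase()
    .split(/\s+/)
    .map((word) => clipLibrary[word])
    .filter((url): url is string => url !== undefined);
}

// Play the matched clips sequentially in a single <video> element,
// advancing on each clip's "ended" event.
function playSequence(video: HTMLVideoElement, urls: string[]): void {
  let i = 0;
  const playNext = (): void => {
    if (i >= urls.length) return;
    video.src = urls[i++];
    void video.play();
  };
  video.addEventListener("ended", playNext);
  playNext();
}

playSequence(document.querySelector("video")!, textToClips("hello thank you"));
```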
For now, the system handles isolated signs only. It does not yet support continuous sentence-level translation or fingerspelling. Talksign says future updates will expand vocabulary size, add sentence recognition, and support additional sign languages beyond ASL.
The platform runs on a scalable cloud architecture using containerised services. This setup allows the company to expand capacity as demand grows without rebuilding its infrastructure.
Why This Matters
According to the World Health Organization, more than 430 million people live with disabling hearing loss, and tens of millions worldwide use sign language as their primary means of communication. Yet many digital tools and public services still assume users can hear and speak.
Talksign sees applications across education, healthcare, workplaces, transport systems, emergency alerts, and broadcasting. In classrooms, it could support more inclusive lessons. In hospitals, it could assist basic interactions while patients wait for interpreters. Even in offices, it could make meetings more accessible.
Performance may vary with lighting, camera angle, and signing style, and the 250-sign vocabulary means many everyday words fall outside what this version can recognise. Still, the launch marks a meaningful step in accessibility-focused AI emerging from Africa and the UK.
At RefinedNG, we track the innovations shaping Africa’s tech future, from AI breakthroughs to startup launches and digital policy shifts. Follow RefinedNG for trusted tech news, sharp analysis, and stories that matter to founders, builders, and forward-thinking professionals.
