
Efhamni: A Deep Learning-Based Saudi Sign Language Recognition Application.

Authors :
Al Khuzayem, Lama
Shafi, Suha
Aljahdali, Safia
Alkhamesie, Rawan
Alzamzami, Ohoud
Source :
Sensors (14248220); May 2024, Vol. 24, Issue 10, p3112, 35p
Publication Year :
2024

Abstract

Deaf and hard-of-hearing people communicate mainly through sign language, a set of signs made with hand gestures combined with facial expressions to form meaningful, complete sentences. The lack of automatic tools that translate sign languages into written or spoken text has created a communication gap between deaf and hard-of-hearing people and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on non-Arabic sign languages, with few targeting Arabic Sign Language (ArSL) and even fewer targeting Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL into text and audio, and it includes unique features not available in other applications targeting ArSL. When evaluated on a comprehensive dataset, the proposed approach demonstrated its effectiveness, outperforming several state-of-the-art approaches and producing results comparable to others. Moreover, testing the prototype with several deaf and hard-of-hearing users, as well as hearing users, confirmed its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
14248220
Volume :
24
Issue :
10
Database :
Complementary Index
Journal :
Sensors (14248220)
Publication Type :
Academic Journal
Accession number :
177490280
Full Text :
https://doi.org/10.3390/s24103112