Sign Language Recognition Using Deep Learning and Computer Vision

R.S. Sabeenian, S. Sai Bharathwaj and M. Mohamed Aadhil

The inability to speak is a serious disability. Speech impairment affects an individual's ability to communicate through speech and hearing, so people affected by it rely on other modes of communication such as sign language. Although sign language has become more widespread in recent times, non-signers still face a challenge when communicating with signers. Recent advances in computer vision and deep learning have driven strong progress in motion and gesture recognition. The major focus of this work is a deep learning-based application that translates sign language to text, thereby aiding communication between signers and non-signers. We use a custom convolutional neural network (CNN) to recognize the sign in a video frame, with the MNIST dataset used for training.
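As a rough illustration of the building blocks a CNN of this kind stacks (convolution, ReLU activation, max-pooling), here is a minimal NumPy sketch of one feature-extraction stage. The kernel values, layer sizes, and the 28×28 single-channel input (an MNIST-style image size) are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling that halves each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
frame = rng.random((28, 28))            # stand-in for a 28x28 grayscale sign image
kernel = rng.standard_normal((3, 3))    # one learned filter (random here)

features = max_pool(relu(conv2d(frame, kernel)))
print(features.shape)  # (13, 13): 28 -> 26 after the 3x3 conv, -> 13 after 2x2 pooling
```

In a full network, several such stages are stacked and followed by fully connected layers that map the pooled features to class scores, one per sign.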

Volume 12 | 05-Special Issue

Pages: 964-968

DOI: 10.5373/JARDCS/V12SP5/20201842