American Sign Language Translator
Abstract
Sign language is vital for people with hearing or speech impairments, commonly referred to as deaf or mute. Most people see, listen, and react to their surroundings, but individuals who lack these abilities depend on sign language to interact with others. Communication with the general public remains a major barrier for them, however, since not everyone understands sign language. This paper proposes an application that recognizes the signs of ASL (American Sign Language) using Python, OpenCV, TensorFlow, and Keras. The images show the palm side of the hand and are loaded at runtime, and the method is designed for a single user at a time. Real-time images serving as training data are captured first and stored in a directory. Feature extraction then identifies which sign the user has articulated. Finally, a CNN (Convolutional Neural Network) model using a sequential classifier and the ReLU (Rectified Linear Unit) activation function is created and saved as a JSON file. Classification is performed by matching key points from the input image against the image stored for each letter in the saved JSON model. The system covers 41 ASL signs: the 26 English letters, the digits 0-9, and five simple words. The model produced 95% accurate results for input images captured at many angles and distances under favorable conditions.
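As a rough illustration of the pipeline described above, the sketch below builds a sequential Keras CNN with ReLU activations, saves the architecture as a JSON file, and classifies a runtime frame against it. The input size (64x64 grayscale), layer widths, and file names are assumptions chosen for illustration, not the authors' exact configuration.

```python
# A minimal sketch of the pipeline described in the abstract, assuming
# 64x64 grayscale inputs and 41 output classes. Layer sizes and file
# names are illustrative assumptions, not the authors' architecture.
import cv2
import numpy as np
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

NUM_CLASSES = 41  # 26 letters + 10 digits + 5 simple words

def build_model():
    # Sequential CNN with ReLU activations, as named in the abstract.
    model = Sequential([
        Input(shape=(64, 64, 1)),
        Conv2D(32, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def save_model(model):
    # The abstract stores the model as a JSON file; Keras keeps the
    # learned weights in a separate file alongside the architecture.
    with open("asl_model.json", "w") as f:
        f.write(model.to_json())
    model.save_weights("asl_model.weights.h5")

def predict_sign(frame):
    # Reload the saved model and classify one palm-side frame captured
    # at runtime (e.g. from cv2.VideoCapture).
    with open("asl_model.json") as f:
        model = model_from_json(f.read())
    model.load_weights("asl_model.weights.h5")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(resized.reshape(1, 64, 64, 1), verbose=0)
    return int(np.argmax(probs))  # index of the predicted sign
```

Training on the captured image directory (e.g. via `model.fit`) is omitted here; the sketch only shows the model definition, the JSON save step, and the runtime prediction path that the abstract describes.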