K-nearest correlated neighbor classification for Indian sign language gesture recognition using feature fusion
A sign language recognition system is an attempt to bring the speech- and hearing-impaired community closer to more regular and convenient forms of communication. Such a system must recognize the gestures of a sign language and convert them into a form easily understood by hearing people. The model proposed in this paper recognizes static images of the signed alphabets of the Indian Sign Language (ISL). Unlike the alphabets of other sign languages, such as American Sign Language and Chinese Sign Language, the ISL alphabet contains both single-handed and double-handed signs. Hence, to simplify recognition, the model first categorizes a sign as single-handed or double-handed. For both categories, two kinds of features, namely HOG and SIFT, are extracted from a set of training images and combined into a single matrix. The HOG and SIFT features of the input test image are then combined with the HOG and SIFT feature matrices of the training set. Correlation is computed over these matrices and fed to a K-Nearest Neighbor classifier to obtain the resultant classification of the test image.
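The classification step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes the fused HOG+SIFT descriptors have already been extracted and concatenated into one feature vector per image (e.g. via OpenCV or scikit-image), treats "correlation" as the Pearson correlation between the test vector and each training vector, and classifies by majority vote among the K most correlated neighbors. The function and variable names are hypothetical.

```python
import numpy as np

def fuse_features(hog_vec, sift_vec):
    # Feature fusion: concatenate the HOG and SIFT descriptors
    # of one image into a single feature vector.
    return np.concatenate([hog_vec, sift_vec])

def correlation_knn_classify(train_feats, train_labels, test_feat, k=3):
    """Classify test_feat by majority vote among the k training
    vectors with the highest Pearson correlation to it.

    train_feats  : (n_train, d) array of fused HOG+SIFT vectors
    train_labels : list of n_train class labels
    test_feat    : (d,) fused HOG+SIFT vector of the test image
    """
    t = test_feat - test_feat.mean()
    corrs = []
    for row in train_feats:
        r = row - row.mean()
        denom = np.linalg.norm(r) * np.linalg.norm(t)
        corrs.append(float(r @ t / denom) if denom else 0.0)
    # Indices of the k most correlated ("nearest") training samples
    nearest = np.argsort(-np.array(corrs))[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy usage with synthetic fused feature vectors (two classes)
train_feats = np.array([[1.0, 0.1, 0.0],
                        [0.9, 0.0, 0.1],
                        [0.0, 1.0, 0.1],
                        [0.1, 0.9, 0.0]])
train_labels = ["A", "A", "B", "B"]
test_feat = np.array([0.95, 0.05, 0.05])
predicted = correlation_knn_classify(train_feats, train_labels, test_feat, k=3)
```

In this sketch the test image is compared against every training sample, so the correlation step scales linearly with the size of the training set; the single-/double-handed pre-categorization mentioned in the abstract would halve this search space before classification.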