Prediction of Sign Language Using Cross Pooling Highway Classifier Network
Abstract
Sign language recognition (SLR) has the potential to significantly reduce the communication barrier between the hearing- and speech-impaired community and the rest of society. Word-level sign language recognition (WSLR) is the first and essential step in understanding and interpreting sign language. Identifying signs from videos, however, is a challenging task, because the meaning of a phrase depends on a combination of subtle body movements, hand positions, and other cues. Current pose-based WSLR approaches either fail to fully exploit the spatial information when representing the temporal information, or model the spatial and temporal relationships among poses in different frames separately rather than jointly. To address WSLR, we adopt a distinctive artificial intelligence (AI) methodology that collects posture data and performs recognition. First, the data is acquired and preprocessed using a Fibonacci error filter (FEF). The region of interest is then segmented using the Loop string moth optimisation technique. Finally, the cross-pooling highway classifier architecture thoroughly examines the relationships between features and accurately interprets the meaning of the signs to support the decision-making process. Results on a well-established word-level sign language recognition dataset clearly indicate the superiority of the proposed system over conventional systems. The proposed system achieves highly accurate predictions within a short timeframe, making it very efficient for decision making.
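The abstract does not define the cross-pooling highway classifier in detail, so the following is only a minimal illustrative sketch of the two ingredients its name suggests: a standard highway layer (a learned gate blending a transformed signal with the untouched input) and an assumed "cross pooling" step that combines max- and average-pooled views of per-frame features. All function names and the pooling interpretation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: a learned gate T(x) blends a transformed
    signal H(x) with the untouched input x (standard formulation)."""
    h = np.tanh(x @ W_h + b_h)      # candidate transform H(x)
    t = sigmoid(x @ W_t + b_t)      # transform gate T(x) in (0, 1)
    return t * h + (1.0 - t) * x    # carry the rest of x through unchanged

def cross_pool(features):
    """Assumed 'cross pooling': concatenate max- and average-pooled
    views of a (frames, dim) feature map into one descriptor."""
    return np.concatenate([features.max(axis=0), features.mean(axis=0)])

# Toy run: pass 5 random per-frame pose features through one highway
# layer, then cross-pool over the temporal axis.
rng = np.random.default_rng(0)
dim = 8
frames = np.stack([
    highway_layer(rng.standard_normal(dim),
                  rng.standard_normal((dim, dim)), np.zeros(dim),
                  rng.standard_normal((dim, dim)), np.zeros(dim))
    for _ in range(5)
])
pooled = cross_pool(frames)
print(pooled.shape)  # (16,) — max and average views concatenated
```

In a full classifier, the pooled descriptor would feed a final dense softmax layer over the word vocabulary; the gating in the highway layer lets spatial pose features pass through unmodified when the transform is unhelpful.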