Robust Devanagari Character Recognition Using Transfer Learning
Abstract
Many applications, such as digital archiving, automatic transcription, and improving accessibility for native speakers, depend on Devanagari Character Recognition (DCR). This study examined the effectiveness of transfer learning in DCR using two pre-trained deep learning models, VGGNet and AlexNet. By leveraging the rich feature representations acquired from extensive image datasets, transfer learning significantly improved recognition performance even with a small amount of labeled data. Using a large dataset of handwritten and printed Devanagari letters, we fine-tuned VGGNet and AlexNet to capture a variety of writing styles and variants. The results showed that both models were highly accurate at identifying letters, numerals, and conjunct characters in handwritten Devanagari text. Recognizing these characters from images required the detection, segmentation, and identification of Devanagari text; once detected, the text was converted into a machine-encoded digital representation. In this study, VGGNet and AlexNet served as feature extractors within a transfer learning framework, which reduced computing time and produced better results. We trained each model for 120 epochs. VGGNet achieved the higher accuracy, 96.50%, with an average epoch time of 121 seconds, while AlexNet reached 95.97% accuracy with an average epoch time of 114 seconds. These results demonstrate the effectiveness of combining VGGNet and AlexNet with transfer learning for reliable and efficient Devanagari character recognition, implying notable performance gains in real-world use.
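To illustrate the transfer learning setup described above, the following is a minimal sketch (not the authors' exact pipeline) of adapting an ImageNet-pretrained VGG16 from torchvision as a frozen feature extractor with a new classification head. The class count of 46 and the hyperparameters are assumptions for illustration; they should be adjusted to match the actual Devanagari dataset and training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 46  # assumption: e.g., 36 base characters + 10 digits

# Load ImageNet-pretrained VGG16 and freeze its convolutional backbone,
# so only the new classification head is trained (transfer learning).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for Devanagari classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Pass only the unfrozen parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of 224x224 RGB character images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

The same pattern applies to AlexNet by swapping in `models.alexnet` and replacing its final classifier layer; freezing the convolutional layers is what keeps the computing time low while reusing the pre-trained feature representations.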