Nepali Sign Language Characters Recognition: Dataset Development and Deep Learning Approaches
DOI:
https://doi.org/10.3126/kjse.v10i1.93865

Keywords:
CNN, Nepali Sign Language, Image Classification, MobileNetV2, ResNet50, Transfer Learning, Fine-Tuning

Abstract
Sign languages serve as vital communication systems for individuals with hearing or speech impairments, yet digital resources for underrepresented languages such as Nepali Sign Language (NSL) remain scarce. Although tens of thousands of people in Nepal rely on NSL, the lack of systematically curated datasets has slowed progress in computational assistive technologies. This study introduces a large-scale image-based dataset for NSL, consisting of 54,000 images spanning 36 gesture classes, with 1,500 samples per class. The dataset includes samples captured against both plain and varied backgrounds to enhance robustness and generalization. To establish baseline recognition performance, we fine-tuned MobileNetV2 and ResNet50 Convolutional Neural Network (CNN) models using transfer learning. Experimental results indicate that MobileNetV2 outperforms ResNet50, achieving a classification accuracy of 90.45% compared to 88.78%, suggesting that lightweight architectures generalize more effectively in low-resource settings. To the best of our knowledge, this dataset represents one of the first large-scale image-based resources for NSL, providing a foundation for advancing research on underexplored sign languages.