Comparative Analysis of Transformer Models mBART and mT5 on a Question Answering System for Nepali Text
DOI: https://doi.org/10.3126/batuk.v12i1.90049

Keywords: question answering, standard question answer dataset, multilingual transformers, mBART, mT5

Abstract
Despite significant advances in English question answering using transformer models such as the Text-To-Text Transfer Transformer (T5), Bidirectional Auto-Regressive Transformers (BART), and the Generative Pre-trained Transformer (GPT) trained on datasets like the Stanford Question Answering Dataset (SQuAD), research on Nepali question answering remains limited due to the scarcity of annotated data and fine-tuned models. This study presents a comparative analysis of two multilingual transformer models, mBART and mT5, for Nepali question answering using transfer learning. A translated Nepali SQuAD dataset was developed, and both models were fine-tuned on it, with data augmentation applied to address data scarcity. Evaluation using BLEU, ROUGE, BERTScore, Exact Match, and F1 Score shows that both models perform well, with mBART slightly outperforming mT5. This work provides a foundation for future research on Nepali question answering systems.
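The abstract describes fine-tuning mBART and mT5 on a translated, SQuAD-style Nepali dataset. The following is a minimal sketch of how such a text-to-text fine-tuning setup could look with the Hugging Face transformers and datasets libraries; it is not the authors' exact pipeline, and the dataset file name nepali_squad.json is a hypothetical placeholder for the translated dataset described above.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "google/mt5-small"  # for mBART, e.g. "facebook/mbart-large-50"
                                 # (mBART tokenizers additionally need language
                                 # codes, e.g. tokenizer.src_lang = "ne_NP")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# SQuAD-style records: {"question": ..., "context": ..., "answers": {"text": [...], ...}}
# "nepali_squad.json" is an assumed placeholder file name.
data = load_dataset("json", data_files="nepali_squad.json")["train"].train_test_split(test_size=0.1)

def preprocess(batch):
    # Cast extractive QA as text-to-text: "question: Q context: C" -> answer text
    inputs = [f"question: {q} context: {c}"
              for q, c in zip(batch["question"], batch["context"])]
    targets = [a["text"][0] for a in batch["answers"]]
    enc = tokenizer(inputs, max_length=512, truncation=True)
    enc["labels"] = tokenizer(text_target=targets, max_length=64, truncation=True)["input_ids"]
    return enc

tokenized = data.map(preprocess, batched=True, remove_columns=data["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="nepali-qa",
        learning_rate=3e-5,
        per_device_train_batch_size=8,
        num_train_epochs=3,
        predict_with_generate=True,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

The five reported metrics can all be computed with the evaluate library; the sketch below shows the scoring step on illustrative predictions and references, again as an assumed reconstruction rather than the paper's own evaluation code.

import evaluate

preds = ["काठमाडौं"]  # generated answers (illustrative)
refs = ["काठमाडौं"]   # gold answers (illustrative)

bleu = evaluate.load("sacrebleu").compute(predictions=preds,
                                          references=[[r] for r in refs])
rouge = evaluate.load("rouge").compute(predictions=preds, references=refs)
# lang="ne" falls back to a multilingual BERT encoder inside BERTScore
bert = evaluate.load("bertscore").compute(predictions=preds,
                                          references=refs, lang="ne")
# The "squad" metric yields Exact Match and (token-overlap) F1
squad = evaluate.load("squad").compute(
    predictions=[{"id": "1", "prediction_text": preds[0]}],
    references=[{"id": "1", "answers": {"text": [refs[0]], "answer_start": [0]}}],
)
print(bleu["score"], rouge["rougeL"], bert["f1"], squad)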
License
Copyright (c) 2026 Nesfield International College

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator.