Comparative Analysis of Transformer Model: mBART and mT5 on Question Answering System for Nepali Text

Authors

  • Raju Shrestha, Asian College of Higher Studies
  • Krisha Shrestha, Asian College of Higher Studies
  • Basant Karki, Asian College of Higher Studies

DOI:

https://doi.org/10.3126/batuk.v12i1.90049

Keywords:

question answering, Stanford Question Answering Dataset (SQuAD), multilingual transformers, mBART, mT5

Abstract

Despite significant advances in English question answering with transformer models such as the Text-To-Text Transfer Transformer (T5), Bidirectional and Auto-Regressive Transformers (BART), and the Generative Pre-trained Transformer (GPT), trained on datasets like the Stanford Question Answering Dataset (SQuAD), research on Nepali question answering remains limited owing to the scarcity of annotated data and fine-tuned models. This study presents a comparative analysis of two multilingual transformer models, mBART and mT5, for Nepali question answering using transfer learning. A Nepali version of the SQuAD dataset was created by translation, and both models were fine-tuned on it, with data augmentation applied to mitigate data scarcity. Evaluation using BLEU, ROUGE, BERTScore, Exact Match, and F1 score shows that both models perform well, with mBART slightly outperforming mT5. This work provides a foundation for future research on Nepali question answering systems.
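As a concrete illustration of the setup the abstract describes, the sketch below fine-tunes mT5 on a translated Nepali SQuAD-style dataset by casting question answering as a text-to-text task with Hugging Face Transformers. This is not the authors' code: the checkpoint, file names, field names ("question", "context", "answer"), and hyperparameters are assumptions chosen for illustration.

    # Minimal sketch, assuming a translated Nepali SQuAD-style dataset
    # stored as JSON with "question", "context", and "answer" fields.
    # Checkpoint, paths, and hyperparameters are illustrative only.
    from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                              DataCollatorForSeq2Seq,
                              Seq2SeqTrainer, Seq2SeqTrainingArguments)
    from datasets import load_dataset

    model_name = "google/mt5-small"  # for mBART: "facebook/mbart-large-50"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Hypothetical local files holding the translated Nepali SQuAD data.
    data = load_dataset("json",
                        data_files={"train": "nepali_squad_train.json",
                                    "validation": "nepali_squad_dev.json"})

    def preprocess(batch):
        # Cast QA as text-to-text: the model reads question + context
        # and generates the answer string as the target sequence.
        inputs = [f"question: {q} context: {c}"
                  for q, c in zip(batch["question"], batch["context"])]
        enc = tokenizer(inputs, max_length=512, truncation=True)
        labels = tokenizer(text_target=batch["answer"],
                           max_length=64, truncation=True)
        enc["labels"] = labels["input_ids"]
        return enc

    tokenized = data.map(preprocess, batched=True,
                         remove_columns=data["train"].column_names)

    # Pads inputs and labels together so variable-length batches work.
    collator = DataCollatorForSeq2Seq(tokenizer, model=model)

    args = Seq2SeqTrainingArguments(
        output_dir="mt5-nepali-qa",
        learning_rate=3e-4,
        per_device_train_batch_size=8,
        num_train_epochs=3,
        predict_with_generate=True,
    )
    trainer = Seq2SeqTrainer(model=model, args=args,
                             train_dataset=tokenized["train"],
                             eval_dataset=tokenized["validation"],
                             data_collator=collator,
                             tokenizer=tokenizer)
    trainer.train()

For the mBART variant, one would additionally set the tokenizer's source and target language codes (mBART-50 includes Nepali as "ne_NP"); the generated answers can then be scored against references with BLEU, ROUGE, BERTScore, Exact Match, and F1 as in the paper.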


Author Biographies

Raju Shrestha, Asian College of Higher Studies

Lecturer

Krisha Shrestha, Asian College of Higher Studies

Lecturer

Basant Karki, Asian College of Higher Studies

Lecturer

Published

2026-01-28

How to Cite

Shrestha, R., Shrestha, K., & Karki, B. (2026). Comparative Analysis of Transformer Model: mBART and mT5 on Question Answering System for Nepali Text. The Batuk, 12(1), 111–120. https://doi.org/10.3126/batuk.v12i1.90049

Issue

The Batuk, 12(1)

Section

Part II: Humanities and Social Sciences