Multilingual BERT-Based Question Difficulty Classification Model for Adaptive Learning Systems
Submission No.: 17
Oral Presentation
Abstract
The classification of question difficulty is a critical task in educational technology and adaptive learning systems, enabling personalized question delivery based on a learner's proficiency. Traditional methods that combine TF-IDF features with shallow machine learning models such as XGBoost, while effective, often fail to capture deep contextual and semantic nuances across multiple languages. Although Large Language Models (LLMs) demonstrate strong generalization abilities, their deployment is computationally expensive and less efficient for focused classification tasks. In this work, we propose a fine-tuned multilingual BERT-based model for question difficulty classification that understands linguistic context in English, Tamil, Hindi, and Sanskrit. Unlike general-purpose LLMs, the fine-tuned BERT model provides task-specific optimization with lower computational overhead and improved interpretability. The model leverages contextual embeddings to identify semantic complexity, linguistic variation, and syntactic depth, leading to more accurate and language-agnostic difficulty predictions. Experimental evaluation on a multilingual question dataset shows that our approach significantly improves accuracy and F1-score over traditional TF-IDF and LLM-based baselines, achieving both performance and efficiency in multilingual educational assessment.
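To make the described setup concrete, the sketch below shows one way to fine-tune a multilingual BERT encoder with a classification head, assuming the Hugging Face transformers library and the bert-base-multilingual-cased checkpoint; the three-level label set, hyperparameters, and the helper name predict_difficulty are illustrative assumptions, not details reported in this abstract.

# Minimal sketch of a multilingual BERT difficulty classifier, assuming
# the Hugging Face transformers library. Checkpoint, label set, and
# hyperparameters are illustrative, not the paper's reported configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["easy", "medium", "hard"]  # hypothetical difficulty levels

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)

# One fine-tuning step: cross-entropy loss over difficulty labels.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(
    ["What is 2 + 2?", "Prove that sqrt(2) is irrational."],  # toy examples
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
loss = model(**batch, labels=torch.tensor([0, 2])).loss
loss.backward()
optimizer.step()

# Inference: the shared multilingual encoder scores a question in any
# covered language without language-specific preprocessing.
model.eval()
def predict_difficulty(question: str) -> str:
    inputs = tokenizer(question, truncation=True, max_length=128,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

Because the encoder's contextual embeddings are shared across languages, the same fine-tuned head can score English, Tamil, Hindi, and Sanskrit questions; this single-model design is what gives the approach its lower computational overhead relative to an LLM baseline.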
Keywords
Question Difficulty Classification, Multilingual BERT, Natural Language Processing, Transfer Learning, Fine-tuning, Semantic Representation, Contextual Embeddings, Machine Learning, Adaptive Learning Systems.
Author
Shajeena J
Assistant Professor