Fine-Tuning BERT for Fake News Detection
Paper No.: 18
Access: Attendees only
Updated: 2025-12-03 21:58:23  Views: 8
Oral Presentation
Abstract
The proliferation of fake news across online platforms has emerged as a challenge to society and a threat to democracy. Fake news erodes confidence in reliable news sources and undermines social cohesion and trust in democratic institutions. It originates from many sources, spreads like wildfire, and makes it difficult to distinguish authentic news from fabricated content. While numerous studies have addressed fake news detection with machine learning algorithms, many conventional approaches are limited by their reliance on manual feature engineering or an incomplete grasp of linguistic context. This paper presents a more advanced approach that overcomes these limitations by fine-tuning a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model on a task-specific news dataset, which can significantly improve detection accuracy. An extensive study was carried out on the ISOT dataset from the University of Victoria, which consists of thousands of real and fake news articles. The model achieved an accuracy of 99.97%, precision of 100%, F1-score of 99.97%, and recall of 99.94%, validating its superiority over previously reported methods.
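Illustrative sketch: the fine-tuning workflow described above can be reproduced in outline with the Hugging Face transformers library. The sketch below fine-tunes a pre-trained bert-base-uncased checkpoint on a binary real/fake label and reports accuracy, precision, recall, and F1 on a held-out split. The CSV path, column names, maximum sequence length, batch size, learning rate, and number of epochs are illustrative assumptions, not the exact configuration used in this work.

    # Minimal sketch (not the paper's exact setup) of fine-tuning BERT for
    # binary fake news classification. File names, column names, and
    # hyperparameters below are assumptions for illustration.
    import pandas as pd
    import torch
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support
    from sklearn.model_selection import train_test_split
    from torch.utils.data import Dataset
    from transformers import (BertForSequenceClassification, BertTokenizerFast,
                              Trainer, TrainingArguments)


    class NewsDataset(Dataset):
        """Tokenized news articles with 0/1 labels (0 = real, 1 = fake)."""
        def __init__(self, texts, labels, tokenizer, max_len=256):
            self.enc = tokenizer(texts, truncation=True, padding="max_length",
                                 max_length=max_len)
            self.labels = labels

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item


    def compute_metrics(eval_pred):
        """Accuracy, precision, recall, and F1 on the validation split."""
        logits, labels = eval_pred
        preds = logits.argmax(axis=-1)
        precision, recall, f1, _ = precision_recall_fscore_support(
            labels, preds, average="binary")
        return {"accuracy": accuracy_score(labels, preds),
                "precision": precision, "recall": recall, "f1": f1}


    # Assumed layout: the ISOT real/fake article files merged beforehand into
    # one CSV with "text" and "label" columns.
    df = pd.read_csv("isot_news.csv")
    train_texts, val_texts, train_labels, val_labels = train_test_split(
        df["text"].tolist(), df["label"].tolist(), test_size=0.2, random_state=42)

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                          num_labels=2)

    args = TrainingArguments(output_dir="bert-fake-news",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)

    trainer = Trainer(model=model, args=args,
                      train_dataset=NewsDataset(train_texts, train_labels, tokenizer),
                      eval_dataset=NewsDataset(val_texts, val_labels, tokenizer),
                      compute_metrics=compute_metrics)
    trainer.train()
    print(trainer.evaluate())

Fine-tuning updates all BERT parameters together with a small classification head, which is what allows the model to capture task-specific linguistic context without manual feature engineering.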
Keywords
Fake news, BERT, Fine-Tuning, Pre-processing, Natural Language Processing, Machine Learning Algorithms.
Authors
Kshitij Kumar
Manav Rachna International Institute of Research and Studies
Aryan Agnihotri
Manav Rachna International Institute of Research and Studies
Kavita Arora
Manav Rachna International Institute of Research and Studies