A Predictive Model for Early Intervention Efficacy of Fake News based on Epistemic Vigilance

Authors

  • Yihan Zhao

DOI:

https://doi.org/10.6981/FEM.202604_7(4).0007

Keywords:

Fake News; Epistemic Vigilance; Truth-Default Theory; Early Intervention; Natural Language Processing; Design Science.

Abstract

Early automated intervention against fake news is critical for social media platform governance. This paper proposes a method for predicting intervention efficacy grounded in Epistemic Vigilance. First, using Truth-Default Theory (TDT) as a theoretical lens, an empirical analysis of the RumourEval 2019 dataset confirms that "Deny" first comments effectively break audiences' default inertia, significantly suppressing the subsequent support ratio (to 0.525). Second, we recast this mechanism as a probability-prediction task, using the ELECTRA architecture with Focal Loss to address extreme class imbalance. Cross-validation shows that the method raises recall on high-potential refutational texts to 0.898. By applying stance-evolution logic in reverse to evaluate active interventions, this study offers platforms a scientific reference for deploying high-recall early-blocking strategies.
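The Focal Loss mentioned in the abstract is the standard reweighting of cross-entropy from Lin et al. [31]. The sketch below is not the paper's code (the function name and default parameter values are illustrative); it only shows the mechanism: the (1 - p_t)^gamma factor shrinks the loss on easy, well-classified majority-class examples, so that rare hard cases such as refutational texts dominate training.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss (Lin et al. [31]) for a single prediction.

    p: predicted probability of the positive class (e.g. a high-potential
    refutational text); y: true label in {0, 1}. alpha and gamma are the
    commonly used defaults, not values taken from this paper.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha  # class-balance weight
    # (1 - p_t)**gamma -> near 0 for confident correct predictions,
    # so easy examples contribute almost nothing to the total loss.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.95, 1)  # confident, correct positive: tiny loss
hard = focal_loss(0.30, 1)  # misclassified positive: much larger loss
```

With gamma = 0 and alpha = 0.5 the expression reduces to (half) the ordinary cross-entropy, which is one way to sanity-check an implementation.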

Downloads

Download data is not yet available.

References

[1] Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.

[2] Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.

[3] Levine, T. R. (2014). Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection. Journal of Language and Social Psychology, 33(4), 378–392.

[4] Lee, E. J., & Jang, Y. J. (2010). What do others' reactions to news on Internet portal sites tell us? Effects of presentation format and readers' need for cognition on reality perception. Communication Research, 37(6), 825-846.

[5] Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619-638.

[6] Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348-384.

[7] Zubiaga, A., Liakata, M., Procter, R., Wong Sak Hoi, G., & Tolmie, P. (2016). Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3), e0150989.

[8] Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., & Procter, R. (2018). Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2), 1-36.

[9] Gorrell, G., Kochkina, E., Liakata, M., Aker, A., Zubiaga, A., Lukasik, M., & Bontcheva, K. (2019). SemEval-2019 Task 7: RumourEval 2019: Determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation (pp. 845-854).

[10] Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Wong Sak Hoi, G., & Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) (pp. 69-76).

[11] Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944-4957.

[12] Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B. J., Wong, K. F., & Cha, M. (2016). Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI).

[13] Wu, L., Rao, Y., Zhao, Y., Liang, H., & Nazir, A. (2019). Trace fake news in social media: A unified framework with text, comment and propagation. KDD.

[14] Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.

[15] Procter, R., Crump, J., Karstedt, S., Voss, A., & Cantijoch, M. (2013). Reading the riots: What were the police doing on Twitter? Policing and Society, 23(4), 413-436.

[16] Mendoza, M., Poblete, B., & Castillo, C. (2010). Twitter under crisis: Can we trust what we RT? In Proceedings of the First Workshop on Social Media Analytics (pp. 71-79).

[17] Qazvinian, V., Rosengren, E., Radev, D. R., & Mei, Q. (2011). Rumor has it: Identifying misinformation in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (pp. 1589-1599).

[18] Murungi, D. M., Purao, S., & Yates, D. (2018). Beyond facts: A new spin on fake news in the age of social media. In Proceedings of the 24th Americas Conference on Information Systems (AMCIS).

[19] Jin, Y., van der Meer, T. G., Lee, Y. I., & Lu, X. (2020). The effects of corrective communication and employee backup on the effectiveness of fighting crisis misinformation. Public Relations Review, 46(3), 101910.

[20] Paek, H. J., & Hove, T. (2019). Effective strategies for responding to rumors about risks: The case of radiation-contaminated food in South Korea. Public Relations Review, 45(3), 101762.

[21] Oh, O., Agrawal, M., & Rao, H. R. (2013). Community intelligence and social media services: A rumor theoretic analysis of tweets during social crises. MIS Quarterly, 37(2), 407-426.

[22] Liu, B. F., Fraustino, J. D., & Jin, Y. (2015). Social media use during disasters: How information form and source influence intended behavioral responses. Communication Research, 42(5), 626-646.

[23] Chua, A. Y., & Banerjee, S. (2017). To share or not to share: The role of epistemic belief in online health rumors. International Journal of Medical Informatics, 97, 108-115.

[24] Shao, C., Ciampaglia, G. L., Varol, O., Yang, K. C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787.

[26] Gao, Y., Zhang, M. M., & Lysyakov, M. (2025). Does Social Bot Help Socialize? Evidence from a Microblogging Platform. Information Systems Research. https://doi.org/10.1287/isre.2024.1089

[27] Gambino, A., Fox, J., & Ratan, R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71-85.

[28] Clark, K., Luong, M. T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations (ICLR 2020).

[29] Fajcik, M., Smrz, P., & Burget, L. (2019). BUT-FIT at SemEval-2019 Task 7: Determining the rumour stance with pre-trained deep bidirectional transformers. In Proceedings of the 13th International Workshop on Semantic Evaluation (pp. 1097-1104).

[30] Li, Q., Zhang, Q., & Si, L. (2019). Rumor detection by exploiting user credibility information, attention and multi-task learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL) (pp. 119-129).

[31] Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision (ICCV) (pp. 2980-2988).

Published

2026-04-16

Section

Articles

How to Cite

Zhao, Y. (2026). A Predictive Model for Early Intervention Efficacy of Fake News based on Epistemic Vigilance. Frontiers in Economics and Management, 7(4), 51-58. https://doi.org/10.6981/FEM.202604_7(4).0007