When Algorithms Become Sacred: A Cognitive-Psychological Model of Trust, Authority, and Meaning in AI Interaction

Chengwen Song
Department of Public Administration, Graduate School of Public Administration, Kookmin University, Seoul 02707, Republic of Korea
Yang Hu (Corresponding Author)
Institute of Malaysia and International Studies (IKMAS), Universiti Kebangsaan Malaysia, Bangi, Malaysia
Feifei Chen
School of Nursing and Health Management, Wuhan Donghu University, China
Sebastian Lenz
Panorama Scholarly Group, Berlin, Germany
Jiale Li
College of Music, Midwest University, Missouri, USA
Yanlin Feng
College of Music, Midwest University, Missouri, USA
Xinhuan Zeng
Department of Dance, College of Performing Arts and Sport, Hanyang University, Seoul, Republic of Korea
Mengzhou Wu
Party School of the CPC Ning Guo Municipal Committee (Ning Guo Administrative College), China
Journal of Social Cognition and Communication
Published: 2026-03-24

Abstract

This study develops a theory-building framework explaining why artificial intelligence (AI) is increasingly perceived not merely as a tool, but as an epistemic and moral authority in contemporary digital societies. Integrating social cognition, communication theory, and the cognitive science of religion, the paper introduces the concept of cognitive sacredness to capture a threshold condition in which AI is treated as trustworthy beyond verification and normatively binding.

The proposed model specifies a five-stage process linking uncertainty, anthropomorphism, epistemic elevation, and quasi-religious attribution to behavioral reliance and ritualized interaction. It further identifies boundary conditions at the individual, contextual, and technological levels. The framework therefore clarifies not only how sacralization emerges, but also when it is more or less likely to occur.

By theorizing AI as a cognitively sacred object, the article reconceptualizes human-AI interaction as a process of authority construction rather than mere technology use, offering a novel explanation for how epistemic authority emerges and stabilizes in these exchanges.

Keywords:

artificial intelligence; trust; authority; social cognition; sacredness; algorithmic governance
Journal Info

ISSN: 3054-6958
Publisher: Panorama Scholarly Group

How to Cite

Song, C., Hu, Y., Chen, F., Lenz, S., Li, J., Feng, Y., Zeng, X., & Wu, M. (2026). When Algorithms Become Sacred: A Cognitive-Psychological Model of Trust, Authority, and Meaning in AI Interaction. Journal of Social Cognition and Communication, 1(1), 22-31. https://doi.org/10.63802/jscc.V1.I1.291

References

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645

Atran, S., & Norenzayan, A. (2004). Religion's evolutionary landscape: Counterintuition, commitment, compassion, communion. Behavioral and Brain Sciences, 27(6), 713-730. https://doi.org/10.1017/S0140525X04000172

Barrett, J. L. (2004). Why would anyone believe in God? AltaMira Press.

Bloom, P. (2007). Religion is natural. Developmental Science, 10(1), 147-151. https://doi.org/10.1111/j.1467-7687.2007.00577.x

Boyer, P. (2001). Religion explained: The evolutionary origins of religious thought. Basic Books.

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44. https://doi.org/10.1080/1369118X.2016.1154086

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825. https://doi.org/10.1177/0022243719851788

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Dennett, D. C. (1987). The intentional stance. MIT Press.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. https://doi.org/10.1037/xge0000033

Durkheim, E. (1995). The elementary forms of religious life (K. E. Fields, Trans.). Free Press. (Original work published 1912)

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886. https://doi.org/10.1037/0033-295X.114.4.864

Fiske, S. T., & Taylor, S. E. (2013). Social cognition: From brains to culture (2nd ed.). Sage.

Geraci, R. M. (2008). Apocalyptic AI: Religion and the promise of artificial intelligence. Journal of the American Academy of Religion, 76(1), 138-166. https://doi.org/10.1093/jaarel/lfm101

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057

Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475

Guthrie, S. (1993). Faces in the clouds: A new theory of religion. Oxford University Press.

Hauswald, R. (2025). Artificial epistemic authorities. Philosophy & Technology, 38, Article 18. https://doi.org/10.1007/s13347-024-00813-3

Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57(2), 243-259. https://doi.org/10.2307/1416950

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. https://doi.org/10.1177/0018720814547570

Hu, Y., Jusoh, S., Faliq Abd Razak, M., Bin Mohd Yusof, A. R., Xie, Q., Song, C., & Zong, X. (2026). Understanding the digital transformation of SMEs: A systematic review from a dynamic capabilities perspective across business, technology, and societal domains. Global Review of Humanities, Arts, and Society, 2(1), 1-17. https://doi.org/10.63802/grhas.V2.I1.186

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2

Kay, A. C., Gaucher, D., Napier, J. L., Callan, M. J., & Laurin, K. (2008). God and the government: Testing a compensatory control mechanism for the support of external systems. Journal of Personality and Social Psychology, 95(1), 18-35. https://doi.org/10.1037/0022-3514.95.1.18

Kelemen, D., & Rosset, E. (2009). The human function compunction: Teleological explanation in adults. Cognition, 111(1), 138-143. https://doi.org/10.1016/j.cognition.2009.01.001

Kerasidou, A., & Kerasidou, C. X. (2025). Epistemic authority and medical AI: Epistemological differences and challenges in medical practice. Medicine, Health Care and Philosophy, 28, 1-12. https://doi.org/10.1007/s11019-025-10306-2

Lang, M., & Kundt, R. (2024). The cognitive science of religion and its philosophical implications. Routledge.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103. https://doi.org/10.1016/j.obhdp.2018.12.005

Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. https://doi.org/10.1093/jcr/ucz013

Lustig, C., & Nardi, B. (2015). Algorithmic authority: The case of Bitcoin. In 2015 48th Hawaii International Conference on System Sciences (pp. 743-752). IEEE. https://doi.org/10.1109/HICSS.2015.95

Milella, F., & Cabitza, F. (2026). Perceiving AI as an epistemic authority or algority: A user study on the human attribution of authority to AI. Machine Learning and Knowledge Extraction, 8(2), Article 36. https://doi.org/10.3390/make8020036

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. https://doi.org/10.1111/0022-4537.00153

Norenzayan, A. (2013). Big gods: How religion transformed cooperation and conflict. Princeton University Press.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253. https://doi.org/10.1518/001872097778543886

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A. S., ... Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486. https://doi.org/10.1038/s41586-019-1138-y

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. CSLI Publications.

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104

Singler, B. (2020). Religion and artificial intelligence: An introduction. Religions, 11(3), 129. https://doi.org/10.3390/rel11030129

Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74-88. https://doi.org/10.1093/jcmc/zmz026

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131. https://doi.org/10.1126/science.185.4157.1124

Verhoef, P. C., Broekhuizen, T., Bart, Y., Bhattacharya, A., Dong, J. Q., Fabian, N., & Haenlein, M. (2021). Digital transformation: A multidisciplinary reflection and research agenda. Journal of Business Research, 122, 889-901. https://doi.org/10.1016/j.jbusres.2019.09.022

Vial, G. (2019). Understanding digital transformation: A review and a research agenda. Journal of Strategic Information Systems, 28(2), 118-144. https://doi.org/10.1016/j.jsis.2019.01.003

Waytz, A., Cacioppo, J., & Epley, N. (2010a). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232. https://doi.org/10.1177/1745691610369336

Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010b). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383-388. https://doi.org/10.1016/j.tics.2010.05.006

Weber, M. (1978). Economy and society: An outline of interpretive sociology (G. Roth & C. Wittich, Eds.). University of California Press.

Whitehouse, H. (2021). The ritual animal: Imitation and cohesion in the evolution of social complexity. Oxford University Press.

Willard, A. K., & Norenzayan, A. (2013). Cognitive biases explain religious belief, paranormal belief, and belief in life's purpose. Cognition, 129(2), 379-391. https://doi.org/10.1016/j.cognition.2013.07.016

Zagzebski, L. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford University Press.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.