Critical AI Studies

A Lacanian Interpretation of Artificial Intelligence Hallucination

Yuhong Wang (Corresponding Author)
North China Electric Power University
AI & Future Society
Published: 2025-10-22

Abstract

With the rapid advancement of artificial intelligence (AI) technology, the remarkable generative capabilities of AI systems have been accompanied by the persistent phenomenon of "AI hallucination," which troubles users and researchers alike. The phenomenon manifests as the generation of information detached from reality, undermining the credibility of AI and hindering its application in substantive professional work. Interestingly, AI hallucination exhibits a structural homology with the concept of hallucination developed by Jacques Lacan in psychoanalytic theory. Interpreting AI hallucination through a Lacanian psychoanalytic lens can broaden the scope of research on this issue and foster interdisciplinary study. This paper introduces Lacan's psychoanalytic framework to provide a novel perspective on understanding AI hallucination.

Keywords:

Large Language Models; AI Hallucination; Lacan; Psychoanalysis; AI Applications

Journal Info

ISSN: 3053-4011
Publisher: Panorama Scholarly Group

How to Cite

Wang, Y. (2025). A Lacanian Interpretation of Artificial Intelligence Hallucination. AI & Future Society, 1(1), 13-16. https://doi.org/10.63802/afs.v1.i1.93

