AI Hallucinations? What About Human Hallucination?! Addressing Human Imperfection Is Needed for an Ethical AI.

Authors

  • Ahmed Tlili, Beijing Normal University (BNU)
  • Daniel Burgos, Universidad Internacional de La Rioja; MIU City University Miami

DOI:

https://doi.org/10.9781/ijimai.2025.02.010

Keywords:

Artificial Hallucination, Ethics, Human Hallucination, Human-Machine Collaboration, Morals and Responsibility

Supporting Agencies

This work is supported by the research grants “Research on Strategies for Improving Students’ Ability to Solve Complex Problems through Human-Computer Collaboration Based on ChatGPT” (Grant ID: 1233100004) and “Mechanism and Teaching Intervention Research on the Impact of Generative Artificial Intelligence on College Students’ Creative Problem-Solving” (Grant ID: 24YTC880129).

Abstract

This study discusses how imperfect human nature, referred to here as human hallucination, can contribute to or amplify hallucination in technology generally and in Artificial Intelligence (AI) particularly. While the ongoing debate devotes most of its effort to improving AI for ethical use, the focus should also shift to cover us, humans, who are the designers, developers, and users of the technology. Identifying and understanding the link between human and AI hallucination will ultimately help develop effective and safe AI-powered systems that can have a positive societal impact in the long run.

Published

2025-03-01

How to Cite

Tlili, A. and Burgos, D. (2025). AI Hallucinations? What About Human Hallucination?! Addressing Human Imperfection Is Needed for an Ethical AI. International Journal of Interactive Multimedia and Artificial Intelligence, 9(2), 68–71. https://doi.org/10.9781/ijimai.2025.02.010
