Reflecting on the Ethics of AI “Hallucinations”
- Parmjit Singh
- Sep 4
- 2 min read
When I hear “hallucination,” I think of something deeply human, something that comes from our mind, our senses, even our vulnerabilities. But AI isn’t human. It doesn’t think, imagine, or dream. What we call hallucinations are really just wrong outputs: guesses gone astray, patterns strung together without grounding in reality.
And yet, by giving these mistakes a human label, I worry that we blur the line between people and machines. It might seem harmless, but I can see how it creates problems. People might start trusting AI more than they should, believing it has some kind of intuition. Or worse, we might treat these mistakes as natural quirks, rather than signs that the system or its governance needs improving.
For me, the ethical heart of the issue is accountability. If we think of AI as “hallucinating,” we almost let it off the hook, as if the machine owns its behaviour. But AI can’t carry responsibility. Only humans can: the developers, the organisations, the regulators. Shifting the language shifts the weight of responsibility, and that doesn’t sit right with me.
I also think about trust. In healthcare, in law, in finance, areas where people’s lives and futures are on the line, an AI’s so-called hallucination isn’t just a curious glitch. It could mislead someone into harm. If we soften the seriousness of that with language that makes it sound imaginative, we risk underestimating the damage it can do.
So where does that leave me? I think we need to be more precise. Instead of calling these errors hallucinations, maybe we should call them what they are: fabrications, unsupported outputs, mistakes. At the same time, systems should be clearer about how confident they are in their answers, and humans need to stay firmly in the loop when the stakes are high. And just as importantly, people need better education about what AI can and can’t do, so we don’t fall into the trap of expecting too much from it.
The more I reflect on it, the clearer it becomes: AI is not human, and we shouldn’t talk about it as though it were. By resisting that temptation, we can keep responsibility where it belongs, with us, and build a healthier, more honest relationship with these tools.