AI terminology annoyance
LLMs do not have sensory experience. They cannot hallucinate. I see no reason to believe they are doing anything intrinsically different when they produce incorrect information than when they produce correct information. They are just outputting tokens, which are only "true" or "false" when someone "interprets" them as a "statement".
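
To make the point concrete, here is a minimal toy sketch of next-token sampling (the vocabulary, `fake_logits`, and all names are made up, not any real model's API). The loop scores tokens and samples one; nothing in it branches on whether the emitted string turns out to be factually right or wrong.

```python
import numpy as np

# Toy next-token sampling loop. The procedure is identical whether the
# resulting text happens to be factually correct or not -- there is no
# "truth" variable anywhere in it.

rng = np.random.default_rng(0)
VOCAB = ["Paris", "London", "is", "the", "capital", "of", "France", "."]

def fake_logits(context):
    # Hypothetical stand-in for a trained model's forward pass:
    # it just returns a score per vocabulary token.
    return rng.normal(size=len(VOCAB))

def sample_next(context, temperature=1.0):
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())   # softmax over token scores
    probs /= probs.sum()
    return rng.choice(VOCAB, p=probs)       # sample a token, nothing more

tokens = ["The", "capital", "of", "France", "is"]
for _ in range(3):
    tokens.append(sample_next(tokens))
print(" ".join(tokens))
```

Whether that printout ends in "Paris" or "London", the model did exactly the same thing; calling one outcome a "hallucination" is a judgement made by the reader, not a different mode of operation.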