AI terminology annoyance 

LLMs do not have a sensory experience. They cannot hallucinate. I see no reason to believe they are doing anything intrinsically different when they give incorrect information than when they give correct information. They are just outputting tokens, which are only "true" or "false" when someone "interprets" them as a "statement".
