Every single time an LLM hallucinates, I am grateful: grateful that I spotted it, because it reminds me that any and all LLM output needs to be validated. You can never trust these things 100% unless you have additional validation in place that is itself 100% reliable.
A recent example: I pasted a very long text into Claude and asked it to identify duplicate parts that could be removed, showing exact quotes. It hallucinated parts, complete with quotes, that do not exist anywhere in the input!
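For this particular failure mode, the "100% reliable" check is cheap: every quote the model returns can be tested for verbatim presence in the input. A minimal sketch (function and variable names are my own, purely illustrative):

```python
def verify_quotes(input_text: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do NOT occur verbatim in the input text."""
    return [q for q in quotes if q not in input_text]

# Example with made-up data:
source = "The quick brown fox jumps over the lazy dog."
claimed_quotes = [
    "quick brown fox",          # genuine: present in the source
    "the diligent purple cat",  # hallucinated: nowhere in the source
]

missing = verify_quotes(source, claimed_quotes)
if missing:
    print("Hallucinated quotes:", missing)
```

Of course this only catches fabricated quotes, not missed duplicates or wrong groupings; those still need a human pass.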