Hallucination

Simon Budziak, CTO
In AI, a hallucination occurs when a Large Language Model (LLM) generates a response that is confident but factually incorrect or nonsensical. This happens because LLMs predict the next likely token based on statistical patterns in their training data, not by consulting a database of verified facts.
Techniques like RAG (Retrieval-Augmented Generation) are specifically designed to mitigate hallucinations by grounding the model's responses in retrieved, verifiable evidence.
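The sketch below illustrates the basic RAG pattern under simplifying assumptions: a toy in-memory corpus and naive keyword-overlap scoring stand in for a real vector database, and the grounded prompt would normally be sent to an LLM API rather than printed. The names (`CORPUS`, `retrieve`, `grounded_prompt`) are illustrative, not from any particular library.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a prompt that
# constrains the model to answer only from that retrieved evidence.
# The corpus and scoring here are toy stand-ins for a real vector search.

CORPUS = [
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
    "Retrieval-Augmented Generation grounds LLM answers in retrieved documents.",
    "LLMs generate text by predicting the next likely token from learned patterns.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by keyword overlap with the query (a stand-in for embedding search)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to stay within the evidence."""
    evidence = "\n".join(f"- {doc}" for doc in retrieve(question, CORPUS))
    return (
        "Answer using only the evidence below. "
        "If the evidence is insufficient, say you don't know.\n"
        f"Evidence:\n{evidence}\n\n"
        f"Question: {question}"
    )

# In a real pipeline this prompt would be passed to an LLM; here we just print it.
print(grounded_prompt("When was the Eiffel Tower completed?"))
```

Because the model is told to refuse when the evidence is insufficient, it has a verifiable path to "I don't know" instead of inventing an answer, which is the core of how RAG reduces hallucination.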