Grounding in AI refers to connecting model outputs to verifiable external reality, so that the claims a model makes are supported by specific, retrievable sources rather than by patterns absorbed during training. An ungrounded model reasons purely from statistical associations in its weights, which can produce confident hallucinations. A grounded model ties its outputs to citations, retrieved documents, or real-time data, so they can be independently verified. Google's AI Overviews and Bing Copilot implement grounding by citing web sources for their claims.

Retrieval-augmented generation (RAG) is one grounding technique: the model must base its answer on documents retrieved at query time. Tool use is another: when the model executes a calculation instead of estimating, the result is grounded in arithmetic.

Grounding is fundamental to deploying AI in high-stakes applications. A medical diagnosis AI must be grounded in clinical literature; a legal research tool must cite actual cases. Without grounding, AI systems are unreliable in any domain where factual accuracy matters. The remaining challenge is that even grounded models can misrepresent their sources or selectively cite supporting evidence while ignoring contradictory information.
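The retrieval-augmented pattern described above can be sketched in miniature. This is an illustrative toy, not a production retriever: the word-overlap scoring stands in for a real ranking method (such as BM25 or dense embeddings), and the corpus, document IDs, and function names are all hypothetical.

```python
def tokens(text):
    """Lowercase, punctuation-stripped word set (a crude tokenizer)."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query -- a stand-in
    for a real retriever like BM25 or embedding similarity."""
    scored = sorted(
        documents,
        key=lambda d: len(tokens(query) & tokens(d["text"])),
        reverse=True,
    )
    return scored[:k]

def grounded_answer(query, documents):
    """Answer only from retrieved text, attaching a citation.
    If nothing overlaps the query, refuse rather than guess."""
    hits = retrieve(query, documents)
    if not hits or not (tokens(query) & tokens(hits[0]["text"])):
        return "No supporting source found."
    top = hits[0]
    return f'{top["text"]} [source: {top["id"]}]'

# Hypothetical two-document corpus for demonstration.
corpus = [
    {"id": "doc1", "text": "The Eiffel Tower is 330 metres tall."},
    {"id": "doc2", "text": "Mount Everest is 8849 metres tall."},
]

print(grounded_answer("How tall is the Eiffel Tower?", corpus))
# → The Eiffel Tower is 330 metres tall. [source: doc1]
```

The key design choice is the citation in the returned string: because every answer names the document it came from, a reader (or an automated checker) can verify the claim against the source, which is exactly the property grounding provides.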