Hallucination
When AI models confidently generate incorrect, misleading, or fabricated information that sounds plausible but has no basis in their training data or reality. These aren't random errors: the model presents convincing falsehoods with the same assurance as accurate information. Think of a storyteller who fills gaps in their memory with plausible details that never actually happened. For example, an AI might cite a non-existent research paper complete with realistic-looking authors and publication details, or state wrong facts about historical events with full confidence. This is why it's crucial to verify AI-generated output, especially for important decisions or factual claims.
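Fabricated citations are one of the easier hallucinations to catch mechanically. The sketch below is a minimal illustration, assuming the cited paper includes a DOI and that Python's `requests` library is available: it looks the DOI up in the public Crossref registry. The DOI string and the helper name are hypothetical examples, not from any real case; a lookup miss doesn't prove fabrication (not all publishers register with Crossref), but a hit is cheap evidence the paper exists.

```python
# Minimal sketch: check an AI-cited DOI against the public Crossref API.
# The DOI value and helper name below are illustrative, not from the source.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves in the Crossref registry."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200  # Crossref returns 404 for unknown DOIs

# Example: a fabricated-looking citation fails the check.
if not doi_exists("10.1234/fabricated.2023.001"):
    print("Citation not found in Crossref -- treat it as unverified.")
```

The same pattern generalizes: whenever an AI's claim carries a checkable identifier (a DOI, a URL, a case number), verifying that identifier against an authoritative source is a fast first filter before trusting the surrounding claim.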