Prepare for the AWS Certified AI Practitioner Exam with flashcards and multiple choice questions. Each question includes hints and explanations to help you succeed on your test. Get ready for certification!



What issue is the LLM experiencing if it generates content that sounds plausible but is factually incorrect?

  1. Data leakage

  2. Hallucination

  3. Overfitting

  4. Underfitting

The correct answer is: Hallucination

The situation described is known as hallucination: the language model generates text that appears plausible and coherent but is not grounded in factual or valid information. The output may follow grammatical rules and sound reasonable, yet it is not supported by the data or knowledge the model was trained on. Hallucination can arise from several factors, including biases in the training data, gaps in the model's knowledge, or the model's tendency to continue statistical patterns without a true understanding of the underlying content.

The other options describe problems with the training process rather than with the output itself. Data leakage occurs when information from outside the training set (for example, from the test set) influences training; overfitting means the model memorizes the training data and generalizes poorly to new data; underfitting means the model fails to capture the underlying patterns at all. Hallucination, by contrast, concerns the integrity and factuality of the model's output, which is why recognizing it is crucial when evaluating the reliability of content generated by a language model.
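In practice, hallucination is mitigated rather than eliminated, most commonly by grounding the model in retrieved, authoritative context (retrieval-augmented generation) and by lowering the sampling temperature. The snippet below is a minimal sketch of that idea, not an official AWS example; it assumes access to the Amazon Bedrock Runtime Converse API via boto3, and the model ID and the hard-coded "retrieved" passage are placeholders for illustration.

```python
# Minimal sketch: grounding a prompt with retrieved context to reduce hallucination.
# The model ID and the "retrieved" passage are placeholders, not recommendations.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# In a real RAG setup this passage would come from a knowledge base or vector search.
retrieved_passage = (
    "Amazon S3 provides 99.999999999% (11 nines) durability for objects "
    "stored in S3 Standard."
)

question = "How durable is data stored in Amazon S3 Standard?"

# Instruct the model to answer only from the supplied context,
# and to admit when the context does not contain the answer.
grounded_prompt = (
    "Answer the question using ONLY the context below. "
    "If the context does not contain the answer, reply 'I don't know.'\n\n"
    f"Context:\n{retrieved_passage}\n\nQuestion: {question}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": grounded_prompt}]}],
    inferenceConfig={"temperature": 0.2, "maxTokens": 256},  # low temperature reduces creative drift
)

print(response["output"]["message"]["content"][0]["text"])
```

Grounding constrains the model to facts present in the context, and the explicit "I don't know" instruction gives it a safe fallback instead of inventing a plausible-sounding answer.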