Prepare for the AWS Certified AI Practitioner Exam with flashcards and multiple choice questions. Each question includes hints and explanations to help you succeed on your test. Get ready for certification!



A company is using a large language model (LLM) for sentiment analysis. What prompt engineering strategy should they use?

  1. Provide examples of text passages with corresponding positive or negative labels

  2. Explain sentiment analysis and how LLMs work in the prompt

  3. Provide the new text passage without context or examples

  4. Include unrelated tasks with the new text passage

The correct answer is: Provide examples of text passages with corresponding positive or negative labels

Providing examples of text passages with corresponding positive or negative labels, a technique known as few-shot prompting, is an effective prompt engineering strategy for sentiment analysis with a large language model (LLM). Clear, structured examples show the model how different phrases, tones, and contexts map to positive or negative sentiment. The approach leverages the in-context learning ability of LLMs: the model applies patterns demonstrated in the prompt to new, unlabeled text without any retraining.

Labeled examples also reduce ambiguity in interpreting sentiment. They give the model a framework for assessing new text passages, including the exact label format expected, which leads to more accurate and relevant outputs.

The other strategies guide the model less effectively for this specific task. Explaining sentiment analysis and how LLMs work in the prompt adds unnecessary information without demonstrating the expected output. Presenting a new text passage without context or examples leaves the model without the reference points needed for accurate analysis. Including unrelated tasks may confuse the model about its primary objective, diluting its focus on sentiment discernment.
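The few-shot structure described above can be sketched as a simple prompt builder. The example passages, labels, and prompt layout below are illustrative assumptions, not an AWS-specified format; the resulting string would be sent to an LLM, which completes the final `Sentiment:` line.

```python
# Sketch of a few-shot prompt for sentiment analysis.
# EXAMPLES and the layout are hypothetical; any labeled pairs
# in a consistent format serve the same purpose.
EXAMPLES = [
    ("The service was quick and the staff were friendly.", "positive"),
    ("My order arrived late and the packaging was damaged.", "negative"),
    ("I would happily recommend this product to a friend.", "positive"),
]

def build_sentiment_prompt(passage: str) -> str:
    """Assemble a prompt: instruction, labeled examples, then the new passage."""
    lines = ["Classify the sentiment of each passage as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Passage: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unlabeled passage goes last; the model completes the missing label.
    lines.append(f"Passage: {passage}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_sentiment_prompt("The battery died after one day of use."))
```

Ending the prompt mid-pattern, right after `Sentiment:`, is what nudges the model to continue with a label in the same format as the examples.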