Prepare for the AWS Certified AI Practitioner Exam with flashcards and multiple choice questions. Each question includes hints and explanations to help you succeed on your test. Get ready for certification!

What action should a company take to align the outputs of a pre-trained LLM with specific expectations for chatbot responses?

  1. Adjust the prompt

  2. Choose an LLM of a different size

  3. Increase the temperature

  4. Increase the Top K value

The correct answer is: Adjust the prompt

To align the outputs of a pre-trained large language model (LLM) with specific expectations for chatbot responses, adjusting the prompt is the most effective action. The prompt is the input that guides the LLM's generation: by refining it with more detail, examples, or explicit instructions, you can shape the style, tone, and content of the chatbot's answers. For instance, if a company wants its chatbot to respond in a friendly, casual manner, adding that directive to the prompt steers the outputs toward those expectations. This approach, often called prompt engineering, tunes the chatbot's behavior without retraining the model itself.

The other options change the output but do not directly enforce alignment with specific response expectations. Choosing an LLM of a different size changes model capacity, not its instructions. Temperature and Top K are inference parameters that control randomness and diversity in token selection; increasing them tends to make responses more varied, not more aligned with a desired style or content. Adjusting the prompt directly addresses content relevance and appropriateness in a way those other adjustments cannot.
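To make this concrete, here is a minimal sketch of prompt adjustment using the Amazon Bedrock Converse API via boto3. The model ID, region, and system prompt wording are illustrative assumptions, not part of the exam question; the point is that the system prompt carries the tone and content instructions, while temperature only shapes randomness.

```python
import boto3

# Minimal sketch, assuming valid AWS credentials and access to the referenced
# model in your account/region. The model ID below is illustrative only.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # hypothetical choice

# "Adjusting the prompt" happens here: the system prompt sets tone, style,
# and constraints for every chatbot response without retraining the model.
system_prompt = [{
    "text": (
        "You are a friendly, casual support assistant for Example Corp. "
        "Answer in two sentences or fewer and never discuss pricing."
    )
}]

messages = [
    {"role": "user", "content": [{"text": "How do I reset my password?"}]}
]

response = client.converse(
    modelId=MODEL_ID,
    system=system_prompt,
    messages=messages,
    # Temperature (and Top K / Top P) only control randomness and diversity;
    # they do not, by themselves, align content or tone with expectations.
    inferenceConfig={"temperature": 0.3, "maxTokens": 300},
)

print(response["output"]["message"]["content"][0]["text"])
```

Changing the system prompt text is all it takes to shift the chatbot from, say, casual support answers to formal policy summaries, which is why prompt adjustment is the answer rather than swapping model sizes or raising temperature or Top K.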