Prepare for the AWS Certified AI Practitioner Exam with flashcards and multiple choice questions. Each question includes hints and explanations to help you succeed on your test. Get ready for certification!



What type of bias is present when a security camera ML model excessively flags individuals from a specific ethnic group?

  1. Measurement bias

  2. Sampling bias

  3. Observer bias

  4. Confirmation bias

The correct answer is: Sampling bias

The scenario describes a security camera machine learning model that flags individuals from a specific ethnic group more frequently than others. This points to sampling bias, which occurs when the data used to train the model is not representative of the population the model will serve. If, for example, the training data contained a disproportionately high number of flagged images of individuals from that specific ethnic group, the model would learn to associate that group's characteristics with the likelihood of being flagged, leading to unfair treatment of individuals from that group.

Sampling bias can arise from several sources, such as imbalanced data collection or insufficient diversity in the dataset. Either can skew the model's performance and increase the rate of incorrect predictions for the under- or over-represented group. The real-world implications are serious, particularly in sensitive applications like security, where biased outcomes can perpetuate stereotypes or subject certain groups to disproportionate scrutiny.

Understanding sampling bias is crucial for building fair and effective machine learning models, especially where the societal stakes are high. It underscores the importance of using diverse, representative datasets during training so that models perform equitably across all groups.
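As one way to make the idea concrete, the short sketch below compares how often each group appears in a training set against its share of the broader population, which is a simple first check for this kind of sampling bias. It is a minimal illustration only: the `group` column, the example figures, and the 10% threshold are all hypothetical and not tied to any specific AWS service or dataset.

```python
import pandas as pd

# Hypothetical training data: each row represents one labeled image,
# with a "group" column recording a demographic attribute.
train = pd.DataFrame({
    "group": ["A"] * 700 + ["B"] * 300,
})

# Hypothetical reference shares for the population the model will serve.
population_share = {"A": 0.5, "B": 0.5}

# Share of each group actually present in the training data.
train_share = train["group"].value_counts(normalize=True)

# Flag groups whose representation deviates noticeably from the population.
for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    if abs(observed - expected) > 0.10:  # arbitrary illustrative threshold
        print(f"Possible sampling bias: group {group} is {observed:.0%} "
              f"of training data vs {expected:.0%} of the population")
```

In this made-up example, group A makes up 70% of the training images but only 50% of the population, so the check prints a warning for both groups; a real audit would also look at how labels (e.g., "flagged" vs. "not flagged") are distributed across groups, not just overall representation.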