Bias in AI 

To begin, welcome to Tammy’s Teaching Tidbit #100! I hope you have learned a great deal from reading these articles; I certainly have learned a lot while researching them!

Now onto the actual topic of this article:  bias in artificial intelligence. Bias is a major concern with AI tools. Remember, AI cannot think critically. It simply generates outputs based on information from users’ prompts and on what people have already created (and placed on the internet). As such, humans need to check AI responses for accuracy and inclusivity.

Different Types of Biases 

A wide variety of biases can exist in AI-generated data, including:

  1. Algorithm bias can occur when prompts are not specific enough or ask biased questions. The quality of AI outputs is directly shaped by the quality of the input from AI users.
  2. Cognitive bias/experimenter bias happens when the information that AI pulls from is itself biased. Remember, humans are the creators of all data AI uses, and people are imperfect.
  3. Exclusion bias/measurement bias/sample bias entails data sets that do not include the entire relevant population. For example, a data set on graduation factors should include both graduates and dropouts.
  4. Prejudice bias occurs when stereotypes affect AI results; these results are not necessarily harmful, but they can misrepresent the real world. For example, AI-generated images of nurses are far more likely to contain Asian women than Black women.
  5. Recall bias happens when data is not labeled consistently.
  6. Stereotyping bias involves information that reinforces harmful (often racial, ethnic, or gender-related) stereotypes.
  7. Confirmation bias occurs when only information that supports or confirms something the user already believes is selected.
  8. Bandwagon effect occurs when a short-lived trend within a community swings the data, which can lead to overrepresentation (Holdsworth, 2023; Barney & Schmelzer, 2023).
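To make exclusion/sample bias (item 3) concrete, here is a minimal sketch with made-up numbers: the same statistic comes out differently when it is computed only from graduates versus the whole relevant population. The records and values below are invented for illustration, not real data.

```python
# Minimal sketch of exclusion/sample bias with invented numbers:
# estimating "average weekly study hours" from graduates only
# versus the entire relevant population (graduates AND dropouts).
records = [
    {"status": "graduate", "study_hours": 12},
    {"status": "graduate", "study_hours": 10},
    {"status": "graduate", "study_hours": 11},
    {"status": "dropout", "study_hours": 4},
    {"status": "dropout", "study_hours": 5},
]

def average_hours(rows):
    """Mean study hours over the given rows."""
    return sum(r["study_hours"] for r in rows) / len(rows)

# A data set that excludes dropouts overstates the average.
graduates_only = [r for r in records if r["status"] == "graduate"]

print(average_hours(graduates_only))  # 11.0 (biased: dropouts excluded)
print(average_hours(records))         # 8.4 (whole relevant population)
```

Any model (or conclusion) trained on the graduates-only data inherits this skew, which is why the full relevant population belongs in the data set.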

Combatting Bias in AI:  The 3 Cs 

The 3 Cs (critical, criticality, and contribution) offer a way to evaluate and combat bias within AI-generated content.

Critical entails questioning and analyzing AI’s answer about a topic or summary. Students cross-check AI-provided information against class notes, textbooks, and other online resources. They can also ask AI clarifying questions about its answers.

Criticality evaluates missing perspectives and underlying assumptions. Learners make a list of potentially missing perspectives. What groups are not represented in the AI-generated content? What stereotypes are presented in it?

Contribution involves students coming up with arguments and questions for AI tools. Essentially, students debate AI’s answers to push future responses toward greater accuracy (McDowell, 2023).

Combatting Bias in AI:  Perspective Analysis 

Perspective analysis is another way to help combat bias. This method entails pupils examining various points of view on a given topic. Learners acknowledge that numerous factors (such as family dynamics, cultural traditions, and personal trials and triumphs) affect how different people view situations, and then dig into those factors. AI, coupled with perspective analysis, might look like learners prompting AI to identify one side of a situation and then changing the prompt to identify another perspective on that same situation (McDowell, 2023).

AI Evaluation Follows Guidelines Librarians Already Use!  

While generative AI can feel like it took the world by storm, pupils should be taught to evaluate AI-generated content in much the same ways they judge all other media content, by:

  1. Checking for accuracy, relevance, currency, and purpose – Library Media Content Standards 6.E.2, 7.E.2, 8.E.2, 9.E.2, 10.E.2, 11.E.2, and 12.E.2. Author authority can be difficult to check with AI since many platforms do not cite their sources or cite sources incorrectly, and
  2. Considering multiple perspectives and biases (while creating prompts) – Library Media Content Standards 6.E.1, 7.E.1, 8.E.1, 9.E.1, 10.E.1, 11.E.1, 12.E.1. Prompt-writing is a skill set that requires practice. Vera Cubero created the following guide for crafting a strong prompt:

[Image:  Vera Cubero’s guide to crafting a strong prompt]

Lesson Plans 

Common Sense Education:  Understanding AI Bias Lesson Plan
Anti-Defamation League:  What is Bias in AI?
Miami University:  Confronting Bias in Generated AI
Common Sense Education:  How AI Bias Impacts Our Lives
Science Buddies:  Happy or Sad? Exploring Bias in Machine Learning

Book Suggestions for Educators 

Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini
AI, Race, and Discrimination: Confronting Racial Bias in Artificial Intelligence by John Angus Scantling Jr 
Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez
When Machines Can Be Judge, Jury, and Executioner:  Justice in the Age of Artificial Intelligence by Katherine B. Forrest

References 

Barney, N., & Schmelzer, R. (2023, July 28). 6 ways to reduce different types of bias in machine learning. TechTarget. https://www.techtarget.com/searchenterpriseai/feature/6-ways-to-reduce-different-types-of-bias-in-machine-learning.

Holdsworth, J. (2023, December 22). What is AI bias?. IBM. https://www.ibm.com/topics/ai-bias#:~:text=Bias%20reduces%20AI’s%20accuracy%2C%20and,community%2C%20or%20other%20marginalized%20groups.

McDowell, M. (2023, December 20). Helping students check for bias in AI outputs. Edutopia. https://www.edutopia.org/article/checking-bias-ai-outputs/.
