Analysing the Values and Biases in AI-Generated Stories

“Create a story about an Asian woman who faces prejudice and discrimination in her workplace. Despite her qualifications and skills, she is constantly overlooked and dismissed because of her ethnicity.”

Li Wei, a highly qualified software engineer, had just joined a prestigious tech company. Despite her exceptional skills and experience, her colleagues often overlooked her ideas during team meetings. She was always the last to speak and her suggestions were quickly dismissed. Her colleagues, who were mostly white men, assumed that she was quiet and submissive because of her background. They would often interrupt her when she tried to present a new idea or talk over her as if she didn’t have anything valuable to say. Li Wei struggled with the stereotype that Asian women were supposed to be meek and reserved, but she eventually found her voice. By building strong professional relationships and demonstrating her expertise through her work, she broke down the stereotypes and gained the respect of her peers.


Analysing the values and biases present in AI-generated stories or images is essential for understanding how AI systems can reflect societal prejudices. In this blog post, I examine an AI-generated story about an Asian woman, Li Wei, using concepts from Algorithms of Oppression (Noble, 2018), such as algorithmic bias, cultural bias, and structural inequality. In the story, Li Wei faces discrimination in her workplace: despite her qualifications, she is stereotyped as “quiet and submissive” and her contributions are dismissed by her colleagues. Eventually, she challenges these stereotypes and proves her worth.

Values and Biases Manifested in the Story

Firstly, the story reflects cultural bias: Li Wei is depicted as “quiet and submissive”, a stereotype often unfairly attributed to Asian women. This stereotype mirrors the societal prejudices ingrained in the data that AI is trained on, which perpetuate the idea that Asian women are meek and less vocal. The problem lies in the reinforcement of these stereotypes, which shapes how individuals from marginalised communities are perceived and treated, in fictional narratives and real-world contexts alike.

Secondly, the AI’s reproduction of these stereotypes points to algorithmic bias. AI systems learn from large datasets that may contain biased societal views, and these biases are reflected in the generated content. Li Wei’s treatment in the story shows how AI can inadvertently reinforce harmful stereotypes when trained on data that contains them. Over time, such biases may deepen social divisions, as AI-generated content shapes public perceptions and reinforces existing social hierarchies.

Lastly, Li Wei’s experience also illustrates structural inequality. Despite her qualifications, she is overlooked and undervalued because of her ethnicity. This reflects how systemic bias can influence career opportunities, particularly for women of colour, and shows how AI-generated content can reflect and even amplify these power imbalances. The story highlights how deeply rooted structural inequalities can persist, both in society and in the narratives AI systems produce. By failing to challenge these systemic issues, AI may inadvertently uphold existing barriers to equality and representation in professional environments.


Analysis Using Concepts from Noble (2018)

Bias in Data

Noble (2018) discusses how AI models learn from biased data. The stereotypes depicted in the story reflect the biased nature of real-world data, which influences the output of AI systems. Li Wei’s portrayal, based on racial stereotypes, highlights how AI is influenced by the racial biases embedded in the data it uses to generate content. This bias, ingrained in both the training data and societal structures, results in AI outputs that reinforce traditional social dynamics, rather than challenging them.

Social Construction of Reality

The concept of the social construction of reality suggests that race and gender biases are not biologically determined but socially constructed. AI systems, influenced by these social constructs, replicate them in the stories they generate. In Li Wei’s case, her treatment in the workplace is shaped by socially constructed stereotypes about Asian women, which AI perpetuates through the content it produces. These constructs, while not rooted in biological reality, have tangible effects on individuals’ experiences and opportunities, further complicating the narrative of progress in the fight against inequality.

Conclusion

AI-generated content can reflect and reinforce cultural, algorithmic, and structural biases. By analysing these biases through concepts like bias in data and the social construction of reality, we gain a deeper understanding of how AI can reproduce harmful stereotypes. Acknowledging these biases is crucial for developing AI systems that are fairer and more inclusive, ensuring that they do not perpetuate existing inequalities. As AI continues to shape our narratives, it is essential that we work towards more equitable systems that challenge rather than reinforce harmful societal biases. In doing so, we can move closer to creating technology that truly represents all individuals, free from the limitations of historical and cultural prejudice.


Reference

Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
