Penn State and University of Washington researchers found that AI models may cause cultural harms that go beyond surface-level biases
UNIVERSITY PARK, Pa. — Generative artificial intelligence (AI) models often contain cultural misrepresentations as a result of the way they were trained. Researchers from Penn State and the University of Washington set out to understand how these biases translate into tangible harms for minority communities.
Pranav Narayanan Venkit, a graduate student pursuing a doctorate in informatics in the College of Information Sciences and Technology (IST), discussed how AI systems are developed without enough cultural sensitivity and the need for more inclusive, community-informed approaches in AI development.
Q: What motivated your research?
Venkit: My previous studies on how generative AI models represent various nationalities and demographics revealed a troubling trend: non-Western, or non-Global North, cultures were often misrepresented and stereotyped, largely because the models are aligned with and trained on more Western-centric datasets. This research builds on that work, shifting to a more community-centered perspective that examines how a non-Western culture, in this case Indian culture, is depicted by text-to-image generators. By engaging participants from diverse Indian subcultures, we wanted to uncover deeper insights into these misrepresentations and their real-world consequences.
Q: How does this differ from your previous work?
Venkit: While many studies acknowledge the biases inherent in AI models, few have examined how those biases translate into tangible harms for minority communities. Instead of relying solely on quantitative measures of model performance, we took a qualitative approach, involving the affected communities to understand how these biases manifest and impact them. By focusing on Indian culture through a community-centric lens, we were able to uncover nuanced forms of misrepresentation that go beyond surface-level biases. This enabled us to capture subcultural insights and to develop guidelines for creating better and safer models in the future.
Q: What did you find?
Venkit: Our findings confirm that generative AI models can indeed be problematic when it comes to representing non-Western cultures. What stands out, however, is how these models tend to depict cultures through an outsider's lens rather than accurately reflecting the communities themselves: for example, depicting traditional celebrations as more colorful than they are, or portraying one subculture far more prominently than another. The generated data, whether text or images, might seem accurate or unproblematic at first glance. Under further scrutiny, however, we identified issues that led to conversations about how these biases can turn into the various forms of harm we documented. This reinforces the idea that cultural understanding is highly complex, involving layers of subcultures that AI models often fail to portray or understand.
Q: What are some of those harms?
Venkit: Generative AI models, particularly text-to-image generators, are not just biased — they can cause real harm by misrepresenting non-Western cultures. We found that these models often reinforce exoticized and stereotypical portrayals of Indian culture, which can contribute to cultural misappropriation and marginalization.
AI is increasingly shaping how cultures and identities are portrayed online and in media. We use generative AI models in our daily lives for everything from brainstorming ideas to creating content for social spaces. If these portrayals are inaccurate or harmful, they could deepen existing biases and misunderstandings about diverse cultures. Our research underscores the importance of making AI systems that are equitable and culturally aware, ensuring that technology doesn't exacerbate societal divisions but instead promotes a more respectful and accurate representation of all communities.
For people from non-Western cultures, especially marginalized communities, the way they are depicted by AI can influence how they are perceived globally, which can affect everything from personal identity to economic opportunities. By promoting more culturally sensitive AI, this work could lead to better representation, reducing the harmful effects of misrepresentation and ultimately fostering a more respectful global digital landscape.
Q: What are your proposed solutions?
Venkit: Our proposed solutions center on creating design guidelines for generative AI systems that prioritize cultural sensitivity and inclusivity. What makes this approach unique is its emphasis on a community-centered framework. Instead of relying solely on technical fixes, we integrate feedback from the communities most impacted by these misrepresentations, in this case various Indian subcultures. By combining sociotechnical insights with lived experiences, we propose a more complete solution that goes beyond adjusting datasets or algorithms. This approach addresses not only the technical aspects of AI but also the broader societal context in which these technologies operate, aiming to ensure that generative AI systems are more culturally aware and equitable. The design guidelines are framed so that users, developers and researchers each have a part to play in creating safer systems in the future.
This work is the result of equal contributions from Venkit; Sanjana Gautam, a recent doctoral graduate from Penn State; and Sourojit Ghosh, a doctoral candidate at the University of Washington. Their research was guided by the mentorship of Aylin Caliskan, assistant professor at the University of Washington, and Shomir Wilson, associate professor in the College of IST at Penn State.
The researchers will present their work at the Conference on AI, Ethics and Society, which will take place on Oct. 21-23 in San Jose, California.
This work was supported by the U.S. National Institute of Standards and Technology (NIST).