In today’s digital age, visual communication and branding strategies wield significant influence over consumer perception and engagement. At the same time, the rise of AI-generated content brings new complexities and concerns. Marnell Kirsten, visual studies lecturer at Red & Yellow Creative School of Business and widely published researcher, has made a profound observation about how AI technologies can perpetuate stereotypes, particularly those associated with visual representations of Africa.

WHERE AI GETS IT WRONG

Drawing on the perspectives of philosophers Michel Foucault and Jacques Derrida, Kirsten (2023) explains, ‘Even when power structures are no longer in official or “visible” power, we struggle to really get rid of the knowledge and discourses that propped up those structures’. Although colonialism is no longer an official practice, its remnants persist, shaping the 21st-century imagery of Africa. AI, learning from us, tends to adopt the same biases and stereotypes, often inadvertently promoting dated and harmful discourses.

Contemporary instances of this issue are apparent in facial recognition software’s tendency to misidentify Black and Asian faces (Thomson Reuters, 2019) and AI applications like Lensa, which controversially lightens Black skin, presumably to ‘beautify’ the user’s face (Enking, 2022). Both examples signal the enduring traces of colonial discourse and beauty standards prevalent in 21st-century Western visual culture.

THE VERY REAL THREAT OF STEREOTYPING IN AI 

The media often mediates our understanding of ‘the real’. Kirsten (2023) explains, ‘Representations, including images, are mediations – they go between “the real” and our understanding of whatever that “real” is’. The stereotypical and often misleading representations of Africa – a continent described with ‘nouns and adjectives like hut, dark, tribe, King Kong, tribalism, primitive, nomad, animism, jungle, cannibal, savage, underdeveloped, third world, developing’ (Chavis, 1998) – skew global perception (Adichie, 2009).

In discussing spaces such as Ratanga Junction and Shakaland, Kirsten reveals how these places amplify existing stereotypes about Africa (Kirsten, 2020). This is not limited to physical spaces: virtual spaces can mirror the same harmful stereotypes, and they too need to be transformed into platforms that foster authentic cultural understanding and inclusivity.

HOW MARKETING AND ADVERTISING ARE AFFECTED

Addressing the role of marketers and advertisers, Kirsten stresses the need for an entire agency or company to align its operations with an inherent sensitivity to discursive dynamics, thus actively avoiding the perpetuation of stereotypes or cultural appropriation. Yet such transformations cannot be superficial or tokenistic. Kirsten (2023) warns, ‘Too often these attempts or any diversity, equity and inclusion (DEI) projects are tick boxes for businesses using these efforts as a way to merely market themselves’. Genuine engagement and inclusivity necessitate involving the voices and contributions of the represented communities – ‘Nothing about us without us’.

Navigating AI’s contribution to visual representations of Africa entails acknowledging and actively combating its potential to perpetuate stereotypes. As AI-generated content often bears the biases of its creators, ethical guidelines must govern its usage. Kirsten (2023) advises, ‘It is important to very carefully and critically review any content, whether written or visual, that AI platforms generate before using it in any business communication’. 

Examining the motives behind leveraging AI is also important. Corporations must avoid utilising AI-generated content as a cost-cutting measure under the guise of promoting diversity. Kirsten points to Levi’s recent announcement of testing AI-generated clothing models to ‘increase diversity’ as a potentially unethical and irresponsible move (Weatherbed, 2023).

A MORAL CONUNDRUM IN AI

Ethical considerations extend beyond content creation itself to the practices of the AI platforms being used, and recent reports have cast a spotlight on some concerning issues. For instance, OpenAI faced criticism for allegedly paying Kenyan workers less than $2 per hour to review highly disturbing content in an attempt to make ChatGPT ‘less toxic’ (Perrigo, 2023). This highlights the moral conundrums that businesses must navigate when leveraging AI tools: even a company that intends to use such platforms critically, ethically, and responsibly must acknowledge their inherent complexities and troubling histories.

The ethical footprint of a business’s communication strategy is not just about the visible output – it extends to the underlying infrastructure and processes involved in its creation. Not only is it vital to ensure responsible and fair representation in the content generated; it is equally important that the methods and processes used to create that content uphold the same ethical standards. Kirsten’s insights provide a timely reminder for businesses and professionals navigating this challenging landscape.

When addressing the complex relationship between space, representation, and power, visual communication professionals carry a responsibility to contribute to the decolonisation of spaces and narratives. This involves not merely navigating the tricky landscape of discourse and imagery but also taking active measures to dismantle existing stereotypes and reshape narratives. Marnell Kirsten’s insight is a clarion call for professionals.

Marnell Kirsten is a researcher and lecturer with a background in visual studies and science communication, holding a master’s degree in each of these fields. Her research interests and skills extend to media studies, politics of representation, the semiotic analysis of cultural products, and understanding cultural trends in communication. Her most recent work focuses on data science and data visualisation from a data feminist perspective, and the traces of visual colonial discourse in AI-generated content. She will be starting a PhD position in Sweden, focusing on the circular economy, later this year.

REFERENCES

Adichie, C.N. (2009). ‘Chimamanda Ngozi Adichie: The danger of a single story | TED’. YouTube [website] <https://www.youtube.com/watch?v=D9Ihs241zeg&t> Accessed 7 July 2023.

Chavis, R. (1998). ‘Africa in the Western Media’. Paper presented at the Sixth Annual African Studies Consortium Workshop, University of Pennsylvania – African Studies Center, 2 October. <https://www.africa.upenn.edu/Workshop/chavis98.html#:~:text=Images%20of%20Africa%20in%20the,an%20abyss%20and%20negative%20void.> Accessed 7 July 2023.

Enking, M. (2022). ‘Is Popular A.I. Photo App Lensa Stealing From Artists?’. Smithsonian Magazine [website] <https://www.smithsonianmag.com/smart-news/is-popular-photo-app-lensas-ai-stealing-from-artists-180981281/> Accessed 7 July 2023. 

Kirsten, M. (2020). ‘The March continues: A critique of the Long March to Freedom statue collection exhibited in Century City’, Image & Text [Preprint], (34). doi:10.17159/2617-3255/2020/n34a8. 

Kirsten, M. (2023). Interview. Visual studies lecturer, Red & Yellow Creative School of Business, 3 July, South Africa.

Perrigo, B. (2023). ‘Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic’. TIME [website] <https://time.com/6247678/openai-chatgpt-kenya-workers/> Accessed 7 July 2023. 

Thomson Reuters. (2019). ‘Black and Asian faces misidentified more often by facial recognition software’. CBC [website] <https://www.cbc.ca/news/science/facial-recognition-race-1.5403899> Accessed 7 July 2023. 

Weatherbed, J. (2023). ‘Levi’s will test AI-generated clothing models to “increase diversity”’. The Verge [website] <https://www.theverge.com/2023/3/27/23658385/levis-ai-generated-clothing-model-diversity-denim> Accessed 7 July 2023.
