Is This What AI Thinks Africa Looks Like?

By Rachel Marty

While AI has rapidly become a useful everyday tool for many, it is often criticized for unreliability, poor citation practices, and inherent bias. Recently, the organization where I intern, the Alliance for Reproductive Health Rights (ARHR), asked me to explore how it could use AI to improve its workflows. The team specifically wanted to expand and expedite grant research and proposal writing in order to secure more funding.

As a student, I have been taught to embrace emerging technology; that is part of adapting to an ever-changing world, and AI, which has quickly woven itself into nearly every aspect of modern life, is a large part of it. I have learned that AI is most helpful as a thought partner, not a replacement for my own thinking or work. Drawing on my knowledge, experience, and research, I compiled a list of AI platforms, tested them, and gave a lengthy presentation showcasing their potential for ARHR. The tools ranged from text generation and editing to image and video creation.

As the team began testing the generative AI tools, something became immediately clear: unless we explicitly prompted with terms like "African" or "Ghanaian," the image outputs defaulted to white people. Even more concerning, when we requested images of "Ghanaian children," the children often appeared barefoot, perpetuating negative stereotypes.

My coworkers briefly acknowledged the bias, laughed, and moved on; there was no outrage. Surprised by the mild reaction, I sat back and watched as they experimented with the various tools, attentive yet relatively unbothered by the problematic bias built into the AI.

I couldn't help but ask: Shouldn't we be more disturbed by this? What does it mean for AI to reproduce assumptions about race and poverty? And what happens when we shrug it off rather than become outraged?

As these questions floated around in my head, I remembered a conversation from a couple of days earlier. A man I'd met on the street shared a thought that stuck with me: expressing anger might cost you your peace without doing much to change the situation. At first that felt like an admission of powerlessness, but it wasn't a surrender.

While we should value our peace, if we want AI to serve people equitably, these moments cannot simply be dismissed. Bias in AI is not a mere technical flaw; it is a reflection of the data we feed it, and of the world that data comes from. To change that, we need to question how these tools work, who they're built for, who builds them, and what assumptions they pass along.

Popular generative AI tools like DALL·E and Midjourney have displayed clear bias in the past: depicting white men in high-status jobs, oversexualizing women, and misrepresenting people of color. These failures reflect much deeper problems in the data the models are trained on.

AI has tremendous potential to transform how we work, but it also requires that we stay vigilant and approach it with discernment.
