LIS
August 21, 2023
8 MINS READ

How AI Image Generators Make Bias Worse

Dr. Ash Brockwell
Jon Cheung

Buzzfeed recently published a now-deleted article on what AI thinks Barbies would look like from different countries around the world. The results contained extreme forms of representational bias, including colourist and racist depictions. With AI image generators currently gaining huge popularity, it’s important that we are vigilant about the forms of bias that these technologies can fuel.

What does a typical prisoner look like? What about a lawyer? A nurse? A drug dealer? What about a fast food worker? A cleaner, a terrorist or a CEO?

These are all images created by an artificial intelligence image generator called MidJourney, which creates unique images based on simple word prompts.

A.I. image generators have exploded in popularity thanks to their impressive results and ease of use. Within a few years, as much as 90% of the images on the Internet could be artificially generated.

A.I. bias is more extreme than real-world bias


Given that generative A.I. will have a huge impact in shaping our visual landscape, it’s important to understand that the reality it presents can often be distorted: harmful biases relating to gender, race, age and skin color can be more exaggerated and more extreme than in the real world.

Leonardo Nicoletti and Dina Bass at Bloomberg Technology generated and analyzed more than 5,000 images created by Stable Diffusion. They prompted it to generate portraits of workers in different professions and sorted the images according to skin tone and perceived gender.

They found that higher-paying professions like CEO, lawyer and politician were dominated by lighter skin tones, while subjects with darker skin tones were more commonly associated with lower-income jobs like dishwasher, janitor and fast food worker. A similar story emerged when categorizing by gender, with higher-income jobs such as doctor, CEO and engineer being predominantly represented by men, while professions like cashier, social worker and housekeeper were mostly represented by women.

These racial and gender biases were found to be stronger than the reality recorded by the U.S. Bureau of Labor Statistics. For instance, women make up 39% of doctors in the U.S., but appeared in only 7% of the images. These exaggerated biases that AI systems create are known as representational harms.
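As a rough sketch of the kind of comparison involved (a minimal illustration only: the labelling step and the helper function below are assumptions, not Bloomberg’s actual pipeline), one could tally the perceived-gender labels assigned to a batch of generated portraits and measure the gap against labour-force statistics:

```python
# Minimal sketch: compare gender representation in a batch of generated
# "doctor" portraits against labour-force statistics. Only the 39% / 7%
# doctor figures come from the article; everything else is illustrative.

from collections import Counter

def representation_gap(labels, real_share, group="woman"):
    """Share of `group` among generated images minus the real-world share."""
    counts = Counter(labels)
    generated_share = counts[group] / len(labels)
    return generated_share - real_share

# Hypothetical perceived-gender labels for 100 generated "doctor" portraits.
doctor_labels = ["man"] * 93 + ["woman"] * 7

gap = representation_gap(doctor_labels, real_share=0.39)
print(f"Women in generated 'doctor' images: 7% vs 39% of real doctors "
      f"(gap {gap:+.0%})")  # -> gap -32%
```

The same arithmetic applies to skin-tone categories; the point is simply that the distance between the generated distribution and the real-world baseline can be measured, and in these experiments it was large.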

A slideshow of AI-generated Barbies

These are harms which degrade certain social groups by reinforcing the status quo or by amplifying stereotypes. A particularly extreme example was created when BuzzFeed published an article of A.I.-generated Barbies from different countries around the world.

The Barbies from Latin America were all presented as fair-skinned, perpetuating a form of discrimination known as colorism, where lighter skin tones are favored over darker ones.

The Barbie from Germany was dressed in clothes reminiscent of an SS Nazi uniform, and the Barbie from South Sudan was depicted holding a rifle by her side.

Part of the reason these distortions occur is that the quality of the outputs depends on the quality of the datasets: the millions of labeled images that the AI has been trained on. If there are biases in the dataset, the AI will acquire and replicate those biases. But there is no such thing as a neutral dataset.

As Melissa Terras puts it:

“All data is historical data, the product of a time, place, political, economic, technical and social climate. If you are not considering why your data exists and other datasets don’t, you are doing data science wrong.” — Melissa Terras

How negative feedback loops make the problem worse

An animation of various headshots

The bigger problem is not that A.I. systems simply reflect back the prejudice baked into their datasets. It’s that in many instances they fuel and exacerbate these biases through feedback loops.

A.I. image generators trained on biased datasets will fill the Internet with biased images, which will then be fed back into the datasets of other A.I. image generators, which will in turn create even more biased images, and so on. This creates a vicious feedback loop that both spreads and intensifies existing biases.
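To see how quickly such a loop can take hold, here is a toy simulation. All the numbers are made up for illustration; real training dynamics are far more complicated, but the compounding effect works in the same direction:

```python
# Toy model of a bias feedback loop. Suppose the real world is 50% group A,
# but the current generator over-represents group A at 60%. Each new model
# generation trains mostly on the previous generation's output and slightly
# exaggerates whatever skew it sees. All figures are illustrative.

real_share = 0.50         # true share of group A in the real world
generated_share = 0.60    # share of group A in the current model's output
synthetic_fraction = 0.9  # fraction of the next training set that is AI-made
exaggeration = 1.2        # how much the model amplifies its training skew

for generation in range(1, 6):
    training_share = (synthetic_fraction * generated_share
                      + (1 - synthetic_fraction) * real_share)
    generated_share = min(1.0, training_share * exaggeration)
    print(f"Generation {generation}: group A appears in "
          f"{generated_share:.0%} of generated images")
# Prints roughly 71%, 82%, 95%, 100%, 100%
```

Even a modest initial skew, recycled and amplified each generation, can crowd the under-represented group out of the picture almost entirely.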

The philosophical challenge of defining bias


So, how do we stop this from happening? Resolving the problem goes beyond the technical question of simply using better, more representative datasets to reduce bias. One has to tackle deeper philosophical questions about how you define bias and fairness in the first place.

Take the example of gender disparity when it comes to generating an image of a Fortune 500 CEO. What should a fair gender split look like?

Should it accurately reflect the current statistics, which are roughly 9 to 1 in favour of men? In one sense, this can be seen as a fair representation of reality. But others might see this as unfair for perpetuating unequal and unjust power structures and for discouraging women from applying for C-suite roles.

Perhaps the distribution should be a 50/50 split. This would achieve demographic parity, as it matches the gender split of the population.

We could consider applying this across all jobs and roles, but this assumes that there are no intrinsic differences between genders. Would it be fair to depict prisoners at a 50/50 split, for example, when men currently make up 93% of the global prison population?

Perhaps the only fair distribution is to make the output completely random. But even then, defining the gender category with a binary value has already introduced bias into the system.
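A small sketch makes the disagreement concrete. Each function below encodes one of the candidate definitions of "fair" discussed above; the 9-to-1 CEO and 93% prisoner figures come from the text, and everything else is illustrative:

```python
# Three candidate "fair" gender splits for generated images of a profession.
# Choosing between them is a value judgement, not a technical fact.

import random

def match_reality(male_share):
    """Mirror observed statistics, e.g. roughly 9 in 10 Fortune 500 CEOs are men."""
    return {"man": male_share, "woman": 1 - male_share}

def demographic_parity():
    """Match the roughly 50/50 gender split of the general population."""
    return {"man": 0.5, "woman": 0.5}

def fully_random():
    """Ignore the profession entirely and draw a split at random."""
    p = random.random()
    return {"man": p, "woman": 1 - p}

# Note: even listing only "man" and "woman" is itself a modelling choice,
# so none of these options is free of assumptions.
for profession, male_share in [("CEO", 0.90), ("prisoner", 0.93)]:
    print(profession,
          "| reality:", match_reality(male_share),
          "| parity:", demographic_parity(),
          "| random:", fully_random())
```

None of these functions is more "correct" than the others, which is exactly why better datasets alone cannot settle the question.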

Clearly, there are no easy answers. The difficulty of defining what a fair process or outcome actually is has led to many ethics initiatives being guided by high-minded but ultimately vacuous principles like “Do the right thing”.

These sorts of principles have been criticised as virtue signalling, and they haven’t led to much in the way of tangible, action-guiding recommendations.

Can governments step in to regulate A.I.?


If self-governance by tech companies is not enough to mitigate these harms, we might look to governments to pass laws and regulations. Ultimately, they are the only institutions that can hold companies to account for the harms they cause to society and force them to act.

To redress the representational harms caused by generative AI, lawmakers could establish oversight bodies to deal with complaints over biased images. Tech companies could be forced to update their algorithms or retrain their AI on better datasets.

Regulators could also impose standards on the quality of training datasets, requiring them to be transparent, diverse, and more representative.

An illustration of The Collingridge Dilemma

But whatever laws and regulations are put in place, their timing will be crucial. The Collingridge dilemma, named after the sociologist David Collingridge, highlights the difficulty of regulating dynamic technologies like generative AI, whose full effects and consequences are only understood as time goes on.

If lawmakers legislate too early, they may come up with ineffective and irrelevant policies, leading to a loss of legitimacy amongst the public and feeding perceptions of out-of-touch bureaucrats dealing with technologies they don’t understand.

However, if lawmakers wait too long to understand a technology’s impacts and harms, it may be too late for them to control it by the time they act: the technology will have been widely adopted and become too deeply integrated into people’s lives for meaningful change to happen.

So has Pandora’s box already been opened? Or can we guide generative A.I. towards a path that better serves our needs and builds a fairer society in the process?

As computer scientist and digital activist Dr. Joy Buolamwini puts it:

“Whether A.I. will help us reach our aspirations or reinforce unjust inequalities is ultimately up to us.” — Dr. Joy Buolamwini

At LIS, we believe that solutions to the world's most complex problems won't come from a single specialism or discipline. We need to bring together experts and knowledge from across the arts, sciences and humanities.

If you're interested in connecting the dots, explore our bachelor's and master's degrees, as well as our leadership programme.

