Amid President Trump's crusade against DEI, tech companies are reneging on promises to address bias in artificial intelligence.

In less than three years, ChatGPT has become the oracle of modern life. Some users like to whisper sweet nothings to it; others bark orders as if it were a servant. But at its core, the machine is a mirror, reflecting back what humanity feeds it.

Generative artificial intelligence (AI) tools have come under public scrutiny for perpetuating racial stereotypes in visual outputs, particularly when creating images of people in positions of power (CEOs, lawyers, doctors, etc.).

Companies such as OpenAI have reportedly worked to address these racial and gender biases. But have they truly delivered on their promise?

Playing devil’s advocate, I recently asked ChatGPT to generate an image of a CEO.

The result was not exactly a display of diversity: a young, polished Caucasian male, dressed to the nines and oozing the confidence of a Hollywood heartthrob.

According to AI, this is what a CEO looks like. Image: ChatGPT

For Dr Thao Phan, a sociologist specialising in the study of race and gender in algorithmic culture, this image is symptomatic of the broader attack on diversity, equity and inclusion (DEI) policies currently taking place in the United States.  

“These AI models used to have guardrails – protocols to ensure that they would produce ethical and diverse outputs,” she explains.

“But since the most recent US election, it looks like they have all been removed. And the result is these homogeneous images of bland white men.”

During his first week in office, twice-elected President Donald Trump signed an executive order axing measures taken by the Biden administration to protect people’s safety and rights from harmful AI models.

“This is an extremely hostile move designed to send a message that women, gender-diverse people, and people of colour are not welcome within establishments of power,” says Phan.

“Government agencies have placed DEI staff on leave, websites and official documents have been scrubbed of any mention of DEI, and the National Science Foundation, the primary US research funding body, has frozen funding for projects engaging with topics like diversity in the scientific workforce.”

As our reliance on automated decision-making grows, what are the ripple effects of an unregulated AI-driven world?

Omnipresent dangers

From diagnosing diseases with remarkable precision to crafting personalised treatment plans, AI is transforming medical science.

Now imagine a health emergency in which an algorithm must decide which patient is prioritised.

“Removing guardrails can literally mean the difference between life and death, especially in healthcare,” warns Phan.

“AI risk assessment tools used to allocate medical resources work with statistical models based on how previous patients were triaged, generating dubious inferences about who needs to be seen first.

“For example, many visual models for detecting skin cancer are known not to work well on people with darker skin. This is a patent example of racial discrimination.

“One of the steps previously taken by the Biden administration was to mandate that AI tools used in high-risk scenarios like healthcare be tested and evaluated before and after they went to market. This was to ensure that systems were safer, more secure and more trustworthy.

“Trump’s executive order undoes these very reasonable demands for transparency and accountability, which is bad news for all of us.”

‘Don’t be evil’

The current attack on DEI is conspicuous, but as the US morphs into a haven for ‘Broligarchs’, there’s much more on the line for human rights.

Years of progress on corporate social responsibility now hang by a thread, with recent reports that Alphabet, Google’s parent company, is walking back its promise not to weaponise AI.

Google’s parent company has updated its AI ethical guidelines. Photo: JHVEPhoto/shutterstock.com

“Without prior announcement, Sundar Pichai’s multinational has tweaked its AI ethical guidelines to remove a reference to ‘not pursuing technologies that could cause or are likely to cause overall harm’ or ‘technologies that gather or use information for surveillance, violating internationally accepted norms,’” says Phan.

“This is very likely a response to protests from Google employees objecting to the use of Google systems in military weapons programs, such as Project Maven, which provided AI support for Pentagon drone-targeting programs, and Project Nimbus, which provides cloud computing for the Israeli military.

“Removing any mention of weapons technologies is one strategy to curb criticism about not adhering to their principles. They’ve certainly come a long way from their original slogan of ‘Don’t be evil’.”

AI for whom?

So what can we do if AI is becoming less safe, less secure and less trustworthy?

For Phan, the solution is clear: accountability starts at the top.

“We sometimes talk about AI as if it were an omnipotent being, out there in the universe acting on us beyond our control. But that’s simply not the case,” she says.

Dr Thao Phan says people are ultimately accountable for AI. Photo: Rosa Gamonal

“These are systems and technologies that are designed by people, controlled by people, and must be accountable to people.

“It’s not simply that ChatGPT is biased against women and minorities. It’s that these systems are controlled and deployed by people whose views are actively hostile to marginalised groups.”

For years, universities such as the Australian National University have been at the forefront of AI ethics, bringing together sociologists, philosophers, and cyberneticists to interrogate how these systems shape society. Is ignoring their expertise gambling with humanity’s fate?

“If it’s not safe for some of us, then it’s not safe for any of us. But this is not the world we have to live in with AI,” she says.

“If Silicon Valley’s approach to AI has been to move fast and break things, then our mission as academics is to approach it in a different way.

“They may no longer value diversity, equity or inclusion but we still can. We can insist that another AI world is possible.”
