Despite the repeated claims of business leaders, the tech industry can’t be left to its own devices on the regulation of artificial intelligence tools like ChatGPT.

With the emergence of chatbot tool ChatGPT, artificial intelligence (AI) has become dinner table conversation once again. But while the technology promises to change our lives, how can we be sure it will be for the better?

Well, according to former Google CEO Eric Schmidt, governments should chill out on AI regulation and let the industry show them how it is done.

“There’s no way a non-industry person can understand what’s possible,” Schmidt said in an interview on American television. The technology is “too new, too hard”, meaning that “no one in the government can get it right, but the industry can roughly get it right”.

Schmidt’s comments seemed pointed, given that just two weeks ago, the European Union (EU) moved another step closer to concluding the world’s first major AI regulation, known as ‘the AI Act’.

Former Google CEO Eric Schmidt says the industry should be given space to self-regulate on artificial intelligence technology. Photo: OFFICIAL LEWEB PHOTOS/Flickr (CC BY 2.0)

The EU has a trailblazing record in technology regulation. Its 2016 General Data Protection Regulation forced industry and other governments to take personal data seriously and strengthen data protection laws. Now, the EU is poised to become a global trendsetter in AI governance.

Tech companies have fought against the EU legislation’s risk-based approach, echoing Schmidt’s sentiments that policymakers should not mess with technology they “do not understand”. But deep technical know-how doesn’t mean the industry is fit to self-regulate AI.

Many opponents of the regulation fail to see it for what it truly is: a forward-looking, public-oriented, social project. Its fundamental purpose is normative – to steer “what is” towards “what should be”.

The tech industry approach of creating regulatory guardrails based solely on “how the technology works” would render regulation a glorified depiction of the status quo.

Granted, the supposedly irreverent Silicon Valley sub-culture dominated by male founders (or ‘tech bros’) may have improved somewhat in recent years. We’ve seen an explosion of self-imposed ethics boards or charters in tech companies, ostensibly committed to aligning industry practice with public values such as responsibility, safety, transparency and fairness.

But there is a gulf between these values and the industry’s incentives and actions, making it difficult to trust that the tech industry’s vision for a world brimming with advanced robots is a safe or a fair one.

Firstly, the industry is addicted to technical breakthroughs and the profits they bestow. In big commercial moments, the disgraceful operating principle of “move fast and break things” takes hold.

Take ChatGPT, the versatile chatbot that’s poised to disrupt the knowledge industry as we know it.

For all the understandable excitement around the technology, it comes as a package deal with well-documented failure points that make it capable of producing believable misinformation at scale. Nevertheless, its release has continued apace, leaving society ill-prepared for its limitations and potential misuses.

New artificial intelligence technology like ChatGPT is a major disrupter in the knowledge sector, but it needs a moral and ethical vision to guide its development. Photo: Ascannio/stock.adobe.com

In fact, the competitive pressure towards the next breakthrough is driving the industry further away from the open-source, public spirit of its origin. Since ChatGPT’s stunning debut, other tech giants, including Alphabet’s AI teams, have been rushing to test more powerful AI systems behind tightly closed doors. This is reducing the transparency of training datasets and methods, making ethical scrutiny and public oversight all but impossible.

Secondly, the priorities of the tech industry are not always consistent with fundamental human rights.

In March, alarmed by the rapid race towards artificial general intelligence (AGI), a system that could think and problem-solve across all domains of human intelligence, hundreds of prominent AI industry leaders and experts signed an open letter declaring: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Calling for a moratorium on the AGI race, the open letter briefly appeals to some of the same human rights values that underlie the EU’s AI Act: safety, transparency and human-centricity.

But that doesn’t mean the signatories are all thinking like the EU regulators. Many of the letter’s signatories, such as the Tesla and Twitter CEO Elon Musk, are admirers of a controversial ideology known as ‘longtermism’.

To the ‘longtermists’, the existential risk posed by a malevolent AGI is arguably far more worrying than the immediate human suffering AI is already causing: the discrimination, misinformation and even deaths associated with the technology today.

Leaving aside the ongoing debate about whether AGI is anywhere close to becoming a reality, longtermist thinking is perilous because it engenders a cavalier attitude towards human lives and suffering, one observable in comments from many in the industry. To those hyper-focused on an AGI-induced apocalypse, the suffering of people here and now appears negligible.

Twitter and SpaceX CEO Elon Musk has expressed his support for controversial ‘longtermist’ arguments. Photo: NASA Kennedy/Flickr (CC BY-NC-ND 2.0)

For Nick Bostrom, longtermism’s thought leader, our current priority should be to technologically enhance our species’ chance of future survival. And while he has denied supporting eugenics “as the term is commonly understood”, he has written about processes like embryo selection and how they could provide cognitive enhancement and improve global productivity.

To be fit to self-regulate, the tech industry must give us cause to trust that its vision for AI is aligned with fundamental human rights – not in an imagined cyborg-filled future, but in the messy reality of today.

It is true that policymakers may never crack open the black box of AI, but even if they fail to get it right this time, they have a solid normative vision to guide future iterations of the law, based on principles of transparency, fairness and safety.

As for the tech industry, so far there is very little evidence to suggest it can do the same.

This piece first appeared at The Canberra Times.
