With more than three billion people heading to the polls in 2024, ensuring trust in news and electoral processes has never been more important.
Within our democracy, universities are the institutions with the longest-term view of our society, in contrast to the three-year political cycle. The idea of tenure, where an academic gets a job for life, does not just give substance to academic freedom; it also means universities can work on issues for decades. And many issues require this timescale.
For example, ANU has been working on artificial intelligence (AI) for nearly 50 years. AI became a focus of worldwide attention about a year ago with the arrival of ChatGPT — a massive leap forward in technology that seemingly made computers able to discuss fluently almost anything imaginable. ChatGPT, and other generative AI advances, are able to write poetry, create art, write essays and solve complex problems.
Over the past year, we have also seen the limitations of this technology. It uses other people’s work indiscriminately without attribution, it makes things up and it gets a lot of things wrong, with the same confidence I had when I was a first-year undergraduate.
Is ChatGPT the end of the university? No.
Universities — at least good ones — will teach our students how to be empowered by these technologies, not replaced by them. Exams and assignments that can be done by AI will be replaced with ones that students are expected to do with AI as a tool. One can imagine a new group of graduates super-charged by this newfound technology — able to get an astonishingly large amount of things done, quickly.
But we need some guardrails. AI technology has many flaws and inherent biases that we need to understand before we just go out and use it blindly. This will be part of our students’ education as well.
And there will be societal implications: a significant amount of work done by people will be replaced by work done by AI. Much of that work is currently done by our graduates, especially in their early years in the workforce, where the jobs they do as part of their training can largely be replaced by AI.
There is the danger of learned helplessness. We as humans learn by doing — making mistakes, correcting, improving — and as we do this over and over again, we become skilled in our work. Along the way, we may realise how we can improve the processes, or occasionally have revolutionary ideas that change the way people think and act.
AI — at least for now — does not think. It interpolates on the grid of all the information it has been fed. It has no ability to extrapolate beyond the boundary of knowledge — but it can be very good at making inferences from what we already know.
There is a real risk, in my view, that we become so reliant on AI to do our thinking that an individual human’s learning becomes curtailed during their lifetime, and humanity is no longer able to advance. We get stuck where we are. This is a multi-decadal concern.
We also face AI challenges much closer at hand, which bring us back to the issue of trust.
We are entering the age of massive-scale generative AI — where individuals, organisations and nations will be able to create manipulated or completely synthetic digital documents, images, videos and sound that will be essentially impossible to distinguish from reality. Already we see individual clips that surprise us in their apparent authenticity.
But imagine a time when there is more fabricated digital content than real content — where no one can trust what is real, and what is not. That time is nearer than you might think — probably just a year or two away.
Some sort of digital watermarking for all digital material will likely emerge — but there is going to be a scary transition, and that transition is starting now.
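One plausible shape for such watermarking is cryptographic provenance: the publisher attaches a tag derived from the content itself, so anyone can later check that the material has not been altered since it was stamped. The sketch below is purely illustrative, using a shared-secret HMAC from the Python standard library; the function names are my own, and real provenance schemes (such as C2PA content credentials) use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

def stamp(content: bytes, key: bytes) -> str:
    """Produce a provenance tag: an HMAC over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare; any alteration breaks the match."""
    return hmac.compare_digest(stamp(content, key), tag)

key = b"publisher-secret"           # in practice, a private signing key
original = b"raw video bytes ..."   # stand-in for a media file
tag = stamp(original, key)

print(verify(original, key, tag))         # True: untouched content checks out
print(verify(original + b"x", key, tag))  # False: a single changed byte is caught
```

The point of the sketch is the asymmetry it creates: fabricating content is easy, but fabricating content that carries a valid tag from a trusted publisher is not.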
As with so much of the trust we need to rebuild for the sake of our democracy, this is a place where I hope universities and media might work together. The 24-hour news cycle, with its rapid-action reporting, will be highly vulnerable to manipulation — and expertise is going to be required to accurately report and analyse what is happening in the world. This might be a chance to break the 24-hour news cycle and replace it with a slower, more reflective type of reporting that embodies trust and truth.
Although I have focused today on Australia, there is a global urgency to my argument.
In 2024, 40 countries representing 3.2 billion people will hold elections. We should expect the first massive AI-generated misinformation campaigns to emerge in these elections, which include nearby Indonesia as well as the United States (US).
To say I am worried about the US election is an understatement. While I do not know if Donald Trump will be able to re-take the presidency, it is certainly a distinct possibility. Add into the mix disinformation on a scale never seen before, and we have the makings of an even more deeply divided US than we have today.
And we have seen what that dysfunction looks like, based on Trump’s behaviour in his previous term and the current workings of Congress. The prospect of a highly dysfunctional US cannot be ignored.
Current betting markets — which are as good as anything at this point at gauging the probability of a Trump presidency — put it at more than 30 per cent. I’m not a betting man, but those are scarily high odds.
His last presidency ended with the most significant threat to the transfer of power in US history. It would be naïve to think that Donald Trump, and most importantly, those who support him, have not learned from what happened four years ago.
The next US election is precisely 11 months away, which gives us only a small window to make change; otherwise we may once again see our screens filled with riots and the US Capitol being overrun.
This is an excerpt from Professor Brian P Schmidt’s speech to the National Press Club in Canberra on Tuesday 5 December 2023. Watch the full speech on YouTube.