ChatGPT is the newest AI tool taking the internet by storm. But should we be excited or intimidated by this software?
Want to know about the creation of the universe, find a delicious mushroom recipe or chuckle at a cheesy Star Wars joke? ChatGPT can help.
Since launching late last year, the chatbot’s ability to provide articulate and (mostly) factual answers to a range of questions has caused a mix of concern and excitement amongst internet users.
Here’s what you need to know.
ChatGPT, or Chat Generative Pre-trained Transformer, was launched by tech company OpenAI in November 2022. It is a language processing tool that can take part in real-time conversations. Not only does it answer questions, it can also translate languages, write essays, produce poems, assist with reading comprehension and write lines of code.
ChatGPT works through an easy-to-use chat interface. One simply enters a question or command, and ChatGPT begins crafting a response almost instantly.
While this particular chatbot has generated global media coverage, Dr Ehsan Nabavi from the College of Science at The Australian National University (ANU) says it’s a progression in an existing field, rather than an entirely new technology.
“ChatGPT is not the unique technological breakthrough that much of the public assumes,” Nabavi says. “It’s a continuation of the progress happening in the field of large language models over the last few years.”
By using machine learning to generate text, the ChatGPT model learns patterns and relationships between words to form phrases and sentences. As a pre-trained model, it draws on a large dataset of text from across the internet, including websites, articles and books.
“The model is trained on large sets of text, really whatever exists online, to mimic patterns of writing and thinking in English,” Nabavi says.
While ChatGPT is a multilingual tool, it does not currently draw its answers from works written in languages other than English.
But don’t just take our word for it. Here’s how ChatGPT explains its own modus operandi:
When given a starting text or prompt, it uses this knowledge to generate a response that is similar to the text it was trained on. The response is generated by predicting the next word in a sequence based on the previous words. This process is repeated until the model generates a complete response or reaches a certain length of text. The model is fine-tuned for specific tasks such as answering questions or generating creative writing.
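To make that description concrete, here is a minimal sketch in Python of the same loop-until-done idea, using simple word-pair counts in place of a neural network. Everything in it is illustrative: a real large language model is trained on vast amounts of internet text and predicts the next word from the whole preceding sequence, not just the last word.

```python
import random

# Toy "next-word predictor": count which word tends to follow each word in a
# tiny training text, then generate a response one word at a time.
# (Illustrative only — real models use neural networks, not word-pair counts.)

training_text = (
    "chatgpt is a language model . a language model predicts the next word . "
    "the next word is chosen from patterns in the training text ."
)
words = training_text.split()

next_words = {}
for current, following in zip(words, words[1:]):
    next_words.setdefault(current, []).append(following)

def generate(prompt_word, max_length=12):
    """Repeatedly predict the next word from the previous one until the
    response looks complete or a maximum length is reached."""
    output = [prompt_word]
    while len(output) < max_length:
        candidates = next_words.get(output[-1])
        if not candidates:      # nothing ever followed this word in training
            break
        word = random.choice(candidates)
        output.append(word)
        if word == ".":         # treat a full stop as "response complete"
            break
    return " ".join(output)

print(generate("language"))
# e.g. "language model predicts the next word ."
```

Even this toy version shows the basic shape of the process the chatbot describes: predict a word, append it, and repeat until the response is finished.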
The creators used a technique called Reinforcement Learning from Human Feedback (RLHF), which means ChatGPT incorporates human feedback into its training loop to minimise untruthful, harmful or biased outputs.
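The full RLHF pipeline trains a separate reward model and fine-tunes the network with reinforcement learning, but the core idea of steering a system towards responses that human raters prefer can be sketched in a few lines. The candidate answers, scores and rating rule below are entirely hypothetical and greatly simplified.

```python
import random

# Hypothetical, greatly simplified sketch of learning from human preference
# comparisons: each candidate answer carries a score; a (simulated) human rater
# compares two answers at a time, and the preferred one is reinforced while the
# other is discouraged.

scores = {
    "helpful, factual answer": 1.0,
    "confident but untruthful answer": 1.0,
    "toxic answer": 1.0,
}

# Stand-in for a human rater: prefers answers earlier in this ranking.
human_ranking = list(scores)

def human_prefers(a, b):
    return a if human_ranking.index(a) < human_ranking.index(b) else b

for _ in range(200):
    a, b = random.sample(list(scores), 2)
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    scores[winner] += 0.1                           # reinforce preferred behaviour
    scores[loser] = max(0.0, scores[loser] - 0.1)   # discourage the other

# After enough feedback, the helpful answer ends up with the highest score.
for answer, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(f"{score:5.1f}  {answer}")
```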
ChatGPT has been successful in filtering out a range of controversial topics and has proven to be less toxic than several of its chatbot counterparts.
Nabavi notes that this technological advancement reportedly comes at a human cost, having been optimised through the labour of poorly paid workers.
“This is an important reminder that it is still the poorest parts of the world that power Artificial Intelligence (AI) technology and its impressive performance,” Nabavi says.
So, should ChatGPT be banned in schools? In short, no. While there are calls to ban the technology to prevent students from using the software to cheat on written assignments, Nabavi says this would not be an effective long-term solution.
“It’s like asking students not to go on Wikipedia pages if they have a question, or use spell-check for their essays.”
People have also expressed worries about the technology’s long-term impact on the workforce, and how its expansion might affect jobs in industries such as copywriting, graphic design and even software engineering.
According to Nabavi, there are also concerns about the significant energy consumption required for training large AI models, as well as fears around how the software could impact science communication and the spread of misinformation.
“ChatGPT has highlighted that we need to take responsibility for what we innovate, and how, where and why we bring it into our lives,” Nabavi says.
But despite the alarm, Associate Professor Catherine Ball from the ANU College of Engineering, Computing and Cybernetics is positive about the future of such technology.
“I am not scared of AI. Actually, I think it’s going to be the tool we as humans need to understand what is happening not just to planet Earth, but to ourselves and our lives moving forward,” Ball says.
New technology has always changed the way we live, and it will continue to do so.
By learning how the algorithms behind these tools work, users can feel empowered instead of overpowered by AI.
“AI on its own is nothing, it’s AI used by a human that actually has power,” Ball says.
“Without AI we are left in the dark.”
Top image: Iryna Imago/Shutterstock.com