Recently I’ve had the opportunity to present at a few tech conferences. To no great surprise, it seems like the most interesting (and best attended) sessions have all included “AI” in their title. The tech world is abuzz about the newest high-profile AI systems, and it seems like every week there is another new announcement about ground-breaking AI.
Did every data scientist and machine learning engineer spend the entirety of the pandemic locked away developing their revolutionary AI? Probably. But I don’t think that is what’s behind the surge in AI development. Let’s dive into what has been happening with AI recently, specifically with large language models, and more importantly why it’s such a big deal for folks who want to use AI.
The restrictions of early AI language models
Large language models have been in development at companies like OpenAI (GPT), Google (Bard), and Meta (Llama) over the past few years. At 1904labs, we have been using them for some time to generate synthetic datasets and even had GPT-2 write a blog post for us back in 2019.
While these models have shown an increasing ability to perform select tasks, they were somewhat restricted by the way users had to interact with them. As with the wildly popular ChatGPT, users interacted with these models by providing a text prompt as the model’s instructions, but previously those prompts had to be structured in very specific ways to produce acceptable output. Small, seemingly insignificant changes to a prompt could have unintended and unforeseen effects on the model’s output.
As a result, over the past few years, the term “prompt engineering” became popular. Even as recently as last year, as OpenAI improved GPT-3’s ability to respond to instructions, it still took a lot of effort to find the right prompt to produce the desired output. Fast forward to the release of ChatGPT in November last year, and that all changed. Suddenly, the only requirement for interacting with a large language model is knowing conversational English (or one of the several other languages that ChatGPT understands).
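The contrast can be sketched with two hypothetical prompts for the same task. Both prompts below are invented for illustration; the first mimics the rigid few-shot templates prompt engineering often required, while the second is just plain conversational English.

```python
# Hypothetical prompts illustrating the shift described above (illustrative only).

# GPT-3 era: a carefully engineered few-shot prompt. The exact template,
# labels, and line breaks all mattered; small changes could derail the output.
engineered_prompt = """Translate English to French.

English: Hello
French: Bonjour

English: Thank you
French:"""

# ChatGPT era: a plain conversational request works.
conversational_prompt = "How do you say 'thank you' in French?"

print(engineered_prompt)
print(conversational_prompt)
```

The engineered prompt encodes the task through its structure; the conversational one simply asks.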
How ChatGPT has changed AI
In effect, OpenAI has created a user interface that allows anyone to utilize and benefit from ChatGPT. To be clear, by “user interface” we don’t mean the website. We mean the way in which users interact with the model: a conversation. This ability to converse back and forth with an AI in a continual dialogue, where each request sent to the model builds upon what has already happened in the interaction, has been a dream previously realized only in science fiction movies and the MCU. But now it is totally real!
In the development of this conversation-like interface, the big difference between GPT-3 and ChatGPT was achieved by training ChatGPT on a larger, more diverse, and more conversational dataset. To train both models, data scientists at OpenAI applied Reinforcement Learning from Human Feedback (RLHF), a technique in which humans score the model’s responses and those scores are fed back into the model’s training. In the case of ChatGPT, responses were scored higher if they were more conversational: ChatGPT was rewarded for producing human-like, contextually appropriate responses, and penalized for producing nonsensical or irrelevant responses. This allowed ChatGPT to learn from its mistakes and gradually improve its response accuracy.
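The feedback loop above can be sketched as a toy example. This is only an illustration of the reward-and-penalty idea, not OpenAI’s actual method: real RLHF trains a separate reward model from human rankings and fine-tunes the language model with reinforcement learning (e.g. PPO). The response styles, scores, and weights below are all invented for the sketch.

```python
import random

# Hypothetical candidate response styles for the same prompt.
candidates = {
    "conversational": "Sure! Here's how that works, step by step...",
    "irrelevant": "Bananas are yellow.",
}

# Simulated human feedback: conversational, relevant answers score higher.
def human_score(style: str) -> float:
    return 1.0 if style == "conversational" else -1.0

# The "policy": a preference weight per response style, nudged by each reward.
weights = {style: 0.0 for style in candidates}
learning_rate = 0.5

for _ in range(10):                            # ten rounds of feedback
    style = random.choice(list(candidates))    # model tries a response style
    reward = human_score(style)                # a human rates it
    weights[style] += learning_rate * reward   # feedback updates the policy

# Rewarded behavior accumulates weight; penalized behavior loses it.
best = max(weights, key=weights.get)
print(best)  # prints "conversational"
```

The key point the sketch captures is that the scores flow back into the model’s preferences, so over many rounds the rewarded (conversational) behavior wins out.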
What does the development of AI mean for us?
With a decent understanding of how these AI models have taken such giant leaps forward, the real questions become: What does this mean for those who work with these models? What does this mean for those who work in industries that might be impacted? What can we expect from OpenAI’s competitors, such as Google, and their GPT-like models? We’ll explore all of those questions in upcoming posts.
____
This post is part of a series about the explosion of AI happening right now. Check out the first post about the timeline of OpenAI's model development and how these releases will only accelerate.
Photo credit: Timon - stock.adobe.com