A Brief History of OpenAI’s Acceleration

May 23, 2023 | Read time: 5 min

A brief timeline of OpenAI

AI is the big tech revolution we are facing - and the pace of change is only going to accelerate. 

“When the car was being introduced and the current paradigm was the horse and buggy and everyone thought ‘what are we going to do with all our horses?’ and horses are fine. It’s just a new world. It’s a new paradigm.” 

- Brandon Fischer

The development of OpenAI (the company behind ChatGPT, GPT-4, DALL-E, etc.) and early GPT models

In December 2015, a group of wealthy entrepreneurs and technologists (Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, and Elon Musk, among others) came together to found an organization dedicated to pursuing artificial general intelligence (AGI) and to ensuring that whatever is built in the AI space is open and available to everyone. It took two and a half years to release their first major model: a text-generation large language model they called GPT, for Generative Pre-trained Transformer, after the training method and the architecture of the model. Building and improving AI models takes a great deal of time and investment, both to gather and curate a dataset suitable for training and to actually train and tune the model itself.

During that time, the team at OpenAI was curating datasets, having the model read essentially the entire text of the internet (in multiple languages), and letting it learn how to generate the next word in a sequence. While the first GPT took two and a half years to release, less than two years later OpenAI released two improved versions of its large language model: GPT-2 and GPT-3. GPT-2 was the first language model to garner widespread attention for its ability to generate human-like text, which OpenAI showcased with a generated article about scientists discovering unicorns in the Andes Mountains. The release of GPT-3 was considered a major leap forward in human-like language generation, primarily because of how much more data the model learned from and the sheer increase in the size of the model itself. From that point forward, you can see just how quickly everything accelerated, with new models being released at a higher and higher frequency.
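
To make that "generate the next word" idea concrete, here is a minimal sketch that produces text with the openly released GPT-2 weights. It assumes the Hugging Face transformers library, which is my choice of toolkit for illustration rather than anything OpenAI uses internally:

```python
# A minimal sketch of next-word text generation using the openly released
# GPT-2 weights via Hugging Face's transformers library (an assumption:
# the post doesn't prescribe any particular toolkit).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The model keeps picking a plausible next token, one at a time,
# until it reaches the requested length.
print(result[0]["generated_text"])
```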

The release of ChatGPT and GPT-4

Now, until November of last year, you could be forgiven for not being up to date with the latest in language AI, but with the release of ChatGPT, suddenly it seems like the whole world is talking about AI systems and their threat to our current paradigm. ChatGPT is remarkably human-like in its ability to generate conversational text. You might even have used ChatGPT and thought, “We’ve done it! The perfect AI.” But then, four months later, OpenAI released another update, the latest and greatest in language models: GPT-4.

GPT-4 now accepts multimodal prompts, meaning it can take both text and images as input and generate text in response. The goal has always been to achieve artificial general intelligence: an AI system that can both receive information in all the ways humans can and produce information in all the ways humans can. Humans read, but we also take in information through sight, sound, and touch. Enabling multimodal prompts lets the AI process information from images and moves it closer to being able to effectively “see.” As impressive as this version is, I promise you it won't be the last.
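
To show what a multimodal prompt looks like in practice, here is a hedged sketch using the OpenAI Python SDK's chat interface. The model name and the availability of image input are assumptions on my part; substitute whichever vision-capable GPT-4-class model your account can access:

```python
# A hedged sketch of a multimodal (text + image) prompt. It assumes the
# OpenAI Python SDK's chat interface and a vision-enabled GPT-4-class
# model; exact model names and availability vary by account and over time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use any vision-capable model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```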

The future of AI

This timeline of OpenAI illustrates how rapidly this technology is moving. I see articles published every week posing the “should we or shouldn’t we” question about developing AI, but it’s too late for those questions. AI will continue to advance and get better. At this point we need to be focused on ensuring that AI is developed to improve the lives and productivity of humanity.

I see the potential of large language models, like GPT, to enhance our creativity and output. While I understand that as these models improve, their output will eventually become indistinguishable from that of a human, I’m not afraid of that future. I believe AI will make us better. If you instead believe this road leads to an AI-powered dystopia like the one presented in The Terminator, I won't fault you for that belief. According to that timeline, we still have 5 or so years before the machines take over, so I guess -- enjoy it while you still can.

____

This post is part of a series about the explosion of AI happening right now. Stay tuned for more!