GPT-2 Language Model
Language models are used for a variety of tasks, including text generation, reading comprehension, translation, speech-to-text, and information retrieval. This app uses the text-generation capability of the smallest version of OpenAI’s GPT-2 model.
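As an illustration of what "text generation" means here, the sketch below generates a continuation of a prompt with the smallest GPT-2 model. It assumes the Hugging Face transformers library; the prompt and sampling settings are illustrative placeholders, not the app's actual serving code.

```python
# Minimal text-generation sketch using the Hugging Face transformers library.
# "gpt2" on the Hugging Face hub is the smallest, 124M-parameter GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Sample a continuation of the prompt; settings here are illustrative only.
outputs = generator(
    "The future of language models is",
    max_length=50,
    num_return_sequences=1,
    do_sample=True,
    top_k=40,
)
print(outputs[0]["generated_text"])
```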
At 1904labs, we have fine-tuned the 124M GPT-2 model on different text corpora to produce text in various styles.
Larger GPT-2 models produce more coherent text. We used the Large 774M-parameter model to generate our blog post. The X-Large 1.5B-parameter model can produce even more coherent text, such as this article on talking unicorns.
For performance and speed reasons, we fine-tuned the small, 124M-parameter model on several different text corpora. What is remarkable about the models below is that each one learned to generate text in the style of its training corpus. Because of the smaller model size, however, the output is less coherent than that of the larger models.
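As a sketch of what this fine-tuning step can look like, the gpt-2-simple library provides a simple workflow for fine-tuning the 124M model on a plain-text corpus. This is an assumed setup for illustration, not necessarily the exact tooling we used; the corpus filename and step count are placeholders.

```python
# Fine-tuning sketch using the gpt-2-simple library (pip install gpt-2-simple).
import os
import gpt_2_simple as gpt2

model_name = "124M"  # the small GPT-2 model discussed above
if not os.path.isdir(os.path.join("models", model_name)):
    gpt2.download_gpt2(model_name=model_name)  # fetch the pretrained weights

sess = gpt2.start_tf_sess()
gpt2.finetune(
    sess,
    dataset="corpus.txt",   # one plain-text file per style corpus (placeholder)
    model_name=model_name,
    steps=1000,             # illustrative; tune to corpus size
)

# Generate text in the style of the fine-tuning corpus.
gpt2.generate(sess, length=100, temperature=0.7)
```

After fine-tuning, generated samples tend to mimic the vocabulary and phrasing of the training corpus, which is the effect described above.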
DISCLAIMER: These models were trained on text from millions of webpages and may occasionally produce text that is offensive or biased.