We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.
Stunning results. And the OpenAI team clearly thinks so as well: this is the first time they haven't released the model alongside the announcement, citing concerns about malicious use.
If you read nothing else, read the unicorn article generated by the model. Very impressive.