First time experiences with GPT-3 and Codex
This month we’d like to share some insights into two very exciting OpenAI projects that have left programmers and AI specialists speechless in recent months. We took a look at the language model GPT-3 and its descendant Codex, a model that extends GPT-3’s training with vast amounts of publicly available code. After a long wait, we finally received access in October and November, tried out the applications (in part), and prepared them for our Playground.
The concentrated power of OpenAI’s language models
We have always maintained that demonstrations and hands-on examples help to show what new technologies can do in practice. OpenAI provides great examples as inspiration, and it is quite fascinating to have a conversation with an AI whose answers are often unpredictable. To get an idea of how powerful these language models already are, we can only recommend testing them yourself. OpenAI’s Playground runs directly in your browser (not to be confused with our very own Playground).
Not without reason, the language model was not freely accessible for a long time. Thanks to progress on safeguards, there is no longer a waiting list; you only need to register. However, products built with GPT-3 must still be approved manually by the company. It is easy to lose track of time when using conversational models, and after a while your writing will hardly differ from what you would use in a human conversation. Answers are not standardized and relate to what has already been written.
Of course, the whole thing is not just for playing around: it is primarily intended as an API you can use for your own applications and projects. In the image above, OpenAI actively promotes various use cases and model presets and writes that its API “can be applied to virtually any task that involves understanding or generating natural language or code”.
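To make the API idea concrete, here is a minimal sketch of how a completion request to OpenAI could be assembled in Python. The endpoint URL, model name, and parameters below reflect our understanding of OpenAI’s public completions API at the time and should be treated as assumptions, not an official reference; the helper only builds the request, it does not send it.

```python
import json
import os
import urllib.request

# Assumed endpoint of OpenAI's completions API.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="davinci", max_tokens=64):
    """Assemble (but do not send) a completion request for the OpenAI API.

    The model name "davinci" and the parameter names are assumptions based
    on OpenAI's public documentation at the time of writing.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # higher values make answers less predictable
    }
    headers = {
        "Content-Type": "application/json",
        # The API key comes from your OpenAI account; never hard-code it.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )

request = build_completion_request("Write a haiku about language models.")
print(request.full_url)  # https://api.openai.com/v1/completions
```

Sending the request (for example with `urllib.request.urlopen`) would return a JSON response containing the model’s continuation of the prompt.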
Codex: Reinventing (Programming) Language
When thinking about language, its origins, evolutions, and differentiations obviously play a big part in its construct. Without plunging into language theory or pondering the limits of one’s own language, we’d like to point to a new intersection. Besides natural languages, since the last century there have also been programming languages, which were partly responsible for bringing the world into the computer.
Looking back at Konrad Zuse’s Z1 computer, his corresponding programming language Plankalkül (1942) is considered the earliest, at least in theory. Others followed shortly thereafter in practice, such as IBM’s FORTRAN, developed in 1954, and the early languages written for Remington Rand’s UNIVAC I, both of which saw widespread use.
Above you can find a great demo video by the Codex developers with some further insights. On our own Playground we showcase an example in which we build an online basketball game using natural language input.
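To give a flavour of that interaction, here is a hedged sketch of the prompt style Codex responds to: a plain-English instruction phrased as a code comment, which the model then continues as actual code. The helper below only assembles such a prompt; the comment-based prompting convention and the language-hint line are our assumptions about what works well, not an official recipe.

```python
def build_codex_prompt(instruction, language="JavaScript"):
    """Wrap a plain-English instruction in a comment-style prompt for Codex.

    Codex tends to continue from comments, so we phrase the request as a
    comment in the target language and let the model write the code after it.
    """
    comment = "//" if language in ("JavaScript", "TypeScript") else "#"
    return f"{comment} Language: {language}\n{comment} {instruction}\n"

prompt = build_codex_prompt("Draw a bouncing basketball on a canvas element.")
print(prompt)
```

Fed to a Codex model via the completions API, a prompt like this asks the model to produce the JavaScript that follows the comment, which is essentially how our basketball example was built.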
Every month we present two new applications for our GEDANKENFABRIK AI Playground. Read the intros, visit our Playground and try them out yourself!