You’ve heard rumors of a GPT-4 release, but today—on Pi Day!—it actually happened! The team at OpenAI has been hard at work on this for years. GPT-4 is a large multimodal model, meaning it accepts both text AND images as inputs (GPT-3 only accepts text). It’s also smarter overall than its predecessor, so you can ask it more complex questions—and even make sense of things like the tax code!
In this hands-on tutorial, I’ll give you a first look at what I’ve learned so far today and highlight the things that came out of the OpenAI Developer Demo (https://www.youtube.com/watch?v=outcGtbnMuQ). I’ll show you how to play with the new model through the ChatGPT interface (ChatGPT Plus required), take a look at the Playground, and explain how to get on the waitlist for API access.
Also check out the official blog: https://openai.com/research/gpt-4
And get on the waitlist here: https://openai.com/waitlist/gpt-4-api
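Once you’re off the waitlist, calling GPT-4 is just the familiar chat completions API with a new model name. Here’s a minimal sketch using the openai Python package (the prompt is my own example, a nod to the tax-code demo):

```python
# Minimal sketch of a GPT-4 API call with the openai Python package
# (the 0.27-era ChatCompletion API, same shape as gpt-3.5-turbo).
import openai

openai.api_key = "sk-..."  # your key from platform.openai.com

response = openai.ChatCompletion.create(
    model="gpt-4",  # requires GPT-4 API access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",
         "content": "Explain the standard deduction in one short paragraph."},
    ],
)
print(response.choices[0].message.content)
```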
00:00 – GPT-4 is here!
00:26 – How to access GPT-4 through ChatGPT (chat.openai.com), using ChatGPT Plus
01:03 – Seeing how GPT-4 is smarter than GPT-3
02:33 – Using the Playground at platform.openai.com
03:00 – But wait! The GPT-4 model isn’t available in the Playground!
03:10 – Getting on the API waitlist
03:24 – Using the Playground to fine-tune your prompts
04:17 – Providing new data to GPT-4 that it wasn’t trained on (prompt sketch below)
05:00 – Having GPT-4 correct its own code by giving it error messages (sketched below)
05:12 – Using GPT-4 with images as input
06:04 – Using GPT-4 to translate a hand-drawn sketch into code. Whoa!
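A quick sketch of the 04:17 trick for anyone who wants to try it over the API: GPT-4 doesn’t know anything past its training cutoff, but you can paste fresh material straight into the prompt and ask questions against it. The file name here is just a placeholder for whatever document you want it to read:

```python
# Sketch: answering questions about content GPT-4 has never seen,
# by pasting it into the prompt. "release_notes.txt" is a placeholder.
import openai

openai.api_key = "sk-..."

with open("release_notes.txt") as f:
    new_docs = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the document the user provides."},
        {"role": "user",
         "content": f"Document:\n{new_docs}\n\nQuestion: What changed in this release?"},
    ],
)
print(response.choices[0].message.content)
```

And the self-correcting-code demo at 05:00 boils down to a simple loop: run what GPT-4 wrote, and if it throws, send the traceback back and ask for a fix. A rough sketch (exec’ing model output is for sandboxed demos only!):

```python
# Sketch: let GPT-4 fix its own code by feeding errors back to it.
# WARNING: exec() on model output belongs in a sandbox, nowhere else.
import traceback
import openai

openai.api_key = "sk-..."

messages = [
    {"role": "system",
     "content": "Reply with raw Python code only, no markdown fences."},
    {"role": "user",
     "content": "Write Python that prints the first 10 Fibonacci numbers."},
]

for attempt in range(3):  # give it a few tries to converge
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    code = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": code})
    try:
        exec(code)
        break  # it ran cleanly; we're done
    except Exception:
        # Hand the traceback back, exactly like the demo at 05:00.
        messages.append({
            "role": "user",
            "content": "That raised:\n" + traceback.format_exc() + "\nPlease fix it.",
        })
```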
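(Image input, shown at 05:12 and 06:04, was only demoed today and isn’t in the public API yet, so there’s no call to sketch for that one.)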