Fine-tuning GPT, explained

Demystifying zero-shot, few-shot, & fine-tuning all in one email

Welcome to another edition of what we’re determined to make the best damn newsletter in AI. Here we’ll break down AI topics that matter, open your mind to use cases, and keep you ahead of the curve.

Our #1 goal is to be useful. So please shoot us an email 📩 if you have questions or feedback, and especially if you implement something we share!

Here's what we're covering today:

  • Unlocking GPT through customizing for your use case

  • A helpful use case deep dive with a sports-related example

  • ChatGPT is continuing to alter the marketing landscape

... and if someone forwarded this email to you, thank them 😉, and subscribe here!

Let’s get to it! 👇


Let’s demystify “fine-tuning” GPT

How do I fine-tune AI models like GPT-3? This is one of the most common questions we get asked.

And in the nicest way possible, we’re here to tell you that you’re probably asking the wrong question (at least most of you are).

What you really mean to ask is: how do I customize these models to perform better for my use case?

In a nutshell, there are 4 ways to customize GPT-3:

  1. Customize via prompts - Zero-Shot Learning

  2. Customize via examples - Few-Shot Learning

  3. Customize via training data - Fine-Tuning

  4. Customize via added info - Semantic Search

[Image: 4 ways to customize GPT]

The right approach depends heavily on your use case (and, frankly, your budget).

Most use cases can be accomplished with Zero-Shot or Few-Shot Learning, which is basically getting good at prompt engineering!
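To make the distinction concrete, here’s a minimal Python sketch of the two prompt styles. The classification task and example texts are invented for illustration; zero-shot vs. few-shot is really just a difference in what you put into the prompt.

```python
# Zero-shot vs. few-shot is a difference in prompt construction.
# The task and examples below are made up for illustration.

def zero_shot_prompt(task, text):
    # Zero-shot: describe the task; the model sees no worked examples.
    return f"{task}\n\nText: {text}\nLabel:"

def few_shot_prompt(task, examples, text):
    # Few-shot: same task description, plus a handful of labeled examples.
    shots = "\n".join(f"Text: {t}\nLabel: {label}" for t, label in examples)
    return f"{task}\n\n{shots}\n\nText: {text}\nLabel:"

task = "Classify the sentiment of the text as Positive or Negative."
examples = [
    ("I loved this match!", "Positive"),
    ("What a boring game.", "Negative"),
]

print(zero_shot_prompt(task, "Incredible final!"))
print(few_shot_prompt(task, examples, "Incredible final!"))
```

Either string would be sent to the model as-is; the few-shot version just spends a few extra tokens to show the model the pattern you want.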

Use cases where you need to teach the AI a specific and highly nuanced style, or have a lot of variability in your tasks, might benefit from Fine-Tuning (although we think there’s still a way to do this with Few-Shot Learning).
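For a sense of what “training data” means here: GPT-3 fine-tuning jobs take a JSONL file of prompt/completion pairs. The sports-commentary pairs below are invented for illustration, and real jobs typically want hundreds of examples, but the shape is the point:

```python
import json

# Fine-tuning teaches the model a style via many prompt/completion pairs.
# These two pairs are invented for illustration.
training_pairs = [
    {"prompt": "Describe the play:\nMessi dribbles past three defenders and scores.\n\n###\n\n",
     "completion": " GOAL! Messi weaves through the defense and buries it!"},
    {"prompt": "Describe the play:\nThe keeper saves a penalty in the 90th minute.\n\n###\n\n",
     "completion": " WHAT A SAVE! The keeper denies the spot kick at the death!"},
]

# One JSON object per line -- the JSONL file you upload to start the job.
jsonl = "\n".join(json.dumps(pair) for pair in training_pairs)
print(jsonl)
```

Unlike prompting, you pay once to train and the style is then baked into the model.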

And if you want to add new or more precise information into the model, Semantic Search (which we’ll cover in a future newsletter) is quickly becoming the de facto approach.
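Semantic search in a nutshell: embed your documents and the query as vectors, retrieve the closest match, and feed it into the prompt. Here’s a toy Python sketch with hand-made stand-in vectors; a real setup would get these from an embeddings model instead.

```python
import math

# Toy semantic search: rank documents by cosine similarity to the query.
# The 3-dimensional vectors are hand-made stand-ins for real embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "offside rule": [0.9, 0.1, 0.2],
    "penalty shootout": [0.2, 0.8, 0.3],
    "transfer window": [0.1, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of "what is offside?"

best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)  # the retrieved doc gets prepended to the GPT prompt
```

This is how you get new or more precise information "into" the model without retraining it: the model never changes, the prompt does.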

Interested to dig in? You’re in luck!

We put together a walkthrough of three of these approaches for you, including tutorials and a free Google Sheets template to calculate fine-tuning costs. Check it out here!


Download and explain TikTok videos

Starting with a demo on generating Sports Commentary for a World Cup noob

Imagine a world where anything can be explained for any knowledge level in a matter of seconds.

Oh wait, we’re already living in that world.

We call this the AI Enabled Expert theory and it’s one of the clearest ways organizations can get value out of GPT today.

Take any expert topic, and ask AI to explain it for any knowledge level you’d like.

Here’s a quick walkthrough of how you can do this with TikTok videos specifically:

1) Grab a link to a TikTok video

2) Paste the link into a video downloader like Snaptik (free)

3) Upload the MP4 to OpenAI’s Playground using the Speech to Text feature (btw, you can also record audio straight in the tool)

[Screenshot: Transcribe in playground]

4) Then click “Use as Input” and craft a prompt like the following:

[Screenshot: Sports example - explain transcript]

This example is fun because it’s about sports, but this technique could also be used for explaining technical concepts to a non-technical audience, adding commentary to an important briefing for an executive… the list goes on!
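The prompt in step 4 boils down to a small template: transcript in, target knowledge level in, explanation out. The wording below is hypothetical (the exact prompt in the screenshot may differ), but it shows how the same code serves a World Cup noob or a non-technical executive just by swapping the audience:

```python
# Hypothetical prompt template for the "explain this transcript" step.
# Swap the audience string to retarget the explanation's knowledge level.

def explain_prompt(transcript, audience):
    return (
        f"Explain the following transcript for {audience}. "
        "Define any jargon as you go.\n\n"
        f"Transcript:\n{transcript}\n\nExplanation:"
    )

prompt = explain_prompt(
    "...and it's a clean sheet for the keeper!",
    "a World Cup noob",
)
print(prompt)
```

Send the resulting string to the model and you get your tailored explanation back.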


For your reading list 📚

ChatGPT updates you should be watching...

ChatGPT is altering the marketing landscape as we speak...

And if you're really nerdy ...

  • People are charging $2K+ to fine-tune GPT-3 for you w/ OpenAI's APIs, but here's a Google Colab notebook showing you how to DIY for free (+ the cost of training, of course)

  • GPT-4 is rumored to have over one trillion parameters and is expected to increase the LLM's accuracy in mimicking human behavior and speech patterns

  • If you’re interested in the nerdy details of how GPT-3 was trained, here are some great visuals

That's all!

We'll see you again on Thursday. Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected]

... and if someone forwarded this email to you, thank them 😉, and subscribe here!


🪄 The AI Exchange Team