Meta's Next Move with LLaMA

Will open source models outpace GPT-4 and others? Plus: going deep on prompt engineering

Welcome to another edition of what we’re determined to make the best damn newsletter in AI. Here we’ll break down AI topics that matter, open your mind to use cases, and keep you ahead of the curve.

Our #1 goal is to be useful. So please shoot us an email 📩 if you have questions or feedback, and especially if you implement something we share!

Here's what we're covering today:

  • Whether Meta’s upcoming release of a commercially usable LLaMA is a big deal

  • How Notion AI is fitting into our workflow

  • 🔥 New resource drops for members on prompt engineering

  • EU AI Act and the latest on AI adoption across industries

... and if someone forwarded this email to you, thank them 😉, and subscribe here!

Let’s get to it! 👇

TODAY'S PERSPECTIVE

Meta is reportedly prepping LLaMA for open source commercial release… is that a big deal?

According to multiple credible sources, Meta is preparing to release a version of its open source large language model LLaMA that would be available for commercial purposes. AKA businesses could use it.

How can we tell if this is a big deal or not?

Well, it all boils down to something we’ll call a Model Capability Threshold - how “good” does an AI model have to be to be useful for a specific task?

Today’s newsletter is about to get a bit nerdy - but given all the questions we’re seeing on social media about this - we’re here to help you feel more grounded in what’s important.

Let’s dig in.

First, simple tasks have a low Model Capability Threshold.

Imagine we have a simple task like having ChatGPT come up with a list of topics that are covered in a blog post. This task is pretty easy, and there’s a good chance you wouldn’t notice a huge difference between GPT-3.5 and GPT-4. That’s because the model only needs to be somewhat “capable” in order to be good at that task.

In contrast, a more complex task like creative writing has a higher Model Capability Threshold.

If you’ve tried asking GPT-4 to write things for you - it’s abundantly clear how much better it performs than GPT-3.5. That means a model needs to be more powerful or capable to do well at that task.
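
Want to feel out where that threshold sits for your own tasks? Here’s a minimal sketch - just an illustration using the OpenAI Python SDK, with an example prompt and model names we picked - that runs the same task against GPT-3.5 and GPT-4 so you can compare the outputs side by side:

# Minimal sketch: send the same prompt to two models and compare the outputs.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

PROMPT = "List the topics covered in this blog post:\n\n<paste the blog post text here>"

for model in ("gpt-3.5-turbo", "gpt-4"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)

If the cheaper model’s answer is already good enough for your task, that task sits below its Model Capability Threshold - and a capable open source model may clear it too.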

Back to Meta’s model - even if the version of LLaMA they release isn’t as good as GPT-4, it might be good enough for most of the tasks people and businesses actually need a model for to create value. AKA it might pass most tasks’ Model Capability Threshold.

And once “open source” models (aka free and fully customizable) pass the Model Capability Threshold, they become way more valuable than their “closed source” alternatives (like GPT-4).

The real thing stopping businesses from using LLaMA more seriously so far has been that it’s not commercially licensed, meaning you’d face serious risk of penalties and fines if you were caught using it for commercial or monetary gain.

But if that’s no longer an issue, things are about to get interesting.

Related: Rachel’s latest insights on whether companies need to fine-tune their own LLMs

USE CASE DEEP DIVE

Summarize anything in Notion

Notion has been quietly rolling out big AI features - and this week, we’ve been loving their new summarize feature.

Write as much as you want, or copy/paste articles onto the page.

Then use the / command and select Summarize, and you can quickly and easily generate a summary - all without leaving Notion.

JUST DROPPED: MEMBER RESOURCES 🔥 

🔒️ Premium Only: Go from 0 to 100 on Prompt Engineering

There’s a big difference between a good prompt and a great prompt for complex tasks.

We’ve just refreshed our prompting resources on everything you need to know to go from prompt beginner to prompt hero - below are the new & improved prompt engineering chapters you now have access to.

And remember, if you have more specific questions - ask AIxChat 🙌 

  1. Start with Intro to prompt engineering.

  2. Then dive into prompting basics (the MASTER method).

  3. From there, your next-highest-leverage gains will come from learning prompt formatting (from simple to advanced), prompt chaining, and few-shot prompting.

  4. We’ve talked about building a Prompt Library to share with your team. Learn how to get started with prompt templating, the foundation of a good prompt library (there’s a quick sketch of the idea right after this list).

  5. And answer the age-old question of whether prompt engineering is actually worth learning (we believe yes) and where the state of the art is with AI-assisted prompting.
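
To make prompt templating and few-shot prompting concrete, here’s a tiny sketch in plain Python. The template text, example pairs, and field names below are illustrative only - the full patterns live in the chapters above.

# Prompt templating sketch: a reusable template plus a few worked examples
# (few-shot) that get filled in before the prompt is sent to a model.
# The template text, examples, and field names are illustrative only.

FEW_SHOT_EXAMPLES = [
    {"input": "Our Q2 revenue grew 14% while costs stayed flat.",
     "output": "Revenue up 14% in Q2; costs unchanged."},
    {"input": "The launch slipped two weeks due to a vendor delay.",
     "output": "Launch delayed two weeks (vendor issue)."},
]

TEMPLATE = """You are a concise business writer.
Summarize the text in one sentence.

{examples}

Text: {text}
Summary:"""

def build_prompt(text: str) -> str:
    """Fill the template with the few-shot examples and the new input."""
    examples = "\n\n".join(
        f"Text: {ex['input']}\nSummary: {ex['output']}" for ex in FEW_SHOT_EXAMPLES
    )
    return TEMPLATE.format(examples=examples, text=text)

print(build_prompt("Meta may release a commercially usable version of LLaMA."))

Building a small set of templates like this - one per recurring task - is the heart of prompt templating, and it makes sharing prompts with your team much easier.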

Have an AI question? Chat with AIxChat, your friendly AI consultant trained on all the knowledge in The AI Exchange 🤖 

LINKS

For your reading list 📚

AI regulation is coming in strong…

AI adoption is charging forward…

And if you're really nerdy...  

That's all!

We'll see you again on Thursday. Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected].

... and if someone forwarded this email to you, thank them 😉, and subscribe here!

Cheers,

🪄 The AI Exchange Team