Google's AI Still Needs Prompt Engineering
What we learned from looking into Gemini's "fake" demos
Welcome to another edition of the best damn newsletter in AI.
This free newsletter is designed to keep you ahead of the curve and open your mind to using AI in your work and business.
Digging deep in AI for work or AI Operations? Take a look at our membership.
Our #1 goal is to be useful. So please shoot us an email 📩 if you have questions or feedback, and especially if you implement something we share!
Here's what we're covering today:
Google’s Gemini reality check
Why prompt engineering and AI literacy should be your takeaways from the whole kerfuffle
Mistral AI is blowing up. Watch them.
Let’s get to it! 👇
Google's Gemini: A Reality Check on AI's Capabilities
Google's latest AI reveal, Gemini, had us all in awe. The demo video showed an AI that seemed to understand video content with remarkable depth. Plus, Google touted that Gemini beats GPT-4 on many benchmarks.
But as it turns out, the reality was a bit more... edited.
The Reality Behind the Flashy Demo
The demo video showed Gemini interpreting a moving cup game, a drawing of a duck vs a rubber duck and more.
But in the full release where they detail the demos, we found a series of images of hands playing rock, paper, scissors that illustrates exactly these shortcomings.
What the video didn't show was the extra prompting needed to get Gemini to this point. For example, to recognize that these photos showed rock, paper, scissors, the AI needed the added context "Hint: it's a game" to make the connection.
Google’s release blog post: https://developers.googleblog.com/2023/12/how-its-made-gemini-multimodal-prompting.html
The Key Takeaways: 👇️
The tech is advancing quickly, but it’s still hard
Google’s demo is just one example in a sea of fancy demonstrations backed by lackluster tools and product experiences. That’s because this technology is hard.
You need to be a discerning customer
Building up your AI Literacy is about building your bullshit detector for fancy demos vs what is realistically possible. The deeper you understand AI and its capabilities, the quicker you can test it to learn the limitations of whatever tool or AI model you are working with.
Prompting is still essential
Prompting, and more specifically, sharing the right context with an AI model so that it can successfully complete the task, will continue to be important even as these models become more powerful.
Sure, the bar for what models can understand without much context will continue to rise; but many real-world use cases will still need the “Hint: it’s a game”.
For your reading list 📚
We'll see you again soon! Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected].
🪄 The AI Exchange Team