35% of People Report Secret Usage of AI

Just because AI isn't allowed doesn't mean it's not being used

Welcome to another edition of the best damn newsletter in AI.

This free newsletter is designed to keep you ahead of the curve and open your mind to using AI in your work and business.

Digging deeper into AI for work or AI Operations? Take a look at our membership.

Our #1 goal is to be useful. So please shoot us an email 📩 if you have questions or feedback, and especially if you implement something we share!

Here's what we're covering today:

  • The shocking stat out of Retool’s recent State of AI report

  • Where to start on developing an AI policy

  • Microsoft’s Content Credentials as a Service & More

... and if someone forwarded this email to you, thank them 😉, and subscribe here!

Let’s get to it! 👇


35% of People Report Secret Usage of AI

The latest State of AI study is out, and it's a wake-up call for some businesses: AI isn't just a passing trend, and companies are adopting it heavily and fast.

In the report, Retool covers:

  • 66% of respondents' companies have at least one AI use case live

  • Of those that have implemented AI, 96% say it is useful

  • Despite the investment so far, 45% of respondents say their company still isn't investing enough in AI

There's lots of good data, but we're going to cover one point that stood out to us the most.

And that's secret AI usage.

Only 54% of people reported being encouraged to use AI at work. Even more startling, 35% confessed to using AI secretly, whether or not their company's policies allow it.

Let's talk about why secret use of AI is worse than out in the open…

The Secret Use of AI: Playing with Fire?

Using AI under the radar is like sitting on a ticking time bomb. Without the right checks and balances, a business is left wide open to some serious risks. We're talking potential data breaches, non-compliance with regulations, and ethical dilemmas.

Think about it:

  • Data breaches: Without a formal AI policy, employees can accidentally spill sensitive company data.

  • Non-compliance: If AI is used in ways that break the rules like customer data protection, you could be looking at fines or a tarnished reputation.

  • Ethical dilemmas: Misusing AI could lead to unethical outcomes, like biased decision-making or privacy invasion.

The Need for an Official AI Policy

Every company needs to roll out an official AI policy, like yesterday. This policy should spell out how AI can be used, who can use it, and what safety nets need to be in place to protect data and ensure ethical use.

Here's a quick rundown of what your AI policy should cover:

  • Clear guidelines on AI use: This should detail what tasks AI can be used for, and who gets the green light to use it.

  • Data protection measures: Your policy should lay down the law on how data is to be handled when using AI, including data privacy and security protocols. A simple heuristic is the Reddit Rule: if you wouldn't post it anonymously on Reddit, don't put it in an AI tool without clear understanding of their data policy.

  • Ethical guidelines: The policy should also tackle ethical considerations, like making sure AI is used in a way that's fair and unbiased. A simple ethics litmus test? If the usage may show up on the cover of the New York Times, would you be proud of it?
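If your team wants to turn a policy like this into tooling, the "clear guidelines" and "Reddit Rule" ideas above can even be encoded as a simple pre-flight check. Here's a minimal, hypothetical sketch; the approved tasks and sensitive-data markers are illustrative placeholders, not from the report, and a real implementation would need far more robust detection:

```python
# Hypothetical sketch: an AI usage policy expressed as data,
# plus a simple pre-flight check before text goes to an AI tool.
# ALLOWED_TASKS and SENSITIVE_MARKERS are illustrative examples only.

ALLOWED_TASKS = {"drafting", "summarization", "brainstorming"}
SENSITIVE_MARKERS = ("customer_email", "ssn", "api_key")

def may_use_ai(task: str, text: str) -> bool:
    """Return True only if the task is on the approved list and the
    text contains no obvious sensitive-data markers (the 'Reddit Rule':
    if you wouldn't post it anonymously, don't paste it into an AI tool)."""
    if task not in ALLOWED_TASKS:
        return False
    lowered = text.lower()
    return not any(marker in lowered for marker in SENSITIVE_MARKERS)

print(may_use_ai("summarization", "Quarterly planning notes"))   # True
print(may_use_ai("summarization", "Customer_Email: [email protected]"))  # False
print(may_use_ai("hiring decisions", "Candidate resume"))        # False
```

The point isn't the code itself; it's that a written policy only works when "what's allowed" and "what's sensitive" are spelled out concretely enough that a checklist, or a script, could apply them.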


The 5 Best GPTs for Work

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

We are releasing guides and hands-on support for anyone who wants to build custom GPTs for their business. Get on the list to be notified here.


For your reading list 📚

AI is revolutionizing industries and valuations...

New AI tech on the horizon...

  • Nvidia's new AI chip, the HGX H200, is a beast with 1.4x more memory bandwidth and 1.8x more memory capacity than its predecessor.

  • Humane has launched the AI Pin, a $699 wearable that connects to AI models, offers voice messaging, real-time translation, and more.

AI in the world of politics and cybersecurity...

  • Google's taking legal action against scammers using AI to spread malware. The scammers trick users with a fake Bard AI service download, which actually steals social media credentials.

  • Microsoft is stepping into the political ring with Content Credentials as a Service, a tool to combat deepfakes and boost cybersecurity in elections. Let's hope it's enough.

If you’re really nerdy...

  • Ghost, backed by OpenAI, says Large Language Models (LLMs) can solve self-driving car woes, but experts aren't so sure.

That's all!

We'll see you again on Thursday. Thoughts, feedback and questions are much appreciated - respond here or shoot us a note at [email protected].

... and if someone forwarded this email to you, thank them 😉, and subscribe here!


🪄 The AI Exchange Team