A new AI that's too powerful to release to the public. Should you be worried?

4/24/2026

This week, the headlines were full of a story about a new AI model called Mythos, built by Anthropic — the same company that makes Claude, the AI that helped write our books.

The headlines were dramatic. "Can hack nearly anything." "Too powerful to release." And it's true that Anthropic has decided not to make this particular model publicly available — an unusual step that tells you something about how seriously they're taking its capabilities.

So what's actually going on? And should you be concerned?

Here's the plain-English version.

Mythos has turned out to be extraordinarily good at finding hidden flaws in software — the kind of flaws that, in the wrong hands, could be exploited to cause serious damage. Using the model, Anthropic found thousands of these flaws in widely used systems, including operating systems and web browsers. Most had gone undetected for years.

That sounds alarming. But here's the other side of it.

Anthropic isn't releasing Mythos to the public precisely because they take these risks seriously. Instead, they're making it available to a small group of major technology companies — Microsoft, Amazon, Apple, and others — specifically so those companies can use it to find and fix the flaws before anyone with bad intentions can exploit them. In other words, the same tool that could cause harm is being used to prevent it.

It's a bit like a locksmith who discovers a new way to pick a lock. The responsible thing to do isn't to publish the technique for everyone to read. It's to quietly tell the lock manufacturers so they can make better locks.

That's what's happening here. And it's a reminder that the people building AI aren't oblivious to its risks — in many cases, they're the ones working hardest to manage them.

Our book Staying Safe covers the AI risks that directly affect everyday life — and what you can do about them.