
Anthropic Using ‘Fear-Based Marketing’ to Promote Claude Mythos: Sam Altman

By Callan Quinn · Published April 23, 2026 · 4 min read · Source: Decrypt

The OpenAI chief said warnings about rival Anthropic’s AI model may be exaggerated, as its cybersecurity capabilities draw scrutiny.

Edited by Stephen Graves

OpenAI CEO Sam Altman. Image: Decrypt/Village Global

In brief

OpenAI CEO Sam Altman pushed back against growing alarm over rival Anthropic’s powerful new AI model Claude Mythos, suggesting the company is using “fear” to market the product.

Speaking on the Core Memory podcast hosted by tech journalist Ashlee Vance, Altman argued that the use of “fear-based marketing” was geared towards keeping AI in the hands of a “smaller group of people.”

“You can justify that in a lot of different ways, and some of it’s real, like there are going to be legitimate safety concerns,” Altman said.

“But if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people,’ I think fear-based marketing is probably the most effective way to justify that.”

Altman added that while there are valid concerns about AI safety, “it is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.’”

He noted that it was “not always easy” to balance AI’s new capabilities with OpenAI’s belief that the technology should be accessible.

Anthropic’s Claude Mythos

Anthropic’s Claude Mythos model, revealed last month, has drawn intense attention from researchers, governments and the cybersecurity industry, particularly after testing suggested it can autonomously identify software vulnerabilities and execute complex cyber operations. The model is being distributed only to a limited set of organizations through a restricted program.

The rollout reflects a broader divide in the AI industry over how powerful systems should be deployed, with some companies emphasizing controlled access and others arguing for wider distribution to accelerate innovation and understanding of the technology.

Mythos has become a focal point in that debate. The model’s capabilities have been framed by Anthropic as both a defensive breakthrough—allowing faster detection of critical software flaws—and a potential offensive risk if misused. Early this month, it identified hundreds of vulnerabilities in Mozilla’s Firefox browser during testing and has also demonstrated the ability to carry out multi-stage cyberattack simulations.

Anthropic has restricted access to the system via Project Glasswing, granting select companies including Amazon, Apple and Microsoft the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available.


Security experts warn that the same capabilities that allow Mythos to identify vulnerabilities could also be used to exploit them at scale. Tests by the UK’s AI Security Institute found the model could autonomously complete complex cyber operations.

The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system.

That said, a group of researchers claimed last week they were able to reproduce Mythos’ findings using publicly available models.

Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On prediction market Myriad, owned by Decrypt's parent company Dastan, users put a 49% chance on Claude Mythos being released to the wider public by June 30.

Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all such claims should be taken at face value.

“There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways,” he said. “I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world.”

Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity despite shifting narratives.

“I don’t know where that’s coming from… people really want to write the story of pulling back,” he said. “But very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’”

