Anthropic is warning that its latest artificial intelligence model is sophisticated enough to uncover security flaws that decades of research have missed.
The company operates Claude, the AI chatbot that has been copied and repackaged into free, public models, and it is hoping to partner with a consortium of tech companies to button up security measures ahead of the new model's release.
'It has found vulnerabilities, and in some cases crafted exploits.'
Anthropic's Mythos model of Claude AI will be available only to 40 select companies, to be used for good, the company claims.
It represents "the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now," said Logan Graham, head of Anthropic's vulnerability testing team.
The company fears that its new AI model is so good at finding cracks in cybersecurity that it must only be shared with companies it deems capable and responsible enough to prepare for possible attacks when Mythos goes public.
"This model is good at finding vulnerabilities that would be well understood and findable by security researchers," Graham said. "At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them."
RELATED: How to power the AI race without losing control
[Photo: Samyukta Lakshmi/Bloomberg/Getty Images]
Anthropic will reportedly commit up to $100 million in credits to the project, equivalent to what it would typically charge for that volume of its chatbot's usage.
Labeled Project Glasswing, the initiative to shore up cybersecurity will grant Mythos access to handpicked companies chosen largely from Big Tech like Amazon, Apple, Google, and Microsoft. The group is rounded out by internet infrastructure and cybersecurity giants like Broadcom, Cisco, CrowdStrike, Nvidia, and Palo Alto Networks, along with financial titan JPMorgan Chase and key open-source nonprofit the Linux Foundation.
This is not the first time an AI company has warned that its product is too dangerous for the public, and looking back, readers can gauge for themselves whether Claude is as dangerous as its creators purport it to be.
In 2019, OpenAI sent out a warning ahead of its release of GPT-2, claiming that its capabilities — now vastly eclipsed by later models — could be used to mass-produce propaganda or misleading text.
As Wired reported at the time, OpenAI said GPT-2 was too risky to be released to the general public.
RELATED: Claude, Anthropic's AI assistant, slammed by Elon Musk for anti-white responses to simple prompts
Claude has been in the news for alleged missteps, leaks, and accidental postings throughout the past year. While it may not be a household name yet, it has raced through the tech sector as a go-to tool for "agentic" work building software, apps, and even companies.
In addition to its model being open-sourced and freely used by the general public, the company has been noted for "accidental" postings of its own code.
Anthropic "accidentally uploaded a file to a public repository that's just meant to help developers understand how to use their product" and "exposed some of the source code of Claude," reporter Aaron Holmes explained recently.
Proprietary information was further leaked in another alleged accidental posting, this time through a blog draft that revealed "internal source code."
The company seems poised for ongoing marketing battles, both of its own making and not: from its high-stakes lawsuit against the federal government over being labeled a supply chain risk to the blowback it has received for putting a woman closely linked to the cultish Effective Altruism movement in charge of its AI's "Constitution."