Anthropic Study: AI Models Are Highly Vulnerable to 'Poisoning' Attacks

5 hours ago 2

A recent study by Anthropic AI, in collaboration with several academic institutions, has uncovered a startling vulnerability in AI language models, showing that it takes a mere 250 malicious documents to completely disrupt their output. Purposefully feeding malicious data into AI models is ominously referred to as a "poisoning attack."

The post Anthropic Study: AI Models Are Highly Vulnerable to ‘Poisoning’ Attacks appeared first on Breitbart.

Read Entire Article