News

Anthropic found that pushing AI to "evil" traits during training can help prevent bad behavior later — like giving it a ...
Researchers are trying to “vaccinate” artificial intelligence systems against developing evil, overly flattering or otherwise ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
On Friday, Anthropic debuted research unpacking how an AI system’s “personality” — as in, tone, responses, and overarching ...
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
AI models can often have unexpected behaviours and take on strange personalities, and Anthropic is taking steps towards ...
Anthropic’s use of books without permission to train its artificial intelligence system was legal under US copyright law, a judge ruled. US copyright ...
Alsup ruled that Anthropic's use of copyrighted books to train its large language model, or LLM, was "quintessentially transformative" and was protected by the "fair use" doctrine under copyright law.