News

At its core, the case is simple: a small group of authors has accused Anthropic of violating their copyright. Their ...
By empowering Claude to exit abusive conversations, Anthropic is contributing to ongoing debates about AI safety, ethics, and ...
Claude, the AI chatbot made by Anthropic, will now be able to terminate conversations, because the company hopes that it ...
While an Anthropic spokesperson confirmed that the AI firm did not acquire Humanloop or its IP, that’s a moot point in an ...
Anthropic’s Claude is getting a side gig as a tutor. The company has launched new modes for its two consumer-facing platforms ...
Artificial intelligence company Anthropic has revealed new capabilities for some of its newest and largest models. According ...
The AI race is close, with many companies vying for users’ attention, but developers seem to largely agree that Anthropic has ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
A new feature in Claude Opus 4 and 4.1 lets it end conversations with users in cases of "persistently harmful or abusive ...
Anthropic empowers Claude AI to end conversations in cases of repeated abuse, prioritizing model welfare and responsible AI ...
In a move that sets it apart from every other major AI assistant, Anthropic has given its most advanced Claude models the ...
The Claude AI models Opus 4 and 4.1 will only end harmful conversations in “rare, extreme cases of persistently harmful or ...