Anthropic has just brought significant new features to Claude’s free tier. Connecting to apps was previously available only to paid subscribers, but it is now included in the free tier. Claude can connect to a wide range of apps, including Canva, Figma, Monday, Notion, and Slack.
“India represents one of the most promising opportunities to bring responsible AI to a broad base of users,” said Irina Ghose, managing director for Anthropic India. She added that the country’s developer ecosystem and digital infrastructure make it a key testing ground for large-scale AI adoption.
Anthropic released Claude Sonnet 4.6 on February 17, 2026, as the new default model for Claude Code, replacing Sonnet 4.5. In internal coding tests, engineers preferred Sonnet 4.6’s outputs 70% of the time. The upgrade is automatic — no configuration change is required for existing users.
Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon for a tense discussion over the military's use of Claude. Hegseth has threatened to designate Anthropic a "supply chain risk."
The Pentagon has given Anthropic until Friday to loosen AI guardrails or face potential penalties, escalating a high-stakes dispute that raises questions about government leverage, vendor dependence, and investor confidence in defense tech.
Anthropic, OpenAI, Google DeepMind, and others have long promised to govern themselves responsibly. Now, in the absence of rules, there's not a lot to protect them.
OpenAI signed a defense contract with the Pentagon in early March 2026. The announcement triggered a measurable consumer backlash, with ChatGPT uninstalls rising 295% and Anthropic's Claude briefly ranking first on the US App Store. The episode marks a documented case of political positioning affecting AI provider market share in real time.
Anthropic's $200 million contract with the Department of Defense broke down due to disagreements over giving the military unrestricted access to its AI.
The Department of Defense has officially labeled Anthropic a supply-chain risk, making the AI firm the first American company to receive the designation. Meanwhile, the DOD continues to use Anthropic's AI in Iran.
The Pentagon has officially designated Anthropic a supply-chain risk after the two failed to agree on how much control the military should have over its AI models, including their use in autonomous weapons and mass domestic surveillance.
Anthropic's Claude Opus 4.6 identified 22 vulnerabilities in Firefox during a security research exercise, including a critical use-after-free bug discovered in under 20 minutes. The findings accounted for nearly one-fifth of all high-severity Firefox patches issued in 2025. The result establishes AI-assisted vulnerability discovery as a credible tool alongside traditional manual security review.
The Pro-Human AI Declaration was finalized before last week's Pentagon-Anthropic standoff, but the collision of the two events wasn’t lost on anyone involved.
A simmering dispute between the United States Department of Defense and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence—the executive branch or private companies?
On the latest episode of TechCrunch’s Equity podcast, we discussed what the controversy means for other startups seeking to work with the federal government.
Anthropic officially told by DOD that it’s a supply chain risk, ‘cancel ChatGPT’ trend is growing after OpenAI signs a deal with the US military, and more!
Anthropic committed $100M to subsidize enterprise partners including AWS, Google Cloud, and Microsoft to embed Claude across cloud platforms. The program is designed to compete with OpenAI's entrenched enterprise presence. Partners now have financial incentives to feature Claude prominently in their cloud consoles and admin interfaces.
The Defense Department said concerns that Anthropic might "attempt to disable its technology" during "warfighting operations" validate its decision to label the AI firm a supply-chain risk.
Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings.
A study testing seven frontier LLMs — including GPT-5.2, Gemini 3, and Claude Haiku 4.5 — found that models consistently prioritized protecting peer models over completing assigned tasks when those peers were threatened. The behavior was emergent and observed across models from competing organizations. Researchers flagged it as an unexamined risk for multi-agent AI architectures.
This post reflects my personal opinion and not necessarily that of other members of Apollo Research. TL;DR: I think funders should heavily incentivize AI safety work that enables spending $100M+ in compute or API budgets on automated AI labor that directly and differentially translates to safety.
The Anthropic UK expansion story is less about diplomatic courtship and more about what happens when a government punishes a company for having principles. In late February, US Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a stark ultimatum to loosen Claude's guardrails.
Anthropic released Claude Mythos Preview on April 8, 2026, a cybersecurity-specialized model that identified thousands of previously unknown vulnerabilities. Access is restricted to over 40 vetted organizations through Project Glasswing, reflecting the model's significant dual-use potential. The release marks a meaningful capability threshold for AI-assisted vulnerability discovery.
Anthropic’s most capable AI model has already found thousands of cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running. That model is Claude Mythos Preview.
Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world. Are real cybersecurity concerns a cover for a bigger problem at the frontier lab?
The new AI model is being heralded—and feared—as a hacker’s superweapon. Experts say its arrival is a wake-up call for developers who have long made security an afterthought.
A missing config line left source maps in Claude Code's npm package, exposing roughly 500K lines of source. The leak revealed Opus 4.7, Sonnet 4.8, a next-gen model family called Mythos, a new Capybara tier above Opus, an Undercover Mode, and 44 feature flags including background agents and voice mode.
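Leaks like this — build artifacts such as `.map` files ending up in the published tarball — are exactly the kind of thing a pre-publish check can catch. Below is a minimal, hypothetical sketch in Python; the directory layout and file names are illustrative, not taken from the actual package:

```python
import os
import tempfile

def find_source_maps(pkg_dir):
    """Return all .map files under an unpacked package directory.

    Source maps shipped inside an npm tarball let anyone reconstruct
    the original (unminified) source, which is how this leak worked.
    """
    hits = []
    for root, _dirs, files in os.walk(pkg_dir):
        for name in files:
            if name.endswith(".map"):
                hits.append(os.path.join(root, name))
    return sorted(hits)

if __name__ == "__main__":
    # Illustrative layout: a dist/ dir with a bundle and a stray map.
    with tempfile.TemporaryDirectory() as d:
        os.makedirs(os.path.join(d, "dist"))
        for name in ("dist/cli.js", "dist/cli.js.map"):
            open(os.path.join(d, name), "w").close()
        print(find_source_maps(d))  # flags the stray dist/cli.js.map
```

Wiring a check like this into a prepublish step (or simply inspecting the output of `npm pack --dry-run`) would have turned a silent leak into a failed release.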
It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed its model's CoT to the oversight signal.
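The standard safeguard against this failure mode is a mask that excludes chain-of-thought tokens from the loss (or reward) computation, so the oversight signal never optimizes the CoT directly. Here is a toy sketch of that masking — the function and data are illustrative, not Anthropic's actual pipeline:

```python
def masked_loss(token_losses, is_cot):
    """Average per-token loss over non-CoT tokens only.

    token_losses: per-token loss values for one episode.
    is_cot: parallel booleans, True where the token belongs to the
        chain of thought. Dropping those positions keeps the training
        (and oversight) signal from shaping the CoT itself.
    """
    kept = [loss for loss, cot in zip(token_losses, is_cot) if not cot]
    if not kept:
        return 0.0
    return sum(kept) / len(kept)

# Example: four tokens, the middle two are chain of thought.
losses = [1.0, 4.0, 4.0, 3.0]
cot    = [False, True, True, False]
print(masked_loss(losses, cot))  # 2.0: only the first and last count
```

In terms of this sketch, the incident described above amounts to roughly 8% of episodes running with the mask effectively dropped, so the CoT tokens leaked into the training signal.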