News

Anthropic recently upgraded its Claude Sonnet 4 model to support up to 1 million tokens of context, thereby ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
Anthropic has upgraded Claude Sonnet 4 with a 1M token context window, competing with OpenAI's GPT-5 and Meta's Llama 4.
Claude Sonnet 4 can now support up to one million tokens of context, marking a fivefold increase from the prior 200,000, ...
Anthropic's Claude Sonnet 4 now supports a 1 million token context window, enabling the AI to process entire codebases and documents in ...
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
To account for the extra computing power required for large requests, Anthropic will increase the cost for Claude Sonnet 4 ...
Dan Shipper, in Vibe Check: Today, Anthropic is releasing a version of Claude Sonnet 4 that has a 1-million token context window. That ...
Claude Opus 4.1 scores 74.5% on the SWE-bench Verified benchmark, indicating major improvements in real-world programming, bug detection, and agent-like problem solving.
Claude Sonnet 4 has been upgraded and can now handle up to 1 million tokens of context, but only when it's used via the API ... (a minimal usage sketch follows at the end of this list).
A new report today from code quality testing startup SonarSource SA is warning that while the latest large language models ...
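Several of the items above note that the 1-million-token window is reached through the Anthropic API rather than the consumer apps. For orientation, here is a minimal sketch of what such a request might look like with the Anthropic Python SDK; the model id, the long-context beta header value, and the input file name are illustrative assumptions, not details confirmed by the coverage above.

```python
# Minimal sketch, assuming the "anthropic" Python SDK is installed and an
# ANTHROPIC_API_KEY environment variable is set. The model id, the beta header
# value, and the input file name are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

# Hypothetical file containing a concatenated codebase or long document.
with open("codebase_dump.txt") as f:
    codebase = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Claude Sonnet 4 model id
    max_tokens=4096,
    # Long-context access is gated behind a beta header; this value is an assumption.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    messages=[
        {
            "role": "user",
            "content": (
                "Here is an entire codebase:\n\n"
                + codebase
                + "\n\nSummarize its architecture and flag likely bugs."
            ),
        }
    ],
)

print(response.content[0].text)
```

As the pricing item above notes, requests at this scale are billed at a higher rate, so sending a full codebase in a single call is a deliberate choice rather than a default.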