SunBrief #76: Anthropic drops Opus 4.7

OpenAI runs your computer, Google brings AI Mode to Chrome, and Anthropic launches Claude Design.

Welcome to the SunBrief

Today in SunBrief 🌞

  • Learn the AI Skills That Matter Now

  • Claude Opus 4.7 Launches With Tripled Vision and Deeper Coding

  • OpenAI Codex Expands Beyond Coding

  • Stock Updates

  • Qwen3.6-35B-A3B Goes Open-Weights With Strong Agentic Coding

  • AI Highlights of the Week

  • Too Important to Miss

Learn the AI Skills That Matter Now

Sponsored

A thousand AI startups just became obsolete.

Anthropic’s Managed Agents now let you build powerful AI agents without backend headaches or heavy infrastructure. What used to take months can now be done in days.

The real edge is knowing how to use this well.

Outskill’s 16-hour AI Mastermind helps you learn practical AI use cases, automations, and workflows so you can work smarter, move faster, and stay valuable in 2026.

Live sessions: Saturday & Sunday, 10 AM–7 PM EST
Bonus for attendees: $5,000+ in AI resources

Claude Opus 4.7 Launches With Tripled Vision and Deeper Coding

Anthropic’s latest Opus model improves long-running software engineering work and adds tighter security controls ahead of a broader Mythos release

Anthropic released Claude Opus 4.7 as a general-availability upgrade focused on hard software engineering, higher-resolution vision, and new cybersecurity guardrails while keeping pricing the same as Opus 4.6.

Key Points:

  • Big coding upgrade: Opus 4.7 improves on Opus 4.6 for advanced, difficult engineering tasks, with better instruction-following, long-run consistency, and self-verification before reporting results.

  • Better vision: Supports higher-resolution images (up to 2,576px on the long edge), helping with dense screenshots, diagrams, and pixel-precise workflows like computer-use agents.

  • Cyber safeguards rollout: Opus 4.7 is the first model to ship with new automated blocks for high-risk cybersecurity requests, acting as a real-world testbed before wider Mythos-class releases.

  • Cyber Verification Program: Anthropic is inviting legitimate security professionals (pentesting, red-teaming, vuln research) to apply for verified access.

  • New controls + features: Adds an xhigh effort level, API task budgets (beta), and Claude Code updates like /ultrareview plus expanded auto mode for longer runs with fewer interruptions.

  • Migration note: A new tokenizer may increase token counts (~1.0–1.35× depending on text), and higher effort levels can generate more reasoning tokens, so teams may need to re-tune prompts and budgets.
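The migration note above implies a simple budget adjustment. As a rough illustration (the function names and numbers other than the ~1.0–1.35× multiplier are ours, not part of any Anthropic API), scaling existing token limits and cost estimates by the worst-case ratio looks like this:

```python
# Back-of-envelope helpers for re-tuning token budgets after a tokenizer
# change. The 1.0-1.35x range comes from the migration note; everything
# else here is illustrative, not an Anthropic API.

def rescale_budget(old_budget_tokens: int, multiplier: float = 1.35) -> int:
    """Scale an existing max-token budget by the worst-case tokenizer ratio."""
    return int(old_budget_tokens * multiplier)

def rescale_cost(old_cost_usd: float, multiplier: float = 1.35) -> float:
    """Estimate new spend if per-token pricing is unchanged."""
    return round(old_cost_usd * multiplier, 2)

# A prompt that used to fit in 4,000 tokens may now need up to 5,400.
print(rescale_budget(4000))   # 5400
print(rescale_cost(12.50))    # 16.88 (worst case at 1.35x)
```

Since higher effort levels can also add reasoning tokens on top of this, treating 1.35× as a floor rather than a ceiling when re-tuning budgets is the safer call.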

Why It Matters:
Opus 4.7 pushes Anthropic’s agentic coding performance forward while showing how top labs are pairing capability upgrades with stronger cyber safety infrastructure, especially as governments and enterprises get more sensitive to AI-enabled attack risks.

What defines the next phase of frontier AI?

Login or Subscribe to participate in polls.

OpenAI Codex Expands Beyond Coding

OpenAI upgrades Codex with computer control, deeper workflow tools, plugins, and memory for ongoing work

OpenAI shipped a major Codex update that turns it from a coding assistant into a broader agent that can operate your computer, work across more tools, and support tasks across the entire software development lifecycle.

Key Points:

  • Computer-use mode: Codex can now see, click, and type with its own cursor (initially on macOS), letting it test apps, iterate UI changes, and work in tools without APIs.

  • Native web workflow: Adds an in-app browser where you can comment directly on pages to guide precise frontend edits and iterations.

  • Image generation built in: Codex can use gpt-image-1.5 to generate and refine visuals for mockups, UI, product concepts, and games.

  • More integrations: Adds 90+ plugins (skills, app integrations, MCP servers) for tools like Jira (via Rovo), CI/CD, Git platforms, Microsoft tools, and more.

  • Deeper dev setup: Better PR review flows, multiple terminals/files, SSH to remote devboxes (alpha), rich previews for PDFs/docs/slides/sheets, plus a summary pane to track plans and artifacts.

  • Long-term continuity: Automations can reuse threads to preserve context, schedule future work, and continue tasks over days/weeks; memory (preview) helps Codex remember preferences and learn from prior actions.

Why It Matters:
Codex is shifting from “write code faster” to “move the whole project forward,” combining computer control, tool integrations, and persistent context so developers can plan, build, review, test, and iterate in one agent-driven workspace.

Would you trust Codex to control your computer for dev work?


Stock Updates

Qwen3.6-35B-A3B Goes Open-Weights With Strong Agentic Coding

Alibaba’s Qwen team open-sources a 35B MoE model (3B active) that competes with much larger coding models

Alibaba’s Qwen team released Qwen3.6-35B-A3B as open weights, positioning it as a highly efficient Mixture-of-Experts (MoE) model that delivers agentic coding performance rivaling much bigger dense models while activating only ~3B parameters per token.

Key Points:

  • Sparse MoE efficiency: The model has 35B total parameters but uses only ~3B active, aiming for strong performance at lower inference cost.

  • Agentic coding jump: Big gains over Qwen3.5-35B-A3B on agent-style coding benchmarks (SWE-bench family, Terminal-Bench, Claw-related evals), and it’s positioned as competitive with models like Gemma4-31B.

  • Multimodal + “thinking” controls: Supports multimodal inputs and offers thinking / non-thinking modes, plus preserve_thinking for longer agent runs.

  • Strong vision-language results: Qwen says the model performs especially well on spatial-intelligence benchmarks and posts solid document/OCR scores, punching above its weight class.

  • Multiple ways to use it: Available via Qwen Studio, Alibaba Cloud Model Studio API (as qwen3.6-flash), and open weights on Hugging Face/ModelScope for self-hosting.

  • Plugs into agent tools: The team highlights compatibility with agentic coding setups like OpenClaw, Qwen Code, and even Claude Code via an Anthropic-compatible endpoint.
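The efficiency claim behind "35B total, ~3B active" can be sanity-checked with simple arithmetic. The sketch below uses the standard rule of thumb of roughly 2 FLOPs per active parameter per token for a forward pass; that approximation and the comparison to a hypothetical dense 35B model are ours, not Qwen's figures:

```python
# Rough per-token compute comparison: sparse MoE (3B active) vs a dense
# model of the same 35B total size. The 2-FLOPs-per-parameter forward-pass
# estimate is a common approximation, not a Qwen-published number.

def forward_gflops_per_token(active_params_billions: float) -> float:
    """Approximate forward-pass cost per token in GFLOPs (~2 * active params)."""
    return 2 * active_params_billions

dense_flops = forward_gflops_per_token(35.0)  # dense 35B: ~70 GFLOPs/token
moe_flops = forward_gflops_per_token(3.0)     # MoE, 3B active: ~6 GFLOPs/token

print(f"~{dense_flops / moe_flops:.1f}x less compute per token")  # ~11.7x
```

Memory is a different story: all 35B parameters still have to be loaded, so the savings show up in per-token inference compute, not in the VRAM needed to host the model.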

Why It Matters:
This is part of a bigger trend: smaller-active-parameter MoE models are getting good enough to run serious coding agents cheaply. By shipping open weights plus agent-friendly API features, Qwen is trying to make “high-agency coding” accessible without needing frontier-scale compute.

What does Qwen3.6 really signal?


AI Highlights of the Week

  • Google Brings AI Mode Into Chrome

    Google is bringing AI Mode into Chrome, letting users browse pages side by side with AI help without switching tabs.

    It can also pull in recent tabs, images, and PDFs for richer answers, turning Chrome into a more seamless AI workspace.

  • Anthropic Launches Claude Design
    Anthropic has launched Claude Design, a new AI tool that turns prompts into prototypes and visual assets.

    The move pushes Anthropic deeper into design, where Figma has long dominated, while linking the workflow more closely with Claude Code and its wider product ecosystem.

  • Google Unveils Gemini 3.1 Flash TTS
    Google has launched Gemini 3.1 Flash TTS, a new text-to-speech model with more natural, expressive audio. It is rolling out in preview across the Gemini API, Vertex AI, and Google Vids.

    The model adds audio tags for finer control over tone, pace, and delivery, while supporting 70+ languages. Google also says all generated audio is watermarked with SynthID.

  • Anthropic Puts Claude Code on Autopilot
    Anthropic is bringing Routines to Claude Code on the web, letting users automate tasks that run on a schedule, via API calls, or from GitHub events.

    The feature turns Claude into a more persistent cloud-based coding agent, able to keep working even when your laptop is closed.

Too Important to Miss

Last Week’s Poll Result

  • Who looks strongest in the AI race right now?

    Anthropic → 34.69%
    Meta is back in it → 28.57%
    OpenAI → 26.53%
    Google → 10.20%

  • Should frontier AI models with major cyber capabilities face strict rollout limits?

    Yes, absolutely → 66.67%
    Maybe, depending on the model → 20.00%
    No, that would slow innovation → 13.33%

  • Was Anthropic justified in flagging OpenClaw-style usage as suspicious?

    Maybe, but the enforcement looked messy → 40.91%
    No, this looks anti-developer → 36.36%
    Yes, heavy malicious use is different from normal chats → 22.73%

Feedback

We’d love to hear from you!

How did you feel about today's SunBrief? Your feedback helps us improve and deliver the best possible content.


Know someone who may be interested?

And that's a wrap on today’s SunBrief!
