🌟 Vasilij’s Note
This week I tested OpenCode against Claude Code side by side for three workflows. OpenCode hit 95,000 GitHub stars and everyone's asking: should we switch? The answer isn't about which tool is "better"—it's about calculating what you actually need. Most consultancies haven't done the maths on their current setup, let alone compared alternatives. Meanwhile, Ollama 0.14 changed the game by enabling local Claude Code execution. The real question isn't OpenCode vs Claude Code. It's whether you've calculated ROI on any coding agent before deploying it.

In Today's Edition:

This Week in Agents | What Changed

  • OpenCode overtakes Claude Code as fastest-growing AI coding agent – 95,000 GitHub stars, multi-provider flexibility across OpenAI, Anthropic, Google, and local models → Creates deployment options for consultancies evaluating vendor lock-in risk, but adds decision complexity around model selection. OpenCode

  • Ollama 0.14 adds Anthropic API compatibility – Claude Code now runs locally via Ollama, keeping source code on-premises → Governance teams can finally approve AI coding tools for client work because data never leaves your network. Performance trade-off exists but "good enough" beats "not allowed." Ollama

  • OpenClaw creator joins OpenAI – Peter Steinberger is joining OpenAI to develop accessible agents, whilst OpenClaw moves under an independent foundation, remaining open → Validation that agent tooling is consolidating around practical deployment over theoretical autonomy. Peter Steinberger

Top Moves - Signal → Impact

  • Microsoft enables Claude as default Copilot subprocessor
    Anthropic now operates as a Microsoft subprocessor under the Product Terms and Data Protection Addendum. EU/EFTA/UK tenants have Claude disabled by default because Anthropic processing sits outside the EU Data Boundary.

    → Enterprises must verify their data governance posture immediately. Claude becomes embedded in the Microsoft 365 ecosystem whether organisations planned for it or not. EU/EFTA/UK organisations using Microsoft Copilot should verify data residency constraints now.

  • Goldman Sachs deploys Claude to automate accounting and compliance
    Major financial services firm taps Anthropic's Claude to automate accounting and compliance roles, signalling enterprise-scale adoption in regulated industries.

    → Validation that AI agents are ready for high-stakes, compliance-heavy workflows. Professional services firms competing with large consultancies need enterprise-grade governance now, not later.

  • AI-powered browsers reshape content consumption
    Google's AI Mode, ChatGPT's Atlas browser, and Microsoft's Copilot sidebar are gaining traction. These features bypass efforts to block AI crawlers—users can ask their devices to explain, summarise, or translate whatever's on screen.

    → Search referral decline accelerates. Firms relying on SEO for lead generation need direct audience relationships and conversational interfaces, not just content optimisation.

Upskilling Spotlight | Learn This Week

"The 2026 Guide to Coding CLI Tools: 15 AI Agents Compared"
Outcome: Comprehensive comparison covering Aider, Claude Code, Cursor, and 12 others. Understand terminal-based AI agent trade-offs around autonomy, model flexibility, and pricing. Learn which tools have the largest deployed user bases (Aider: 39K+ stars, 15B tokens/week) and when to choose free over paid options. Tembo

"Best AI Coding Agents for 2026: Real-World Developer Reviews"
Outcome: Developer-focused evaluation criteria beyond raw capability. Learn how teams judge cost-effectiveness ("which tool won't torch my credits?"), net productivity (entire workflow, not isolated moments), and context management quality. Understand why 85% of developers now use AI coding tools regularly. Faros AI

Maker Note | What I built this week

This week I tested OpenCode against Claude Code and Claude Code plus Ollama for three client workflows: proposal assembly, code refactoring, and documentation generation.

Decision: Claude Code API for production speed and reliability. Claude Code plus Ollama for sensitive client data where contracts prohibit external APIs. OpenCode for teams with technical capability to manage multi-provider optimisation. Each has clear use cases when you calculate properly.

Operator’s Picks | Tools To Try

OpenCode CLI
Use for multi-provider flexibility across OpenAI, Anthropic, Google, and local models.

Standout: Three interfaces (terminal, desktop, VS Code) with built-in agents.

Caveat: Setup complexity is higher than marketed; optimising model selection requires real technical capability.

OpenCode Zen
Pay-as-you-go model gateway eliminating API key management. Use for teams needing cost control without infrastructure overhead.

Caveat: Adds latency vs direct API calls, introduces third-party dependency.

Ollama 0.14 with Claude Code
Use for client projects where contracts prohibit sending code to external APIs.

Standout: Source code never leaves your machine, so governance teams can approve it.

Caveat: Performance gap vs cloud Claude, requires decent GPU/M-series Mac.

Deep Dive | Thesis & Playbook

OpenCode just hit 95,000 GitHub stars and overtook Claude Code as the fastest-growing AI coding agent. But with Ollama 0.14 giving Claude Code local model support, the comparison just got more interesting.

This week I tested both tools side by side to break down the real cost differences for consultancies and show you exactly when each makes sense for your workflows.

On Paper

  • OpenCode positions itself as the multi-provider solution, supporting OpenAI, Anthropic, Google, Azure, and local models via Ollama. You get three interfaces to choose from: a terminal CLI, a desktop application, and a VS Code extension. Built-in agents handle specific coding tasks like refactoring, debugging, and test generation. OpenCode Zen offers a pay-as-you-go gateway that eliminates API key storage, though it adds another intermediary layer.

  • Claude Code takes the opposite approach. It uses Anthropic's models exclusively—Sonnet 4.5, Opus 4.5, and Haiku 4.5—with direct API integration and predictable pricing. The recent addition of Ollama 0.14 support now enables local execution, addressing the governance concerns that previously blocked adoption in compliance-heavy industries. Native integration with the Claude.ai ecosystem means tighter coupling but less flexibility.

  • The cost structures differ fundamentally. OpenCode's provider switching enables cost optimisation if you're willing to actively manage model selection. Claude Code API offers predictable Anthropic pricing with no optimisation overhead. Claude Code plus Ollama requires hardware investment upfront but eliminates ongoing API costs. OpenCode Zen adds gateway fees to the pay-as-you-go model.

  • Privacy considerations matter differently depending on your deployment. OpenCode with cloud providers sends data to your chosen provider's servers. OpenCode Zen adds an additional intermediary handling requests. Claude Code API processes data through Anthropic's infrastructure. Both tools running via Ollama execute fully locally with zero data leaving your premises.

In Practice

  • I calculated real costs for a 30-person consultancy with typical monthly usage covering proposal drafting, code reviews, documentation, and refactoring. Claude Code API runs £450-600 per month with predictable costs, 500ms-2s response times, 30 minutes of onboarding, and zero ongoing maintenance. It's best for production workflows requiring speed and reliability.

  • OpenCode with cloud providers costs £300-450 monthly with optimisation, though this requires 2-4 hours of initial setup configuring API keys, selecting models, and choosing interface preferences. You'll spend 1-2 weeks optimising model selection patterns. Response times match Claude Code when you're not switching providers, but the overhead of choosing which model for which task adds friction. It's best for teams with technical capability wanting cost optimisation.

  • Claude Code plus Ollama runs £150-200 monthly when you amortise hardware costs. Response times are 3-10x slower depending on hardware configuration. Initial setup takes 4-8 hours including hardware verification and model testing. Ongoing maintenance involves model updates and performance tuning. It's best for compliance-constrained projects where data cannot leave premises.

  • The performance trade-offs are real. Cloud Claude Code delivers fastest response times, highest reliability, and requires internet connectivity. OpenCode with cloud providers offers similar speed but adds complexity around provider switching that can break workflow continuity when context must transfer. Local Ollama runs 3-10x slower but works completely offline with total data control.
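
The £150-200 monthly figure for the local option comes from amortising the upfront hardware. A back-of-envelope sketch — all figures below are placeholder assumptions, not quotes:

```python
def monthly_cost(hardware_gbp: float, amortise_months: int,
                 power_gbp: float, maintenance_hours: float,
                 hourly_rate_gbp: float) -> float:
    """Amortised monthly cost of a local inference machine."""
    return (hardware_gbp / amortise_months) + power_gbp \
        + maintenance_hours * hourly_rate_gbp

# Placeholder assumptions: a £5,000 workstation amortised over 36 months,
# £30/month power, 0.5 h/month maintenance at a £60/h internal rate.
local = monthly_cost(5000, 36, 30, 0.5, 60)
print(round(local))  # ≈ £199/month, inside the £150-200 range above
```

Swap in your own hardware quote and amortisation window; a £10,000 machine over 24 months lands well outside that range, which is why volume matters.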

Issues / Backlash

  • OpenCode's multi-provider flexibility creates genuine decision fatigue around which model to use for which task. Setup complexity gets underestimated in marketing materials. Model switching can break workflow continuity when context must transfer between providers, and being community-driven means less enterprise support compared to Anthropic's backing of Claude Code.

  • Claude Code faces criticism for vendor lock-in to Anthropic's models and pricing. There's no native multi-provider support without the Ollama workaround. Enterprise features require Team or Enterprise plans, and the Ollama integration remains experimental with performance compromises that some teams find unacceptable.

  • Both tools running through Ollama integration face hardware requirements that aren't trivial—32GB RAM minimum with GPU strongly recommended. Model quality lags frontier cloud models by 6-12 months. Ongoing model management creates overhead as teams must update, test, and validate. Performance variability based on hardware configuration means what works well on one machine performs poorly on another.

  • Neither tool solves the core problem: calculating ROI before automating. Both require governance frameworks before production deployment. Tool proliferation without strategic deployment creates application sprawl, and teams adopting without workflow mapping end up wasting money on unused capabilities.

My Take (What to do)

Startup (15-40 staff):

Start with Claude Code API. Your constraint is capacity, not cost. Partners doing proposal assembly, client reporting, and delivery oversight benefit from predictable, fast responses without setup overhead. Calculate weekly partner hours saved times a £150-200 hourly rate times 52 weeks, then compare against £600 monthly times 12 plus setup cost. Only proceed if payback is under six months.

Don't deploy OpenCode yet. The optimisation overhead consumes partner time better spent on client work. Multi-provider flexibility sounds attractive but introduces decision fatigue that reduces adoption. Don't deploy Ollama unless you have specific compliance requirements that justify setup investment—most startups don't have contracts prohibiting external API usage yet. Focus on one high-frequency workflow that happens 10+ times weekly and consumes 30+ minutes each time. Prove value there before expanding.

SMB (50-120 staff):

Your challenge isn't capacity—it's standardisation and cost control. Use Claude Code API for client-facing work requiring highest quality and speed, particularly for partners and senior consultants where response time impacts utilisation. Consider OpenCode for technical teams comfortable with model switching, specifically for non-client work where cost optimisation justifies complexity. Pilot Claude Code plus Ollama only if you face regulated industry requirements mandating data residency, have technical capability to manage local infrastructure, see volume justifying £5,000-10,000 hardware investment, or have client contracts explicitly prohibiting external API usage.

Calculate total cost of ownership including setup time, ongoing maintenance, and opportunity cost of technical staff managing infrastructure versus delivering client work. Assign one delivery lead as "AI tool owner" managing portfolio and preventing sprawl—make it explicit part of an existing ops team member's responsibilities, not an additional role. Establish basic governance with centralised approval for new tools, audit logs, and monthly reviews adjusting portfolio based on actual usage and ROI.

Enterprise (150-250 staff):

You need enterprise-grade governance regardless of tool choice. Your decision is strategic, not tactical. Deploy Claude Code API for standardised workflows with predictable costs and vendor support, teams requiring SLAs and enterprise support contracts, and integration with the Microsoft Copilot ecosystem after verifying data residency. Deploy Claude Code plus Ollama for sensitive client data requiring on-premises processing, regulated industries with strict data residency requirements, and high-volume repetitive tasks where hardware investment pays back within 12 months.

Avoid OpenCode unless you have a dedicated platform team managing AI infrastructure, multi-provider strategy is explicit in your AI roadmap, or vendor diversity is a compliance requirement (which is rare). Establish governance framework before deploying with approval workflows for new tools and models, data classification policies defining what AI can access, audit trails logging every AI invocation with user, timestamp, and data accessed, quarterly reviews adjusting portfolio based on evidence, and tested rollback procedures for compromised or underperforming tools.
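
One minimal shape for the audit trail described above, assuming JSON-lines storage. Field names are illustrative, not a standard:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AgentAuditRecord:
    """One line per AI invocation: who, when, which data was touched."""
    user: str
    tool: str
    data_classification: str      # e.g. "public", "internal", "client-confidential"
    files_accessed: list[str] = field(default_factory=list)
    timestamp: str = ""

    def to_json_line(self) -> str:
        # Stamp at write time so records order correctly in the log.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

rec = AgentAuditRecord("j.smith", "claude-code", "internal", ["src/report.py"])
line = rec.to_json_line()  # append this string to an append-only log file
```

An append-only file of such lines is enough for the quarterly reviews and rollback decisions mentioned above; a SIEM pipeline can come later.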

Focus on vertical use cases protecting margin: proposal automation preserving partner capacity, delivery analytics surfacing utilisation risks early, client reporting maintaining quality whilst reducing cycle time. Not horizontal tools that spread value thinly without visible bottom-line impact.

How to Try (15-minute path)

Testing Claude Code:

  1. Sign up at claude.ai/code with your existing Claude account (2 min)

  2. Install the CLI tool following platform-specific instructions for Mac, Windows, or Linux (3 min)

  3. Run test task: "Generate project status report from sample Slack messages and Asana tasks" (5 min)

  4. Measure time to completion, output quality, and ease of use (5 min)

Success metric: Report generated in under 5 minutes with actionable formatting

Testing OpenCode:

  1. Install via npm install -g @nicepkg/opencode-cli (2 min)

  2. Configure your provider by adding an API key for Anthropic, OpenAI, or Google (3 min)

  3. Run the same test task: "Generate project status report from sample data" (5 min)

  4. Compare output quality, response time, and setup complexity versus Claude Code (5 min)

Success metric: Determine if multi-provider flexibility justifies the added complexity

Testing Claude Code plus Ollama:

  1. Install Ollama from ollama.com/download (2 min)

  2. Pull a local coding model, e.g. ollama pull qwen2.5-coder (Claude's own weights aren't downloadable; Ollama serves an open model through its Anthropic-compatible endpoint) (3 min)

  3. Point Claude Code at the local endpoint by setting the ANTHROPIC_BASE_URL environment variable (2 min)

  4. Run the test task, noting latency versus API but confirming offline functionality (5 min)

  5. Measure the speed trade-off versus data control benefit (3 min)

Success metric: Validate that data never leaves your infrastructure and decide if performance is acceptable
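
Before the timed run, a quick pre-flight sketch can confirm the local endpoint is reachable and a model is pulled. It assumes Ollama's default port 11434; check your Claude Code version's docs for the exact environment variables it honours:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def claude_code_env(base_url: str = OLLAMA_URL) -> dict:
    # ANTHROPIC_BASE_URL redirects Claude Code's API traffic; the key is
    # unused locally but kept non-empty for clients that require one (assumption).
    return {"ANTHROPIC_BASE_URL": base_url, "ANTHROPIC_API_KEY": "local"}

def local_models(base_url: str = OLLAMA_URL) -> list[str]:
    # Ollama's /api/tags endpoint lists the models already pulled.
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]
```

If local_models() raises a connection error, Ollama isn't running; if it returns an empty list, step 2 hasn't completed.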

Decision framework after testing:

  • If one tool completes fastest with acceptable quality → Claude Code API

  • If cost savings justify complexity → OpenCode (requires technical capability) or Claude Code plus Ollama (requires hardware)

  • If privacy or compliance mandates local execution → Claude Code plus Ollama

  • If no clear winner emerges → Start with Claude Code API, pilot alternatives later when you have a usage baseline
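
The branching above can be captured as a toy helper; the labels and ordering (compliance first) are mine:

```python
def pick_agent(fastest_acceptable: bool, savings_justify_complexity: bool,
               local_only_mandate: bool, can_manage_providers: bool) -> str:
    """Map the decision framework's questions to a recommendation."""
    if local_only_mandate:          # privacy/compliance trumps everything else
        return "Claude Code + Ollama"
    if fastest_acceptable and not savings_justify_complexity:
        return "Claude Code API"
    if savings_justify_complexity:  # complexity needs capability to manage it
        return "OpenCode" if can_manage_providers else "Claude Code + Ollama"
    return "Claude Code API (pilot alternatives later)"

print(pick_agent(True, False, False, False))  # Claude Code API
print(pick_agent(False, True, False, True))   # OpenCode
```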

Critical questions before any deployment:

  • What specific workflow will this automate?

  • How many times does this workflow happen weekly?

  • How long does it currently take?

  • What's the hourly cost of people doing it?

  • What's the annual cost baseline? (weekly time × 52 × hourly cost)

  • What's the setup cost? (hours × hourly rate)

  • What's the ongoing cost? (subscription + maintenance)

  • What's the payback period? (setup + annual cost vs baseline savings)
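
Those questions chain into a single payback calculation. A sketch with placeholder figures; the time_saved_pct parameter is my addition, since automation rarely recovers 100% of a workflow:

```python
def payback_months(weekly_occurrences: float, minutes_each: float,
                   hourly_cost_gbp: float, time_saved_pct: float,
                   setup_hours: float, monthly_cost_gbp: float) -> float:
    """Months until cumulative savings cover setup plus running costs."""
    weekly_hours = weekly_occurrences * minutes_each / 60
    # Annual baseline spread over 12 months, scaled by the share actually saved.
    monthly_saving = weekly_hours * time_saved_pct * hourly_cost_gbp * 52 / 12
    net_monthly = monthly_saving - monthly_cost_gbp
    if net_monthly <= 0:
        return float("inf")  # never pays back: don't deploy
    return (setup_hours * hourly_cost_gbp) / net_monthly

# Placeholder: a workflow run 10x/week at 30 min each, £150/h people,
# 50% time saved, 4 setup hours, £600/month subscription.
m = payback_months(10, 30, 150, 0.5, 4, 600)  # well under the six-month bar
```

Low-frequency workflows fail this test quickly, which is the point: the maths, not the tool, makes the decision.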

If you can't answer these questions, you're not ready to deploy any coding agent. Do the maths first.

Spotlight Tool | OpenCode

Purpose: Multi-provider AI coding agent eliminating vendor lock-in whilst maintaining development workflow flexibility.

Edge: Three interfaces (CLI, desktop, VS Code), provider flexibility, built-in agents for common tasks, optional pay-as-you-go gateway removing API key management.

Provider flexibility: Anthropic, OpenAI, Google, Azure, local via Ollama
Three deployment options: terminal, desktop app, VS Code extension
Built-in agents: code refactor, debug, test generation, documentation
OpenCode Zen gateway: pay-as-you-go without infrastructure
Active community: 95,000 GitHub stars, rapid feature development

n8n – An open‑source automation platform that lets you chain tools like DeepSeek, OpenAI, Gemini and your existing SaaS into real business workflows without paying per step. Ideal as the backbone for your first serious AI automations. Try: n8n

Did you find it useful? Or have questions? Please drop me a note. I respond to all emails. Simply reply to the newsletter or write to [email protected]

AiGentic Lab Insights
