NVIDIA

The Future of AI Is Open and Proprietary

Read the full article, “The Future of AI Is Open and Proprietary,” on NVIDIA.

What Happened

AI is the defining technology of our time, quickly becoming core business infrastructure. It’s fueled by a diverse ecosystem of models: large and small, open and proprietary, generalist and specialist. This variety is essential for a future where every application will be powered by AI, every count…

Our Take

The model landscape has formally bifurcated. Proprietary APIs — GPT-4o, Claude, Gemini — own frontier capability. Open-weight models — Llama 3, Mistral, Phi-4 — own cost and control. Neither side is collapsing. Both are growing.

For production systems, routing matters more than model selection. Teams running a single proprietary API across every workload — classification, RAG retrieval, generation — are overpaying by 5–20x on low-complexity steps. Phi-4 handles intent classification at a fraction of GPT-4o's per-token cost. Defaulting to frontier models for every call isn't a strategy — it's a budget leak.

What To Do

Route classification and retrieval tasks to open-weight models like Phi-4 instead of GPT-4o because frontier API pricing is 5–20x more expensive for tasks that don't need frontier reasoning.
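The routing idea above can be sketched as a simple task-type-to-model lookup. This is a minimal illustration, not a production router: the per-million-token prices and the task categories here are hypothetical placeholders chosen to land inside the article's 5–20x cost gap, and the model names are used only as labels.

```python
# Minimal sketch of cost-aware model routing. Prices are hypothetical
# placeholders (USD per 1M input tokens), not real vendor pricing.

ROUTES = {
    # task type -> (model label, hypothetical $ per 1M input tokens)
    "classification": ("phi-4", 0.30),
    "rag_retrieval": ("phi-4", 0.30),
    "generation": ("gpt-4o", 5.00),
}

# Fallback for task types not in the table: send them to the frontier model.
DEFAULT = ("gpt-4o", 5.00)


def route(task_type: str) -> str:
    """Return the model label for a task, defaulting to the frontier model."""
    model, _cost = ROUTES.get(task_type, DEFAULT)
    return model


def estimated_cost(task_type: str, tokens: int) -> float:
    """Estimated input cost in dollars for `tokens` input tokens."""
    _model, per_million = ROUTES.get(task_type, DEFAULT)
    return per_million * tokens / 1_000_000
```

With these placeholder prices, routing a classification workload to the small model is roughly 16x cheaper per input token than defaulting to the frontier model, which is the shape of the saving the article describes.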

Builder's Brief

Who

Product teams selecting a foundation model strategy

What changes

Justification framing for dual-sourcing models; procurement and vendor lock-in risk calculus

When

months

Watch for

Meta or Mistral adding commercial-use restrictions to next major open release

What Skeptics Say

'Both open and proprietary win' is a non-thesis that benefits incumbents who can afford both bets; it obscures real licensing restrictions in 'open' models and hands cover to companies avoiding genuine openness commitments.
