Q3 Investor Outlook: Beyond American shores – why diversification is your strongest ally
Jacob Falkencrone
Global Head of Investment Strategy
Nvidia and OpenAI signed a letter of intent to deploy at least 10 gigawatts (GW) of Nvidia systems, with Nvidia intending to invest up to USD 100 billion progressively as capacity comes online. Funding arrives in stages: USD 10 billion at signing, then further tranches as each gigawatt goes live. The sites will run Nvidia's full stack of GPUs, networking, and software to train and deploy OpenAI models. Initial deployments are planned on Nvidia's Vera Rubin platform (its next-generation platform, which delivers much higher compute and memory for training and serving large AI models).
This sits alongside OpenAI's governance reset with Microsoft and a big Oracle compute pact. The broader build-out is massive: OpenAI–Oracle–SoftBank flagged USD 500 billion for 10 GW over four years; Oracle–Meta talks are near USD 20 billion; Meta's Hyperion targets up to 5 GW backed by USD 29 billion in financing; Microsoft added a multiyear Nebius deal and USD 6.2 billion of rented compute in Norway.
In short: capital becomes capacity, and capacity powers products.
Equity for compute: Nvidia's data-center funding buys a stake in OpenAI, priority demand, and roadmap visibility.
Why Nvidia likes it: guaranteed orders, early sight of OpenAI’s roadmap, a seat at the table on future chip needs, and upside if OpenAI’s value rises.
Why OpenAI likes it: priority access to scarce GPUs, more predictable costs, and capacity ready for new, compute-heavy launches.
Risks: regulators may ask about preferential access (i.e., OpenAI could get better or earlier access to Nvidia gear than others). Delays can still come from power and permits, not chips.
What to watch: final terms, first site and grid approvals, and Nvidia’s delivery cadence into 2026.
Secure supply, shape demand
Nvidia swaps cash for a seat at the table. By taking equity, it nudges OpenAI toward its highest-end stack—GPUs, networking, and software—giving Nvidia clearer multi-year visibility on orders. It also earns early sight of OpenAI’s model and infrastructure plans, which helps Nvidia tune roadmaps and capacity. Vertical, simple, effective.
Match money to milestones
Funding unlocks as each site goes live: cash goes out when a gigawatt powers up. That lowers timing risk around grid hookups, permits, and deliveries. If builds run on time, both sides keep the upside without front-loading the bill.
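The milestone structure above can be sketched in a few lines. This is a hedged illustration, not deal math: the announcement specifies only the USD 10 billion initial tranche and per-gigawatt releases, so the even split of the remaining USD 90 billion across 10 GW is an assumption for illustration.

```python
# Sketch of milestone-based funding: capital unlocks as gigawatts go live.
# Assumption (not from the announcement): the remainder after the initial
# tranche is split evenly across the 10 GW target.

TOTAL_COMMITMENT_BN = 100.0   # up to USD 100 billion (stated)
INITIAL_TRANCHE_BN = 10.0     # USD 10 billion at signing (stated)
TARGET_GW = 10                # at least 10 GW of Nvidia systems (stated)

# Assumed even per-gigawatt tranche for the remaining commitment
PER_GW_TRANCHE_BN = (TOTAL_COMMITMENT_BN - INITIAL_TRANCHE_BN) / TARGET_GW

def cumulative_funding_bn(gw_live: int) -> float:
    """Capital deployed once `gw_live` gigawatts are powered up."""
    gw_live = min(max(gw_live, 0), TARGET_GW)
    return INITIAL_TRANCHE_BN + gw_live * PER_GW_TRANCHE_BN

for gw in (0, 1, 5, 10):
    print(f"{gw:>2} GW live -> USD {cumulative_funding_bn(gw):.1f}B deployed")
```

Under this assumed even split, each activated gigawatt releases roughly USD 9 billion, which is why activation cadence, not the headline number, is what markets will track.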
Keep the “AI factory” story intact
When pilots become real workloads, buyers standardise on one stack to reduce risk and downtime. With Nvidia that usually comes as a bundle: CUDA software, high-speed networking, and full systems. Once teams build models and tools around CUDA, switching gets costly—code, skills, and support all tie in. Bigger deployments deepen that tie-in, so demand grows with scale. Scarcity plus standardisation lifts pricing power and add-ons (networking, services) ride along. More scale, more stickiness, more pricing power.
At this scale, the hard part isn’t chips, it’s plugs. Ten gigawatts needs grid connections, substations, transformers, cooling, and specialist engineers. Utilities and data-center landlords sit on the critical path. If something slips, it will likely be power or permits, not silicon. At the same time, workloads won’t live in one place.
OpenAI’s mix with Oracle and Microsoft points to “multi-cloud,” meaning jobs spread across several providers. The winners are the ones that can deliver reliable power, dense racks, and fast networks wherever the work lands. As the spend grows, scrutiny grows too. Regulators will ask whether any partner gets faster queues, better pricing, or inside access that shuts out rivals.
For investors, the story may earn a premium, but execution is far more important than headlines. The equity link supports demand only if sites, power, and hardware arrive on time. Markets will trade progress, not press releases. Watch how Nvidia guides the 2026–2027 capacity ramp and how much networking and software ships with each system.
Missed delivery windows or slower gigawatt activation at OpenAI cut visibility fast and pressure multiples. Valuations can ripple across the stack: cloud and power names with signed orders could re-rate higher, while smaller peers face tougher unit economics if spend concentrates with a few, big platforms.
What to watch
Deal finalization and phasing: definitive agreement terms, equity size per GW, and deployment schedule.
Power and permits: site announcements, interconnect approvals, and grid timelines for initial GW.
Partner mix: how workloads split across Microsoft Azure, Oracle OCI, and others as governance and contracts evolve.
This deal is about control—of compute, timelines, and demand. Nvidia links capital to capacity so OpenAI can scale, while reinforcing its AI-factory model across GPUs, networking, and software. The upside is clearer multi-year visibility and ecosystem pull; the risks are execution on power and sites, regulatory attention on vertical ties, and any delay to Vera Rubin.
Watch for the definitive agreement, first site and interconnect milestones, and Nvidia’s delivery guidance. Over the next quarters, markets will price cadence over headlines and judge one thing: does execution turn capex into compounding? Convert capital into capacity, and capacity into cash flows—that’s the long-run test.