Uber (NYSE: UBER) and NVIDIA (NASDAQ: NVDA) have entered a deep partnership to accelerate large-scale autonomous mobility. The plan is to bring Level 4-ready AI-Powered Robotaxis — vehicles capable of driving themselves with no human driver under defined conditions — directly into Uber’s ride-hailing platform.
This is not positioned as a small pilot. Uber aims to begin scaling a global autonomous fleet in 2027. Over time, the companies are targeting up to 100,000 Level 4-ready vehicles on Uber’s network.
The idea is simple for the rider: open the Uber app and, alongside a human-driven ride option, eventually see an option to book an autonomous vehicle powered by NVIDIA’s AI stack.
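A minimal sketch of what that booking logic might look like, purely for illustration; the function and field names here are hypothetical, not Uber’s actual API:

```python
from dataclasses import dataclass

@dataclass
class RideOption:
    label: str         # what the rider sees on the booking screen
    driver_type: str   # "human" or "autonomous"
    eta_minutes: int

def build_ride_options(human_eta: int, av_available: bool, av_eta: int) -> list:
    """Assemble the options shown to a rider. Hypothetical sketch only:
    an autonomous tier appears next to the human-driven tier whenever
    Level 4 supply is live in the rider's area."""
    options = [RideOption("UberX", "human", human_eta)]
    if av_available:
        options.append(RideOption("Autonomous", "autonomous", av_eta))
    return options

print(build_ride_options(human_eta=5, av_available=True, av_eta=8))
```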
Uber and NVIDIA are trying to solve the most challenging part of self-driving at scale: not just making a demo car drive, but making tens of thousands of vehicles operate safely in real cities, under real traffic rules, with real passengers.
The core approach is to merge two strengths:

- Uber’s global ride-hailing network, which generates real-world demand and vast amounts of driving data; and
- NVIDIA’s AI infrastructure and autonomous-driving stack, which turns that data into better driving models.

To accelerate this loop, Uber and NVIDIA are building a joint AI data factory powered by NVIDIA Cosmos, NVIDIA’s world foundation model platform for physical AI. Cosmos curates and processes enormous amounts of real-world and synthetic driving data, helping train increasingly capable autonomous driving models. The companies will also use NVIDIA DGX Cloud for heavy AI training and large-scale simulation.
The goal is continuous improvement: Uber’s ride data feeds training; NVIDIA’s models learn from it; improved models are deployed back into the vehicles.
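As a sketch, that flywheel behaves like a closed training cycle. The Python below is purely illustrative; every function is a hypothetical stand-in, not a real Uber or NVIDIA API:

```python
# Illustrative sketch of the drive -> curate -> train -> validate -> redeploy
# loop. Every function below is a hypothetical stand-in, not a real API.

def collect_drive_logs() -> list:
    # Stand-in for real-world ride data gathered by Uber's fleet.
    return [{"scene": "intersection", "source": "real"}]

def generate_synthetic_scenes(logs: list) -> list:
    # Cosmos-style step: expand real scenes into synthetic variations.
    return [{**log, "source": "synthetic"} for log in logs]

def train(model: dict, dataset: list) -> dict:
    # Stand-in for large-scale training on DGX Cloud.
    return {"version": model["version"] + 1, "trained_on": len(dataset)}

def passes_safety_evaluation(model: dict) -> bool:
    # Stand-in gate: a real release would run extensive simulation first.
    return True

def data_flywheel(model: dict, rounds: int) -> dict:
    """Run the continuous-improvement loop a fixed number of times."""
    for _ in range(rounds):
        real = collect_drive_logs()
        dataset = real + generate_synthetic_scenes(real)
        candidate = train(model, dataset)
        if passes_safety_evaluation(candidate):
            model = candidate  # improved model goes back into the vehicles
    return model

print(data_flywheel({"version": 1}, rounds=3))  # {'version': 4, 'trained_on': 2}
```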

The backbone of the plan is NVIDIA’s new DRIVE AGX Hyperion 10 platform.
It’s a production-ready compute and sensor architecture designed to make any vehicle “Level 4-ready.” Automakers can adopt it as the baseline electronics, compute, and perception hardware for their upcoming AI-Powered Robotaxis, vans, shuttles, and autonomous trucks, instead of trying to build everything from scratch.
Hyperion 10 includes:

- NVIDIA DRIVE AGX Thor in-vehicle compute running NVIDIA DriveOS; and
- a qualified, pre-validated sensor suite of cameras, radars, lidar, and ultrasonic sensors for 360-degree perception.

Because Hyperion 10 is modular and validated, manufacturers can start with a known-good autonomous reference design rather than reinventing hardware, thermal, safety, and sensor placement for each new vehicle. That speeds rollout and lowers cost per vehicle.
For Uber, this matters. A standardized Level 4-ready platform means Uber doesn’t have to deeply customize its network for every new autonomous partner. The network can onboard different AI-Powered Robotaxi suppliers as long as they align with NVIDIA’s Hyperion 10/DRIVE stack, and a common baseline makes operations more predictable once those vehicles are on the road.
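In software terms, a standardized platform acts like a shared interface: as long as each supplier’s vehicles satisfy the same contract, the network can dispatch any of them. A minimal, hypothetical sketch (none of these class or method names are real Uber or NVIDIA APIs):

```python
from abc import ABC, abstractmethod

class Level4Vehicle(ABC):
    """Contract any Hyperion 10-based supplier's vehicle would satisfy
    in this sketch, so the network treats all vendors uniformly."""

    @abstractmethod
    def accept_trip(self, pickup: str, dropoff: str) -> bool: ...

    @abstractmethod
    def health_status(self) -> str: ...

class VendorARobotaxi(Level4Vehicle):
    def accept_trip(self, pickup: str, dropoff: str) -> bool:
        return True  # vendor-specific logic hidden behind the shared contract

    def health_status(self) -> str:
        return "nominal"

def dispatch(vehicle: Level4Vehicle, pickup: str, dropoff: str) -> bool:
    # The network only needs the shared interface, not the vendor's internals.
    return vehicle.health_status() == "nominal" and vehicle.accept_trip(pickup, dropoff)

print(dispatch(VendorARobotaxi(), "SFO", "Downtown"))
```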
Uber has worked on autonomy before, including running its own self-driving unit, but later chose not to keep building a complete hardware/software autonomy stack internally.
This new model is cleaner:

- NVIDIA supplies the Level 4 platform: the compute, sensors, AI models, and safety stack.
- Automakers build the vehicles on top of that platform.
- Uber supplies the marketplace: riders, demand, and fleet utilization across cities.

For Uber, the benefits are speed, lower R&D cost, and a realistic path to deploy at city scale without reinventing an entire automotive engineering pipeline.
For NVIDIA, it’s a way to cement its position not just in data centers, but in “physical AI” — AI acting in the real world, on roads, carrying passengers, moving freight.
This plan is not just Uber and NVIDIA working in isolation. Multiple automakers and autonomy firms are already aligning with NVIDIA’s Level 4 platform.

This matters because Uber’s network is essentially a marketplace. If multiple vehicle makers and autonomy teams adopt a compatible NVIDIA platform, Uber can source autonomous supply from many vendors, across many cities, through a single app.
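A toy sketch of that marketplace logic, with invented names: the dispatcher picks the best available autonomous supply in a city, regardless of which compatible vendor provides it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Supply:
    vendor: str
    city: str
    eta_minutes: int

def pick_supply(offers: list, city: str) -> Optional[Supply]:
    """Pick the fastest available autonomous supply in a city,
    whichever compatible vendor it comes from."""
    in_city = [o for o in offers if o.city == city]
    return min(in_city, key=lambda o: o.eta_minutes, default=None)

offers = [
    Supply("vendor_a", "Austin", 7),
    Supply("vendor_b", "Austin", 4),   # different maker, same NVIDIA platform
    Supply("vendor_a", "Phoenix", 3),
]
print(pick_supply(offers, "Austin"))   # Supply(vendor='vendor_b', city='Austin', eta_minutes=4)
```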
Scaling AI-Powered Robotaxis isn’t just a technical problem. Cities and regulators have to approve them.
To address that, NVIDIA has introduced NVIDIA Halos, a new safety and certification framework for physical AI systems such as autonomous vehicles.
Halos includes:

- safety guidelines and evaluation processes that span development, simulation, and on-road deployment; and
- an AI systems inspection lab that independently audits autonomous vehicle systems before they scale.

Companies such as AUMOVIO, Bosch, Nuro, and Wayve are among the first involved. The inspection lab is accredited by the ANSI National Accreditation Board (ANAB), which gives these audits regulatory weight.
In plain terms: NVIDIA is trying to answer the most significant policy questions up front — Who’s liable? Is the AI safe? Can it be trusted at the city scale? — so that governments are more comfortable letting thousands of Level 4 vehicles operate on public streets.
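In practice, “regulatory weight” means producing evidence a city can inspect. Below is a hypothetical sketch of the kind of audit record such an inspection might output; the fields and threshold are invented for illustration and are not Halos’ actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyAuditRecord:
    """Illustrative audit artifact: not Halos' real data model."""
    system_id: str
    platform: str
    miles_simulated: int
    miles_on_road: int
    disengagements_per_10k_miles: float
    accredited_lab: str
    findings: list = field(default_factory=list)

    def passes(self, max_disengagements: float = 1.0) -> bool:
        # Toy threshold: a real certification weighs far more criteria.
        return self.disengagements_per_10k_miles <= max_disengagements and not self.findings

record = SafetyAuditRecord(
    system_id="av-stack-001",
    platform="DRIVE AGX Hyperion 10",
    miles_simulated=5_000_000,
    miles_on_road=120_000,
    disengagements_per_10k_miles=0.4,
    accredited_lab="ANAB-accredited inspection lab",
)
print(record.passes())  # True
```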
For riders:
The pitch is more reliable mobility, especially late at night, around airports, or in areas where human drivers are scarce. The experience should feel like booking a regular Uber ride, except the vehicle may drive itself using NVIDIA’s AI.
For current human drivers:
Work doesn’t disappear instantly, but it changes. Over time, roles shift from physically driving to managing, overseeing, maintaining, charging, cleaning, and supporting fleets of autonomous vehicles in the field. The driver becomes the operator of the fleet, not the operator of the steering wheel.

For cities:
Benefits could include fewer human-error crashes, more consistent driving behavior, and potentially more efficient road use.
But cities will also have to solve new questions: who is liable when an autonomous vehicle crashes, how its safety is verified and audited, and how thousands of driverless vehicles are supervised on public streets.
This is why the safety/certification layer (Halos) is not cosmetic. It’s a requirement for public approval.
The robotaxi industry has spent years stuck in pilot mode: a few cars in a geofenced downtown, lots of hype, no global scale.
The Uber–NVIDIA partnership is built to break out of that pattern using three levers:

- a standardized, production-ready vehicle platform (DRIVE AGX Hyperion 10);
- a joint AI data factory for continuous model improvement (Cosmos and DGX Cloud); and
- an accredited safety and certification path for regulators (Halos).

Put simply, Uber brings demand and data. NVIDIA brings the AI brain and certified safety stack. Automakers bring the physical fleets. Regulators get an auditable path to approval.
If all of that holds, booking an autonomous ride in the Uber app stops being science fiction and starts looking like a 2027+ product roadmap — not just for passengers in cities, but also for autonomous freight on highways.
The future here is not just “self-driving cars exist.”
It’s “self-driving supply exists at scale, on a platform people already use.” It’s a fleet-sized ambition, delivered one city at a time.