Nvidia chips: Trump handed China a major advantage on AI | Vox

Published 10 hours ago · 14 minute read

Late on Monday night, July 14, 2025, the ninth richest man in the world broke some momentous news: The US government would allow him, Nvidia CEO Jensen Huang, to sell H20 processors to Chinese customers again.

To people following the Trump administration and its seemingly unending announcements and reversals of trade restrictions, this might not sound like much of a blockbuster. So President Donald Trump wanted to ban some trade and a CEO got him to change his mind (perhaps with some help from Chinese government promises on rare earth metals). What could be more classically Trump?

But there’s more going on here than just that. Huang was effectively announcing that a massive American effort, going on for nearly three years, to deprive China of the chips needed to build advanced AI systems, is over, or at best on pause. The high-tech chip export control regime built up and expanded by the Biden administration, and enforced largely intact by Trump’s team, now has a loophole in it large enough to drive a self-driving truck through.

The H20 chip only exists at all as part of Nvidia’s efforts to get around those export controls by designing something weaker than its flagship H100 chip, which has become an indispensable tool for training cutting-edge AI models and which the US still definitely doesn’t want Chinese firms like DeepSeek or Tencent to access.

That might seem like a reasonable compromise — let China have the weaker chip while holding back on the powerful one. The problem is that while the H20 is definitely worse than the H100 for some important tasks, for others it’s actually more powerful than its big brother, and comes at half the price. In April, enforcers at the Bureau of Industry and Security in DC effectively figured out what Nvidia was trying to do with the H20 and informed the company that it couldn’t export the chip without a license, meaning in effect that it couldn’t send the chips to China at all.

But Nvidia now says that Trump, having met personally with Huang, is promising to issue those licenses, which would enable Chinese AI companies to greatly accelerate model development and infrastructure buildout. And even though the H100 chips are officially still supposed to be off-limits, China may be able to get its hands on them as well.

About a week before Huang’s big news, reporters at Bloomberg broke the story of a massive data center construction project in Yiwu County in the Gobi Desert, encompassing multiple firms that together would require over 115,000 high-end Nvidia chips to train AI models. The project documents Bloomberg reviewed gave no indication of how the project would get those chips legally, the implication being that they wouldn’t — they’d smuggle them instead. Now, thanks to Trump, they will likely be able to get the chips they need legally.

The Yiwu project and the H20 flip-flop are both graphic illustrations of just how important advanced chips — often called simply “compute” by AI experts — have become, not just to AI development but to geopolitics writ large. They are the kind of things that the president of the United States takes meetings with private industry about.

That’s because compute is a very odd commodity. Advanced chips are overwhelmingly designed by one company (Nvidia), and manufactured by one company (Taiwan’s TSMC), using machines built by one company (the Netherlands’ ASML). The world can only produce so many, building out the fabrication plants (“fabs”) that produce them takes years, and past efforts to break those three companies’ dominance have almost always ended in failure.

The result is a scarce commodity that is unbelievably valuable in training the kind of advanced AI models that the US, China, and other leading powers view as indispensable for their economic and military futures.

If you’re not an AI nerd or a US-China watcher, this may all seem rather technical. But, to paraphrase Trotsky, even if you’re not interested in compute, compute is interested in you.

In many historical eras, there is a commodity collected and traded in vast quantities that recent technological progress has made invaluable, which comes to dominate economic life. In the 20th century, that commodity was oil, enabled by the rise of automobiles and planes. In the 19th century, it was arguably cotton, enabled by the development of the cotton gin.

You can make a strong argument that the defining commodity of the 21st century will be compute.

“Compute,” as a noun rather than verb, is how AI firms tend to refer to AI-optimized processors like Nvidia H100s or H20s. It is the magic ingredient of AI. You can, for any given data and any given team of researchers, make a much more useful and effective model if you simply throw more processing power at it.

One reason why so many observers believe progress on AI will continue to be rapid is that the amount of money being spent on compute continues to rise exponentially fast: Per the research group EpochAI, spending on training models grows by four- to fivefold every single year. For all the headlines about the nine-figure salaries commanded by top AI talent, most of that spending goes into purchasing and powering compute.

There are millions of high-end processors around the world, but the supply is meaningfully constrained. It’s not limitless. “Some people see compute as an abundant global commodity that’s impossible to control,” Erich Grunewald, a researcher at the Institute for AI Policy and Strategy focusing on chips and chip policy, told me. “I would view it more as a strategic resource, the way oil or steel production were in the past. You just want more of it. There’s no margin at which you don’t want more compute, at least none that’s been discovered yet.”

One sign of compute’s scarcity is the price: a single H100 runs about $25,000 to $30,000, and the most recent model from Meta/Facebook was trained on 32,000 of them. (Mark Zuckerberg bragged in early 2024 that he was going to acquire 350,000 in total, which would run over $8 billion.)
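
For a sense of scale, here is the back-of-the-envelope arithmetic behind those figures, using the per-chip price range cited above; this is illustrative only, not Meta’s actual procurement costs.

```python
# Back-of-the-envelope cost of H100 fleets at the cited $25,000-$30,000 per chip.
price_low, price_high = 25_000, 30_000

training_run = 32_000    # chips reportedly used for Meta's most recent model
fleet_target = 350_000   # Zuckerberg's stated 2024 acquisition target

print(f"Training run: ${training_run * price_low / 1e9:.1f}B to ${training_run * price_high / 1e9:.1f}B")
print(f"Full fleet:   ${fleet_target * price_low / 1e9:.2f}B to ${fleet_target * price_high / 1e9:.2f}B")
# Training run: $0.8B to $1.0B
# Full fleet:   $8.75B to $10.50B
```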

Nvidia has a near-monopoly on designing advanced AI chips, and primarily uses one chip manufacturer (TSMC), which, while always expanding, can’t produce as many as AI firms want. China has for years been trying to build out its own ecosystem of chip design and manufacture, and is still trying, notably through the company Huawei, but it remains behind; what’s more, Nvidia has a whole proprietary suite of software for programming its processors, and thousands of programmers trained to use that suite, while rivals like Huawei would have to convince coders to learn something entirely different.

The chips’ ongoing scarcity and the dominance in chip design and manufacture of US companies like Nvidia and companies located in close US allies, like Taiwan and the Netherlands, put the US in a strong position to control who gets what chips. And Washington has leveraged that position to try to limit China’s access to processors like the Nvidia H100. The Biden administration imposed export controls banning advanced chip exports to China in October 2022, and made them stricter in December 2024 and January 2025.

The Trump administration then rescinded the most recent tightening, but exporting H100s to China is still illegal. There are some indications the administration is beginning to crack down harder on firms in countries like Malaysia and Thailand, to which H100 shipments are legal, but that are sometimes used as waystations to get H100s to China. After Nvidia announced that it had designed the H20 chips to get around restrictions on existing chips like H100s, the Trump administration initially blocked the move, before the mid-July flip-flop.

The export controls have had real bite to date. Just listen to the Chinese AI firms they’re meant to hamper. “Money has never been the problem for us,” Liang Wenfeng, CEO of the Chinese AI leader DeepSeek, said in an interview with a tech publication. “Bans on shipments of advanced chips are the problem.”

But despite the controls, significant amounts of smuggling appear to be happening. Last year, New York Times reporters Ana Swanson and Claire Fu talked to a number of vendors in China hawking H100s and other advanced chips, with one providing evidence of an illegal delivery worth $103 million to a Chinese customer. Grunewald and fellow researcher Tim Fist tried to estimate the extent of smuggling for the think tank Center for a New American Security, and while there is massive uncertainty in their figures, they estimate that at least 10,000 and potentially hundreds of thousands of chips have been smuggled to China.

For its part, Nvidia has argued there isn’t any smuggling; CEO Jensen Huang told reporters earlier this year, “There’s no evidence of any AI chip diversion.” Every expert I have spoken with considers this view risible. There is obviously smuggling, and the question is one of scale.

If compute is a strategically essential resource, then it stands to reason that the US is going to invest heavily in defending its access and restricting that of adversaries, just as it has with other strategic resources. Right now, the US is part of a multilateral military effort meant to keep sea traffic through the Suez Canal and Red Sea open, in part because huge amounts of oil transit through there to our allies in Europe; at the same time the US is so serious about denying access to oil to countries like Russia, Iran, and Venezuela that the administration has threatened “secondary tariffs” against other countries that so much as buy oil from US adversaries.

With compute looking potentially even more strategically important than oil, similarly serious action to control who has access to it would seem to be in the offing. This was the rationale behind the very export controls that Trump is now weakening.

While Trump moves away from restrictions, many in Congress are going in the opposite direction. Their proposals may read like notes from an alternate universe, one in which DC decided to take strategic control of compute seriously. But Trump has already reversed himself on export controls once, and many in his party are committed to disrupting Chinese access to compute. It’s worth, then, thinking about what next steps might look like in a world where Trump and his administration decide to take compute seriously and play for keeps.

When Rep. Bill Foster (D-IL), a physics PhD who worked on chip design during a job at Fermilab in Illinois, first floated the idea of an anti-chip smuggling bill in May, the scope was more ambitious than just tracking where chips were. Foster told Reuters his plan would involve not only tracking location, but also “preventing those chips from booting up if they are not properly licensed under export controls.” In other words, something like a remote kill switch for smuggled chips.

That idea, sometimes called “geofencing,” has the advantage of being self-enforcing, unlike simply verifying location. But it didn’t make it into the ultimate bipartisan bill, and faces extremely stiff resistance from both industry and many experts. The problem is that it would effectively create a backdoor kill switch in every advanced chip that could be used to shut it off.

In an ideal world, only trusted servers run by government agencies could send a signal for the chip to stop working. In the real world, anything can be hacked. It’s one thing for Nvidia to allow the US to track where its chips go; it’s quite another for it to subject all its customers to shutoffs that probably come from the government, but could very possibly come from a rogue actor.

“With location verification, what if it doesn’t work? We don’t know where the chip is and we’re back to status quo ante,” one expert, who asked for anonymity given the sensitivity of these discussions, told me. “If geofencing goes wrong, you have a script kiddie turning off a cluster and causing a geopolitical crisis.”

Sparked by reports of smuggling, the approach gaining the most steam is a relatively modest first step known as “location verification.” Proposed by an unusual bipartisan, bicameral group that includes Sen. Tom Cotton (R-AR) and Rep. Bill Foster (D-IL), the Chip Security Act would require that processors subject to US export controls include special firmware to respond to regular messages (“pings”) from servers in the US. By timing the delay between when a ping is sent and when the chip’s response comes back, the server can roughly estimate where in the world the processor physically is. In other words, if a chip is in China, or likely en route to China, the US would know about it.
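
To give a rough sense of how ping timing bounds a chip’s location, here is a minimal sketch of the underlying physics, assuming a simple round-trip-time check; the function name, numbers, and delay assumptions are illustrative, not the Chip Security Act’s actual protocol.

```python
# Minimal sketch: bound how far a chip can be from a verification server
# using round-trip time (RTT). Nothing travels faster than light, so a
# measured RTT puts a hard ceiling on the chip's distance; real routing
# and processing delays only make the true distance smaller.

SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000  # ~299.8 km per millisecond

def max_distance_km(rtt_ms: float, processing_delay_ms: float = 0.0) -> float:
    """Upper bound on server-to-chip distance implied by a measured RTT."""
    one_way_ms = max(rtt_ms - processing_delay_ms, 0.0) / 2
    return one_way_ms * SPEED_OF_LIGHT_KM_PER_MS

# A chip that answers a US-based server within 5 ms cannot be much more
# than ~750 km away; a chip 10,000 km away (roughly East Asia) needs an
# RTT of at least ~67 ms even over a perfect straight-line link.
print(round(max_distance_km(5.0)))                   # -> 749
print(round(2 * 10_000 / SPEED_OF_LIGHT_KM_PER_MS))  # -> 67 (minimum RTT in ms)
```

In practice, repeated measurements from multiple servers would be needed to narrow the estimate, and the residual imprecision is part of why the approach need not reveal exact facility locations.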

Most existing processors from companies like Nvidia already include something similar. “Identity attestation” systems on these chips respond to pings from servers, confirming that they’re operational and what kind of system they are. This is immensely useful if you’re a company operating a vast array of processors in a data center or other massive facility. Confirming that hardware is where you need it to be by physically checking is very costly; identity attestation pings are much easier and simpler.

But the very same technology could be used to confirm where in the world processors are. What’s more, because it’s necessarily a little imprecise, it can be done without revealing potentially sensitive information like precise data center locations that companies want to keep private.

Without a geofencing component, if a chip is smuggled to China and run there, the best-case result is that US regulators know where it is, but they can’t shut it down.

What that would enable, though, is enforcement prioritization. “If you have, say, an entity in Singapore that you send half a data center’s worth of chips to, and they boot up, and suddenly they’re no longer pinging from Singapore — you know not to send the second half,” one source in Congress working on the bill explained. “It helps in creating a list of bad actors.”
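
The kind of prioritization that source describes could be automated in a straightforward way. Below is a hypothetical sketch, assuming location-verification pings already yield a rough country-level estimate per chip; the entity names, threshold, and data format are invented for illustration.

```python
# Hypothetical sketch: flag export licensees whose chips stop reporting
# from the country they were licensed to operate in, so enforcement
# agents know where to focus and which follow-on shipments to hold.
from collections import Counter

def flag_suspect_entities(ping_records, licensed_country, min_out_of_place=100):
    """ping_records: iterable of (entity, chip_id, estimated_country) tuples."""
    out_of_place = Counter()
    for entity, _chip_id, country in ping_records:
        if country != licensed_country:
            out_of_place[entity] += 1
    # Entities with many chips pinging from the wrong place rise to the
    # top of the list for license denials and further investigation.
    return [e for e, n in out_of_place.most_common() if n >= min_out_of_place]

# Example: half a data center's worth of chips shipped to one Singapore
# entity later ping from China; the other entity's chips stay put.
records = ([("SG-Entity-A", f"chip{i}", "CN") for i in range(500)] +
           [("SG-Entity-B", f"chip{i}", "SG") for i in range(500)])
print(flag_suspect_entities(records, licensed_country="SG"))  # ['SG-Entity-A']
```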

This is especially important given that the Bureau of Industry and Security, the division at the Commerce Department responsible for stopping smuggling, is so poorly resourced. As of 2023, it had only about 350 agents tasked with examining trillions of dollars’ worth of transactions, and its time on AI had to compete with work enforcing major sanctions against Russia, Iran, and others. “We spend 100 percent of our time on Russia sanctions, another 100 percent on China, and the other 100 percent on everything else,” Matt Borman, the then-deputy assistant secretary of commerce for export administration, told the New York Times that year. The agency would clearly benefit from a tool that could tell it where to prioritize enforcement against smuggling.

To be clear, location verification is one of several ways to fight smuggling, and smuggling is only one of several ways to limit China’s access to compute. Chinese firms can also access processors at overseas data centers run by the likes of Amazon or Microsoft; US regulators tend to prefer this to direct sales of chips to China, because it means that the US can monitor exactly how the chips are being used via these US companies, but it’s still a major resource. The firm Huawei is also actively trying to build up a line of chips to rival Nvidia’s and eventually give the country a source of compute not subject to US interference. It remains quite behind, but the more pressure the US puts on other sources of compute, the more reason there is for China to subsidize Huawei and push firms to use Huawei chips and software.

News reports that Trump gave ground on H20s in exchange for Chinese exports of rare earth minerals further complicate the picture. The minerals in question, like gallium and germanium, are crucial in making semiconductors of all kinds, including AI chips. While China’s past restrictions largely bypassed TSMC’s main factories in Taiwan, they drove up the minerals’ prices and represented an indirect way China could squeeze the US on AI.

All of which is to say: the battle for compute remains incredibly high-stakes, and both sides have powerful levers they can pull to influence the global supply of compute. Trump’s H20 move likely increased China’s share of that supply and reduced the US’s, even with the rare earth concessions. But there is still time for him to reverse course, and future deals and export controls could alter the balance dramatically.

It would be a mistake to view this as just one more Trump trade battle. While Trump’s general protectionism is based on fairly crankish economic views and misplaced nostalgia for 1970s America, the export controls were based on a strategic calculation, by diplomats in both parties, about the best way to ensure US control of a scarce global commodity. When Trump chickened out on high tariffs on Britain, that was good news for British consumers but mostly a regional story. When he chickened out on export controls on Nvidia chips, he made a decision with massive global ramifications, altering the likely future balance of US and Chinese power.

Compute is different, whether Trump treats it that way or not, and if Chinese firms start to catch up to US AI labs in the coming months, we will likely look back at the H20 decision as the start of the shift.
