In @foreignaffairs.com, Anton Leicht and I wrote about how middle powers should navigate an AI revolution that threatens to leave them behind: www.foreignaffairs.com/guest-pass/r...
Middle powers can either find strategic niches backed by real leverage or bear AI's costs while capturing few benefits. The latter outcome: two great powers barreling toward a technological revolution with most of the world's computing power and talent, leaving most of the world's citizens behind.
The US, meanwhile, should make bandwagoning as attractive as possible: promoting exports that deliver meaningful capabilities and implementing security standards that allow even sensitive frontier systems to be shared with allies.
Beyond access, middle powers need economic leverage: control irreplaceable inputs (e.g., ASML) or downstream deployment bottlenecks (e.g., robotics, manufacturing). Don't sell strategic assets for short-term gains.
Three broad strategies to ensure that access: bandwagoning with the US or China for guaranteed access (risky if the patron turns on you); hedging between both powers (fails if the world splits into blocs); or sovereignty through domestic capability (expensive, and often leaves you stranded in the second tier).
To avoid that outcome, middle powers need frontier access. Firms equipped with inferior AI risk being outcompeted. National defense will require systems as good as your adversaries.
Opting out isn't an option either. The risks of AI (cybercrime, military deployment by adversaries, labor displacement) arrive whether or not the benefits do. For middle powers, suffering the costs of AI while missing the gains is the central danger.
Some middle powers are building local data centers or domestic model champions (eg France's Mistral). Neither solves dependency. Data center buildouts are expensive and need continuous updates from providers; no one outside the US and China is closing the frontier model gap.
Current access to frontier AI is fragile. Unlike stockpiled goods, AI requires real-time access to infrastructure controlled by a few Silicon Valley firms, ultimately subject to US export controls. China is building alternatives but remains behind for now.
Middle powers face three problems: (1) access to frontier AI depends on Washington's and Beijing's whims; (2) they're exposed to AI's harms regardless of whether they share in its benefits; (3) they lack leverage to shape AI's development or manage its consequences.
Today's Lawfare Daily is a Scaling Laws episode, with @utexaslaw.bsky.social, where @alanrozenshtein.com spoke to @samwl.bsky.social, Janet Egan & @petereharrell.bsky.social about the Trump administration's decision to allow Nvidia and AMD to export AI semiconductors to China in exchange for a 15% payment to the U.S. government.
Read @samwl.bsky.social and Nikita Lalwani on how AI advances could undermine nuclear deterrence and "encourage mistrust and dangerous actions among nuclear-armed states":
Appreciated this balanced look at the impact of quote-unquote AI on nuclear deterrence - more of this, please.
Does AI present new nuclear risks? Yes.
Are there hard limits to AI's capabilities? Also yes. www.foreignaffairs.com/united-state...
"So long as systems of nuclear deterrence remain in place, the economic and military advantages produced by AI will not allow states to fully impose their political preferences on one another."
Fascinating read. So many things combined in this article: deterrence theory, nuclear doctrines, AI development. Not cheerful but insightful.
Full piece here, via @carnegieendowment.org: www.foreignaffairs.com/united-state...
And there's no room for complacency. Rapid AI takeoffs could cross unforeseen thresholds. States should stress-test nuclear systems for AI-related vulnerabilities, build AI/nuclear expertise, and calibrate messaging about the stakes of the AGI race.
None of this is to say AI will pose no risks to nuclear stability. The moves states make to shore up their second-strike capabilities (building more weapons, reducing decision timelines, delegating authority) may be destabilizing and dangerous.
Nuclear deterrence will likely hold, and the coercive leverage that advanced AI affords states (against rivals with well-postured nuclear forces) will thus face major limits.
Even with highly capable AI systems, states will struggle to be confident of simultaneous success against multiple legs of a nuclear triad, with limited data, limited options for testing, and no room for error.
Tracking launchers at scale is very challenging; the physics of missile defense are brutal; and states will do everything they can to protect their command-and-control systems.
In all three domains, as we document, AI can likely help. But in all three domains, AI will also face serious constraints.
So could AI erode nuclear deterrence? Theoretically, yes, through three mechanisms: (1) increased ability to track nuclear platforms (submarines and road-mobile launchers); (2) increased ability to tamper with command-and-control systems; (3) improved missile defense.
The US economy is 15x Russia's and 1000x North Korea's, yet the US's influence over them is limited, to put it mildly.
Obviously AI will matter a lot. But unless it erodes nuclear deterrence, no matter how many economic and military advantages it may bring, states will face major constraints in dealing with nuclear-armed adversaries.
An increasing number of analysts claim AGI will entirely transform international politics, giving a decisive strategic advantage to the state that possesses it: an advantage akin to complete military and political dominance.
In @foreignaffairs.com, Nikita Lalwani and I write about the idea that winning the AI race will give one state unchallenged global dominance. To do so, we argue, it would have to undercut nuclear deterrence, no small feat. www.foreignaffairs.com/united-state...
The Trump administration's AI Action Plan and accompanying executive orders are friendly to companies and hostile to "woke AI."
But these policiesโ effects will depend on their implementation, writes @samwl.bsky.social in @justsecurity.org: www.justsecurity.org/117765/asses...
The Trump administration has unveiled its most ambitious AI strategy to date, with the goal of achieving "unquestioned and unchallenged" global dominance in AI.
@samwl.bsky.social (@carnegieendowment.org) unpacks the AI Action Plan: what's new, what's not, and what's next.
#AIActionPlan