Op-Ed: AI infiltrates US nuclear plants via unregulated back door

  • The U.S. has no regulatory agency policing AI usage in the country's nuclear sector, unlike in Europe and China
  • Prioritizing rapid innovation over safety increases the risk of catastrophic failures in North America
  • Hyperscalers like Microsoft and Google oppose regulation, and their push into nuclear power will amplify the safety risks of unregulated AI

Nuclear power is supposed to be one of the most stringently controlled industries in the world, but a regulatory loophole (actually, more of a yawning chasm) means that nuclear power stations in the U.S. are ubiquitously using unregulated artificial intelligence (AI) for everything from control systems to anomaly detection to autonomous robotic systems.

It’s yet another example of how far behind the rest of the world the U.S. is in its approach to the deployment of advanced technologies like AI, automation and predictive analytics.

How big a problem is the nuclear AI free-for-all? Several leading AI researchers and thinkers, including Stuart Russell and Nick Bostrom, have recently had Oppenheimer-style “Now I am become Death, the destroyer of worlds” moments, warning of the catastrophic consequences of unrestrained AI deployment. Unregulated AI in nuclear energy is exactly the kind of scenario they are concerned about.

All it takes is one AI agent misinterpreting reactor sensor data and cutting coolant flow to the core, and, presto, you have a Fukushima-style meltdown or a Chernobyl-style steam explosion.

Unlike Europe and China, which both have centralized, comprehensive regulatory frameworks designed to allow AI innovation without compromising public safety, the U.S. has devolved the responsibility of defining AI governance to no fewer than 13 federal agencies, from the Food and Drug Administration (FDA) to the Department of Transportation (DOT). And that number doesn’t include state and local government regulators.

It's a bizarre strategy, and it has left the U.S. with no agency specifically tasked with AI oversight in the nuclear industry. The Nuclear Regulatory Commission (NRC), the main agency responsible for regulating the overall safety and security of civilian nuclear facilities and materials, has no regulations focused on AI. At all.

The reason for this nuclear logic bomb is cultural. China and Europe have chosen to prioritize safety over speed, while the U.S. has taken an innovation-at-all-costs approach designed to fast-track technology and foster competition with fewer upfront restrictions. But is “move fast and break things” really the way to go when the application is nuclear energy?

The fact is, today we don’t even really know how or why the most advanced AI models make the decisions they do: the so-called “black box” problem.

Things will only get worse as Microsoft, Google, Oracle, and Meta sally forth into nuclear power, armed only with egotism and huge piles of money. The hyperscalers are vehemently opposed to any regulation, and once they are in the nuclear game they will absolutely deploy their vast financial resources in legal battles to prevent restrictions on AI usage.

And don’t expect the U.S. government to step in to save us from the fallout of all this stupidity. It can’t even work out how to stop disturbed adolescents from shooting pre-schoolers with military-grade assault rifles. The chances that any U.S. administration will do anything coherent about the nuclear AI free-for-all are ground zero.

