What the Trump win means for AI policy and regulation

  • Trump has promised to dismantle President Biden’s AI policy framework on "day one"

  • The new administration’s approach to AI policy would likely reflect Trump’s penchant for minimal regulation and decentralization

  • Decentralization could shift policy oversight to state and local governments, though Trump’s specific plans for a new AI framework remain unclear 

Donald Trump's return to the White House will signal a change in the nation’s artificial intelligence (AI) policy — he’s already promised to dismantle President Biden’s AI policy framework on "day one." But beyond that, the details are fuzzy. 

AI policy under Trump would likely reflect his core philosophy of minimal regulation and decentralization, several analysts told Fierce Network.

“It seems likely that Trump will pursue a less stringent regulatory environment for AI,” said Ritu Jyoti, GM and VP of AI and Data Market Research and Advisory Services at IDC.

AI regulation up in the air

The AI executive order current President Joe Biden signed in October 2023 centered on responsible AI development, placing guardrails around the technology’s use. Among other things, it called on developers to report on their models’ training and safety measures, and mandated that the National Institute of Standards and Technology (NIST) establish guidelines for detecting and mitigating model flaws — including biases. Biden’s administration also established the U.S. AI Safety Institute (AISI) under NIST to study AI risks.

The future of AISI is up in the air, as it could be defunded or dissolved without Biden’s EO in place. A coalition of AI advocates, including companies, nonprofits and universities, recently urged Congress to codify AISI before the end of the year to safeguard its continued existence.

Omdia Principal Analyst Bradley Shimmin argued that even if Biden's order is repealed on day one, it likely won’t undo everything that's happened since 2021. “I do not believe that day one we will be overturning the work we've already done, because a lot of that's been institutionalized already across the different agencies,” he said.  

Though Trump was previously in the White House, his approach won’t necessarily match the one he took during his last term. AI policy circa 2019 and 2020 was very different from what it is in 2024, due to the emergence of generative AI just two years ago.

“We're in really specific, detailed weeds now,” said Adam Thierer, resident senior fellow of technology and innovation at R Street.

While Trump’s proposed replacement for Biden’s AI EO remains uncertain (the campaign was “very short on specifics,” Thierer said), his administration will likely pick and choose which elements of Biden’s framework to retain and which to throw away. That said, Trump’s past AI executive orders might provide some clues. 

During his first term, Trump launched national AI research institutes and directed federal agencies to prioritize AI research and development, focusing on civil liberties, privacy and values like trustworthiness in AI applications.

“I think the Trump administration will very much embrace AI as an opportunity, but it remains very unclear exactly what kind of policy framework they’ll use to do so,” Thierer said.   

‘Woke AI’ 

The Trump-Vance platform has argued that under Biden’s framework, NIST’s guidance supports “radical Leftwing ideas” and “woke AI safety standards” that infringe on free speech. The 2024 Republican Party platform pledges AI development rooted in “free speech and human flourishing.”

"There will likely be a strong encouragement, if not demand, in this new executive order, to not engage in what some conservatives refer to as 'woke AI' efforts," Thierer said, "but now the question is, how does that translate into executive agency guidance that essentially flips the script from where Biden's been on the issue."

Some conservatives, including state leaders, are already pushing to change rules around speech-related algorithms, with cases from Texas and Florida reaching the Supreme Court. Those cases concern state laws that aim to regulate how social media platforms moderate content on their sites, essentially dictating how algorithms should handle certain types of speech.

This year, a study published in the peer-reviewed journal PLOS One found that the large language models (LLMs) behind generative AI chatbots like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude generally produce responses that lean toward left-of-center political views.

Why does that matter? Well, Trump might prioritize legislation focusing on free speech and transparency in model fine-tuning and training, Shimmin noted. “You might find some legislation that would require model makers to prove that the way that they are pretraining and fine tuning and aligning their models are not leaning toward a specific bias,” he said.

Who's in charge? 

Much remains to be decided, including who will lead AI policy in a Trump administration, and which agencies will oversee its implementation.

To date, AI policy has lived in the Office of Science and Technology Policy (OSTP), but Thierer said it could move elsewhere. Oversight could be dispersed among several agencies, or the administration could appoint some sort of coordinator. During Trump’s last term, much of AI policy fell to his chief technology officer, Michael Kratsios, and Ivanka Trump was also involved: “she was essentially that coordinator at the White House level for AI policy,” Thierer said.

But it’s likely that Trump will appoint a new head of AI this time around, Jyoti said. She suggested that person could be Robert F. Kennedy Jr. or a Silicon Valley venture capitalist such as Marc Andreessen, “who was one of his most vocal backers and funders.” Elon Musk is a possibility as well.

Another looming question is whether a national framework will emerge or if a complex state-by-state, perhaps even city-by-city, patchwork will develop. State legislators have introduced over 700 pieces of AI-related legislation this year, ranging from Colorado’s tiered, risk-based approach to California’s AI safety requirements.

The prospect of such decentralized AI regulation would be “very, very different than the policy that the Clinton administration and Republicans worked on in a bipartisan way in the 1990s for the internet and digital commerce, which was absolutely national in focus,” noted Thierer. It could be confusing and costly if there were dozens, if not hundreds, of different AI policies for the United States, he said.

On the other hand, decentralized regulation could bring some benefits, like accelerating local investments into AI, Shimmin said. “We might start viewing AI as part of our infrastructure, similar to fire departments and police forces with localized policy,” he told Fierce. 

There's also a silicon component to the AI policy discussion. You can read more about that on our sister site Fierce Electronics.

