Data Centers, AI Rules, Chip Limits and OpenAI Talks Policy

DATE POSTED:January 15, 2025

With one week left in office, President Biden signed an executive order that sets aside federal lands for the building of artificial intelligence (AI) data centers, with the full cost borne by developers of AI foundation — or frontier — models like OpenAI’s GPT-4o. AI model developers must also secure a clean energy source for these data centers, as intense AI workloads are notorious energy guzzlers.

The latest order follows Biden’s October 2023 executive order setting out guardrails for powerful frontier or foundation AI models. Among other measures, that order ensures the government gets to evaluate AI systems before they are deployed in areas that pose cybersecurity or other national security risks.

Biden also pledged to develop labeling and content provenance mechanisms so consumers can tell which content is AI-generated. Trump issued the first executive order on AI in 2020, calling for its use in the federal government. Several states, including California and Texas, also have their own AI rules.

UK and EU Regulations

AI regulations in the U.S. differ from those in the U.K. and Europe. The EU AI Act is much more sweeping legislation that assesses AI applications by risk level: unacceptable risk (such as government-run social scoring of individuals), high risk (such as resume-scanning AI tools that rank job applicants) and everything not banned or deemed high risk.

U.K. Prime Minister Keir Starmer announced on Monday (Jan. 13) an action plan to make Britain a leader in AI, including expanding its data center capacity for AI workloads. Starmer said formal AI regulations are coming. His predecessor Rishi Sunak unveiled an AI regulatory framework for existing regulators to follow.

AI Chip Export Controls

Biden also expanded on his 2022 and 2023 AI chip export controls meant to keep China and other adversary nations from getting their hands on AI hardware. This week’s new regulations carve the world into groups of haves and have-nots: 18 allies and partners will face no restrictions at all, while buyers of smaller orders — up to the computational power of 1,700 advanced GPUs — get the green light as well. These buyers are typically universities and research organizations.

However, more than 120 other countries reportedly face new rules for setting up AI computing facilities. Trusted entities include those based in countries that are close U.S. allies and are not headquartered in a “country of concern.” Those not based in allied countries can still buy up to 50,000 advanced GPUs per country. Biden also set rules that would keep an AI model’s weights secret from untrusted entities, among other security controls.

Nvidia Decries Chip Limits

The rules are expected to impact Nvidia, whose GPU chips have been the silicon of choice for training and inference of AI models. The company has a market share pegged at more than 80%.

Nvidia positioned itself for an AI revolution back in 2006; its GPUs were initially developed to handle gaming and other graphics-intensive applications. Co-founder and CEO Jensen Huang bet the company’s future on the pivot to AI, despite AI advances having stalled in past so-called “AI winters.”

Ned Finkle, Nvidia’s vice president of government affairs, decried Biden’s new rules. He wrote in a blog that the advance of AI globally is “now in jeopardy.” He said Biden’s “misguided” policy “threatens to derail innovation and economic growth worldwide.”

Finkle called Biden’s expanded export control rules a “200+ page regulatory morass, drafted in secret and without proper legislative review.” Such regulatory action “threatens to squander America’s hard-won technological advantage,” he added.

Finkle lauded the first Trump administration for fostering an environment of AI innovation and said he “looked forward” to a return to Trump’s policies as the president-elect prepares to take his oath of office.

The Semiconductor Industry Association weighed in with its own statement. “We’re deeply disappointed that a policy shift of this magnitude and impact is being rushed out the door days before a presidential transition and without any meaningful input from industry.”

OpenAI’s Plan for US

As OpenAI CEO Sam Altman signaled plans to attend the inauguration of President Trump, his AI startup preemptively rolled out a blueprint to keep America at the forefront of AI development.

“We believe America needs to act now to maximize AI’s possibilities while minimizing its harms. AI is too powerful a technology to be led and shaped by autocrats, but that is the growing risk we face, while the economic opportunity AI presents is too compelling to forfeit,” according to OpenAI’s “AI in America” economic plan.

OpenAI’s vision hinges on the following beliefs:

  • Chips, data, energy and talent are keys to winning the AI race. The U.S. must be an attractive market for AI investments, with $175 billion in global funds waiting to be deployed. If it is not, those funds will go to China-backed projects.
  • A free market promoting free and fair competition should drive innovation. This includes freedom for developers and users to work with AI tools while following clear, common-sense standards to keep AI safe for all. The government must not use these tools to amass power and control citizens.
  • To ensure frontier model security, the federal government should develop best practices to protect against misuse; limit the export of frontier models to adversary nations; and develop alternatives to the patchwork of state and international regulations, such as a U.S.-led international coalition.
  • The federal government should share its expertise on securing AI developers’ IP against threats; help companies access secure infrastructure, such as classified computing clusters, to evaluate model risks and safeguards; and create a voluntary pathway for companies that develop large language models to work with the government to define model evaluations, test models and exchange information in support of companies’ safeguards.
  • While the federal government should lead on matters of AI related to national security, the states can act to maximize AI’s benefits for their own ecosystems. They can encourage experimentation by supporting startups and smaller AI firms in finding ways to solve daily problems.

The post Data Centers, AI Rules, Chip Limits and OpenAI Talks Policy appeared first on PYMNTS.com.