
OpenAI presents its preferred version of AI regulation in a new ‘blueprint’

OpenAI on Monday published what it’s calling an “economic blueprint” for AI: a living document that lays out policies the company thinks it can build on with the U.S. government and its allies.

The blueprint, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, asserts that the U.S. must act to attract billions in funding for the chips, data, energy, and talent necessary to “win on AI.”

“Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the U.S. government can pave the road for its AI industry to continue the country’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development. The federal government has largely left AI regulation to the states, a situation OpenAI describes in the blueprint as untenable.

In 2024 alone, state lawmakers introduced almost 700 AI-related bills, some of which conflict with one another. Texas’ Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has also criticized existing federal laws on the books, such as the CHIPS Act, which aimed to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s top chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “[has not] been as effective as any of us hoped,” and that he thinks there’s “a real opportunity” for the Trump administration to “do something much better as a follow-on.”

“The thing I really deeply agree with [Trump] on is, it is wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the U.S. to lead AI. And the U.S. really needs to lead AI.”

To fuel the data centers necessary to develop and run AI, OpenAI’s blueprint recommends “dramatically” increased federal spending on power and data transmission, and meaningful buildout of “new energy sources,” like solar, wind farms, and nuclear. OpenAI — along with its AI rivals — has previously thrown its support behind nuclear power projects, arguing that they’re needed to meet the electricity demands of next-generation server farms.

Tech giants Meta and AWS have run into snags with their nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.

In the nearer term, OpenAI’s blueprint proposes that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that enable the sharing of models with allies while “limit[ing]” their export to “adversary nations.” In addition, the blueprint encourages the government to share certain national security-related information, like briefings on threats to the AI industry, with vendors, and to help vendors secure resources to evaluate their models for risks.

“The federal government’s approach to frontier model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on U.S. technology, not technology funded by the Chinese Communist Party.”

OpenAI already counts a few U.S. government departments as partners, and — should its blueprint gain currency among policymakers — stands to add more. The company has deals with the Pentagon for cybersecurity work and other, related projects, and it has teamed up with defense startup Anduril to supply its AI tech to systems the U.S. military uses to counter drone attacks.

In its blueprint, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the U.S. private sector. But the company stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” the blueprint reads.

The Biden administration took a similar tack with its AI Executive Order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, which has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification — and the AISI — at risk of being undone.

OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.

OpenAI, along with many other AI companies, trains models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.

“[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” the blueprint reads. “If the U.S. and like-minded nations don’t address this imbalance through sensible measures that help advance AI for the long-term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. [The government should ensure] that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas.”

It remains to be seen which parts of OpenAI’s blueprint, if any, influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race for a unifying U.S. AI policy.

In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including ex-Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly the chief economist at the Commerce Department under President Joe Biden.

As it makes hires and expands its global affairs division, OpenAI has been more vocal about which AI laws and rules it prefers, for instance throwing its weight behind Senate bills that would establish a federal rule-making body for AI and provide federal scholarships for AI R&D. The company has also opposed bills, in particular California’s SB 1047, arguing that the measure would stifle AI innovation and push out talent.
