A Roadmap for Regulating Generative AI

Controls on chips, training, deployment key to containing bad bots.

An expert in containment strategies—the deep-state think tank Rand Corp.—has stepped forward to propose a regulatory framework for Gen AI.

“Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it,” said Jason Matheny, CEO of Rand Corp., in an Op-Ed in the Washington Post this week.

“If an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world,” Matheny warned.

Matheny noted that a version of LLaMa, a bot under development by Meta, was leaked online and the social media giant has been unable to remove it from the internet.

Matheny said the risk of the world being exposed to malevolent bots can be “substantially reduced” with oversight of three parts of the Gen AI supply chain: hardware, training and deployment.

By hardware, the Rand chief is talking about the AI-programmable microchips produced by Nvidia and AMD. Advanced AI models require thousands of these specialized chips, an expense the can run into hundreds of millions of dollars. The market is limited to the largest tech companies, including Amazon, Microsoft and Google, as well as a few government entities.

“Because the pool of buyers is so small, a federal regulator could track and license large concentrations of AI chips, and cloud providers—who own the largest clusters of AI chips—could be subject to ‘know your customer’ requirements so they can identify clients who place huge rental orders that signal an advanced AI system is being built,” Matheny said, in the Op-Ed.

Matheny said bot developers should be required to assess a model’s risky capabilities during training. Once training is complete, an AI model should be subject to “rigorous review” by a regulator or third-party evaluator before it is released to the world, he said.

“Expert red teams, pretending to be malicious adversaries, can try to make the AI perform unintended behaviors, including the design of weapons. Systems that exhibit dangerous capabilities should not be released until safety can be assured,” Matheny said.

In July, President Biden reach an agreement with AI tech leaders, including Amazon, Google, Meta and Open AI, on a voluntary set of guidelines they promised to abide by while they’re in the midst of an LLM “arms race.”

The voluntary agreement includes testing products for security risks and using watermarks to make sure consumers can spot AI-generated material. The companies also agreed to “conduct research on bias and privacy concerns.”

A report in the New York Times characterized the agreement as “an effort to forestall, or shape, legislative and regulatory moves with self-policing.”

In June, the European Union adopted the world’s first comprehensive set of regulations governing Gen AI.

The EU’s AI Act requires generative AI models to disclose the content that was generated by AI; bot makers must design the model to prevent it from generating illegal content; and summaries must be published of copyrighted data used for training.

The act stipulates that Europe will ban “unacceptable risk” AI systems, bots that engage in “cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children.”

Bots that practice “social scoring” will be banned—a way of classifying people based on “behavior, socio-economic status or personal characteristics.” The EU also has banned real-time and remote biometric identification systems, including facial recognition.

The EU law establishes eight domains that will require AI systems to be registered in an EU database, including operation of critical infrastructure, education, public services, law enforcement, legal services and border control.