Microsoft is calling for the US and other countries to establish their own government agencies dedicated to regulating AI.
Microsoft President Brad Smith made the recommendation today in a 41-page company report about reining in AI amid concerns that the technology could disrupt society, whether through deepfakes or the displacement of human jobs.
Smith said some of the regulation can occur through existing agencies and courts; they simply need the expertise to apply current laws to AI programs. But he sees a regulatory gap when it comes to the smarter and potentially autonomous AI programs coming in the next decade.
“There will then be a need to develop new law and regulations for highly capable AI foundation models, best implemented by a new government agency,” Smith said.
To regulate more powerful AI systems, Smith sees an approach that involves not only new laws but also licenses permitting qualified data centers to run highly capable AI models. It’s an idea that Sam Altman, CEO of Microsoft partner OpenAI, also advocated during a Congressional hearing earlier this month.
“As Microsoft, we endorse that call and support the establishment of a new regulator to bring this licensing regime to life and oversee its implementation,” Smith said in the report. Such licenses should be required for AI programs that run a country’s critical infrastructure, he added, forcing operators to build backups that would allow humans to intervene in the event the AI program goes off the rails.
In proposing the regulations, Smith also implied the US government needs to avoid repeating the mistakes made with social media. “Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet, five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool – in this case aimed at democracy itself,” he said.
However, the recommendations from Smith will likely benefit Microsoft rather than impede the company’s AI research. That’s because Redmond clearly has the resources to meet the high standards advocated in the report, while many smaller companies and startups might not.
Microsoft may also fear the prospect of even tighter regulations coming from the European Union, which is moving faster than the US to create rules governing AI. On Thursday, OpenAI’s Altman told reporters he had “many concerns” about the EU’s AI Act, which is still being finalized but could ban the use of AI for certain practices.
“We will try to comply, but if we can’t comply we will cease operating (in the EU),” Altman said, according to The Financial Times.