As AI Regulation Looms, What Does It Mean For Innovation?

The development of AI has become a polarizing issue. It’s undoubtedly changing the world, but it also carries real risks, prompting lawmakers and trade groups to impose guardrails on its safe use. The nuances of regulation are complex, however, and the debate has left proponents and critics of AI similarly entrenched in their positions.

Will regulation make it too hard for the tech ecosystem to keep innovating at the pace that has delivered such impressive results to date? Or will it foster more competitive, responsible development that acknowledges and mitigates the potential risks of AI to users and society?

Since OpenAI launched its game-changing ChatGPT tool in late 2022, the pace of innovation in generative AI applications and use cases has been breakneck, driving a sharp rise in adoption and integration into our daily lives. Successive iterations of GPT with increasingly impressive capabilities have been accompanied by competing chat offerings, including Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, as well as eye-popping developments in generative imagery and media.

The top AI technologies often take the form of public-facing apps in their own right, but they’re also commonly used as building blocks integrated into other apps; nowadays, it feels like every digital interface boasts some kind of AI-powered functionality.

As rapid as the developments have been, the backlash has been equally dramatic. Lobbyists have pointed to numerous unanswered questions around data privacy, intellectual property rights, the weaponization of AI, and the opacity of algorithms accused of entrenching bias. Add to the mix Microsoft’s stake in OpenAI, which has come under scrutiny for perceived attempts to skirt competition law, and it’s hardly surprising that regulators have responded hawkishly.

The Race to Become Rulemakers

Now, the AI industry is facing the prospect of imminent new regulation, and not everyone is happy about it. The EU is racing to become the first major jurisdiction to legislate with the EU AI Act, a move that could position it as a “global digital rule-maker” according to one think tank, while the White House has issued its own “Blueprint for an AI Bill of Rights.”

California is an intriguing case in point, home to both Silicon Valley and the global creative powerhouse that is Hollywood. The latter is firmly squaring off against AI development and appears to be winning over state regulators.

Such is the strength of feeling against California’s AI Bill that Meta’s Head of AI denounced the regulation as stifling innovation. He was joined by none other than veteran tech investor Marc Andreessen, who wrote an impassioned blog post advocating low regulation and high freedom for AI development firms, arguing that regulation only serves large companies with the resources to absorb the compliance overhead it inevitably brings.

Balancing Risk and Complexity

Andreessen’s view tends towards the extreme, since many in the industry have recognized the need for some regulation. Andrew Ng, Head of DeepLearning.AI, is one of several AI leaders calling for pragmatic rules.

“We should take a tiered approach to regulating AI applications according to their degree of risk,” he said. “Doing this effectively requires clear identification of what is actually risky (medical devices, for example, or chat systems potentially spewing disinformation). [Using] an AI model’s size, or the amount of computation used to develop the model, [to determine] related risk [is] a flawed approach.”

Is there a risk that such a case-by-case approach will introduce more complexity? Dell Technologies President John Roese believes that a certain amount of complexity in AI regulation is inevitable, telling the audience at the NYT DealBook Summit that “AI has a dependency on the software ecosystem, on the data ecosystem. If you try to regulate AI without contemplating the upstream and downstream effects on the adjacent industries, you’ll get it wrong.”

However, Arik Solomon, CEO and co-founder of cyber risk and compliance firm Cypago, believes that firms can strike a balance between the complexity of regulation and the agility of innovation. “Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” Solomon said. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

Promoting Fair Competition

AI policy proponents would certainly agree, and they have plenty of other arguments up their sleeves. The US government’s seeming inability to present a model for global AI governance risks leaving the country a mere respondent to rules laid down by the EU, underscoring the view from the other side of the pond that the EU has an opportunity to become the rule-maker in AI.

In this sense, regulation plays a key role in supporting innovation and industry leadership rather than stifling it. A light-touch approach to regulation might simply pave the way for more monopolies. Intriguingly, this is the same argument Marc Andreessen leveraged against heavy regulation: strict rules make it harder for smaller players to keep up, and as a result, Big Tech just gets bigger.

“Government regulation plays a key role to ensure that the AI playing field encompasses companies of all sizes from around the globe,” explains WEF Global AI Council’s Simon Greenman. “There is a race among AI companies to supply the chips, the computational power in the cloud, and the AI algorithms to power the tens of millions of applications that will use AI. AI favors size, with competition between Microsoft, Google, Amazon, and Apple to be the AI supplier of choice.”

Heavy regulation could indeed risk stifling innovation if it makes it too cumbersome for companies to operate, and especially for smaller players to compete. Ultimately, concentrating power in the hands of a few large firms is bad for consumers, since it limits choice and encourages poor corporate behavior. Given the advantages of scale in AI and the capacity of the big AI firms to ensure compliance with sensible regulation, it should be possible for lawmakers to strike a balance that addresses the worst of the risks.

Ralph Tkatchuk is an e-commerce data security specialist and a long-time contributor to technology magazines.
