UK Gov’t Takes Pro-Innovation AI Approach in Contrast to EU

There’s a global race going on with artificial intelligence at its center.


It isn’t just a race between multibillion-dollar, or trillion-dollar, private enterprises looking to commercialize and scale their AI systems and product innovations.

Rather, this race is increasingly between governments and their regulators, as nations around the world seek to become the global leader in both the safe development and responsible deployment of AI.

But while establishing the golden global standard for AI regulation is at the top of every government’s wish list, there are as many ways to get there as there are nations around the world.

The United Kingdom — in contrast to its peer regulators in the European Union, who have unanimously endorsed the final text of the bloc’s Artificial Intelligence Act — is taking a decidedly “pro-innovation” approach to the question of AI regulation.

For its part, the EU’s AI Act takes a risk-based approach to regulating AI applications, and once enacted, will apply to every AI company that provides services within the EU, as well as to users of AI systems located in the EU. The AI Act does not, however, apply to EU providers serving markets outside the bloc.

In contrast to the EU’s risk-based categorization of AI systems, the U.K. government’s regulatory designs take an alternative approach, pairing capability-based categorizations of AI systems with outcome-based categorizations of AI risks.

The intent, per the U.K. government’s February response to the AI regulation white paper consultation, is to rely on sector-based regulation informed by five AI principles (unchanged from the original white paper), rather than introduce AI-specific legislation.

Those five core principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Read also: How to Think About AI Regulation

UK Eyes Innovation With AI-Enablement Regulatory Goals

The U.K. government’s cross-sector and outcome-based regulatory framework for AI is centered on the goal of transforming the U.K. into an “AI-enabled” nation, as well as a science and technology superpower, by the end of the decade.

“I believe that a common-sense, outcomes-oriented approach is the best way to get right to the heart of delivering on the priorities of people across the U.K.,” the Rt Hon Michelle Donelan MP, secretary of state for Science, Innovation and Technology said in a statement. “Better public services, high-quality jobs and opportunities to learn the skills that will power our future — these are the priorities that will drive our goal to become a science and technology superpower by 2030.”

In support of that mission, a multidisciplinary team tasked with undertaking cross-sectoral risk monitoring has been established within the Department for Science, Innovation and Technology (DSIT). Per the government’s report, the team will review cross-sectoral AI risks and evaluate the effectiveness of governmental and regulatory intervention.

The U.K. will also establish nine new AI research hubs that will serve as focal points for research and development in AI, with a focus on various sectors including education, policing and the creative industries.

The pace of progress in AI has outstripped that of any previous technology, and by pursuing a light-touch approach, the U.K. is distinguishing itself from the EU on core questions of how to regulate AI.

“The AI Act presents a very horizontal regulation, one that tries to tackle AI as a technology for all kinds of sectors, and then introduces what is often called a risk-based approach where it defines certain risk levels and adjusts the regulatory requirements depending on those levels,” Dr. Johann Laux told PYMNTS in an interview posted in August.

Per the U.K.’s published guidance, the government is committed to a context-based approach that does not contain what it refers to as “unnecessary blanket rules” that apply to all AI.

See also: Will AI’s Biggest Questions Find Their Answers This Year?

The Innovation-Regulation Tug of War

The way that nations approach AI regulation will have a foundational impact on their own domestic technology industries, particularly as AI continues to advance and companies — and their investors — weigh their options.

“[AI] is the most likely general-purpose technology to lead to massive productivity growth,” Avi Goldfarb, Rotman chair in AI and healthcare and a professor of marketing at the Rotman School of Management, University of Toronto, told PYMNTS in an interview posted in December. “…The important thing to remember in all discussions around AI is that when we slow it down, we slow down the benefits of it, too.”

The AI of today will likely be almost unrecognizable compared to the AI of tomorrow — making overly stringent regulation not just a present danger, but a risk that could shut off a more productive future.

“We always overestimate the first three years of a technology and severely underestimate the 10-year time horizon,” Bushel CEO Jake Joraanstad told PYMNTS in an interview posted in December.

The U.K. is set to release further guidance around its AI framework this spring, including guidance on AI assurance to help businesses and organizations understand how to build safe and trustworthy AI systems, and a potential AI cybersecurity code of practice.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.