The best way to regulate AI might be not to specifically regulate AI. This is why

By Stephen King, Monash University

The new wave of artificial intelligence – so-called AI – is bringing with it promises as well as threats.

By assisting workers, it can raise productivity and boost real wages. By making use of large, underutilised datasets, it can improve outcomes in services including retailing, health and education.

The risks include deepfakes, privacy abuse, unappealable algorithmic decisions, intellectual property infringement and wholesale job losses.

Both the risks and the potential benefits seem to grow by the day. On Thursday OpenAI released new models it said could reason, performing complex calculations and drawing conclusions.

But, as a specialist in competition and consumer protection, I have formed the view that calls for new AI-specific regulations are largely misguided.

Most uses of AI are already regulated

A Senate committee is about to report on the opportunities and impacts of the uptake of AI, and I helped draft the Productivity Commission’s submission.

Separately, the government is consulting on mandatory guardrails for AI in high-risk settings, which would function as a sort of checklist for what developers should consider, alongside a voluntary safety standard.

Here’s my thinking: most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination.

These laws are far from perfect, but where they are not perfect the best approach is to fix or extend them rather than introduce special extra rules for AI.

AI can certainly raise challenges for the laws we have – for example, by making it easier to mislead consumers or to apply algorithms that help businesses to collude on prices.

But the key point is that laws to control these things exist, as do the regulators experienced in enforcing them.

The best approach is to make existing rules work

One of Australia’s great advantages is the strength and expertise of its regulators, among them the Competition and Consumer Commission, the Communications and Media Authority, the Australian Information Commissioner, the Australian Securities and Investments Commission, and the Australian Energy Regulator.

Their job ought to be to show where AI is covered by the existing rules, to evaluate the ways in which AI might fall foul of those rules, and to run test cases that make the applicability of the rules clear.

This approach will help build trust in AI, as consumers see they are already protected, and will provide clarity for businesses.

AI might be new, but the established consensus about what is and is not acceptable behaviour hasn’t much changed.

Some rules will need to be tweaked

In some situations, existing regulations will need to be amended or extended to ensure behaviours facilitated by AI are covered. Approval processes for vehicles, machinery and medical equipment are among those that will increasingly need to take account of AI.

And in some cases, new regulations will be needed. But this should be where we end up, not where we begin. Trying to regulate AI because it is AI will, at best, be ineffective. At worst, it will stifle the development of socially desirable uses of AI.

Many uses of AI will create little if any risk. Where potential harm exists, it will need to be weighed against the potential benefits of the use. The risks and benefits ought to be judged against real-world, human-based alternatives, which are themselves far from risk-free.

New regulations will only be needed where existing regulations – even when clarified, amended or extended – are inadequate.

Where they are needed, they ought to be technology-neutral wherever possible. Rules written for specific technologies are likely to quickly become obsolete.

Last mover advantage

Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations.

Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets.

If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and go elsewhere.

This means that, in those limited situations where AI-specific regulation is needed, the starting point ought to be the overseas rules that already exist.

There’s an advantage in being a late or last mover. This doesn’t mean Australia shouldn’t be at the forefront of developing international standards. It merely means it should help design those standards with other countries in international forums rather than striking out on its own.

The landscape is still developing. Our aim ought to be to give ourselves the best chance of maximising the gains from AI while providing safety nets to protect ourselves from adverse consequences. Our existing rules, rather than new AI-specific ones, ought to be where we start.

Stephen King, Professor of Economics, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
