12 November 2021 | 14:02 Asia/Singapore

Regulating robots in the age of AI

From the carefully curated news feeds on our social media platforms to the travel apps we rely on to avoid heavy traffic and find our way, artificial intelligence (AI) has become a staple of our daily lives.

But given AI’s potential to influence people’s decisions, much debate still surrounds how it should be regulated and what exactly should be controlled.

“If you regulate early, then the cost of regulation is low. But the problem is that you don't know exactly what the harms that you are trying to stop are,” explained Professor Simon Chesterman, Dean of NUS Law.

“The danger if you wait is the harms become clearer, but the cost also goes up.”

Prof Chesterman was speaking at the launch of his new book “We, the Robots?”, which examines how current laws are dealing with AI, and what new rules and institutions are needed to govern it.

Several experts present at the hybrid event held on 2 November at NUS Law also shared their insights on the subject and the challenges hindering proper regulation.

What harm do you want to prevent?

To solve a problem, you must first define it. Only then can you identify a policy objective and develop the right form of intervention, said Mr Yeong Zee Kin, Assistant Chief Executive, Data Innovation & Protection, at the Infocomm Media Development Authority.

But one roadblock standing in the way is the lack of data from real-world use cases. “Things have gone poorly, and things have gone well. Having enough data actually gives us a better idea of what the real-world problems are and that’s a necessary first step,” he explained.

For Ms Arianne Jimenez, Privacy and Public Policy Manager APAC at Facebook, good regulation for AI should consider the risks involved. She believes rules should be aimed at controlling the greatest likely threats as opposed to preventing every single theoretical harm regardless of magnitude.

“So simply put, lower risks should be subjected to fewer regulations or less stringent requirements, but higher-risk uses of AI should be subjected to more stringent regulatory requirements,” she explained.

If there’s a problem, how do you fix it?

While AI systems can make decisions on their own, humans cannot be taken out of the loop entirely, especially in critical systems where lives are at stake, said Ms Sunita Kannan, Data & AI Solutions Lead Asia (APJ & ANZ) at Microsoft Headquarters.

“A machine will be able to take whatever we set it to and perform the entire process. But if there is something out of the blue in the entire process, that’s where the human needs to come in,” she noted.
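In engineering terms, this is often implemented as a “confidence gate”: the model handles routine cases automatically, while anything unusual or uncertain is escalated to a person. The sketch below is a minimal, hypothetical illustration of that pattern; the toy model, the threshold and the review step are assumptions made for illustration, not details from the talk.

```python
# Minimal sketch of a human-in-the-loop "confidence gate".
# Everything here (toy model, threshold, review step) is illustrative.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def toy_model(features: dict) -> Tuple[str, float]:
    """Stand-in for a real classifier: returns a label and a confidence."""
    score = features.get("risk_score", 0.0)
    return ("approve" if score < 0.5 else "reject", abs(score - 0.5) * 2)

def human_review(features: dict, label: str, confidence: float) -> str:
    """Stand-in for a real review queue; here it just flags the case."""
    print(f"Escalated to human: proposed={label}, confidence={confidence:.2f}")
    return "needs_manual_decision"

def decide(features: dict, threshold: float = 0.9) -> Decision:
    label, confidence = toy_model(features)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    # Anything "out of the blue" (low confidence) goes to a person.
    return Decision(human_review(features, label, confidence), confidence, "human")

print(decide({"risk_score": 0.05}))  # clear-cut case: model decides
print(decide({"risk_score": 0.48}))  # borderline case: human decides
```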

It is also important that organisations keep an audit trail of the machine learning models they develop. Without one, the organisation runs the risk of being left with a “black box” when the developer leaves the job, she added.
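In practice, such a trail can be as simple as an append-only log that records, for every trained model, its version, a fingerprint of the training data, the hyperparameters and the evaluation metrics. The snippet below sketches one hypothetical way to do this with Python’s standard library; teams often use purpose-built experiment-tracking tools instead, but the record-keeping idea is the same.

```python
# Minimal sketch of an append-only audit log for trained models.
# Field names and file paths are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Hash the training data so the exact inputs can be verified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_model_run(log_path: str, *, model_name: str, version: str,
                  data_path: str, hyperparams: dict, metrics: dict) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "version": version,
        "data_sha256": fingerprint(data_path),
        "hyperparams": hyperparams,
        "metrics": metrics,
    }
    # One JSON object per line: easy to append, diff and audit
    # long after the original developer has left.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```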

And when it comes to how AI laws should be crafted, Professor Tanel Kerikmäe from the Tallinn University of Technology in Estonia suggests that it might be more effective for the rules to be sector-specific.

He shared that multiple attempts to regulate AI in Estonia have failed, mainly because of differences in the language used by different branches of law, and discrepancies between how certain terms are understood by the average citizen and by lawyers.

“Maybe we should not be idealists anymore where we regulate with just one set of rules,” he mused.

Is AI scary or beneficial?

According to the World Economic Forum's “The Future of Jobs Report 2020”, AI is expected to displace 85 million jobs worldwide by 2025, but the report goes on to say that it will also create 97 million new jobs in that same timeframe.

There have been cases of facial recognition technology wrongly identifying innocent people as criminals. At the same time, however, image recognition technology has helped doctors make more accurate diagnoses.

Professor Chen Tsuhan, Deputy President (Research & Technology) at NUS, acknowledges that there are unintended repercussions of employing AI.

Using the example of AI recommendation systems, the AI scientist explained that in customising the user experience, such systems can create echo chambers, continuously pushing books, news and other content that the user is already interested in.

This could result in a more divided society as perspectives of individuals become narrower. But Prof Chen is optimistic that things can be turned around.

Recommendation systems, for instance, could be fine-tuned to nudge the user towards content outside his or her usual preferences. Over time, this could broaden people's perspectives and bring society closer together.
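One common way such fine-tuning is done is re-ranking: instead of always surfacing the highest-scoring items, the system trades a little relevance for diversity, loosely in the spirit of maximal marginal relevance. The toy sketch below illustrates the idea; the items, topics and diversity weight are invented for illustration and do not describe how any particular platform works.

```python
# Toy re-ranker that trades relevance for topical diversity,
# loosely in the spirit of maximal marginal relevance (MMR).
# Items, topics and the diversity weight are illustrative assumptions.

def rerank(items, k=5, diversity_weight=0.4):
    """items: list of (title, topic, relevance); returns k diversified picks."""
    chosen, seen_topics = [], set()
    candidates = sorted(items, key=lambda it: it[2], reverse=True)
    while candidates and len(chosen) < k:
        def adjusted(it):
            # Penalise topics the user has already been shown.
            penalty = diversity_weight if it[1] in seen_topics else 0.0
            return it[2] - penalty
        best = max(candidates, key=adjusted)
        candidates.remove(best)
        chosen.append(best)
        seen_topics.add(best[1])
    return chosen

feed = [("A", "politics", 0.95), ("B", "politics", 0.93),
        ("C", "science", 0.80), ("D", "politics", 0.90),
        ("E", "arts", 0.70)]
print(rerank(feed, k=3))  # mixes in science and arts instead of all politics
```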

“I truly believe AI can be used for good, and you are hearing this from an AI scientist,” he said with a smile.