Under its proposed regulations, the European Union (EU) plans to restrict the use of Artificial Intelligence (AI) in society, banning facial recognition for surveillance and algorithms that manipulate human behaviour.
The proposal, which was leaked online, aims to set tougher rules for AI deemed high risk, including algorithms used by the police and by recruitment firms. Military uses of AI are exempt from the restrictions.
Companies that develop prohibited AI systems, or that conceal accurate information about them, could face fines of up to 4% of their global revenue.
The plan to regulate AI has been in the works for a while: back in February 2020, the European Commission published a white paper sketching plans for regulating so-called high-risk applications of AI.
The list of banned AI systems includes those designed or used in a manner that manipulates human behaviour, opinions or decisions, causing a person to form an opinion or behave in a particular way. AI systems used for indiscriminate surveillance applied in a generalised manner are also prohibited, as are systems used for social scoring and systems that exploit information or predictions about a person or group of persons in order to target their vulnerabilities.
In a series of tweets, Europe Policy Analyst Daniel Leufer wrote:
After going through the full 80+ pages of this leaked draft of the @EU_Commission's regulatory proposal on AI, here are some initial comments:
– First, this leak is of a draft from January, so it's likely, and hopeful, that the draft has significantly progressed since then https://t.co/c9tUWyF6Xb
— Daniel Leufer (@djleufer) April 14, 2021
He continued:
The most important inclusion is Article 4, on prohibited applications of AI.
Civil society has been advocating for red lines of applications of AI that are incompatible with human rights, so it's encouraging to see an attempt to tackle that…HOWEVER…
— Daniel Leufer (@djleufer) April 14, 2021
On high-risk systems, he wrote:
We also have serious concerns about the restrictions applied to high risk systems, for which it lists the need for data sets to be high quality, have human oversight and transparency, as well as be "robust."
But checks on data quality & robustness will not be enough in all cases
— Daniel Leufer (@djleufer) April 14, 2021
Another tweet read:
How does this address high risk systems which have huge potential for harm & are also based on flawed premises?
E.g. assessing a job applicant's suitability for a role by analysing their emotion from facial movements?
We need enforcement of high scientific standards
— Daniel Leufer (@djleufer) April 14, 2021
However, checks on data quality and robustness will not be enough in all cases. For systems deemed high risk, member states will need to apply far more oversight and must deploy assessment bodies to test, certify and inspect these systems.
Systems that fall into the high-risk category include those establishing priority in the dispatching of emergency services, systems determining access to or assigning people to educational institutions, recruitment algorithms, systems that evaluate creditworthiness, systems facilitating individual risk assessments, and crime-predicting algorithms.
Leufer said that the proposals should be expanded to include all public sector AI systems, regardless of their assigned risk level because people typically do not have a choice about whether or not to interact with an AI system in the public sector.
The European Commission is also proposing that all high-risk AI systems must include a kill switch that can immediately turn the system off. The official unveiling of the proposed guidelines is expected next week.