The European Union is moving to implement rules for applying artificial intelligence, with fines of up to 4% of global annual turnover for specifically prohibited use-cases, according to a leaked draft of the AI regulation.
The push to regulate AI is not new: the European Commission previously published a white paper laying out plans for regulating high-risk artificial intelligence applications.
Some of the sectors identified as high-risk included recruitment and energy, among others. The focus is on compliance for high-risk AI applications as they arise.
Why All the Fuss?
The European Commission wants to improve public trust in AI through a system of checks grounded in EU values and compliance requirements, designed to encourage human-centric and trustworthy AI.
Makers of AI applications will also be encouraged to adopt codes of conduct that apply the mandatory requirements on a voluntary basis to systems outside the high-risk category.
The regulations also include measures supporting AI development within the bloc. They will push Member States to establish regulatory sandboxing schemes in which SMEs and start-ups get priority support to test and develop AI systems before bringing them to market.
What Constitutes High Risk AI?
Any industry planning to use artificial intelligence will be scrutinized to determine whether a specific use-case is considered high-risk. If so, the system will need to undergo a mandatory, pre-market compliance assessment.
Whether an AI system is classified as high-risk will be determined by its intended use, including the conditions and context of that use. A two-step process will assess first whether the AI system can cause harm and, if so, the severity of that harm and its probability of occurrence.
Examples of harms from AI systems include damage to property, injury or death, major disruptions to the provision of essential services, systematic adverse impacts on society, and adverse financial impacts, among others.
Some of the high-risk applications identified and discussed include systems used by educational and vocational training institutions, recruitment systems, creditworthiness assessment, emergency-service dispatch, and taxpayer-funded benefits allocation, among many others.
To gain approval under the legislative plan, these systems will need to meet all the EU-set compliance requirements, including security and accuracy in their performance.
Banned Biometrics and Practices
Some AI practices are prohibited outright under Article 4 of the proposed rules, according to the leaked EU draft. These include general-purpose social scoring applications and mass surveillance systems, which could encourage discrimination.
AI systems that manipulate human behaviour, opinions, or decisions to a person's detriment also fall into the banned category, as do systems that use personal data to generate predictions targeting vulnerable persons.
AI systems likely to have serious implications for personal safety will face a higher level of regulatory scrutiny during the compliance process. These systems must also undergo a new conformity assessment whenever a change affects their compliance or their intended use changes.
As artificial intelligence becomes more widespread across industries, regulation is needed. The EU's aim is to protect end users from exploitation, which is a welcome move. Compliance is paramount in all sectors, and AI is no exception.