The European Commission (EC) has unveiled a proposal to regulate artificial intelligence (AI). The proposed rulebook bans practices that manipulate people through subliminal techniques beyond their awareness, the use of AI for mass surveillance by law enforcement, and government social scoring.
Although many privacy advocates have praised the move as a step in the right direction, the new rules will affect businesses because they could change how companies operate. Many businesses have embraced AI and machine learning as digital transformation accelerates worldwide, and under the new rules they might be required to carry out risk assessments and continuously review their AI systems.
How the Rules Affect Businesses
At a high level, the new rules target high-risk AI, including self-driving cars, facial recognition, and AI systems used in the financial industry. In these areas, anyone deploying AI will have to conduct a risk assessment and mitigate any dangers identified.
They will also need to log activity so that AI decisions can be recorded and traced, train systems on high-quality data sets, maintain detailed documentation to demonstrate compliance with the law, put appropriate human oversight measures in place, provide clear and sufficient information to users, and ensure a high level of security, robustness, and accuracy.
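As a rough illustration of the logging and traceability obligation above, the sketch below records each AI decision together with a model version and a hash of the input. The record schema, field names, and model identifier are hypothetical, not drawn from the regulation.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One traceable entry in an AI decision log (hypothetical schema)."""
    timestamp: str
    model_version: str
    input_hash: str  # hash of the input, so raw data need not be stored
    output: str
    confidence: float


def log_decision(model_version: str, features: dict, output: str,
                 confidence: float) -> DecisionRecord:
    # Hash the serialized input so the decision can later be traced back
    # to the exact data it was made on, without retaining that data here.
    payload = json.dumps(features, sort_keys=True).encode("utf-8")
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(payload).hexdigest(),
        output=output,
        confidence=confidence,
    )


record = log_decision("credit-model-1.2",
                      {"income": 52000, "age": 34},
                      output="approved", confidence=0.91)
```

In practice such records would be appended to tamper-evident storage so that a regulator or auditor could reconstruct any individual decision.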
The EC is seeking to prevent harm by requiring businesses to assess the risk of their AI systems before commercializing them on the European market. Where evaluation methods are unclear, organizations must assess the probability of harm and the severity of any risk, and follow regulatory guidance on compliance.
Companies will also need to consider the explainability of their models, as they will be required to explain to both users and the authorities how an algorithm arrives at a decision. Companies have long practiced data governance to manage the data they collect and store; they now need to extend those policies to cover AI, which is why the EC is proposing a new AI regulatory framework that differs from, but also complements, GDPR.
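To illustrate what explaining how an algorithm arrives at a decision might mean in practice, here is a minimal sketch using a transparent linear scoring model; the feature names and weights are hypothetical, and real systems would typically use dedicated explainability tooling.

```python
# Hypothetical weights of a simple linear credit-scoring model.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "existing_debt": -0.3}


def score_with_explanation(features: dict) -> tuple:
    """Return the overall score and each feature's contribution to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions


score, explanation = score_with_explanation(
    {"income": 0.8, "credit_history": 0.9, "existing_debt": 0.5}
)
# The per-feature breakdown is the kind of detail a user or regulator
# could be shown alongside the decision itself.
for name, contribution in sorted(explanation.items()):
    print(f"{name}: {contribution:+.2f}")
```

For opaque models such as deep neural networks, an equivalent breakdown would have to come from post-hoc attribution techniques rather than from the model's own coefficients.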
What Challenges Can Businesses Anticipate?
The main challenge for most organizations will be pinpointing the risks of each AI system and applying a consistent, established assessment process to every one of them.
Another challenge will be ensuring that these efforts don't slow down AI development and its successful integration into important business processes, especially considering how much still needs to be done in this space. To navigate the new regulatory environment safely, businesses will need an organized, centralized approach with governance and auditability features.
The good news is that the new rules don't compromise innovation. EU Member States will still be able to promote innovation by giving SMEs and start-ups the freedom to test and develop AI systems with fewer regulatory constraints before commercialization.
Considering the future of AI regulation, businesses should view it as a way of eliminating negative side effects as they adopt AI systems. If European and UK companies commit to a smart strategy around AI risk and governance, they can kill two birds with one stone: comply with the new regulations while enjoying the benefits of AI to nurture innovation and business growth.