Updated: Aug 15
Guest author Tomer Gal, Founder and CEO of OpTeamizer, discusses the various roadblocks facing the EU when approaching this complex topic
New technologies have always provoked excitement accompanied by fear, and such reactions are even stronger when talking about artificial intelligence. The worst-case scenarios are indeed frightening, from manipulation, bias and discrimination to fraud, loss of privacy and the takeover of the world.
The European Union took far-reaching steps in privacy protection when it conceived and implemented the GDPR, which has become a de facto global standard, binding even on Israeli companies.
The EU is now determined to deal with a much bigger and more challenging beast: an attempt to curb artificial intelligence and establish clear rules for its use.
The EU's Artificial Intelligence Act is a proposed law that aims to establish a common regulatory and legal framework for all types of AI, and it is the world's first comprehensive regulatory effort of its kind.
If successful, it will have a sweeping impact like that of the GDPR. The seriousness of the EU's intentions is evident in the heavy fines it would impose on violators; companies like Amazon and Google can attest to how painful such fines are.
The regulation applies to all non-military sectors and to all types of AI. It doesn't grant rights to individuals but rather seeks to regulate the conduct of AI providers and of the entities that use such systems in professional settings.
Of course, one shouldn't ignore the vast benefits offered by AI. The regulation strives to strike a balance between these benefits and risks such as invasion of privacy. The rationale behind identifying terrorists using facial recognition and arresting them before they succeed in carrying out a mass attack is clear.
However, do we, as a civilized society, want to invade people's privacy to the point of labeling the entire population's faces? This is one of the EU's dilemmas.
The proposed law divides AI applications into three categories. First, applications and systems that create an unacceptable risk and will therefore be banned altogether, such as state-run social credit systems of the type operated in China.
Second, high-risk applications that will be subject to specific legal requirements, such as computer-vision-based tools for ranking job applicants and credit-scoring systems. Third, applications that are neither expressly banned nor labeled as high-risk, which will remain largely unregulated.
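The three-tier scheme described above can be sketched as a simple classification helper. Note that the tier names and example applications below are drawn from this article's summary, not from the Act's legal definitions, and the mapping is purely illustrative:

```python
# Illustrative sketch of the proposed Act's three risk tiers as
# summarized in this article; not the regulation's legal text.
BANNED = "unacceptable risk"    # e.g. state-run social credit scoring
HIGH_RISK = "high risk"         # e.g. applicant ranking, credit scoring
MINIMAL = "minimal risk"        # everything else: largely unregulated

# Hypothetical mapping of application types to tiers.
RISK_TIERS = {
    "social_credit_scoring": BANNED,
    "job_applicant_ranking": HIGH_RISK,
    "credit_scoring": HIGH_RISK,
}

def risk_tier(application: str) -> str:
    """Return the risk tier for an application; anything not
    expressly banned or high-risk falls into the minimal tier."""
    return RISK_TIERS.get(application, MINIMAL)
```

The default-to-minimal lookup mirrors the Act's structure: only enumerated uses are banned or burdened, and everything else passes through.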
The EU's goal of regulating AI to minimize the risks associated with its systems and applications is commendable, but implementation will likely face a variety of technological hurdles.
For example, a startup that has been given access to municipal security cameras may claim that it doesn't store information and doesn't recognize faces. How will the EU verify that it indeed doesn't recognize faces, track people, or sell the information to third parties?
Another example concerns the difficulty of cross-border enforcement. A company develops an AI product that collects information from cameras and uploads it to a server somewhere in the world.
The product may be legal in some countries while violating the law in others. In such a case, the EU would face a challenge and perhaps be forced to define a set of rules and restrictions for companies that use the product within its borders.
But even with such a definition, checking and enforcing the compliance of every company and product will be extremely challenging.
The biggest technological challenge is the requirement that datasets used to train the models be error-free and that any user be able to understand how the application works. The datasets used by such applications are enormous, and checking them for errors takes thousands of man-hours.
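Part of that checking can be automated, which illustrates both what is tractable and where the man-hours go. A minimal sketch of mechanical sanity checks on a labeled dataset (the record format and issue categories here are hypothetical) might look like this; catching missing labels and duplicates is easy, but "error-free" in the Act's sense also means the labels themselves are correct, which still requires human review:

```python
# Minimal sketch of automated dataset sanity checks. Automation
# catches mechanical defects (missing labels, duplicate ids, empty
# features); judging whether a label is actually *correct* still
# requires human review, which is what makes the requirement costly.
def audit_dataset(records):
    """Scan (example_id, features, label) records and return a
    list of human-readable issue descriptions."""
    issues = []
    seen_ids = set()
    for example_id, features, label in records:
        if label is None:
            issues.append(f"{example_id}: missing label")
        if example_id in seen_ids:
            issues.append(f"{example_id}: duplicate id")
        seen_ids.add(example_id)
        if not features:
            issues.append(f"{example_id}: empty features")
    return issues
```

On a clean dataset the audit returns an empty list; on a dataset with millions of records, even a short list of flagged issues can take weeks of human effort to resolve.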
Moreover, neural networks can be so complex that even their creators don't fully understand how they reach their conclusions. The research domain that addresses this is called Explainable AI, and it requires significantly more investment in studying the models.
The new regulation must also contend with disagreements between EU countries on various ethical issues. For example, there's a debate over which applications should be deemed completely unacceptable.
Some countries like Germany seek to ban the use of facial recognition in the public sphere altogether, while France supports applications aimed at ensuring public safety.
Some countries think the regulation may hinder innovation, while others feel it's too lenient and fails to adequately protect citizens. A question that hasn't yet been decided is whether EU citizens will be able to file complaints and demand compensation in cases where they believe AI has harmed them.
AI is expected to work wonders in areas such as healthcare, financial services, retail, and entertainment. Regulating its use is of the highest interest to governments, national economies, businesses and, of course, consumers. It's the human mind that will have to set the limits and rules; we can't expect AI to do it for us.
Author Tomer Gal is the CEO and Founder of OpTeamizer, a top Israeli AI consulting firm.