AI Act - at our own request, we stay at the tail end of the world
We could laugh at the meme about the EU's regulation-driven economy if it were just a meme. But since it's the bitter truth, we're left laughing through tears at best.
I don't deny that many of the EU laws that have recently come into force were needed. In legal and security circles, DORA, NIS 2, and the AI Act have been on everyone's lips for months. All of these acts force organizations and member states to take specific actions to improve organizational resilience in the face of growing instability in international relations. Today I will focus only on the last of these.
In a nutshell, the "noble" tenets of the AI Act can be summarized in five points:
Ensuring safety and fundamental rights,
Supporting trustworthy artificial intelligence,
Addressing the threats posed by high-impact artificial intelligence,
Harmonization of regulations across the EU,
Promoting innovation and competitiveness.
The problem, as usual in such cases, lies unfortunately not in the assumptions but in the way the regulations were written. And they were written in a way that makes them genuinely difficult to interpret and implement, not only at the national level but also at the level of specific companies working with AI. So the assumptions described above will, for the most part, remain just that: assumptions, not facts.
I am extremely fortunate to work with great specialists in various fields, but even so, the work we have put into analyzing the regulations and adapting our corporate environment to them runs into thousands of man-hours. Not everyone is so fortunate, so in this article I will try to describe the pains we faced and how we dealt with them as an organization.
The AI Act, however, although right in its assumptions and close to my heart, will, beyond the organizational problems, also severely limit the ways artificial intelligence can be used in many fields - and this will deepen the gap between the European Union and the US or China in competitiveness and innovation. I devote the second part of the article to this problem.
Act 1 of the drama - a shot in the foot for entrepreneurs and local governments
"Specialists argue" is not just a joke in this case. More than 20 people were involved in the preparation of the organizational framework for the implementation of the regulations in our company, almost half of them lawyers of various specialties. In addition to those involved in language models, application and infrastructure management, analysts, engineers and security. How many companies in the European market can afford the luxury of cutting a few hundred man-days out of their schedule in, say, the fourth quarter of the year? More than 99% of them are micro, small and medium-sized enterprises. Naturally, they will have proportionally fewer processes and tools to analyze, so this number will decrease by one or two zeros. But the obligation will not disappear just because we employ few people. And if we don't hire a lawyer, we'll have to pay an outside law firm, and then we'll have to comply with the guidelines anyway. The same, by the way, applies to local governments. A small minority of local governments and territorial units will have the authority to analyze for themselves whether the AI tools they use are prohibited or carry a high risk.
Here, however, I will try to help by sharing the experience of the past months - perhaps after reading the solutions we have implemented, some of you will be able to relate them to your own backyard.
The AI Act introduces the concept of prohibited AI systems
What will you not be allowed to do with artificial intelligence as of February 2, 2025?
Cognitive behavioral manipulation
Exploiting people's weaknesses
Social scoring
Biometric categorization
Real-time biometric identification
It sounds legalistic because it was written by lawyers - my impression is that with little input from business and technology practitioners. Translated into business language, a prohibited AI system will be one that meets any of the following criteria:
It uses techniques to influence the subconscious or deliberately manipulate people, leading to changes in their behavior and decision-making (e.g., purchasing or credit) that may harm them or others.
It exploits people's vulnerabilities, such as age, disability, or social or economic hardship, in ways that significantly affect their behavior and can cause them or others serious harm.
It evaluates individuals based on their behavior or personal characteristics (known as social scoring) in a way that may lead to unfair treatment in other contexts (than those in which the data was collected).
It treats individuals in ways that are unjustified or disproportionate to their behavior or its gravity.
It is used to assess the risk associated with individuals to predict whether a person is likely to commit a crime, based solely on profiling¹ or analysis of their personality traits and characteristics, without assessing objective facts. (E.g., assessing the risk of committing fraud based on a photo, place of residence, or country of origin.)
It creates or expands a facial recognition database through untargeted collection of images from the Internet or CCTV footage.
It assesses people's emotions at work and does so for a purpose other than medical or safety considerations.
It uses biometric data to categorize people based on their race, political views, religion, sex life, or sexual orientation. (Biometric data is personal data obtained through special technical processing that relates to a person's physical, physiological, or behavioral characteristics, such as a facial image or fingerprints.)
It uses real-time biometric identification in public spaces for law enforcement purposes. (Such use of AI is allowed only when necessary to find victims, prevent threats, or track down suspects.)
These are all obviously needed regulations and valid assumptions in the context of civil liberties and fundamental rights. However, the regulations are structured in such a way that they impose the same obligations on both those who produce AI systems and their users. So what are organizations to do in such a situation?
I won't just leave you with my complaining; below is a handful of tips on how to deal with this challenge.
Inventory of applications and processes.
This is an absolute must. If you know what your employees are using, you'll be able to track down those items they perhaps shouldn't be using. If you've never done this, start now. The regulations take effect on February 2.
Determination of business owners.
Every tool in a company should have its owner: the head of the team that uses it, the person who bought it, the main user, the creator. There are many possibilities, and they depend on the characteristics of your company and your operations. But you should always know who to talk to about a given tool, and it should be one person whom you know to be responsible for it.
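To make the first two steps concrete, here is a minimal sketch of what an inventory entry with a named owner might look like, assuming you keep it as a simple Python script. All field names, statuses, and example values are my own illustration - the Act mandates no particular format, and a spreadsheet does the job just as well.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the company's AI-system inventory (illustrative fields)."""
    name: str                       # e.g., "customer support chatbot"
    vendor: str                     # who built or supplies the tool
    business_owner: str             # the one person responsible for it
    description: str                # what it does, in business language
    uses_ai: bool = True            # does it contain an AI component at all?
    review_status: str = "pending"  # pending / cleared / prohibited / high-risk

# The inventory itself can start as nothing fancier than a list.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="Customer support chatbot",
        vendor="ExampleVendor",
        business_owner="Head of Customer Service",
        description="Answers routine customer questions using a language model",
    ),
]
```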
An interview with the owners.
You don't have to do everything yourself or understand every process in the company in detail. Once you have determined the owner, ask them whether their tool meets the requirements imposed by the Act. If you don't know what to ask, simply ask about the criteria bulleted in the section above - adapted, of course, to the specifics of your own business. There's a good chance this will be enough; a sketch of such a questionnaire follows below.
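Here is one way the criteria from the previous section can be turned into a screening questionnaire. The wording is my own business-language paraphrase, not legal language, and a "yes" answer settles nothing by itself - it only flags the tool for a closer look by your lawyers.

```python
# Screening questions paraphrasing the prohibited-practice criteria above.
SCREENING_QUESTIONS = [
    "Does the tool influence the subconscious or manipulate users' decisions?",
    "Does it exploit vulnerabilities such as age, disability, or hardship?",
    "Does it score people on behavior or traits for use in other contexts?",
    "Does it predict criminal behavior from profiling alone, without objective facts?",
    "Does it build facial-recognition databases by untargeted scraping?",
    "Does it assess emotions at work outside medical or safety purposes?",
    "Does it infer race, political views, religion, or sexual orientation from biometrics?",
    "Does it perform real-time biometric identification in public spaces?",
]

def screen(tool_name: str, answers: list[bool]) -> None:
    """Print which criteria were flagged for a given tool."""
    flagged = [q for q, yes in zip(SCREENING_QUESTIONS, answers) if yes]
    if flagged:
        print(f"{tool_name}: escalate to legal review")
        for question in flagged:
            print(f"  - {question}")
    else:
        print(f"{tool_name}: no prohibited-use indicators found")
```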
If you identify prohibited uses, don't panic.
It's true that the regulations take effect any minute now. But few European countries have yet established the national supervisory authorities that could inspect and fine you immediately. Don't delay, however. If you are indeed using AI in a prohibited way, modify the system so that it no longer meets these criteria. If that is not possible, prepare an exit plan for the service.
This cannot be a one-time activity.
Asking about the services you currently use will give you a picture of the current situation. But what will happen in a month, or a year? You're not going to make a pilgrimage around the company every so often, checking every application. And the landscape is very dynamic: more and more systems have some sort of AI component, and that number will only grow. Plan a process in which you verify such systems before they are deployed to production, for example in the testing or onboarding phase.
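One way to make the check recurring is to build it into onboarding as a gate: nothing reaches production without a named owner and a completed review. A minimal, self-contained sketch - the "cleared" status value is my own naming, matching the inventory sketch earlier:

```python
def ready_for_production(owner: str, review_status: str) -> bool:
    """Gate check run in the testing or onboarding phase, before deployment.
    A system passes only if it has a named owner and a completed,
    non-prohibited review; everything else stays out of production."""
    return bool(owner) and review_status == "cleared"

print(ready_for_production("Head of Customer Service", "pending"))  # False
print(ready_for_production("Head of Customer Service", "cleared"))  # True
```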
Act 2 of the drama - the road to hell is paved with good intentions
Everything I wrote about above leads to a sad conclusion: we are handicapping ourselves. Although, to reiterate, I consider these regulations morally right and proper, my pragmatic side agitates and rebels against such a stance. It is poor consolation that certain areas, such as scientific research, defense, or criminal prosecution, are excluded from this law. Not because I am in favor of surveillance - quite the contrary. But given that European countries are far behind the US or China in AI development, voluntarily imposing an additional burden on ourselves looks like the act of a madman. And since no one among Europe's current ruling liberal elite will likely even think of withdrawing from these regulations, I foresee only two paths for us: a more likely one and a less likely one.
The less likely path is to take a hard line with the US and China. That means real restrictions and penalties for those who break the new law, and treating non-EU players the same as European ones. The EU is still a market of nearly half a billion people and still one of the wealthiest parts of the world - a morsel no global player can pass up. The maximum penalty for violating the AI Act's provisions on prohibited systems is 7% of annual global turnover. For Meta, for example, this could mean a roughly $9 billion penalty for each proven violation; for TikTok, almost $8.5 billion. These are not sums anyone can shrug off. However, the treatment of the tech giants so far, both by member states' courts and by the CJEU, makes one doubt such a course of events. What convinces me this is the less likely path is watching how the EU has for years turned a blind eye to what ads are displayed to users, how discriminatory the algorithms are, and what criminal abuses occur on these platforms.
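The arithmetic behind those figures is simple - 7% of annual global turnover - and worth running against any company's revenue. The turnover numbers below are rough illustrations of scale, not official financial data:

```python
def max_fine(global_turnover_usd: float) -> float:
    """Upper bound of the fine for prohibited practices: 7% of annual
    global turnover, per the figure cited above."""
    return 0.07 * global_turnover_usd

# Illustrative, Meta- and ByteDance-scale turnover figures:
print(f"{max_fine(130e9) / 1e9:.1f} billion USD")  # ~9.1
print(f"{max_fine(120e9) / 1e9:.1f} billion USD")  # ~8.4
```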
I described the more likely path in the title of this article. I find it hard to imagine today's European elites taking up the gauntlet thrown down by Donald Trump. Already we are seeing a festival of sycophancy and adulation toward the president and the oligarchs behind him. Restrictions on domestic entrepreneurs will be accompanied not by even-handed restrictions on the tech giants, but by relief for them. Donald Trump has announced that he will defend the interests of U.S. corporations by all available means, and the new head of DOGE, Elon Musk, has identified not China but the European Union as the main villain for the United States. I don't see any mainstream leader in Europe right now willing to take up the challenge and build Europe's own capabilities in this area. I see a willingness to keep cashing in on individual countries' good relations with the United States and China, even where those good relations have ceased to exist, and even at the cost of denying reality. I also see a complete inability and unwillingness to pursue a common European foreign policy.
I see perhaps the biggest step on Europe's path to becoming an open-air museum for wealthy Chinese tourists.
¹ Profiling means the automated processing of personal data with the purpose of assessing certain characteristics of an individual. This includes analyzing or predicting aspects such as work performance, financial situation, health status, personal preferences, interests, reliability, behavior, location, or movements.