The European Parliament approves the Artificial Intelligence Act
March 13, 2024
At the beginning of 2023, the European Union, through the European Commission, announced its plans to regulate artificial intelligence quite strictly. At the time, and even more so after June, when it became known that a first draft was already ready, approval before the end of 2023 seemed possible. In the end that did not happen, but the urgency of establishing this legal framework ensured that, barring surprises, everything would be ready for adoption before the end of the current term of the European Parliament, which, as you know, will be renewed in the elections this coming June. And the key step is, of course, the one we saw today: the approval of the new artificial intelligence law by the European Parliament.
Before we get into the substance of what this new standard stipulates, it is important to clarify that some steps are still pending after this approval. First, the regulation must undergo a final review by lawyer-linguists; it must then be formally endorsed by the Council (a step taken for granted) and finally published in the Official Journal of the European Union (the EU's equivalent of a national official gazette such as Spain's BOE). Once published, the new regulation will enter into force 20 days later, although several transition periods are laid down.
In general, and as we have already seen with many other standards, a period of 24 months is set as the common rule, giving companies time to adapt their products and services to the new legal framework. However, in line with the urgency that the European institutions have signalled from the outset regarding the regulation of artificial intelligence, different "speeds" have been set. The most notable concerns prohibited practices, for which the adaptation period is only six months. Striking, on the other hand, is the greater leeway granted for the obligations applying to high-risk systems, which is 36 months.
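As a rough illustration of how these staggered deadlines play out, here is a minimal Python sketch that computes the compliance dates from the entry into force. The publication date used below is a placeholder, not the real one, and the periods are those summarized above; the final text should be checked for the exact figures.

```python
from datetime import date, timedelta
import calendar

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d`, clamping the day if needed."""
    idx = d.month - 1 + months
    year, month = d.year + idx // 12, idx % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Placeholder publication date in the Official Journal -- not the real one.
publication = date(2024, 7, 1)
entry_into_force = publication + timedelta(days=20)  # the regulation enters into force 20 days after publication

# Transition periods as summarized in the article.
deadlines = {
    "Prohibited practices (6 months)": add_months(entry_into_force, 6),
    "General application (24 months)": add_months(entry_into_force, 24),
    "High-risk system obligations (36 months)": add_months(entry_into_force, 36),
}

for label, deadline in deadlines.items():
    print(f"{label}: {deadline.isoformat()}")
```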
Several points stand out in the communication issued by the European Parliament on this regulation of artificial intelligence in the European Union (if you want, you can get the full text here). It is a standard that, at least in its statement of intent, aims to guarantee the basic rights of citizens of the European Union while at the same time supporting the adoption and use of this technology in the interest of European competitiveness.
The first has to do with one of the most troubling aspects, the reckless use of artificial intelligence, which includes uses such as:
«Biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (based only on profiling a person or assessing their characteristics) and artificial intelligence that manipulates human behavior or exploits people's vulnerabilities will also be banned.»
From the beginning, this was one of the points on which the EU, through its institutions, wanted to introduce particularly strict limits and, as indicated earlier, it is here that the shortest compliance period applies, six months from entry into force, so we can expect, without surprises, that this type of use of AI will soon no longer be possible.
The European Parliament did, however, approve some exceptions to the rule on this point. In particular, the use of this type of tool by law enforcement authorities is allowed in a set of specific circumstances defined in the standard and subject to a set of guarantees, such as judicial authorization, with their use limited in time and geographic scope. Among the examples cited are the search for missing persons and the prevention of terrorist attacks.
I mentioned earlier the longer periods that apply to what the Regulation considers high-risk systems. These broadly coincide with systems usually regarded as critical, but the text expands the category to include others that do not normally receive such consideration and yet must also be specifically safeguarded because of the potential impact they may have on society. Specifically, the text defines them as such «given their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law». For them, the following is set out, in summary:
«Such systems must evaluate and mitigate risks, keep records of usage, be transparent and accurate, and provide human oversight. Citizens will have the right to file complaints against AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.»
The text also covers general-purpose artificial intelligence (GPAI) systems, a category that includes many of the generative models that are so present and popular today. For these, it states that they must be fully transparent about both their training and their operation. This includes, for example, compliance with European Union copyright rules, for which they must publish reports detailing the content used for training.
The text also establishes, for the content generated with these systems, specifically audiovisual content (images, audio and video), the obligation to clearly mark it as such. This measure is undoubtedly aimed at combating deepfakes, which have recently become a popular vehicle for spreading fake news and disinformation.
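Purely as an illustration of what "marking" generated content could look like in practice (the regulation does not prescribe this particular mechanism, and the file names and metadata keys below are hypothetical), a generated image could carry an explicit label in its metadata, for example with Pillow:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical file produced by a generative model.
img = Image.open("generated.png")

# Attach a human- and machine-readable label to the PNG metadata.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-name")  # hypothetical identifier

img.save("generated_labeled.png", pnginfo=metadata)
```

Real-world marking will more likely rely on standardized provenance schemes, such as embedded watermarks or content credentials, but the idea is the same: the generated asset carries an explicit indication of its origin.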
In addition, the text imposes extra obligations on what it considers «more powerful GPAI models» that, as a result of their capabilities, «may pose systemic risks». In this case it is necessary to carry out «model evaluations, assess and mitigate systemic risks, and report incidents».
So, in summary, we see that the legislation approved by the European Parliament today focuses on privacy, security, transparency, and risk assessment and prevention. We will, however, have to wait for the legal review of the text (with any resulting corrections) to carry out an in-depth reading that allows us to assess its full scope in more detail.