
Bells for cats and dragons: the challenge of AI regulation

  • May 12, 2023


Although artificial intelligence (AI) is by no means new, the dizzying acceleration of its development in recent months has made it the protagonist of unprecedented disruption. Governments around the world are announcing measures to limit and control the development of AI and its tools, but is it possible to regulate artificial intelligence? We take a close look at the difficult balance between fostering innovation and protecting our fundamental rights and values.

When we study history, we often resort to the linear abstraction of evolution and imagine a society where technological innovations are introduced bit by bit, with benefits so clear that their adoption causes little trauma. The reality is, as almost always, very different.

Before we continue, an illustrative example: in the 19th century, doctors did not recommend traveling by train. Freud, the neurologist and “father” of psychoanalysis, even said that it affected mental health, and studies published in prestigious outlets warned that “the human body was not designed to travel faster than 45 kilometers per hour.” A quick search on the Internet (or a question to an AI) is enough to find similar examples involving the car, electricity, television or the printing press.

We may not be so far from those who saw the mighty “Iron Horse” crossing the Midwestern plains: a technological leap of a magnitude so incomprehensible it was terrifying. Surely at that moment they felt a cold sweat and the need to limit, control and regulate what would change their lives forever.

The possibilities of artificial intelligence are staggering and widely discussed. But as with any disruption, and even more so on this scale, the move is not without risk:

  • Risks associated with work. Automation based on artificial intelligence will replace certain professions and millions of workers, which could deepen economic inequality and create job retraining problems.
  • Prejudice and discrimination. Artificial intelligence trained on data containing biases, such as racial or gender discrimination, can lead to discriminatory decisions in areas such as human resources, medicine or finance.
  • Privacy and security. Training AI requires an enormous amount of data which, if not handled properly, can have serious consequences for those affected.
  • Lack of transparency, especially with very complex deep neural network models that can be difficult for humans to understand. Serious ethical and legal issues arise if AI decisions cannot be explained.
  • Control and superintelligence. In a scenario where AI advances far enough, an intelligence capable of surpassing us could emerge. How could we control it? What mechanisms should we put in place to prevent this?
  • Manipulation, crowd control and disinformation. Artificial intelligence is a powerful tool for generating false or misleading content that can be used to manipulate opinions and decisions or to carry out cyberattacks.

Speed and legislation are concepts that rarely go together, especially in structures as complex as the European Union. Even so, European legislators were the first to start laying the groundwork for the future regulation of artificial intelligence in the EU.

Almost two years ago, when ChatGPT did not exist and the metaverse was about to revolutionize the world, the European Commission already proposed a draft regulation for an artificial intelligence act. Among other things, it classifies available tools by their level of risk, from low to unacceptable. The text being voted on as these lines are published may already be a dead letter.

This is of course not the only EU initiative, but one of the most important. As part of what it defines as an active policy, strategies and ethical guidelines have been approved, millions of euros in funds have been allocated and international cooperation for the development of AI initiatives is encouraged.

Let’s put numbers to this speed: according to data released by Intel, AI training has grown over the past year a hundred million times faster than Moore’s Law, the mythical postulate that processing power would double every two years and that, until now, set the benchmark for what to expect.

Achieving these speeds requires fuel, and behind this exponential growth are huge sums of dollars, euros or yuan. IDC estimates that global AI spending this year will reach $98.4 billion, with a CAGR of 28.4% between 2018 and 2023. According to Research and Markets, the global AI market is expected to reach $190 billion by 2025. Astronomical numbers that explain why we’re moving so fast.
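As a quick sanity check on those figures, compound annual growth is simple to work out. The sketch below derives an implied 2018 baseline from the article's own numbers ($98.4 billion in 2023 at a 28.4% CAGR); that baseline is an inference for illustration, not a figure from the article or from IDC.

```python
def cagr_project(value: float, rate: float, years: int) -> float:
    """Project a value forward at a compound annual growth rate."""
    return value * (1 + rate) ** years

# The article's figures: ~$98.4B in global AI spending in 2023,
# growing at a 28.4% CAGR over 2018-2023. Working backwards, that
# implies a 2018 baseline of roughly $28B (an inference, not a
# figure quoted in the article).
implied_2018 = 98.4 / (1.284 ** 5)
print(f"Implied 2018 spending: ${implied_2018:.1f}B")

# For contrast, Moore's Law (a doubling every two years) over the
# same five-year span yields only a ~5.7x increase.
moore_factor = 2 ** (5 / 2)
print(f"Moore's Law growth over 5 years: {moore_factor:.2f}x")
```

A 28.4% CAGR multiplies spending roughly 3.5x in five years, against Moore's Law's 5.7x per five years for transistor density; the "hundred million times faster" claim in the text refers to training compute, a different and far steeper curve.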

In an attempt to gain some perspective, a few months ago a group of 1,000 people, including scientists, engineers, intellectuals, business leaders, politicians and big names in the world of technology, signed an open letter calling for a six-month pause in the development of the largest artificial intelligence projects, given the “profound risks to society and humanity” they can pose without adequate control and management.

If time is a problem, so is space: AI, of course, does not understand geographical borders, and a legal framework valid only in Europe not only solves almost nothing, but can become a significant drag in the innovation race. An area, let’s remember, in which we are very far from the lead.

In general, the proposed legislation focuses on issues such as data protection, transparency, human oversight capacity and liability arising from the use of artificial intelligence and tools derived from it. Of course, there are also very complex ethical implications that escape the precision of 0s and 1s to delve into the social sciences, psychology, or philosophy.

Politics is in a hurry. Artificial intelligence is seen as an opportunity but also as a threat to the establishment, and specifically one that will be very difficult to control. The 2023 Artificial Intelligence Index, produced by Stanford University and published a month ago, suggests that the West and some Asian countries (at least those for which data is available) are immersed in a race to legislate artificial intelligence. Of the 127 countries analyzed, 31 already have at least one law regulating artificial intelligence.

Two worlds, two bells, two ways to control AI

If we draw a line along the eastern border of the European Union, we can distinguish two clearly differentiated scenarios:

To the west, on the other side of the Atlantic, they are aware of the importance of not hindering the development of a technology that may be key to the future of the United States. Washington has published a set of principles to consider when developing AI-based tools and solutions.

The document expresses the need to develop safe and effective solutions that respect the privacy of the data they use, explain how they work and always allow for intervention if necessary.

In parallel, the US Congress is working on comprehensive legislation on artificial intelligence, the development and passage of which is complex because it would require full agreement between Democrats and Republicans.

In the halls of Brussels, meanwhile, a fragmented approach has been chosen: adjusting legislation to protect aspects such as data (the trigger for the ChatGPT ban in Italy) or regulating the intellectual property of the images, videos and music that AI can create.

If we add the G7 to this bloc (with countries as relevant as the United Kingdom, Canada or Japan), we have a geopolitical scenario with many points in common and the protection of its citizens as a common denominator, but also with deep differences that make thinking big still seem something of a utopia.

In addition, these countries are among the world’s leading economies and wield enormous power and influence, but we must not forget that many things have changed in recent years: there is a dragon in the room, or why no one wants to talk about China.

The Asian giant, as it has for centuries, plays by its own rules. For China, the development of artificial intelligence is an opportunity it cannot miss, but at the same time a serious threat it must control. In its case, the limits are clear: AI can go exactly as far as political power allows, but never beyond.

To achieve this, it combines regulatory experiments (pilot tests in certain areas or provinces, for example to develop autonomous driving), a commitment to standards and, where necessary, stricter regulation.

Currently, China’s focus is not on comprehensive regulation, but on rules that address specific issues in an agile manner, with the flexibility needed to adapt to changes in real time. Let’s not forget we are talking about a country that took just 48 hours to restrict access to ChatGPT for its entire population.

China cannot afford to be left out of the AI battle, but it is also aware that it cannot do it alone. It needs the innovation, and the hardware, necessary to achieve it, and it does not hesitate to jump through whatever hoops are necessary… for now.

Talking about global AI regulation without China (or Russia) doesn’t make much sense. We also don’t know how far we are or what will happen in the future, but a simple analogy with the fight against climate change can help us refine our crystal ball.

At this point, we have to take into account issues such as the diversity of approaches and priorities, how fast the technologies are advancing, the difficulties of actual implementation (especially in the area of cooperation) and, above all, the different economic and competitive interests of each territory.

Going back to our imaginary line: to the east lies the region where technological development outweighs any other factor, as long as it does not compromise political power. Moreover, its companies come from an environment with greater acceptance of tracking and control, for obvious reasons. In the West, the concern is how to protect citizens while implementing “controlled” restrictions on AI, guaranteeing its transparency and establishing clear sectoral regulations.

Is it necessary to regulate AI?

Artificial intelligence, still in the almost embryonic state it is in today, is not just any technology. As happened with the Internet, we are far from knowing the consequences of its development in the coming years and how it will affect us; we can only wait to find out.

For the first time in human history, we are facing a breakthrough that comes to improve or, who knows, replace our most valuable asset, the one that allowed us to get where we are: intelligence.

Laws are a way to regulate our coexistence: external checks on our human will that make life in society easier for us. If there are machines that will behave like humans, does it make sense for them to be subject to a regulatory framework?

The challenge for the European Union is huge: to create a regulatory environment that protects citizens without putting us at a disadvantage compared to the rest of the world, enabling competitiveness, innovation and the attraction (or retention) of specialized talent. Perhaps the problem is not so much putting the bell on, but taking care of the cat and controlling the dragon.


Source: Muy Computer
