
Why we urgently need a “GDPR” for artificial intelligence

  • April 3, 2023


Is there anything left in our digital society that is untouched by IT? Every organization today is directly or indirectly an IT company; even a modern car is essentially a computer on four wheels. Brands everywhere rely on huge databases of customer data, and increasingly, smart AI solutions are being built on top of them. But without an ethical framework for this kind of technology, its impact on our society and our lives will spiral out of control.

In the supermarket we trust that the food on the shelves is safe and meets certain health standards. The same trust should be the driving force when choosing our software and IT services. From a cybersecurity and data protection perspective, we already assume that our personal data is handled correctly. Companies are not allowed to pass on data to third parties without permission, for example, and we expect enough security to be in place that data does not simply leak out.

There are currently various legal frameworks around the world that govern the protection of the data we share. The GDPR in Europe is undoubtedly one of the most comprehensive regulations in this area. But while cybersecurity and privacy are both important concepts for building trust, they alone will not suffice for much longer. The more data is used to develop software and AI applications, the more transparency is needed. How does an algorithm arrive at a specific prediction or decision? As long as we do not know this, we run the risk of disadvantaging and discriminating against certain groups in society.

A racist algorithm

Although AI offers great potential to simplify our lives, we should not be blind to the fact that it also has pitfalls. An example is COMPAS, an AI-based tool that predicts which criminals are most at risk of reoffending when they return to society. American judges use its findings to support their sentencing and bail decisions. The technology is usually quite accurate, but erroneous results are inevitable. And when something goes wrong, it turns out that the algorithm often discriminates by origin: the AI tool rates black convicts as up to 50% more likely to reoffend than white ones.

What the algorithm is doing in the example above is copying what already happens in society. Unfortunately, racial prejudice has been an integral part of our society for centuries. The effects of software-generated bias, however, can be far greater, especially once such software runs automatically at scale: from people being denied a home loan to people being classified by insurers as high medical risks. To address this, an algorithm must be transparent enough for its biases to be exposed and corrected.
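One way to make such bias visible is a simple group-wise error audit, similar in spirit to how journalists examined COMPAS. The sketch below uses invented data (not real COMPAS records) to compare how often a hypothetical risk model wrongly flags non-reoffenders in each of two groups:

```python
# Illustrative sketch with invented data: auditing a risk model by comparing
# false-positive rates across groups. A large gap between groups is one
# signal that the model "distinguishes by origin".
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  True),
    ("A", False, False),
    ("A", True,  False),
    ("B", True,  True),
    ("B", False, False),
    ("B", False, False),
    ("B", True,  False),
]

def false_positive_rate(group):
    """Share of people in `group` who did NOT reoffend but were
    nevertheless flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
```

In this toy data set, group A's false-positive rate is twice group B's, even though the model may look "accurate" overall; this is exactly the kind of disparity that transparency requirements would force into the open.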

Transparency and trust through ethical framework conditions

Fortunately, the transparency that was missing in the COMPAS example is present in many other applications. Big tech companies even develop ethical frameworks that serve as the basis for the solutions they build. Meanwhile, governments and the European Union are also working to create an ethical foundation for AI. To oversee this, organizations are increasingly appointing ethics committees: panels of staff members who monitor the impact of the technology. Such a panel ideally consists of individuals who have at least a grounding in ethical principles and sufficient technological knowledge to identify, assess, and mitigate risks.

Artificial intelligence will be able to make better and better predictions, but its consequences remain very unpredictable. However, the responsibility should not lie solely with the corporate world. As users, we also need to be aware of the impact that an algorithm can have. Our entire society needs to understand the role AI plays in our lives.

When users understand what is safe and ethical, they can also take that into account when making technology decisions. Just as we select the best health products in the supermarket, such an attitude will ensure that technology manufacturers are transparent and take the wishes and requirements of their customers even more into account.

So the solution to the problem has three sides. Governments need to develop an ethical framework for AI and security, just as they did with the GDPR; companies must appoint ethics committees to ensure their products comply with existing guidelines and have a positive impact on society; and we as users need to build a better understanding of AI so that we can make better decisions. If we get this right, everyone will soon benefit from the potential of AI.

This is a contribution by Pablo Ballarin Usieto. He is one of the speakers at Cybersec Europe 2023. On April 19th and 20th he will discuss the ethical impact of new technologies on our lives during a keynote in Brussels.

Source: IT Daily
