Under the Digital Services Act, the European Commission requires 19 tech giants, including Amazon, Google, TikTok and YouTube, to explain the algorithms behind their artificial intelligence (AI) systems. Asking these companies – platforms and search engines with more than 45 million EU users – to provide this information is a much-needed step towards making AI more transparent and accountable. It will make life better for everyone.
Artificial intelligence is expected to affect every aspect of our lives, from health to education, from what we watch and listen to, to how well we write. But AI also generates a lot of fear, much of it revolving around a godlike computer that becomes smarter than we are, or a machine assigned a harmless goal that accidentally destroys humanity. More pragmatically, people often wonder whether AI will make them redundant.
We’ve been here before: machines and robots have already replaced many factory workers and bank tellers, and the process is far from over. But AI-powered productivity gains come with two new challenges: transparency and accountability. And if we don’t think seriously about the best way to address them, everyone loses.
Of course, we are already used to being evaluated by algorithms. Banks, insurance companies and cell phone providers use software to check our credit scores before offering us a mortgage or a contract. Ride-sharing apps check whether we’re polite enough before offering us a ride. These scores draw on a limited amount of information chosen by people: your credit score is based on your payment history, and your Uber rating is based on how previous drivers have rated you.
Black box ratings
But new AI-based technologies collect and organize data without human oversight. This makes it much harder to hold anyone accountable, or to truly understand which factors were used to arrive at a machine-generated rating or decision. What if you start noticing that no one calls you back when you apply for jobs, or that you are never allowed to borrow money? The cause may be some piece of information about you somewhere on the internet.
In Europe, you have the right to be forgotten and to ask online platforms to remove false information about you. But if the decision comes from an unsupervised algorithm, it is hard to find out which piece of information is wrong. Most likely, nobody knows the exact answer.
Errors are bad, but accuracy can be even worse. What happens, for example, if you let an algorithm look at all the available data about you to evaluate your ability to repay a loan?
A high-performing algorithm may conclude that, all else being equal, a woman, a member of an ethnic group that suffers discrimination, a resident of a poor neighborhood, someone with a foreign accent or someone who isn’t “good-looking” is less creditworthy.
Research shows that such people can expect to earn less than others, and are therefore less likely to repay their loans – and the algorithm “knows” that too. While there are rules preventing bank employees from discriminating against potential borrowers, a stand-alone algorithm may conclude it is justified in charging these people more to borrow. This kind of statistical discrimination can create a vicious circle: if you have to pay more to borrow, you may struggle to make those higher payments, defaulting more often and confirming the algorithm’s prediction.
Even if you prevent an algorithm from using data about protected characteristics, it can reach similar conclusions from what you buy, the movies you watch, the books you read, even the way you write and the jokes that make you laugh. Yet algorithms are already being used to screen job applications, evaluate students and assist the police.
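To see how this kind of proxy discrimination can arise, here is a minimal sketch in Python. It uses synthetic data and scikit-learn; the variable names, numbers and model choice are all illustrative assumptions, not a description of any real scoring system. The protected attribute is never shown to the model, yet correlated “proxy” features let it score the two groups differently.

```python
# Minimal sketch of proxy discrimination on synthetic data.
# The protected attribute is never given to the model, but
# correlated proxy features reconstruct it indirectly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g. membership of a group that
# faces discrimination) -- deliberately excluded from the training data.
group = rng.integers(0, 2, n)

# Proxy features (purchases, viewing habits, writing style...)
# that happen to correlate with group membership.
proxies = rng.normal(loc=0.8 * group[:, None], scale=1.0, size=(n, 5))

# Synthetic outcome: the disadvantaged group earns less on average,
# so it repays less often -- the pattern described in the text.
repay_prob = 1.0 / (1.0 + np.exp(-(1.0 - 1.2 * group)))
repaid = rng.random(n) < repay_prob

# Train on the proxies only -- no protected attribute in sight.
model = LogisticRegression().fit(proxies, repaid)
scores = model.predict_proba(proxies)[:, 1]

# The model still scores the two groups differently.
print(f"mean predicted repayment, group 0: {scores[group == 0].mean():.3f}")
print(f"mean predicted repayment, group 1: {scores[group == 1].mean():.3f}")
```

On this toy data the two averages differ clearly, even though the model never saw the group label: the proxies carry it for free. That is why simply deleting a protected column from a dataset does not prevent statistical discrimination.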
Cost of accuracy
Aside from fairness concerns, statistical discrimination can end up hurting everyone. For example, a study of French supermarkets found that when employees with Muslim-sounding names worked under a biased manager, they were less productive, because the manager’s prejudice became a self-fulfilling prophecy.
A study of Italian schools shows that gender stereotypes affect academic performance. When a teacher believed that girls were weaker in math and stronger in literature than boys, students adjusted their efforts accordingly and proved the teacher right. Some girls who could have been great mathematicians, and some boys who could have been great writers, may end up choosing the wrong profession.
When humans take part in decision-making, we can measure their bias and correct it to some extent. But it is impossible to hold an unsupervised algorithm accountable if we don’t know exactly what information it uses to reach its decisions.
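Measuring bias, at least, is straightforward once an algorithm’s decisions and the relevant group labels are visible – which is exactly what transparency rules would enable. Below is a minimal sketch of one common audit metric, the demographic-parity gap; the function name, the made-up scores and the 0.5 threshold are all illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a bias audit, assuming we can observe the
# algorithm's scores and each person's group label.
import numpy as np

def approval_rate_gap(scores: np.ndarray, group: np.ndarray,
                      threshold: float = 0.5) -> float:
    """Demographic-parity gap: absolute difference in approval
    rates between the two groups."""
    approved = scores >= threshold
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

# Example with made-up scores, where group 1 is scored slightly lower.
# A gap near 0 suggests similar approval rates; a large gap flags bias.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
scores = rng.random(1000) - 0.1 * group
print(f"approval-rate gap: {approval_rate_gap(scores, group):.3f}")
```

The point is not the specific metric: without access to the scores and the data behind them, not even this simple check can be run from the outside.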
If AI is to truly improve our lives, transparency and accountability will be key – ideally before algorithms are even incorporated into decision-making. This is the purpose of the EU’s Artificial Intelligence Act. And, as often happens, EU rules could quickly become the global standard. That is why companies should share information about their algorithms with regulators before deploying them in sensitive applications such as hiring.
Of course, such an arrangement requires striking a balance. Big tech companies see AI as the next big thing, and innovation in this space has become a geopolitical race. But innovation often happens when companies can keep some of their technology to themselves, so there is always a risk that too much regulation will hinder progress.
Some believe that the absence of major AI innovation in the EU is a direct result of its stringent data protection laws. But many of the potential economic benefits of developing AI could still backfire if we don’t hold companies accountable for the consequences of their algorithms.