Study sheds light on AI’s dark side
- April 7, 2023
Today, artificial intelligence is touted as a panacea for almost every computing problem, from medical diagnostics to driverless cars to fraud prevention. But Vern Glaser of the University of Alberta's Alberta School of Business notes that the results can be "quite extraordinary" when AI fails. In his latest study, "When Algorithms Rule, Values Can Wither," Glaser explains how human values are often sacrificed to AI's drive for efficiency, and why the costs can be high.
"If you don't actively think through the value implications, it will have bad consequences," he says.
Glaser cites Microsoft's Tay as an example of AI gone wrong. When the chatbot was introduced on Twitter in 2016, it was withdrawn within 24 hours after trolls trained it to spew racist remarks.
Then came the 2015 robo-debt scandal, in which the Australian government used artificial intelligence to detect overpayments of unemployment and disability benefits. The algorithm treated every income discrepancy as evidence of an overpayment and automatically sent notification letters demanding repayment. If someone didn't respond, the case was handed over to a debt collector. By 2019, the program had flagged more than 734,000 payments worth A$2 billion (CAD 1.8 billion).
"The idea was that by removing human judgments, which are shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at a much lower cost," says Glaser.
But the human consequences were dire. Parliamentary reviews found a "fundamental lack of procedural fairness" and described the program as "incredibly disempowering to those people who had been affected, causing significant emotional trauma, stress and shame," including at least two suicides.
Glaser says that although artificial intelligence promises enormous benefits to society, we are now starting to see its dark side as well. In a recent Globe and Mail column, Lawrence Martin points to the dystopian possibilities of artificial intelligence, including autonomous weapons that can fire without human supervision, cyberattacks, deepfakes and disinformation campaigns. Former Google CEO Eric Schmidt has warned that artificial intelligence could easily be used to create deadly biological weapons.
Glaser bases his analysis on the concept of "technique" proposed by the French philosopher Jacques Ellul in his 1954 book The Technological Society, in which every field of human activity comes to be governed by the imperatives of efficiency and productivity.
"Ellul was very forward-thinking," says Glaser. "His argument is that when you go through this process of technique, you create a mechanistic world in which values are essentially reduced to efficiency.
“It doesn’t matter if it’s AI or not. AI is, in many ways, probably the best example of that.”
To guard against the "tyranny of technique" in AI, Glaser recommends three principles. First, recognize that because algorithms are mathematical, they rely on "proxies," or digital representations of real phenomena. For example, Facebook evaluates friendship by the number of friends a user has or the number of likes their friends' posts receive.
“Is that really the measure of friendship? It’s the measure of one thing, but whether it’s truly friendship is another question,” says Glaser, adding that the intensity, nature, nuance, and complexity of human relationships can easily be overlooked.
"When you datafy things, you represent something as a number. And once you operate on those numbers, it's easy to forget that each one is just a simplified version of a broader concept."
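The proxy problem Glaser describes can be sketched in a few lines of code. This is a purely illustrative example, not anything from the study: the function name, inputs, and weights are all hypothetical, chosen only to show how a metric collapses a rich relationship into easily counted numbers.

```python
# Hypothetical "friendship score" of the kind Glaser critiques:
# the proxy reduces a relationship to two countable quantities.
def friendship_score(num_mutual_friends: int, num_likes: int) -> float:
    """Proxy metric; the 0.7/0.3 weights are arbitrary illustrative choices."""
    return 0.7 * num_mutual_friends + 0.3 * num_likes

# Two very different relationships can collapse to the same score,
# which is exactly the nuance the proxy cannot see.
close_friend = friendship_score(num_mutual_friends=2, num_likes=50)
acquaintance = friendship_score(num_mutual_friends=20, num_likes=8)
print(close_friend, acquaintance)  # both evaluate to 16.4
```

The point of the sketch is that the metric is well-defined and efficient to compute, yet says nothing about the intensity or nature of the relationship it claims to measure.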
For AI developers, Glaser recommends strategically incorporating human intervention into the decision-making algorithm and creating evaluation systems that consider multiple values.
"People have a tendency to build a decision-making algorithm once and then let it run," he says. But an algorithm needs close and constant monitoring to keep its uglier potential from unfolding.
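Glaser's recommendation of building human intervention into the decision loop can be illustrated with a minimal sketch. This is an assumption-laden toy modeled loosely on the robo-debt example, not the government's actual system: the function, threshold, and action labels are all hypothetical. The key design choice is that no apparent overpayment ever triggers an automatic notice; every flagged case is routed to a person.

```python
# Minimal human-in-the-loop sketch (hypothetical): instead of auto-issuing
# a debt notice, a detected income discrepancy returns an action that
# always routes apparent overpayments to a human reviewer.
def assess_discrepancy(reported: float, averaged: float,
                       threshold: float = 100.0) -> str:
    """Compare reported income against an averaged estimate and
    decide what to do; the threshold is an illustrative choice."""
    gap = averaged - reported
    if gap <= 0:
        return "no_action"          # no apparent overpayment
    if gap > threshold:
        return "priority_human_review"  # large gap: escalate, still human-reviewed
    return "human_review"           # small gap: never auto-issue a notice

print(assess_discrepancy(1000, 950))  # no_action
print(assess_discrepancy(900, 950))   # human_review
print(assess_discrepancy(500, 950))   # priority_human_review
```

Contrast this with the robo-debt design, where the equivalent of `human_review` was replaced by an automatic demand letter, removing exactly the judgment Glaser argues must stay in the loop.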
In other words, AI reflects who we are at our best and our worst. Without a careful look in the mirror, the worst can take over.
“We want to make sure we understand what’s going on so the AI doesn’t control us,” he says. “It’s important to remember the dark side. If we can do that, it can be a force for the public good.”
Source: Port Altele
As an experienced journalist and author, Mary has been reporting on the latest news and trends for over 5 years. With a passion for uncovering the stories behind the headlines, Mary has earned a reputation as a trusted voice in the world of journalism. Her writing style is insightful, engaging and thought-provoking, as she takes a deep dive into the most pressing issues of our time.