
Can autonomous cars decide to kill the driver so they don’t run over a group of pedestrians?

  • July 24, 2022

Technologies that allow cars to drive themselves have been in development for a long time. While companies like Tesla are pioneers in this field, traditional automakers are also making serious investments in it. However, technology is not the only problem that autonomous vehicles face.

The question of how autonomous vehicles should make decisions has been at the center of discussion for some time. Debates like “If the car has to hit someone, should it hit the baby or the elderly person?” are well known, but there is an even more mind-bending dilemma: should autonomous vehicles protect their passengers in all situations, or can they sacrifice their passengers for the greater good?

How do you determine who will live?

Let’s look at the scenario used in these discussions. An autonomous vehicle is alone on the road. As it turns a corner, it suddenly encounters a large group of people. In this situation, should the vehicle protect its passenger, or should it crash into a wall to minimize the number of deaths and injuries? What would you say if you were the passenger of the vehicle?
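To make the dilemma concrete, here is a minimal toy sketch (my own illustration, not how any real autonomous vehicle is programmed) of a purely utilitarian controller: it simply picks the maneuver with the fewest expected casualties, no matter whose lives are at stake. The function and scenario names are invented for this example.

```python
def choose_maneuver(options):
    """Pick the option with the fewest expected casualties.

    options: list of (maneuver_name, expected_casualties) tuples.
    """
    # A strictly utilitarian rule: minimize the casualty count,
    # with no special weight given to the vehicle's own passenger.
    return min(options, key=lambda option: option[1])


# The corner scenario from the article: stay on course and hit the
# group of pedestrians, or swerve into the wall and sacrifice the
# single passenger. The casualty numbers are illustrative.
scenario = [
    ("stay_on_course", 10),   # hit the pedestrian group
    ("swerve_into_wall", 1),  # sacrifice the passenger
]

print(choose_maneuver(scenario)[0])  # -> swerve_into_wall
```

Note that this is exactly the rule most survey respondents endorse in the abstract, yet hesitate to accept when they imagine themselves as the passenger.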

Jean-François Bonnefon, a researcher from Toulouse Business School, has contributed to the moral and ethical debate on this topic with a paper of his own. According to this research, as autonomous vehicles become more widespread, both the probability that they will have to make such decisions and the frequency with which they make them will increase. The decisions vehicles make in these situations will also play an important role in the adoption of the technology. According to the researchers, policymakers and manufacturers should draw on psychologists and applied ethics studies to determine how autonomous cars ought to behave.

Even people’s decisions are not always consistent.

The study in question was conducted on Mechanical Turk, Amazon’s online crowdsourcing and research platform. Participants were presented with different scenarios, one of which was the scenario described above. In addition, similar scenarios were presented in which the number of passengers in the vehicle or the ages of the passengers varied.

The results weren’t all that surprising: in general, people chose to save the lives of others by giving up the driver’s life, but one small detail stood out. People only made this choice when they were not the driver themselves. Another notable point: while 75% of respondents considered driving off the road to be morally right, only 65% of the same participants thought vehicles should actually be programmed that way. Overall, the dominant opinion among participants was that autonomous vehicles should act to minimize fatalities in a potential accident.

There is also the paradox of autonomous vehicles.

If we look at sources such as MIT Technology Review, the dominant view is that self-driving cars are safer than human drivers. This brings us to a new dilemma: if fewer people choose smart cars because those cars might endanger their drivers, then more people will end up in traffic accidents.

It is almost beyond question that autonomous vehicles will be the future of transport, and they could change the concept of travel on a global scale. Still, there are several obstacles to overcome, and bringing together artificial intelligence and ethics will be one of the most important.

In fact, this topic has been covered a lot in science fiction.

In science fiction, androids, robots, autonomous vehicles, smart vacuum cleaners — in general, all of these systems act in accordance with fundamental laws of robotics. These laws, however, were not drafted at some artificial intelligence summit or other; they are the Three Laws formulated by the legendary science fiction writer Isaac Asimov. The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the commands of a human, as long as they do not conflict with the first rule.
  3. A robot must protect its own existence as long as it does not conflict with the first and second rules.
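The interesting part of these laws is their strict priority order: each law yields to the ones above it. A toy sketch (again my own illustration, with invented names) can show how that precedence plays out — ranking candidate actions lexicographically, so any violation of the First Law outweighs any violation of the Second, and so on:

```python
def rank_actions(actions):
    """Order candidate actions by Asimov's Three Laws.

    Each action is a dict with boolean flags describing its consequences.
    Sorting by a (harms_human, disobeys_order, harms_self) tuple gives a
    lexicographic ordering: First Law dominates Second, Second dominates
    Third. The best (most lawful) action comes first.
    """
    return sorted(actions, key=lambda a: (a["harms_human"],
                                          a["disobeys_order"],
                                          a["harms_self"]))


# A human orders the robot to do something that would harm another
# person. Refusing the order violates the Second Law, but obeying it
# would violate the First — so the lawful robot refuses.
actions = [
    {"name": "obey_harmful_order",
     "harms_human": True, "disobeys_order": False, "harms_self": False},
    {"name": "refuse_order",
     "harms_human": False, "disobeys_order": True, "harms_self": False},
]

print(rank_actions(actions)[0]["name"])  # -> refuse_order
```

The trouble, as the article's scenario shows, is that the trolley-style dilemma harms a human no matter which action the vehicle picks — so this precedence alone cannot resolve it.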

As can be seen, our initial scenario finds no answer in these laws: whatever the vehicle does, a human is harmed. Asimov addressed such situations in his 1985 science fiction novel Robots and Empire. According to the so-called Zeroth Law introduced there, advanced robots must protect humanity as a whole, even at the cost of harming an individual human being.

How do you think an autonomous vehicle should behave in this scenario?

Source: Web Tekno
