
Artificial intelligence needs to be protected where it poses a serious threat to privacy – analyst

January 16, 2024

AI is a tool and should be treated like any other tool; that is, protection measures and requirements should be put in place where it poses a serious threat to human privacy and security.

Ihor Rozkladai, deputy director of the Center for Democracy and the Rule of Law (CEDEM), stated this during the presentation of a guide to artificial intelligence tools for non-governmental organizations at Ukrinform.

“AI is a tool, and I agree that it should be treated like any other tool. In other words, I agree that protection measures should be taken and requirements put into practice where it poses a serious threat to privacy, security and, above all, human life. And we certainly should not give it the final decision-making function,” he said.

Ihor Rozkladai

Rozkladai noted that artificial intelligence could, for example, be a helpful tool for identifying images of criminals online.

“But it should not be artificial intelligence that decides that this is a criminal who should be shot. A reasonable, sufficiently balanced approach should allow us to interact normally with this phenomenon,” he said. The analyst believes that artificial intelligence should certainly not be feared and prohibited, but that it is necessary to keep working on it and to conduct risk analysis constantly.

Francesca Fanucci, legal advisor at the European Center for Not-for-Profit Law (ECNL), said the level of risk to people can vary depending on the purpose and context in which the technology is used.

Francesca Fanucci

“One of the first risks is false positive or negative identifications, the so-called ‘false positives’. We must always remember that an algorithm never returns a final result, only probabilities. This means there is a risk of mistaken identity in situations where large numbers of people are being checked collectively, for example on the street, in a square, or where there are many cars,” the ECNL legal advisor said.

According to her, Ukraine is currently under martial law, and there are international standards governing derogations from certain human rights in such situations.

“These derogations must be exceptional, temporary and justified only by the nature and urgency of the situation. So, for example, if these cameras (with facial recognition) were installed because of the war, they need to be removed after the war is over, or retained only on the basis of a new risk assessment,” Fanucci believes.

Rozkladai believes that technologies such as these cameras should be very tightly regulated.

“Just like medical data, access to biometric data should be limited as much as possible, with maximum logging. There should be a very limited retention period for these videos and the like; otherwise it is the road to China, where every police officer looking at you already knows everything about you,” he said.

As Ukrinform reported earlier, OpenAI has lifted its ban on the use of its artificial intelligence for military purposes.

Source: Ukrinform
