October 28, 2024

“Half of cybersecurity teams have no say in AI adoption and development”

Almost half of companies do not involve security teams in the implementation and adoption of AI solutions. AI plays a role in cybersecurity by improving detection, response, and endpoint security.

Only 35 percent of cybersecurity professionals say they are actively involved in developing their company’s AI policies. However, almost half (45 percent) of security professionals are not involved in developing or implementing AI solutions at all. This is according to ISACA’s 2024 State of Cybersecurity Report, launched in Dublin last week.

Security as an afterthought

More than 1,800 cybersecurity professionals were surveyed about the role of security in AI. The results reveal a striking gap between the rollout of AI projects and the involvement of security experts, which means AI risks repeating the mistakes of the past.

It is by now well established that security must be a priority from the very start of software development and deployment; bolting security onto an existing solution rarely works well. In the case of AI, however, security appears to have taken a back seat, and that creates foreseeable risks.

ISACA is committed to digital trust in the professional context and uses the survey to signal that a great deal of work still lies ahead. Cyber threats are becoming increasingly complex and sophisticated, so it is important to involve security specialists in AI projects as well. AI can also play a valuable role in security itself, but so far only a minority has embraced it.

AI and security

Companies that combine AI and security do so primarily to improve their threat detection and response capabilities: 28 percent of respondents say such projects are underway. Endpoint security is another popular area for AI use (27 percent).

Companies are also trying to address the shortage of security professionals by automating routine tasks; 24 percent of respondents say they use AI for this. A smaller group (13 percent) uses AI to detect fraud.

Challenges

Alongside the survey, ISACA identifies four important trends around digital trust in the near future. AI-driven threats top the list: the organization points primarily to generative AI, which lets criminals generate highly convincing phishing emails with little effort, and it expects deepfakes and personalized attacks to become more common.

The second major challenge is the gap between the number of security professionals needed and the talent available. AI can narrow this gap somewhat, but the rapid digital transformation of European companies is exacerbating the problem.

The third trend ISACA points to is the growing body of regulation that companies must comply with, with the AI Act and NIS2 as the most prominent examples. All of this culminates in a fourth trend: the role of the cybersecurity specialist becomes key to future success, because all of these challenges revolve around security.

Source: IT Daily
