
Google equips AI models against hacker attacks

June 12, 2023

Google I/O 2023

With the introduction of the Secure AI Framework, Google supports companies in protecting their AI models from hackers.

Secure AI Framework (SAIF) is new technical guidance from Google. Among other things, it contains a number of suggestions on how companies' AI models can be better protected against hackers.

Be SAIF

The system must ensure that hackers cannot steal a neural network’s training data and code. SAIF is also intended to make it more difficult for cyber attackers to compromise an AI model.

The power of suggestions

The proposals are divided into six collections.

Number one emphasizes the importance of adapting existing cybersecurity measures to artificial intelligence, including defenses against SQL injection, an attack in which hackers smuggle bogus commands into database queries in order to steal data.
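SAIF itself ships no code, but the classic attack mentioned above is easy to illustrate. The sketch below, using Python's built-in sqlite3 module with an invented table and payload, shows how string concatenation lets an attacker rewrite a query, while a parameterized query treats the same input as plain data:

```python
import sqlite3

# Throwaway in-memory database for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is concatenated into the SQL text, so the
# payload rewrites the WHERE clause and every row matches.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query binds the input as a value, not SQL,
# so the payload matches no user and nothing is returned.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the secret leaks: [('s3cret',)]
print(safe)        # the injection attempt fails: []
```

The fix costs one character of query text; the difference is that the driver, not string formatting, decides where data ends and SQL begins.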

The second suggestion concerns the detection of potential threats. According to Google, it is better to proactively search for malicious AI content: administrators should implement additional procedures instead of relying solely on their existing cybersecurity systems.

Proposal number three addresses the question of how AI itself can strengthen cybersecurity. According to Google, AI can simplify complex tasks such as analyzing code or malware, though the tech giant emphasizes that human oversight remains essential.

The fourth proposal recommends harmonizing controls across platforms to ensure consistent security for different AI applications.

Number five focuses on faster feedback loops for AI systems and the steps needed to keep them learning, so the systems stay up to date.

Finally, number six recommends examining the end-to-end risks a company may face when using AI.

Further recommendations

Additionally, Google recommends that security teams regularly take stock of the AI systems employees use and standardize the tools they rely on to do their jobs.

All suggestions and further information can be found in the report.

For security reasons, Google recently added biometric authentication to its password manager. For companies looking for additional help with generative AI, the tech giant also launched new consulting offerings a few days ago.

Source: IT Daily
