Rapid developments in generative artificial intelligence (GenAI) have drawn attention to the need for robust security and governance measures in organizations. While AI often conjures up dramatic scenarios about the downfall of humanity, the more immediate and manageable risks concern data management, security, and ethical decision-making.
There is a lot at stake when companies adopt GenAI. These systems can generate new text, images, audio, or code on demand. Their security challenges differ from traditional cybersecurity risks: the technologies are new and sometimes produce unpredictable results. Experts emphasize, however, that these challenges are not insurmountable; risks can be effectively limited through strategic governance.
The rapid development of AI requires vigilance
In 2023, companies invested an estimated $19.4 billion in GenAI solutions, and projections put that figure at $151 billion by 2027. Large language models (LLMs) are used in many sectors, including tourism, insurance, financial services, and manufacturing. Without proper governance, companies risk problems such as distrust of model outputs, ethical lapses, copyright infringement, and privacy violations.
John Pescatore, director of security at the SANS Institute, highlights the importance of a comprehensive governance model and emphasizes continuous management rather than a set-and-forget approach. Effective governance can be achieved relatively quickly, and such models can be adapted over time as technology and business needs change.
Eight essential steps to success
To successfully secure GenAI deployments, the following steps are important:
- Define your AI/LLM scope and data management: Set specific goals for each GenAI system and ensure compliance with data protection regulations.
- Implement strict data hygiene: Establish detailed processes for cleaning, enriching, and validating training data to prevent data leaks and breaches.
- Establish robust data security: Protect sensitive data adequately and make clear agreements about who has access to data and models, minimizing the attack surface.
- Monitor continuously: Watch the behavior of all AI models and their associated data feeds to detect malicious activity or exposure of sensitive data.
- Limit bias: Bias is difficult to eliminate completely, but organizations should still do everything they can to reduce it, as it can heavily influence decision-making, for example in lending or recruiting.
- Rework incident response: Existing incident response playbooks were not designed with AI in mind and need to be thoroughly revised.
- Maintain flexibility: When developing guidelines, expect to adapt them as AI technology evolves.
- Ensure consistency: Apply AI governance and security consistently across the organization so that departments do not each go their own way.
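As a concrete illustration of the data-hygiene step above, the sketch below scrubs and validates records before they enter a training or prompt pipeline. It is a minimal example under assumed conventions: the regex patterns, placeholder tokens, and function names (`scrub_record`, `validate_record`) are illustrative, not a production-grade scrubber.

```python
import re

# Minimal data-hygiene sketch: redact obvious PII (email addresses and
# phone-like number sequences) from text before it reaches an LLM pipeline.
# The patterns below are deliberately simple examples, not exhaustive.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_record(text: str) -> str:
    """Replace detected PII with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def validate_record(text: str) -> bool:
    """Reject records that still contain obvious PII after scrubbing."""
    return not (EMAIL_RE.search(text) or PHONE_RE.search(text))

sample = "Contact Jane at jane.doe@example.com or +31 6 1234 5678."
clean = scrub_record(sample)
print(clean)                   # → Contact Jane at [EMAIL] or [PHONE].
print(validate_record(clean))  # → True
```

In a real pipeline, a check like `validate_record` would act as a gate: records that fail it are quarantined for review rather than silently passed on to training or inference.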
Securing AI is a team effort
If an organization wants to create a GenAI policy, it must also decide which types of LLMs should or may be used, whether built from scratch, customized, or off the shelf. It is also crucial to consider how data is separated and partitioned between departments. For this reason, security experts advocate separate LLMs for individual departments and business units to reduce the risk of widespread data breaches.
Supply chain risk management is also an integral part of a comprehensive GenAI governance plan. Not only must suppliers' governance and security protocols be assessed and monitored; they must also be aligned with the organization's own practices.
Successfully implementing these measures requires a collaborative effort from security specialists and leadership, with an emphasis on security-by-design principles. The challenge is to ensure that security measures keep pace with the rapid adoption of GenAI technologies. As John Pescatore of the SANS Institute notes, security in these scenarios often lags a few steps behind technological developments, and anyone constantly playing catch-up is in a weak position against potential adversaries.
This is a guest post by Wytze Rijkmans, Regional Vice President at Tanium. Further information about the company's services can be found on the Tanium website.