Artificial intelligence (AI) was more hyped than ever last year. Most companies are building, or at least experimenting with, AI models to support their business. Yet one prerequisite is essential for every application: trust. To reap the benefits of an AI model, end users must have confidence in the results it generates.
Trust in AI starts with the data you use. Two previous articles in this series covered how to deal with the biggest stumbling blocks when working productively with data and how to strengthen trust in data through the right architecture (e.g. a data catalog). But the models we apply to that data must inspire trust as well.
Most models are a black box that produces results. Transparency and interpretability of those results are therefore crucial to ensure that people trust this black box. There are several straightforward ways to make AI models interpretable:
Natural language explanation
Suppose a model produces a chart and two people look at it. Chances are they will interpret the results differently, yet only one interpretation is correct: every model is, after all, based on mathematics and statistics. For most people, reading statistical results correctly is simply very difficult. Fortunately, there are techniques that help convey the underlying statistical logic.
Natural language explanation, a technique similar to what powers applications like ChatGPT, presents this logic in natural language and in a user-friendly way. Research shows that accompanying text makes the results clearer and more understandable for end users. And once users understand the results better, they automatically gain more confidence in the model.
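To make this concrete, here is a minimal sketch of the idea (our own illustration, not a SAS feature): a toy loan model whose prediction is rendered as one readable sentence. The data, feature names and the simple coefficient-based attribution are all assumptions for illustration.

```python
# Minimal sketch of a natural-language explanation layer around a toy model.
# Everything here (data, features, attribution method) is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: [income, debt ratio] -> loan approved (1) or rejected (0).
X = np.array([[60, 0.2], [25, 0.6], [80, 0.1], [30, 0.5], [50, 0.3], [20, 0.7]])
y = np.array([1, 0, 1, 0, 1, 0])
feature_names = ["income", "debt ratio"]

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Render the model's decision as one readable sentence."""
    prob = model.predict_proba([applicant])[0, 1]
    # Rough attribution: how each feature moves the score relative to an
    # average applicant (coefficient times deviation from the mean).
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    strongest = int(np.argmax(np.abs(contributions)))
    direction = "supports" if contributions[strongest] > 0 else "works against"
    verdict = "approved" if prob >= 0.5 else "rejected"
    return (f"The application is {verdict} ({prob:.0%} approval score); "
            f"the applicant's {feature_names[strongest]} {direction} approval the most.")

print(explain(np.array([28.0, 0.55])))
```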
Counterfactual explanation
Research shows that counterexamples often help people grasp an explanation more quickly. We can apply the same technique to AI models. If a bank's model rejects a loan application, you could explain why the applicant cannot get a loan. But you can also reason the other way around and describe under which conditions the loan would have been granted: for example, if the customer had made a higher personal contribution or repaid the loan over a slightly longer term. Especially now that AI is finding its way into more and more areas, techniques like counterfactual explanation are useful for explaining the context of decisions to a broad audience.
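As a hedged sketch of the technique, the toy example below searches for the smallest change to a rejected application that would flip the model's decision. The model, feature grid and cost metric are illustrative assumptions, not a production counterfactual engine.

```python
# Illustrative counterfactual search: find the smallest tweak that turns a
# rejected loan application into an approved one.
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

# Toy history: [personal contribution (k EUR), term (years)] -> granted (1) / rejected (0).
X = np.array([[30, 25], [5, 10], [25, 30], [8, 12], [20, 20], [4, 8]])
y = np.array([1, 0, 1, 0, 1, 0])
features = ["personal contribution (k EUR)", "term (years)"]

model = LogisticRegression().fit(X, y)

def counterfactual(applicant, step_grids):
    """Grid-search the smallest increase that turns a rejection into an approval."""
    best, best_cost = None, float("inf")
    for deltas in product(*step_grids):
        candidate = applicant + np.array(deltas, dtype=float)
        cost = sum(deltas)  # crude size-of-change metric, for illustration only
        if cost < best_cost and model.predict([candidate])[0] == 1:
            best, best_cost = candidate, cost
    return best

applicant = np.array([6.0, 11.0])
if model.predict([applicant])[0] == 0:
    cf = counterfactual(applicant, [np.arange(0, 21, 5), np.arange(0, 16, 5)])
    if cf is not None:
        changes = [f"{name}: {a:g} -> {c:g}"
                   for name, a, c in zip(features, applicant, cf) if c != a]
        print("Rejected as-is, but it would have been granted with", "; ".join(changes))
```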
Model cards
What is the scope of a project? What are the goals? And what are the risks? To capture this information, we attach "model cards" to AI models. The concept comes from the risk domain, where such a card is even mandatory as part of compliance. AI is now so ubiquitous that SAS is making the technique available for other applications as well. Especially now that the EU AI Act and other regulations place greater emphasis on governance, model cards are effectively a must.
In any case, model cards help you use AI models more safely, because they make clear what a model is intended for. A simple example: if you have built a model that generates images or photos, you should not expect it to produce cartoons as well. Maybe that will work, but the model simply was not designed for it. Knowing this helps you understand when you can trust a model and when its results may be less reliable.
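In practice, a model card can be as simple as a structured record attached to the model. The sketch below follows the general model-card idea; the fields and values are a generic illustration, not SAS's specific format.

```python
# Illustrative model card as a plain data structure; all values are placeholders.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    out_of_scope: list = field(default_factory=list)   # uses the model was NOT designed for
    training_data: str = ""
    known_risks: list = field(default_factory=list)
    owner: str = ""

card = ModelCard(
    name="photo-generator-v2",
    intended_use="Generating photorealistic images from text prompts.",
    out_of_scope=["Cartoons and other stylised artwork",
                  "Depictions of identifiable real persons"],
    training_data="Licensed stock photography (illustrative placeholder).",
    known_risks=["Output quality degrades for prompts outside the training domain."],
    owner="imaging-team@example.com",
)

print(f"{card.name}: intended for {card.intended_use.lower()}")
```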
Focus on security
Finally, trust in technology always goes hand in hand with security. When we think of analytics and AI, we immediately think of securing data and infrastructure, but security also matters at the level of the model itself. Recent studies show that it is theoretically possible to reconstruct the original data a model was trained on from the data you feed it: every time you enter data, the model generates a result, and if you query the right AI model systematically enough, you can retrieve the underlying training data from the black box.
Although no malicious applications of this kind are known yet, we must of course take into account that it is possible. It would be a disaster for a bank, for example, if customer data suddenly became accessible via an online AI application despite all security measures. We would therefore do well to extend our privacy and security strategy to the level of the models we build, for example by monitoring how often a model is called.
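As one possible sketch of such model-level monitoring, you could count calls per caller in a sliding window and block callers that exceed a query budget. The window size, threshold and in-memory store below are illustrative assumptions.

```python
# Minimal sketch of query monitoring around a model: heavy, systematic
# querying can indicate an attempt to reconstruct training data.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600        # sliding one-hour window (assumed)
MAX_CALLS_PER_WINDOW = 500   # assumed budget; tune to your normal traffic

_recent_calls = defaultdict(deque)  # caller id -> timestamps of recent calls

def record_call(caller_id, now=None):
    """Log one model invocation; return True if the caller exceeds the budget."""
    now = time.time() if now is None else now
    window = _recent_calls[caller_id]
    window.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_CALLS_PER_WINDOW

def predict_with_monitoring(model, caller_id, x):
    """Wrap the real model call behind the query budget."""
    if record_call(caller_id):
        raise PermissionError(f"Caller {caller_id!r} exceeded the hourly query budget")
    return model.predict(x)
```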
Turn trust into a mindset
From eliminating bias to techniques that increase trust in AI: at first glance these processes seem only to slow down productivity and the use of analytics in the company. That may be true initially, but in the long run your processes will run ever more smoothly. After all, the life cycle of a model extends beyond development.
If you set up the governance component correctly and include trust as a standard part of your model development and monitoring strategy, you will have less work to do in those later stages and can therefore work with AI more efficiently. Not only will end users be more likely to trust your model's decisions, but your business will also be better prepared for scenarios where something could go wrong.
The conclusion? As AI makes its way into all areas of society, we need to foster trust in the technology more than ever. Many techniques that until recently were mainly important in the financial world are now a must for other applications too. That is why all functionalities related to trust and governance are included as standard in the SAS software package, so that companies can benefit optimally and worry-free from the countless advantages AI has to offer.
This article is a contribution from SAS. It is the third article in a series about working productively with analytics and was written by Véronique Van Vlasselaer (Analytics & AI Lead SWEE, SAS) and Mathias Coopmans (EMEA Technology Futures Lead Architecture & Cloud, SAS). The other parts of the three-part series can be found here.