AI: unknown, but not unloved despite a lack of trust
- October 25, 2023
Do you know how best to use AI in your job? Or what impact AI tools have on the security of your company data? Anyone who answers “no” is no exception. Hardly anyone offers training for employees, and frameworks to keep the use of AI under control are still under development. That does little for trust in the technology, but it does not stop its spread.
Only seven percent of companies offer AI training for their employees. “93 percent don’t understand the problem of AI,” concludes Erik Prusch, the newly appointed CEO of Isaca, though he is not surprised. “Today there are actually no AI professionals yet. People are not trained and therefore do not understand the risks of the technology. And if you don’t understand the risks, how can you put controls in place to protect data or enforce compliance?”
93 percent do not understand the problem of AI.
Erik Prusch, CEO Isaca
The relationship between AI and digital trust is the focus of Isaca’s Digital Trust World Summit in Dublin. “The lack of knowledge and the impact on trust will be the focus in the coming days,” confirms Prusch.
This is necessary, and the numbers confirm it. Isaca surveyed 2,300 professionals worldwide, including 334 in Europe. Already it is clear there is little certainty about who may use AI and why. “29 percent of organizations actually allow the use of AI,” says Isaca’s Chris Dimitriadis, “but 48 percent still use the technology.”
He’s not surprised. “New technologies are usually introduced on an ad hoc basis. We’ve also seen this with other trends, such as smartphones or shadow IT. This trend cannot be stopped. When new technologies become available, people want to use them.”
That doesn’t mean that simply using AI in your company is a good idea. Dimitriadis: “If you use AI in an uncontrolled manner, it can bring risks. We have already seen data leaks due to data being shared with AI in the cloud. Accuracy is also a risk. AI is not sophisticated enough to provide confidence in the results.”
Prusch pinpoints the problem: “Today anyone can go to ChatGPT, pass sensitive company information to the AI and get something in return. There is nothing that keeps track of what data you send and what data you receive. This applies to all employees, not just cybersecurity specialists.”
“We are all playing catch-up,” says Prusch. “AI is currently a systemic risk before it can become a systemic benefit.” That reality undermines people’s trust in the technology. During trust expert Rachel Botsman’s keynote, only a handful of people dare to raise their hands when she asks who trusts ChatGPT to write their emails. It is not trusted with financial advice at all. That the question comes right after a packed masterclass on ChatGPT does not inspire confidence either.
Trust is a confident relationship with the unknown
Rachel Botsman, trust expert
“My definition of trust is a confident relationship with the unknown,” says Botsman. “When you look at trust from this perspective, it is critical for taking risks and driving innovation. It allows you to take big leaps into the unknown. The more complex a system is, AI for example, the more unknowns there are and the more trust is required. AI today faces a knowledge, security and trust problem.”
According to Isaca, there are several solutions that together ensure a structurally better implementation of AI. Clear policy around AI is needed, but before this can happen, training is required. Dimitriadis: “We have to educate managers and the board about the potential, but also the risks. You have to understand it. A clear and accessible policy that is not too technical is then required. Everyone needs to understand the rules and guidelines.”
“Reducing risk is not the same as increasing trust,” Botsman clarifies. “But the lower the risk, the less confidence someone needs to show to take a leap of faith. The same goes for transparency.”
Isaca can play a dual role in reducing the leap of faith. On the one hand, the global non-profit organization issues certificates to its members proving that they have acquired certain relevant knowledge. On the other hand, Isaca provides its members with supporting materials, such as detailed frameworks. Knowledge and clear guidelines ensure less risk and more transparency.
The information comes from 170,000 experts from various industries worldwide.
Erik Prusch, CEO Isaca
“With a new technology like AI, we first try to understand what is going on and what the impact is,” says Prusch. “We have more than 170,000 members worldwide. That’s more than 170,000 data points. The information comes from all these experts around the world and from different industries. Not many organizations have this scale.” He compares Isaca’s capabilities to distributed computing. “Several individuals contribute to a larger whole.”
Isaca then starts publishing articles and white papers, and the first webinars appear. “Our knowledge of what is new naturally evolves over time. As soon as we know enough, we update our certificates. That may take a little longer, but we want to be sure that those who earn them have the right knowledge.”
Armed with this knowledge, Isaca members can go to their companies. After all, Isaca consists almost entirely of volunteers who work together on knowledge and policy, but otherwise hold regular jobs where that knowledge is put to use. Prusch: “More than ten thousand companies worldwide are supported by Isaca.”
Knowledge is one piece of the puzzle, a framework is another. Isaca is therefore working on a framework that should support the reliable and secure implementation of technology, without being outpaced by developments such as AI. The Digital Trust Ecosystem Framework is a long-term effort, explains Rolf von Roessing of the Isaca board.
This framework is already in beta and is more comprehensive than, for example, the EU AI Act. “Actually, the facts have already overtaken that law before it is even final,” says von Roessing. “We have the luxury of making suggestions rather than laws.” The framework also takes the emergence of new technologies into account. “Ultimately you have to learn to live with that, and then it is doable,” he says. The focus of development is currently on controlling AI in the organization.
We get a demo of the framework, built around the relationship between people, processes, technology and the organization. Von Roessing shows the connections between these points, which you can click through in the framework. Between “people” and “organization”, for example, sits company culture. As you keep clicking through the framework, you discover very specific points to think about, on which you can base guidelines.
“We still have a lot of work to do to finalize everything,” von Roessing acknowledges. “We are now collecting feedback from our 170,000 members to further refine the framework.” Once finished, it will be broadly applicable, and that is intentional. Prusch: “It doesn’t help to only have domain-specific solutions. We need clear, cross-industry standards.”
With the right knowledge and standards, the future looks bright, according to those surveyed. 58 percent believe that AI will have a positive impact on society in general and 86 percent expect (slight) added value for their company. Almost everyone agrees that AI will increase human productivity.
At the conference, Isaca demonstrates its commitment to helping organizations worldwide ensure their technology implementation is thoughtful and secure. It’s a race against time, that’s clear to everyone, but the committed volunteers don’t lack optimism. Now it’s all about gaining and sharing knowledge.
Source: IT Daily