The debate about artificial intelligence (AI) in the public sector is often characterized by extremes. AI is either presented as a revolutionary solution to all challenges or dismissed as impractical, overrated and unsafe. We believe that a responsible AI debate requires technology providers to acknowledge concerns and contribute to informed choices. Here are three security challenges we often hear about, and what you should ask before implementing new solutions.
One of the concerns we encounter most often in discussions about AI in the public sector is that AI poses a security risk and is unsuitable for handling sensitive data. Many fear, with good reason, that AI systems could share information with other countries, be vulnerable to leaks, or be used for surveillance.
These are serious challenges, but they are not unique to AI: the public sector has managed the same categories of risk in its IT systems for decades, and they can be handled here too with the right choices. Responsible AI implementation therefore requires thorough risk assessments and clear data access protocols.
The technology must be developed to comply with regulations for privacy and data processing, and to ensure that data remains within safe jurisdictions. Our platform, for example, has been through the Norwegian Data Protection Authority's sandbox project on the safe and lawful processing of personal data, and can be connected to personal email. Although we use language models from providers such as OpenAI, we run them in our own closed environments so that data stays within Norwegian and European borders (a simplified sketch of this kind of region pinning follows the list below). That is why it is crucial to ask the right questions before adopting AI, so that the technology is implemented in a way that ensures both security and privacy.
Here are the questions that should be asked:
- Where is the data stored and processed, and does it remain within Norwegian or European jurisdictions?
- Does the solution comply with regulations for privacy and data processing, such as GDPR?
- Are the language models run in a closed environment, or is data shared with third parties?
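To make the jurisdiction point concrete, here is a minimal, hypothetical sketch of what such a "closed environment" can look like in code. This is not Duo's actual setup: it assumes an Azure OpenAI resource deployed in an EU region (one common way to pin OpenAI models to a jurisdiction), and the endpoint, deployment name, and API version are illustrative.

```python
# Hypothetical sketch (not Duo's setup): calling an OpenAI model through
# an Azure OpenAI resource created in an EU region, so prompts and data
# are processed within that jurisdiction. All names are illustrative.
from openai import AzureOpenAI

client = AzureOpenAI(
    # Endpoint of a hypothetical resource deployed in an EU region
    # (e.g. Sweden Central); requests are handled in that region.
    azure_endpoint="https://my-eu-resource.openai.azure.com",
    api_version="2024-02-01",
    api_key="<read from a secure vault, never hard-coded>",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # deployment name on the EU resource
    messages=[{"role": "user", "content": "Summarize this case note."}],
)
print(response.choices[0].message.content)
```

The specific vendor here is an assumption; the point is that you should be able to state, and verify, exactly which region processes your data.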
Another real concern is that AI is not 100% accurate and can generate information that does not match reality. This is known as hallucination. AI based on language models hallucinates because it does not understand reality; it only predicts words from patterns in its training data, which can lead it to invent incorrect information. This happens especially when the model lacks enough relevant data or tries to answer something it does not really “know”. When this happens in critical and sensitive tasks, the consequences can be serious.
For the public sector, it is crucial to understand where and how such errors can occur, and what measures can reduce the risk. An effective way to minimize this problem is to implement AI in a step-by-step process where each assessment occurs within clearly defined boundaries.
This gives better control over how the AI makes decisions and reduces the likelihood of errors spreading through complex work processes. Another very effective method is to combine this with threshold values: how confident must the AI be in its assessment before it is allowed to take a given action, and when should the task be handed to a human instead? With a platform like Duo, where you can control this, you govern the process and minimize the risk factors related to hallucination.
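As an illustration of what such threshold values can look like, here is a minimal sketch in Python. The structure, the names, and the 0.9 threshold are hypothetical, not Duo's actual API:

```python
# Minimal sketch of a confidence-threshold gate: the model's assessment
# is only acted on automatically if its confidence clears a threshold;
# otherwise the case is routed to a human. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Assessment:
    decision: str      # the action the model proposes
    confidence: float  # the model's calibrated confidence, 0..1

CONFIDENCE_THRESHOLD = 0.9  # tuned per task; stricter for sensitive steps

def route(assessment: Assessment) -> str:
    """Auto-execute only when the model is confident enough."""
    if assessment.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {assessment.decision}"
    # Below the threshold, a human decision maker takes over.
    return f"escalated to human review (confidence={assessment.confidence:.2f})"

print(route(Assessment("approve application", 0.97)))  # auto-executed
print(route(Assessment("reject application", 0.62)))   # human review
```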
Here are the questions that should be asked:
- Where in the workflow can hallucinations occur, and what would the consequences be?
- Is the AI implemented step by step, with clearly defined boundaries for each assessment?
- Can you set confidence thresholds that determine when a task is escalated to a human?
Many AI solutions function a bit like a “black box”: you feed in a question or task and receive a ready-made decision, without being able to fully understand or verify how that decision was reached. This creates challenges, especially in the public sector, where transparency and verifiability are crucial.
Ensuring traceability and verifiability requires an AI solution that gives you full control, and this is exactly what our own AI solution, Duo, is built for. Duo lets you break down processes, follow the decision basis, and insert checkpoints. When the AI is uncertain, it can alert or involve a human decision maker. This provides safe automation with full traceability and full insight into how each step of the process was handled.
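To make this concrete, here is a minimal, hypothetical sketch of what checkpoints with an audit trail can look like; the structure and names are illustrative, not Duo's internals:

```python
# Hypothetical sketch of a traceable step-wise pipeline: each step logs
# what it saw and what it decided, so the decision basis can be audited.
import json
import time

audit_trail = []

def checkpoint(step: str, data: dict, confidence: float) -> None:
    """Record one step of the process for later verification."""
    audit_trail.append({
        "step": step,
        "timestamp": time.time(),
        "data": data,
        "confidence": confidence,
    })
    if confidence < 0.8:  # illustrative threshold
        print(f"ALERT: human review needed at step '{step}'")

# Each assessment happens within a clearly defined boundary:
checkpoint("classify_document", {"type": "invoice"}, confidence=0.95)
checkpoint("extract_amount", {"amount": "12 400 NOK"}, confidence=0.71)

# The full decision basis can be exported and verified afterwards.
print(json.dumps(audit_trail, indent=2))
```

Because every step records its input, output, and confidence, the process can be reconstructed and verified after the fact.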
These processes are critical, and success depends as much on technology choice as on the expertise to implement it in a secure and user-friendly manner.
Here are the questions that should be asked:
- Can the decision basis be traced and verified at every step of the process?
- Can the process be broken down into steps with checkpoints along the way?
- Does the solution alert or involve a human decision maker when the AI is uncertain?
Concerns about AI and security in the public sector are justified. An organization that handles large amounts of sensitive data and processes must approach new technology with caution. The outcome, however, depends on how the technology is developed and implemented: there are critical differences between types of solutions and between implementations.
When security is put at the center from the start, AI can be a valuable tool without compromising it.
Are you curious about how AI can be used for automation?
Contact us for a no-obligation chat.