suomi.fi
Good practices for Service Developers

Using AI responsibly

Be properly prepared

Identify security threats and opportunities

AI poses new security challenges. It enables cybercriminals and hostile actors to enhance their operations, automate attacks and target them more precisely by detecting vulnerabilities in target systems.

On the other hand, the analytics capabilities of AI make it possible to develop security systems that can scan and assess incoming traffic in a system more accurately than current procedures.
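As an illustrative sketch of this idea, the snippet below flags unusual traffic volumes with a simple statistical test. The function name, threshold and traffic figures are this example's own assumptions, not part of any particular security product:

```python
from statistics import mean, stdev

def flag_anomalies(request_counts, threshold=2.5):
    """Flag time windows whose request count deviates more than
    `threshold` standard deviations from the mean (a z-score test)."""
    mu = mean(request_counts)
    sigma = stdev(request_counts)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, count in enumerate(request_counts)
            if abs(count - mu) / sigma > threshold]

# A spike in window 5 stands out against otherwise steady counts.
traffic = [102, 98, 101, 99, 100, 950, 103, 97, 100, 101]
print(flag_anomalies(traffic))  # → [5]
```

A production system would use far richer features than raw counts, but the principle is the same: learn what normal traffic looks like and assess deviations from it.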

Every AI system that uses multiple data sources, is connected to other systems and affects the activities and rights of organisations or people operates as part of an ecosystem. As such, it is responsible not only for its own reliability and security but also for the reliability and security of the entire ecosystem. This is why you have to build in the perspectives of security by design and privacy by design when designing a system.

Updated: 9/11/2023

Only a documented system can be audited

To make it possible to audit an AI system, it has to be documented. Errors and faulty behaviour can only be fixed if the documentation is detailed and up to date.

The European Union’s position on AI sets out what an organisation must document about its AI system:

  • the objective and purpose of the system
  • known capabilities and limitations of the system
  • the conditions under which the system is meant to operate
  • the expected accuracy of outputs that will achieve the objective set for the system
  • a description of the training data used in programming the algorithm, and of how that data is stored
  • the training data itself for high-risk AI systems.

However, this documentation can pose challenges in practice. The black box of algorithmic reasoning also makes it difficult to react to errors. How can you fix something when you don’t know how it works?
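As a sketch, the documentation items listed above could be kept as a structured, machine-readable record alongside the system itself. The field names and the example values here are illustrative assumptions, not an official EU schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative record of the documentation items listed above."""
    objective: str                  # objective and purpose of the system
    capabilities: list              # known capabilities of the system
    limitations: list               # known limitations of the system
    operating_conditions: str      # conditions the system is meant to operate under
    expected_accuracy: float       # expected accuracy of outputs
    training_data_description: str  # training data and how it is stored
    high_risk: bool = False        # high-risk systems must also retain the data itself

# Hypothetical example entry.
record = AISystemRecord(
    objective="Route citizen enquiries to the right agency",
    capabilities=["text classification in Finnish and Swedish"],
    limitations=["not validated for legal advice"],
    operating_conditions="a human reviewer confirms every routing decision",
    expected_accuracy=0.95,
    training_data_description="anonymised enquiry logs, stored on-premises",
)
print(record.objective)
```

Keeping the record in version control next to the system's code is one way to ensure the documentation stays current enough to audit.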


AI procurements may involve illicit production chains

It is not acceptable to collect training data for an AI system from dubious sources or with unethical methods, and we have to be just as critical of the production chain of the physical hardware.

You have to make sure to ask:

  • Where do the components come from?
  • Where do the raw materials used in them come from?
  • What kind of work was used to produce them and what were the working conditions?
  • How has the energy used throughout the production chain been produced?
  • Where does the power for running systems, training language models and data mining come from?

The complexity of AI systems can conceal illicit production chains involving environmental damage and human rights violations. Public authorities have a particular responsibility to ensure that their procurements meet the highest possible standards of values and transparency.


Learning systems are difficult to manage

Unlike hard-coded, rules-based software robotics, systems based on machine learning and deep learning are always striving to optimise the task or objective assigned to them. In a way, they are even expected to produce different outputs later on than when they were first introduced.

Should AI have the capacity to learn from its mistakes? The easy answer would be “yes”, but a self-learning system means a system that changes its own programming and algorithm and can drift outside the control of its designers and owners. In other words, do we want machines whose actions are unpredictable in new situations?

– AI Researcher Wendell Wallach, Yale University

Understanding the functioning of advanced AIs is extremely challenging, even for their developers. For this reason, foresight is difficult and controllability can also become a problem.
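The drift described above can be shown with a toy online learner: after retraining on new data, the same query yields a different answer. This is a minimal sketch of the principle, not a production system:

```python
class OnlineMean:
    """A toy learning system: a running estimate updated by each observation."""

    def __init__(self):
        self.n = 0
        self.value = 0.0

    def learn(self, observation):
        # Incremental mean update: the model changes with every new data point.
        self.n += 1
        self.value += (observation - self.value) / self.n

    def predict(self):
        return self.value

model = OnlineMean()
for x in [10, 10, 10]:
    model.learn(x)
print(model.predict())  # → 10.0

for x in [40, 40, 40]:  # new data shifts the learned behaviour
    model.learn(x)
print(model.predict())  # → 25.0
```

Even this trivially simple learner no longer gives the output it gave at introduction; with deep learning systems the same effect occurs across millions of parameters, which is why foresight and control are hard.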

