suomi.fi
Good practices for Service Developers

Using AI responsibly

Pay attention to laws and recommendations

Public administration systems must be regulated

The information systems and digital services of public administration have to be implemented in the same way as all official activities: in compliance with laws and regulations, both national and EU ones.

From the perspective of technological change, the main task of the public sector is to create preconditions for the use of technologies in all of society in an ethically sustainable and safe manner.

– Futures review of the ministries 2022
Updated: 23/2/2024

AI legislation now and in the future

Among other things, Finnish and EU laws already provide for

  • the acceptable use of AI
  • digital services
  • equality and non-discrimination
  • privacy and data protection (GDPR)
  • the use of data.

In Finland, the Act on Automated Decision-making in Public Administration entered into force in May 2023, but it does not apply to learning systems, such as AI. 

The AI Act of the European Union was approved by the European Parliament in December 2023, and it will probably enter into force in 2024. When the Act enters into force, it will become the main piece of legislation regulating the requirements and use of AI systems in EU Member States.

Updated: 27/2/2024

Current Finnish legislation does not enable decision-making with machine learning systems

On what basis does a machine make decisions?

In cases such as automatic taxation, there is demonstrable cause and effect. A person has earned a certain amount of money for certain work during a certain period of time, and the tax decision is calculated directly on the basis of the Tax Administration’s rules.
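A rules-based decision like this can be sketched as a short program whose every step is explicit and auditable. The brackets and rates below are invented for illustration; they are not the Tax Administration's actual rules:

```python
# Illustrative sketch of a rules-based decision: every step is an explicit,
# auditable rule, so the outcome can be traced back to its inputs.
# The brackets and rates are invented for this example, not real tax rules.

TAX_BRACKETS = [
    (20_000, 0.10),        # income up to 20 000 taxed at 10 %
    (50_000, 0.20),        # income from 20 000 to 50 000 taxed at 20 %
    (float("inf"), 0.30),  # income above 50 000 taxed at 30 %
]

def tax_due(income: float) -> float:
    """Progressive tax computed from fixed rules; same input, same output."""
    tax, lower = 0.0, 0.0
    for upper, rate in TAX_BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)

# The causal chain is fully inspectable:
print(tax_due(30_000))  # → 4000.0 (20 000 at 10 % plus 10 000 at 20 %)
```

Because the rules are fixed and explicit, anyone can verify how a given input produced a given decision, which is exactly the property a statistical model lacks.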

Meanwhile, the logic of AI is based on complicated statistical probabilities, making it difficult to establish a clear causal relationship. And in the case of a learning algorithmic system that can adapt its operation based on input, the challenge becomes all but impossible.

For this reason, the current Finnish legislation on automated decision-making does not apply to learning systems – in practice AI – at all. Machine learning systems can be used to support work, also in administration, but not in decision-making, at least for now.

Updated: 27/2/2024

The AI Act sets requirements for high-risk systems

The AI Act approved by the European Parliament, though not yet in force, does not completely ban machine learning systems from making decisions in the administrations of EU countries, but it does hinder their use. High-risk AI systems include applications related to education, health and well-being, employment, benefits and transport – practically the entire field of public services.

High-risk systems need to have:

  • adequate risk assessment and mitigation systems
  • high quality of datasets supporting the system
  • documentation of operations to ensure the traceability of results
  • clear and adequate information for the user
  • appropriate monitoring by natural persons
  • high robustness and security, and detailed documentation containing all information about the system to enable authorities to assess its compliance.
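The traceability requirement, for instance, implies logging each system output together with the inputs and model version that produced it. A minimal sketch with invented field names that are not prescribed by the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log_file, model_version: str, inputs: dict, output) -> None:
    """Append one traceability record per model output.
    Field names are illustrative, not prescribed by the AI Act."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, to keep personal data out of logs.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    log_file.write(json.dumps(record) + "\n")
```

An append-only log of such records lets an authority trace any individual result back to the model version and inputs that produced it.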
Updated: 11/3/2024

The AI Act restricts the scoring and profiling of EU citizens

The ‘unacceptable risk’ category in the EU AI Act prohibits AI scoring and profiling based on information about an individual's social behaviour, their socio-economic status or their characteristics as a person.

The Act does not completely prohibit data-based profiling. It is permitted if it is necessary for carrying out a specific legal process or if the person has given their consent. For example, patient profiling based on a person's health data, or assessing the ability to pay before a bank loan decision, is permitted, but only with the person's consent.

It remains open to interpretation to what extent the risk classification model, together with the restrictions on scoring and profiling, will prevent automated decision-making based on machine learning in citizens’ discretionary matters. Such decision-making may only become possible after a longer time has passed and the technology has matured.

Updated: 11/3/2024

Liability for acts in office becomes complex in a multi-actor system

AI systems in public administration will also be challenged by the statutory liability for acts in office. When problems arise, the question of ultimate accountability will often be difficult, as a system involves many actors and dimensions, such as

  • datasets, some or all of which may be produced by third parties
  • technical suppliers
  • programmers
  • persons responsible for the algorithm.
The emergence of accountability in algorithmic decision-making requires a connection between a public official and the decision-making process. In rules-based automation, an official can, if necessary, familiarise themselves with the operation of the system and thus the grounds for decisions that have been made.

Instead, in AI-based systems, even the developer of a system may not always be able to determine what the end result produced by the system is based on.

Government’s analysis, assessment and research activities: Algorithm as a decision-maker? (in Finnish)
Updated: 9/11/2023

AI can be an assistant in discretionary decisions

As current legislation does not apply to learning systems, it practically rules out decision-making by AI on discretionary matters in official activities. If a decision cannot be justified at the level required by legal protection, it must not be made at all.

However, it may still be possible to use AI as a tool for processing a discretionary matter if the final decision-making power remains with a natural person. Even then, it must be transparent and clear what has been processed with AI in the matter and how, and how AI outputs have been taken into account when making the decision.
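One way to make that transparency concrete is to record, for each case, which processing steps were AI-assisted and which natural person made the final decision. A minimal sketch with illustrative field and tool names, not drawn from any standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AiUsageRecord:
    """One AI-assisted processing step in a case; all names are illustrative."""
    step: str       # what was processed, e.g. "document summarisation"
    tool: str       # which AI tool produced the output
    how_used: str   # how the output was taken into account in the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class CaseDecision:
    case_id: str
    decided_by: str  # a named natural person, never the AI tool itself
    ai_steps: list[AiUsageRecord] = field(default_factory=list)

# Example: the decision record shows what AI did and who actually decided.
decision = CaseDecision(case_id="2024-0042", decided_by="J. Virtanen")
decision.ai_steps.append(AiUsageRecord(
    step="summarised applicant's supporting documents",
    tool="internal-llm-assistant",
    how_used="summary reviewed; final assessment made from source documents",
))
```

Keeping the `decided_by` field a named person, with AI involvement listed separately, mirrors the requirement that the final decision-making power remains with a natural person.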

Outside of decision-making, artificial intelligence can be used as a support tool in certain cases, if the organisation's policies and instructions allow it.

Updated: 13/11/2023

Deviating from regulations is only allowed in simulations

Innovative AI-based services in the Finnish public sector must not only take into account ethical considerations to ensure they are fair, unbiased, trustworthy, and accountable, they must also be designed in compliance with relevant municipal, national, and EU-based regulatory policies and frameworks.

– AI researchers Nitin Sawhney and Ana Paula Gonzales Torres in a publication by Aalto University

Current legislation can sometimes hinder or prevent the development of systems and services, even useful ones.

The only way to deviate from legislation is to simulate new types of services and operating models in so-called regulatory sandboxes, where their impacts can be tested safely. The results of these tests may lead to legislative reforms if it is found that an otherwise safe and useful service is prevented by outdated regulation.

Updated: 27/2/2024

Don’t bend the law; change it

Especially internationally, digital giants consciously bend and test the boundaries of national and international laws. The public sector cannot do that.

Even in Finland, digital developers can feel frustrated with regulatory restrictions or a lack of clarity. If we feel that existing regulation restricts or hinders the development we want, we need to follow existing procedures to try to influence regulation proactively.

Updated: 27/2/2024

Review previous guidelines and recommendations

Since the early 2010s, a large number of recommendations, declarations and checklists have been produced to support the ethical development of AI and robotics systems. They have been produced by international organisations and public administrations as well as educational and research institutions. This not only indicates that there is an identified need, but also that a single document with recommendations is clearly not enough.

Over time, a critical problem has been identified in the documents: they give overall recommendations and describe goals but do not build enough bridges to concrete action.

It is easy to declare that you are committed to designing and producing ethically sustainable AI applications if you condense AI ethics into abstract lists of things you consider valuable. You easily end up with a steep gap between the general principles and what people are actually doing.

– Researchers Jaana Hallamaa and Karoliina Snell, University of Helsinki
Updated: 13/11/2023
