suomi.fi
Good practices for Service Developers

Using AI responsibly

Organise roles and guidance

AI cannot understand the operation of an organisation

An AI system is really just a data processing machine. Its operations are entirely dependent on the data input and the objectives, competence and accountable actions of the owner organisation.

A recurring concept in accountable AI development is human-in-the-loop. In practice, it means that ultimately only a natural person may monitor the operation of a system and approve its outputs and their use.

This principle is challenged as the use of AI systems extends, over time, to the processing of individual cases and perhaps also to actual decision-making.

Updated: 10/11/2023

Make sure that a human is in the driver’s seat

The European Union’s policy proposal on AI sets out four concrete options for implementing the human-in-the-loop principle:

  1. The output of an AI system may not be used until it has been inspected and evaluated by a human.
  2. The output of an AI system is implemented directly, but its impacts will be corrected by humans if necessary.
  3. The operation of a system is monitored in real time with a fast-activated “emergency stop” procedure.
  4. The system is designed to have threshold values above which it either limits its operation, stops completely or transfers control to a human.
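As an illustration, options 1 and 4 above can be combined in a simple routing pattern: outputs below a confidence threshold transfer control to a human, and such outputs may not be used until a person approves them. This is a minimal sketch; the threshold value, names and review interface are hypothetical and not part of the EU proposal.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical confidence threshold (option 4): below it,
# control transfers to a human.
CONFIDENCE_THRESHOLD = 0.90


@dataclass
class Output:
    label: str
    confidence: float


def route(output: Output,
          human_review: Callable[[Output], bool]) -> Optional[str]:
    """Return the label only if it may be used.

    High-confidence outputs pass through; low-confidence outputs
    require explicit human approval first (option 1).
    """
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return output.label
    # Transfer to a human: the output is used only if inspected
    # and approved.
    return output.label if human_review(output) else None


# A stand-in reviewer that rejects everything, for demonstration.
print(route(Output("application approved", 0.95), lambda o: False))
print(route(Output("application rejected", 0.60), lambda o: False))
```

The same structure accommodates option 3 as well: the `human_review` callable could be replaced by an "emergency stop" check that halts processing entirely.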
Updated: 9/11/2023

Assess impacts also during production

It is typical for new technologies that their various impacts, especially unexpected ones, only become apparent over time. This means that monitoring the impact of an implemented system is just as important as proactive impact assessment.

Social media is one example. When Facebook started accumulating users, no one could have guessed that within a few years it and similar services would affect presidential elections, referendums and people’s self-image, mental health and consumption habits.

Documenting the impact assessment in production and reporting it publicly also increases user confidence. At the same time, it forces the organisation responsible for the system to carry out the assessment regularly.

Updated: 9/11/2023

When should you set up an ethics council?

Already in 2018, the European Commission recommended that public sector organisations establish ethics councils: units that monitor and steer the ethical and accountable use of robotics and AI.

Establishing an ethics council is recommended if

  • planned AI systems and their objectives include weighty ethical questions
  • management is genuinely committed
  • the group receives a clear mandate and assignment
  • the organisation is prepared to also receive critical input.

If these conditions are not all met, we recommend looking for other methods of assessing and ensuring ethics and accountability.

Some organisations may already have industry-specific ethics committees. In this case, you can consider whether they could also address the utilisation of AI.

Updated: 9/11/2023

