suomi.fi
Good practices for Service Developers

Using AI responsibly

Citizens have the right to transparent information

The clearest description is created in cooperation

The responsibility for providing an understandable description of an AI service rests as much with the system supplier as with the party procuring the system. Neither can avoid their part in the matter. The information explaining a system is always context-dependent, which is why the author of the description has to understand the target and user groups.

In fact, it is best if the information is produced in collaboration between the system supplier, the procuring unit and the procuring organisation’s communications team. This ensures both the accuracy of the technical information and the clarity of the message.

Clear communication also benefits the brand internally, especially in larger organisations where some members of staff may not have even known that an AI system was being procured, much less its objectives or features.

Updated: 9/11/2023

Include at least these in the description

Explaining is not the same as being understood. Even if we could transparently present the logical function of an AI to a person subject to a decision made by the AI, the person might feel like they got an explanation but still remain as confused as before.

Since the logic behind the decisions of an AI system is difficult to explain, it is important that the transparency of a service is promoted at least with clear information on

  • who is or which parties are responsible for the service
  • what purpose the service was made for
  • what information the user has to give the service
  • why the user must give this information specifically.

Focus on the clarity and comprehensibility of the communication as well as its style and tone. The aim is to be as inclusive as possible. Transparency also includes clear instructions to users on what to do in case they want to correct errors or submit a claim for rectification.


Do not make the algorithm available to users

Different contexts require different levels of openness. For example, you cannot let ordinary service users inspect your algorithm because

  • for the majority of citizens, the algorithm would be incomprehensible and probably confusing, and would not promote user understanding or trust
  • even in the public sector, an AI system is likely provided by a company, and the algorithm is part of a trade secret
  • an algorithm that is publicly available to anyone would likely be a threat to the security of the service.

Audit the algorithm with an external party and publish the main findings

Although algorithms cannot be fully opened to users, they still need to be reviewed. Accountable AI implementation includes system audits by an independent party.

Keep the details of the audit strictly confidential between the system supplier, owner and auditor, but publish the main findings and any subsequent measures. Do so at a level that does not compromise the security of the system but gives users enough information to maintain trust.

Read more about auditing algorithms in this article on the German Marshall Fund website.

