Suomi.fi for Service Developers
Good practices for Service Developers
Using AI responsibly

Checklist

These are the minimum considerations when your organisation wants to use AI responsibly.

Learn about data ethics

  1. Consider why your organisation wants to start using an AI system.
  2. Consider the responsibilities related to AI systems in your organisation.
  3. Assess the impacts of new systems and services in advance.
  4. Read the guidelines and recommendation documents for ethical AI.
  5. Discuss the values that steer the development and use of AI in your organisation. How are they realised in your work?

Read more about data ethics on the page Learn about data ethics.

Pay attention to laws

  1. Make sure that your organisation’s legal competence regarding national and international regulations on algorithmic systems is up to date.
  2. Consider how your system development and procurements will ensure the implementation of fundamental and human rights and compliance with the obligations laid down in the Non-Discrimination Act, for example.
  3. If there are legal uncertainties in an AI project or its impacts, use the regulatory sandbox procedure to examine the problem.
  4. Investigate your organisation’s official responsibilities in case of problems with the AI system.

Read more about AI legislation on the page Pay attention to laws and recommendations.

Assess the capabilities of your organisation

  1. Identify the skills, roles and jobs required by AI systems.
  2. Establish a data policy that defines how data is collected and processed in AI systems.
  3. Draw up criteria for AI systems in your organisation’s procurement guidelines. Set concrete requirements, for example on the auditability of algorithms.
  4. Objectively assess the appropriateness of different technological solutions.
  5. Think about how you will set objectives for your AI system. Who will define the objectives and with what kind of process? Will members of staff be able to influence the objectives?

Read more about charting your organisation’s capabilities on the page Assess the capabilities of your organisation.

Organise ethical practices

  1. Ensure that relevant decision-makers in your organisation understand ethical data and AI practices well enough.
  2. Appoint a person or group whose task is to ensure the data accountability of operations.
  3. Make sure that the needs of minorities and vulnerable persons are heard.
  4. Establish a policy by which your organisation assesses the ethical aspects of a data project and anticipates its impacts.
  5. Make sure that the use, outputs and impacts of your AI system are always ultimately assessed by humans.
  6. Monitor, assess and document the impacts of AI that is in production.
  7. Consider how your organisation assesses, manages or compensates for the environmental impacts of the system. Also note human rights issues in the production chain.
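Points 5 and 6 above call for human assessment and documentation of an AI system's outputs in production. As an illustration only, the sketch below shows one possible shape for an audit record of a single AI decision; the field names and the model name are hypothetical, and inputs are hashed so the log itself holds no personal data.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_version, input_data, output, reviewer=None):
    """Build an audit record for one AI decision (illustrative sketch).

    The raw input is hashed rather than stored, so the log can be
    retained for monitoring without accumulating personal data.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash of the canonicalised input, for traceability without storage.
        "input_hash": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        # Stays None until a person has assessed the decision (point 5).
        "human_reviewer": reviewer,
    }
    return record

# Hypothetical example record for a benefit-eligibility model.
rec = log_ai_decision("benefit-model-1.2", {"age": 34}, "eligible")
```

Records like these can then be sampled periodically for the human review and impact documentation the checklist requires.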

Read more about ethical practices on the page Organise ethical practices.

Get citizens involved

  1. Assess how your organisation’s system impacts people’s or communities’ activities, rights, access to services or benefits.
  2. Involve affected groups in development.
  3. Consider how to involve representatives of population groups whose participation may be challenging due to functional, linguistic or other characteristics.
  4. Gather information and experiences about different methods of inclusion.
  5. Communicate with participating citizens in a comprehensible and clear manner.
  6. Allocate enough resources to inclusion and participation processes.
  7. Inform the participating citizens afterwards of their impact on the decisions taken.

Read more about involving citizens on the page Get citizens involved in this guide.

Define a data policy

  1. Define clear roles and responsibilities for the different stages of the data process. Assign a person with ultimate responsibility for the data work, for example a Chief Data Officer.
  2. Learn how to identify and manage data biases.
  3. Make a policy on what kind of independent party can audit your system’s training data and algorithm and how.
  4. Consider where and how the audit results will be published.
  5. Explain the basics of the functioning of your system to citizens at a sufficient level when the system is used.
  6. If your organisation has a service based on a recommender engine, make it clear to users what the recommendations generated for them are based on. Make it clear that these are suggestions and not binding instructions.
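One concrete way to start identifying data biases (point 2 above) is to compare outcome rates across population groups in historical decision data. The sketch below, with entirely hypothetical field names and sample data, computes per-group approval rates; a large gap between groups is a signal to examine the data and the system more closely, not proof of bias in itself.

```python
from collections import defaultdict

def approval_rates_by_group(records, group_key="group", outcome_key="approved"):
    """Compute the share of positive outcomes per group.

    A simple first check for disparities in decision data; deeper
    auditing would control for legitimate explanatory factors.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[outcome_key]:
            approved[g] += 1
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decision records for illustration only.
sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates = approval_rates_by_group(sample)
```

The same kind of summary can feed into the independent audits of training data and algorithms mentioned in point 3.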

Read more about data on the page Define a data policy.

Pay attention to security

  1. Assess how AI can threaten or improve information security. Prepare for different scenarios.
  2. Document your organisation’s AI system more broadly than just its technical details.
  3. Monitor that the AI system stays on target and does not drift from its intended purpose.
  4. Monitor the AI feedback loop, for example situations where the system processes outputs that AI has itself generated.
  5. Consider how your organisation uses AI predictions or forecasts. How are they analysed?
  6. In your organisation, discuss how similar to humans AI applications can or should appear.
  7. Protect citizens’ personal data in your organisation’s AI systems. Consider using synthetic data or similar data protection methods.
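Point 7 mentions data protection methods alongside synthetic data. One common technique is pseudonymisation: replacing a direct identifier with a keyed hash so records can still be linked without the original identifier being recoverable. The sketch below uses HMAC-SHA-256 with a placeholder key; in real use the key would come from secure key management, and pseudonymised data may still count as personal data under data protection law.

```python
import hashlib
import hmac

# Placeholder only; a real deployment would fetch this from a key vault.
PSEUDONYM_KEY = b"replace-with-a-secret-key"

def pseudonymise(personal_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The same input always yields the same pseudonym, so records can be
    linked across datasets, but the original identifier cannot be
    recovered without the key.
    """
    return hmac.new(PSEUDONYM_KEY, personal_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical record: the identifier is pseudonymised before storage.
record = {
    "person_id": pseudonymise("010190-123A"),
    "service_used": "benefit-application",
}
```

Whether pseudonymisation, anonymisation or synthetic data is the right method depends on the purpose of the processing and the applicable legislation.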

Read more about security on the page Pay attention to security.
