
Using AI responsibly

Weigh up the risks

Risks of overly independent AI

It is not possible to analyse all the unexpected behaviours of learning systems in advance or to detect them mechanically. Continuous monitoring, data management and impact assessment are critical in preventing a learning system from going rogue and veering off target.

A system may technically function properly while producing incorrect outputs. Even the most advanced AI does not know when something is wrong, even when it is producing distorted, incorrect or fabricated “information”. For example, ChatGPT has been found to hallucinate in this way to a worrying extent.

Because a machine has no consciousness or conscience, we need people to monitor both the quality and usefulness of outputs.

Updated: 9/11/2023

Risks of combining datasets

The growing amount of data available on the internet makes it possible to combine datasets with the aim of harming individuals and society. In the future, big data published by organisations on the internet may create unexpected risks, as AI can combine and analyse datasets and draw conclusions that we cannot yet foresee ourselves. Subtle changes over time in the contexts where the datasets are used may also affect the risks.

The risk associated with the sheer amount of data has emerged slowly over time, as is often the case with technology. For example, public administrations procured numerous IT systems in the 1980s and 1990s, and their mutual incompatibility only became a problem later as the world digitalised.

Even though the openness of data is in many cases a good thing, it is still worth assessing in advance whether opening a certain dataset can create special risks when it is combined with another dataset.

Updated: 9/11/2023

Risks of predictive analytics

High-quality big data can be used to derive forecasts and predictions on very different matters. This is one of the superpowers of AI.

AI research has also identified the limitations of predictive analytics:

  1. AI is bad at predicting rare events, which are often precisely the ones with significant consequences.
  2. If a system has been trained with imperfect or incomplete data, the forecasts it generates are likely to be inaccurate or even misleading.
  3. If a system has been trained with biased data, its forecasts will be similarly biased, or even more pronouncedly so (see the sketch after this list).
  4. If a system has been trained with data that is not relevant to the objective, its outputs will not match the objective either.
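
As a minimal illustration of points 2 and 3, the sketch below uses fabricated toy data and a deliberately naive frequency-based predictor (not any real system) to show how bias in historical decisions carries straight through into forecasts:

    # Minimal sketch (fabricated data): a naive frequency-based "model"
    # trained on biased historical decisions reproduces the bias.
    from collections import defaultdict

    # Biased history: equally qualified applicants, but area "A" was mostly rejected.
    history = [
        {"area": "A", "score": 85, "approved": False},
        {"area": "A", "score": 90, "approved": False},
        {"area": "A", "score": 95, "approved": True},
        {"area": "B", "score": 85, "approved": True},
        {"area": "B", "score": 90, "approved": True},
        {"area": "B", "score": 95, "approved": True},
    ]

    # "Training": record the approval outcomes per area, ignoring merit entirely.
    outcomes_by_area = defaultdict(list)
    for case in history:
        outcomes_by_area[case["area"]].append(case["approved"])

    def predict(area: str) -> bool:
        # Majority vote within the area: past bias is now baked into the forecast.
        outcomes = outcomes_by_area[area]
        return sum(outcomes) > len(outcomes) / 2

    print(predict("A"))  # False - rejected purely because of historical bias
    print(predict("B"))  # True

The predictor is statistically "correct" about its training data, which is exactly why the data itself, not just the model, needs to be audited.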
Updated: 9/11/2023

The risk of automation promoting passiveness

Digitalisation creates new opportunities for social interaction and for using services, but the flip side is less physical activity, increased loneliness and greater inequality in people's opportunities to use different services.

– Futures review of the ministries 2022

The animated Disney-Pixar film Wall-E (2008) presents a future in which humankind has left our lifeless planet to live on spaceships, served and entertained by intelligent robots. The people do nothing meaningful because there is nothing left to do. Life is not terrible, but there is no point to it. Humans have become the passive objects of benevolent machines rather than active, autonomous agents.

In our accelerating digitalisation, further intensified by AI innovations, we can see the first makings of a life resembling the space colony in Wall-E. What should organisations make of this, and how should they try to prevent this kind of development?

Updated: 9/11/2023

The risk of humanising machines

It is justified to ask how human-like we want to make our services and machines.

Most of us are able to project human characteristics onto things that do not have them. This is understandable with pets, for example. However, people sometimes relate emotionally even to their computers, robot vacuums and cars. We are prone to finding humanity where there is none.

This becomes a problem with new technology that is already capable of imitating natural human communication. We react particularly strongly to human-like behaviour, even when it comes from a machine. The risk lies in situations where a malicious actor cons people or organisations with AI that credibly mimics human characteristics.

Updated: 9/11/2023

Risks to privacy and personal data

People’s privacy can be protected in datasets by two methods:

  • Pseudonymisation is a process where personal data is replaced with random codes but can still be linked back to the original person with the help of additional information (see the sketch after this list).
  • Anonymisation is an irreversible process where personal data is completely and permanently altered in the dataset. This is why legislation such as the EU General Data Protection Regulation (GDPR) does not apply to anonymised data.
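
A minimal sketch of the difference, using a hypothetical record layout (the field names, values and codes are invented for illustration):

    # Minimal sketch (hypothetical record layout): pseudonymisation keeps a
    # reversible key table; anonymisation destroys the link permanently.
    import secrets

    record = {"name": "Anna Virtanen", "postcode": "00100", "diagnosis": "J45"}

    # Pseudonymisation: replace the identifier with a random code, but keep a
    # separate key table that can link the code back to the person.
    key_table: dict[str, str] = {}

    def pseudonymise(rec: dict) -> dict:
        code = secrets.token_hex(8)
        key_table[code] = rec["name"]  # the "additional information" that reverses it
        return {**rec, "name": code}

    # Anonymisation: drop or generalise identifying fields and keep no key table,
    # so the change is complete and permanent.
    def anonymise(rec: dict) -> dict:
        out = {k: v for k, v in rec.items() if k != "name"}
        out["postcode"] = rec["postcode"][:2] + "xxx"  # generalise the postcode
        return out

    print(pseudonymise(record))  # still personal data under the GDPR
    print(anonymise(record))     # outside the GDPR only if truly irreversible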

In the world of AI, even these methods are not completely foolproof. As data about a person accumulates in a system, it forms an ever-growing profile of them. Conclusions can be drawn from this profile even when the personal data has been anonymised. This can be done through proxy data.

Even if a dataset contains no directly identifying data about a person, their postal code alone can hint at who they are. If enough proxy data is collected, the person can eventually be identified by analysing the combination, as the sketch below illustrates. Algorithmic systems in various countries have already produced discriminatory decisions and forecasts based on variables such as age, gender and home address postcode.
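
The sketch below makes this concrete with a fabricated four-row dataset: each proxy variable alone leaves several candidates, but their combination singles out one person.

    # Minimal sketch (fabricated data): individually harmless proxy variables
    # combine to single out one person from an "anonymised" dataset.
    dataset = [
        {"postcode": "00100", "birth_year": 1985, "gender": "F"},
        {"postcode": "00100", "birth_year": 1985, "gender": "M"},
        {"postcode": "00100", "birth_year": 1990, "gender": "F"},
        {"postcode": "00200", "birth_year": 1985, "gender": "F"},
    ]

    def matches(**known) -> list:
        # Rows consistent with everything an attacker already knows.
        return [row for row in dataset
                if all(row[k] == v for k, v in known.items())]

    print(len(matches(postcode="00100")))                               # 3 candidates
    print(len(matches(postcode="00100", birth_year=1985)))              # 2 candidates
    print(len(matches(postcode="00100", birth_year=1985, gender="F")))  # 1: identified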

Updated: 9/11/2023

Risks of generative AI

Using generative AI applications for work is becoming more common in private businesses and the public sector alike. Before you start using any application, find out how it works and what its risks are.

Updated: 3/5/2024
