Studies have shown that AI systems tend to produce decisions or analyses that systematically favour certain groups at the expense of others.
Individual cases of algorithmic discrimination have also been observed in Finland. Most cases are based on gender, age, ethnicity, language, socio-economic status or other similar protected personal characteristics. This often stems from training data, which inevitably reflects past circumstances and may carry old discriminatory structures. For example, we know that facial recognition algorithms have difficulty recognising people with dark skin, because white people are considerably over-represented in the masses of online images used as training data.
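A minimal sketch of what auditing such an imbalance could look like; the dataset, the group labels and the `representation_report` helper are all hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical training set: each record carries a skin-tone group
# label (the categories and records are invented for illustration).
training_images = [
    {"id": 1, "skin_tone": "light"},
    {"id": 2, "skin_tone": "light"},
    {"id": 3, "skin_tone": "light"},
    {"id": 4, "skin_tone": "dark"},
]

def representation_report(records, attribute):
    """Share of each group in the data; a skewed split warns that
    the model will see far fewer examples of some groups."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

print(representation_report(training_images, "skin_tone"))
# {'light': 0.75, 'dark': 0.25} -- the under-represented group gets
# far fewer training examples, so recognition quality suffers for it.
```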
Discrimination can also occur when a personal characteristic that can be grounds for discrimination (such as age or gender) is used as a decisive variable in an algorithm. This can lead a system to treat people unfairly, for example in loan decisions or job recruitment, as the sketch below illustrates.
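As a minimal illustration, assume a deliberately simplified, hypothetical loan-scoring rule in which age enters directly as a variable; all names and numbers here are invented:

```python
# Hypothetical scoring rule: using a protected characteristic (here
# age) as a decisive variable means two otherwise identical
# applicants are treated differently.

def loan_score(income: int, debts: int, age: int) -> float:
    score = income - debts
    if age > 55:       # protected characteristic as a variable
        score *= 0.5   # older applicants silently penalised
    return score

applicant_a = {"income": 4000, "debts": 1000, "age": 35}
applicant_b = {"income": 4000, "debts": 1000, "age": 60}

print(loan_score(**applicant_a))  # 3000
print(loan_score(**applicant_b))  # 1500.0 -- same finances, half the score
```

Even if the age variable were removed, a proxy such as years of work experience could reintroduce the same bias, which is why removing the column alone is rarely enough.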
The descriptive metadata generated automatically for images is also often discriminatory in itself. If you ask these machines, a woman in a white coat is usually a “nurse”, while a man in a white coat is usually a “doctor”. When such biases are introduced into high-speed, mass-scale systems, their effects spread rapidly and globally and go on to shape the training of new AI systems, unless we consciously break this vicious cycle.
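A toy simulation of that vicious cycle, under the invented assumption that each model generation slightly amplifies the majority label it was trained on; the numbers and the amplification factor are illustrative only:

```python
import random

random.seed(0)

# Hypothetical feedback loop: a labelling model trained on biased
# captions produces new captions, which then train the next model.
# An initial skew towards "doctor" hardens with every generation.

def train(labels):
    """Toy 'model': learn only the share of the 'doctor' label."""
    return labels.count("doctor") / len(labels)

def generate(doctor_share, n=1000):
    """The model labels new images according to what it learned,
    slightly amplifying its own majority preference."""
    p = min(1.0, doctor_share * 1.05)  # small amplification per round
    return ["doctor" if random.random() < p else "nurse" for _ in range(n)]

labels = ["doctor"] * 600 + ["nurse"] * 400  # initial 60/40 skew
for generation in range(5):
    share = train(labels)
    print(f"generation {generation}: {share:.0%} labelled 'doctor'")
    labels = generate(share)
```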