The reasoning process of public administration must be explainable
Black boxes and lack of transparency continue to be a major problem. If we cannot figure out why a harmful effect occurred, we consequently cannot fix the issue.
– AI Researcher Wendell Wallach, Yale University
More complex AI systems pose a particular challenge known as the “black box”. When the datasets used are massive, possibly combined from several sources and containing thousands of data points, the reasoning the algorithm performs on that data is neither transparent nor comprehensible. We can only conclude that “the machine produced what it did”.
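The opacity described above can be illustrated with a minimal, purely hypothetical Python sketch (the feature count, weights, and decision rule are invented for illustration, not drawn from any real system): the decision emerges from thousands of learned numbers, none of which carries a human-readable justification.

```python
# Hypothetical sketch of a "black box" decision: a weighted sum over
# thousands of inputs. All names and numbers here are illustrative.
import random

random.seed(0)

N_FEATURES = 5_000

# Weights as they might come out of training: thousands of small numbers
# with no individual human-readable meaning.
weights = [random.uniform(-1, 1) for _ in range(N_FEATURES)]

def black_box_decision(applicant_features):
    """Return an approve/deny decision from an opaque aggregate score."""
    score = sum(w * x for w, x in zip(weights, applicant_features))
    return "approve" if score > 0 else "deny"

applicant = [random.uniform(0, 1) for _ in range(N_FEATURES)]
decision = black_box_decision(applicant)
print(decision)
# We can observe the outcome, but no single weight explains *why*:
# the "reasoning" is spread across all 5,000 numbers, so we can only
# say that the machine produced what it did.
```

Inspecting any one weight tells a citizen nothing about why their application was approved or denied, which is precisely the justification problem the text raises.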
This is a major problem for reliability. In the public sector especially, good governance requires that an AI system work in a way whose results can be justified and explained. This matters not least for citizens’ right to appeal decisions and to claim damages.