Risks of overly independent AI
It is not possible to anticipate every unexpected behaviour of a learning system, nor to detect such behaviour automatically. Continuous monitoring, data management and impact assessment are therefore critical to preventing a learning system from going rogue and veering off target.
A system may function correctly in a technical sense while still producing incorrect outputs. Even the most advanced AI does not know when something is wrong, even as it produces distorted, incorrect or fabricated “information”. ChatGPT, for example, has been found to hallucinate in this way to a worrying extent.
Because a machine has neither consciousness nor a conscience, people are needed to monitor both the quality and the usefulness of its outputs.