Google Develops More Transparent AI System

One of the biggest problems artificial intelligence programmers currently face is determining exactly how an AI system arrived at the answers it gives. Google aims to make this task easier with a newly announced platform that helps developers determine how AI draws its conclusions. By understanding the system’s reasoning, developers gain better insight into how these systems learn and can create more effective ways to train them.

Difficulty in Seeing Thought Patterns

AI doesn’t think like humans. The most sophisticated AI systems are built from Artificial Neural Networks (ANNs), and as these networks grow more complex, it becomes extremely difficult for developers to figure out why a particular decision was made. That complexity and opacity also make it nearly impossible to debug errors when they arise. Among the problems that need to be caught and corrected through debugging are AI bias and spurious correlations.

Addressing the ‘Black Box’ Problem

Many researchers in the field of AI refer to this opaque ANN processing as the ‘black box’ problem: they know what goes in and what comes out, but they don’t understand how the system reaches its conclusion. To address the ‘black box’ problem, Google has developed Explainable AI. The platform gives developers three distinct tools for working out what’s happening inside the black box. The first produces a weighted description of which input features the AI relied on and how strongly each influenced the result. The second lets researchers observe how a model’s output changes as individual inputs are adjusted. The third continuously samples the model’s predictions so that human reviewers can evaluate them.
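
To make the first of those ideas concrete, here is a minimal sketch of per-feature attribution. It uses permutation importance from scikit-learn rather than Google’s actual Explainable AI API, and the dataset and model are placeholder choices for illustration only.

```python
# A rough sketch of feature attribution, not Google's Explainable AI API:
# permutation importance scores each input feature by how much shuffling it
# degrades the model's accuracy, yielding a weighted view of what the model
# relied on. Dataset and model below are toy placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how heavily the model leans on them.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```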

AI Needs to Be Transparent

If humans are to trust AI with safety-critical systems responsible for preserving life, then we must understand the processes by which the AI reaches its conclusions. Without a proper grasp of how a system works, developers may introduce unintended consequences. Explainable AI gives developers a chance to catch errors early by sifting through the system’s output and spotting where the machine’s conclusions don’t match the inferences it should be making.
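
As a rough illustration of that review step, the sketch below simply compares a model’s predictions against the labels a reviewer expects and collects the disagreements for inspection. The `model`, `samples`, and `expected_labels` names are hypothetical placeholders, not part of Google’s platform.

```python
# A minimal sketch of the review step described above, not Google's actual
# workflow: compare predictions with reviewer-expected labels and collect the
# mismatches for human inspection.
def flag_mismatches(model, samples, expected_labels):
    """Return a record for every sample whose prediction disagrees with expectations."""
    flagged = []
    for sample, expected in zip(samples, expected_labels):
        predicted = model.predict([sample])[0]  # assumes a scikit-learn-style predict()
        if predicted != expected:
            flagged.append(
                {"sample": sample, "expected": expected, "predicted": predicted}
            )
    return flagged
```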
