Content
Can the integration of domain knowledge, e.g., as modeled by ontologies and knowledge graphs, help the understandability of explanations of machine learning models?
The availability of large amounts of data has fostered the proliferation of automated decision systems in a wide range of contexts and applications, e.g., self-driving cars, medical diagnosis, and insurance and financial services. These applications have shown that when decisions are taken or suggested by automated systems, it is essential for practical, social, and increasingly legal reasons that an explanation can be provided to users, developers, or regulators.
As a case in point, the European Union's General Data Protection Regulation (GDPR) stipulates a right to `meaningful information about the logic involved', commonly interpreted as a `right to an explanation', for consumers affected by an automated decision.
Explainability has been identified as a key factor for the adoption of AI systems. The reasons for equipping intelligent systems with explanation capabilities, however, are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness, to enable diagnostics that prevent bias, unfairness, and discrimination, and to increase users' trust in why and how decisions are made.
While interest in XAI had subsided together with interest in expert systems after the mid-1980s, recent successes in machine learning technology have brought explainability back into focus. This has led to a plethora of new approaches for explaining black-box models, for both autonomous and human-in-the-loop systems, that aim to achieve explainability without sacrificing system performance (accuracy). Only a few of these approaches, however, focus on how to integrate and use domain knowledge to make the decisions of such systems more explainable and understandable to human users.
For that reason, an important foundational aspect of explainable AI remains largely unexplored: Can the integration of domain knowledge, e.g., as modeled by ontologies and knowledge graphs, help the understandability of interpretable machine learning models?