Workshop on Data meets Applied Ontologies in XAI

The 3rd edition of the Data meets Applied Ontologies Workshop series is dedicated to the role played by knowledge representation, ontologies, and knowledge graphs in Explainable Artificial Intelligence, in particular their role in building Trustworthy and Explainable decision support systems.



The workshop will be co-located with FOIS 2020 as part of the Bolzano Summer of Knowledge 2020 event, in Bolzano, on September 16, 2020.

Previous editions of the Data meets Applied Ontologies Workshop series were held at JOWO 2017 and JOWO 2019.

Content

Can the integration of domain knowledge as, e.g., modeled by means of ontologies and knowledge graphs, improve the understandability of explanations of machine learning models?

The availability of large amounts of data has fostered the proliferation of automated decision systems in a wide range of contexts and applications, e.g., self-driving cars, medical diagnosis, and insurance and financial services, among others. These applications have shown that, when decisions are taken or suggested by automated systems, it is essential for practical, social, and increasingly legal reasons that an explanation can be provided to users, developers, or regulators.

As a case in point, the European Union's General Data Protection Regulation (GDPR) stipulates a right to "meaningful information about the logic involved", commonly interpreted as a "right to an explanation", for consumers affected by an automatic decision.

Explainability has been identified as a key factor for the adoption of AI systems. The reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness and enable diagnostics to prevent bias, unfairness, and discrimination, as well as to increase trust by all users in why and how decisions are made.

While interest in XAI had subsided together with that in expert systems after the mid-1980s, recent successes in machine learning technology have brought explainability back into focus. This has led to a plethora of new approaches for explaining black-box models, for both autonomous and human-in-the-loop systems, aiming to achieve explainability without sacrificing system performance (accuracy). Only a few of these approaches, however, focus on how to integrate and use domain knowledge to make the decisions taken by these systems more explainable and understandable to human users.

For that reason, an important foundational aspect of explainable AI has hitherto remained mostly unexplored: Can the integration of domain knowledge as, e.g., modeled by means of ontologies and knowledge graphs, improve the understandability of interpretable machine learning models?

Call for Papers

The objective of the 2020 edition of DAO-XAI is to provide stakeholders from academia, industry, and public organisations with opportunities to present their latest developments in explainable and trustworthy decision making, and in approaches that integrate symbolic and non-symbolic reasoning, tackling the above question.

We welcome original contributions, in the form of discussion papers, experimental contributions, and system and demo descriptions of applications, that make use of ontologies and knowledge graphs to enhance the explainability and trustworthiness of decision systems. Topics include, but are not limited to:

*Neural-symbolic Learning and Reasoning*

  • Cognitive computational systems integrating machine learning and automated reasoning
  • Knowledge representation and reasoning in machine learning and deep learning
  • Symbolic knowledge extraction from neural and statistical learning models

*Human-centered Explanations, Usability*

  • Visual exploratory tools for semantic explanations
  • Knowledge representation for human-centric explanations
  • Usability and acceptance of knowledge-enhanced semantic explanations

*Applications of Ontologies for Explainability and Trustworthiness in Specific Domains*

  • Life sciences, health and medicine
  • Humanities and social sciences
  • eGovernment

Submission instructions

Papers should be submitted non-anonymously in PDF format following the IOS Press formatting guidelines (downloadable here). Papers should be uploaded via EasyChair: https://easychair.org/conferences/?conf=jowo2020

CEUR

Articles will be published in the CEUR Workshop Proceedings. See previous editions here.

Submission of an article should be regarded as an agreement that, should the article be accepted, at least one of the authors will attend the workshop to present the work.

We accept submissions of 5–10 pages in length (including bibliography) of the following types:

  • submissions describing original unpublished work, neither submitted to, nor accepted for, any other venue;
  • descriptions of ongoing research and projects, preliminary approaches, position papers;
  • extended abstracts of full papers that have been published previously (notice that the full paper must be explicitly referenced in the submission);
  • extended abstracts of full papers that are currently under review for a different venue (i.e., a conference or a journal); when a submission is of this type, this must be explicitly indicated in the abstract.

Important dates

  • June 23, 2020: Abstract registration
  • June 30, 2020: Paper submission deadline
  • July 31, 2020: Acceptance notification to authors
  • August 15, 2020: Camera ready version due

  • September 16, 2020: DAO-XAI 2020

Keynote Speaker

Organisation

The workshop is organised by:
  • Roberto Confalonieri - Free University of Bozen-Bolzano, Faculty of Computer Science
  • Alessandro Mosca - Smart Data Factory, Faculty of Computer Science, Free University of Bozen-Bolzano
  • Diego Calvanese - Free University of Bozen-Bolzano, Faculty of Computer Science

Programme Committee

  • Andreas Holzinger, Medical University Graz, Institute for Medical Informatics / Statistics
  • Bartek Skorulski, Telefonica Innovation Alpha, Spain
  • Enric Plaza, IIIA - Institut d’Investigació en Intel·ligència Artificial, CSIC - Spanish Council for Scientific Research
  • Gabriele Sottocornola, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Ivan Donadello, Fondazione Bruno Kessler, DKM - Data and Knowledge Management Research Unit
  • Loris Bozzato, Fondazione Bruno Kessler, DKM - Data and Knowledge Management Research Unit
  • Luciano Serafini, Fondazione Bruno Kessler, DKM - Data and Knowledge Management Research Unit
  • Ludovik Coba, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Pietro Galliani, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Rafael Peñaloza, Università degli Studi di Milano-Bicocca, Information and Knowledge Representation, Retrieval, and Reasoning (IKR3)
  • Riccardo Guidotti, Knowledge Discovery and Data Mining Laboratory (KDDLab), Italian National Research Council
  • Shane T. Mueller, Michigan Technological University
  • Tarek R. Besold, Telefonica Innovation Alpha, Spain
  • Yevgeny Kazakov, Ulm University, Institute of Artificial Intelligence
Free University of Bozen-Bolzano
Faculty of Computer Science
Domenikanerplatz 3 - Piazza Domenicani
39100 Bozen-Bolzano, Italy