Workshop on Data meets Applied Ontologies in XAI

The 3rd edition of the Data meets Applied Ontologies Workshop series is dedicated to the role played by knowledge representation, ontologies, and knowledge graphs in Explainable Artificial Intelligence, in particular their use in building Trustworthy and Explainable decision support systems.

...

The workshop will be co-located with the Bratislava Knowledge September (BAKS) 2021, a joint meeting of researchers, students, and industry professionals dealing with various aspects of knowledge processing, which will take place in Bratislava in September 2021. BAKS will host the 30th International Conference on Artificial Neural Networks (ICANN 2021) and the 34th International Workshop on Description Logics (DL 2021).

The DAO-XAI workshop will take place as a hybrid event on September 18-19, 2021.

Previous editions of the Data meets Applied Ontologies Workshop series were held at JOWO 2017 and JOWO 2019.


Content

Can the integration of domain knowledge, e.g., as modeled by means of ontologies and knowledge graphs, help the understandability of explanations of machine learning models?

The availability of large amounts of data has fostered the proliferation of automated decision systems in a wide range of contexts and applications, e.g., self-driving cars, medical diagnosis, and insurance and financial services, among others. These applications have shown that when decisions are taken or suggested by automated systems, it is essential for practical, social, and increasingly legal reasons that an explanation can be provided to users, developers, or regulators.

As a case in point, the European Union's General Data Protection Regulation (GDPR) stipulates a right to 'meaningful information about the logic involved', commonly interpreted as a 'right to an explanation', for consumers affected by an automated decision.

Explainability has been identified as a key factor for the adoption of AI systems. The reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness and enable diagnostics to prevent bias, unfairness, and discrimination, as well as to increase trust by all users in why and how decisions are made.

While interest in XAI had subsided together with that in expert systems after the mid-1980s, recent successes in machine learning technology have brought explainability back into focus. This has led to a plethora of new approaches for explaining black-box models, for both autonomous and human-in-the-loop systems, aiming to achieve explainability without sacrificing system performance (accuracy). Only a few of these approaches, however, focus on how to integrate and use domain knowledge to make the decisions of these systems more explainable and understandable to human users.

For that reason, an important foundational aspect of explainable AI remains mostly unexplored: Can the integration of domain knowledge, e.g., as modeled by means of ontologies and knowledge graphs, help the understandability of interpretable machine learning models?

Accepted Papers

The program committee is pleased to announce the list of papers accepted for presentation at DAO-XAI 2021.

Call for Papers

We welcome the submission of original contributions, investigating novel methodologies that integrate sub-symbolic and symbolic reasoning to build transparent and scrutable AI systems, and algorithms for the design of Trustworthy and Explainable decision support systems.

The 3rd edition of the Data meets Applied Ontologies Workshop series will take place as a bridge event between the 30th International Conference on Artificial Neural Networks (ICANN 2021) and the 34th International Workshop on Description Logics (DL 2021), two venues with a long tradition of research contributions related to sub-symbolic and symbolic reasoning respectively.

To this end, the 3rd edition of the Data meets Applied Ontologies Workshop will focus on the integration of sub-symbolic and symbolic reasoning, particularly, on the role played by explicit and formal knowledge, such as ontologies, knowledge graphs, knowledge bases, etc., in Explainable Artificial Intelligence.

We welcome the submission of original contributions, in the form of theoretical contributions, discussion papers, experimental contributions, system and demo descriptions of applications that make use of explicit and formal knowledge to enhance the explainability and trustworthiness of decision systems, including - but not limited to - the following topics of interest:

*Neural-symbolic Learning and Reasoning*

  • Cognitive computational systems integrating machine learning and automated reasoning
  • Knowledge representation and reasoning in machine learning and deep learning
  • Knowledge extraction and distillation from neural and statistical learning models
  • Representation and refinement of symbolic knowledge by artificial neural networks

*Human-centered Explanations, Usability*

  • Explanation formats exploiting domain knowledge
  • Visual exploratory tools of semantic explanations
  • Knowledge representation for human-centric explanations
  • Usability and acceptance of knowledge-enhanced semantic explanations
  • Evaluation of transparency and interpretability of AI Systems

*Applications of Ontologies for Explainability and Trustworthiness in Specific Domains*

  • Life sciences, health
  • Biomedicine
  • Humanities and social sciences
  • eGovernment

Submission instructions

We accept submissions of 6-12 pages in length (excluding references) of the following types:

  • Regular papers (max. 12 pages + references – CEUR WS format);
  • Short/Position papers (max. 6 pages + references – CEUR WS format);
  • Abstract (max. 2 pages + references – CEUR WS format) - not included in the proceedings.

All submitted papers will be evaluated based on originality, significance, relevance, and technical quality. Papers should be submitted non-anonymously in PDF format, following the CEUR-WS single-column formatting guidelines found at http://ceur-ws.org/Vol-XXX/

The direct template download for Latex and MS Word is available here: http://ceur-ws.org/Vol-XXX/CEURART.zip

There is also an Overleaf Template available here: https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-workshop-proceedings-ceur-ws-dot-org/hpvjjzhjxzjk

Submission Site: https://easychair.org/conferences/?conf=daoxai2021

Accepted papers will be published in a proceedings volume in the IAOA series of CEUR-WS (http://ceur-ws.org/iaoa.html). Please note that, due to a change of policy at CEUR-WS.org, abstracts are no longer indexed by dblp.org and cannot be included in the proceedings. They will be accepted for oral presentation only.

Authors who require a visa to travel to Slovakia and for this reason think that they need to receive notification earlier than 13 August may send an e-mail to roberto.confalonieri (at) unibz.it, including the submission number of the paper and a statement explaining the circumstances that require an earlier notification.

Special Issue

Extended versions of a selection of the papers presented at the workshop will be invited to be submitted to a Special Issue on 'The Role of Ontologies and Knowledge in Explainable AI' to be published in the Semantic Web Journal.

Program

DAO-XAI will take place on two days:

An overview of the program, together with AKMIS and DL, can be found here.

You can connect to the sessions on Saturday 18 September using Zoom or Skype for Business:

The workshop will be scheduled as follows:




Keynote Speaker

Luciano Serafini

Head of the Data and Knowledge Management Research Unit (Fondazione Bruno Kessler, Trento - ITALY)


Luciano Serafini is a joint invited speaker with DL 2021.

Title

Learning and Reasoning with Logic Tensor Networks: the framework and an application

Abstract

Logic Tensor Networks (LTN) is a theoretical framework and an experimental platform that integrates learning based on tensor neural networks with reasoning using first-order many-valued/fuzzy logic. LTN supports a wide range of reasoning and learning tasks with logical knowledge and data, allowing rich symbolic knowledge representation in first-order logic (FOL) to be combined with efficient data-driven machine learning based on the manipulation of real-valued vectors. In practice, FOL reasoning including function symbols is approximated through the usual iterative deepening of clause depth. Given data available in the form of real-valued vectors, logical soft and hard constraints and relations which apply to certain subsets of the vectors can be specified compactly in FOL. All the different tasks can be represented in LTN as a form of approximated satisfiability, reasoning can help improve learning, and learning from new data may revise the constraints, thus modifying reasoning. We apply LTNs to Semantic Image Interpretation (SII) in order to solve the following tasks: (i) the classification of an image's bounding boxes and (ii) the detection of the relevant part-of relations between objects. The results show that the use of background knowledge improves the performance of purely data-driven machine learning methods.
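The core idea sketched in the abstract — grounding predicates as functions from real-valued vectors to truth degrees, and turning logical connectives and quantifiers into differentiable operations whose aggregate value is a satisfaction level — can be illustrated with a minimal, self-contained example. This is only a sketch of the fuzzy-logic grounding principle, not the actual LTN implementation; the logistic predicates, the choice of product t-norm and Reichenbach implication, and the sample data are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fuzzy connectives on truth degrees in [0, 1]
def Not(a):          # standard negation
    return 1.0 - a

def And(a, b):       # product t-norm
    return a * b

def Implies(a, b):   # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

def Forall(truths):  # universal quantifier grounded as the mean truth degree
    return float(np.mean(truths))

# Ground two hypothetical unary predicates P and Q as logistic models
# over 2-d feature vectors (illustrative, randomly initialised).
rng = np.random.default_rng(0)
w_p, w_q = rng.normal(size=2), rng.normal(size=2)

def P(x): return sigmoid(x @ w_p)
def Q(x): return sigmoid(x @ w_q)

data = rng.normal(size=(100, 2))

# Satisfaction level of the constraint  forall x: P(x) -> Q(x)
# over the data; in LTN-style training, 1 - sat would serve as a
# differentiable loss driving the predicate parameters.
sat = Forall(Implies(P(data), Q(data)))
print(f"satisfaction of 'forall x: P(x) -> Q(x)': {sat:.3f}")
```

Because every operation is differentiable, maximising such a satisfaction level with gradient descent is what lets logical constraints and data-driven learning interact in the way the abstract describes.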

Register & Attend

DAO-XAI 2021 will be held as a hybrid event. In-person participants can find basic attendance information here. Please pay close attention to the Covid-19 situation and requirements in Slovakia.

You can connect to the sessions on Saturday 18 September using Zoom or Skype for Business:

Registration

DAO-XAI 2021 is closely co-located with AKMIS and DL. Registration is done via a joint registration form.

For each accepted paper or extended abstract, one author should register, either in person or online, for the 'Workshops only' or another package. You can find more information and registration fee combinations here.

A limited amount of support is still available for students who register a paper and/or attend in person. Please follow the instructions here.

Early registration ends at 23:59:59 CEST on August 31 (convert to your time zone).

The DAO-XAI in-person registration fee includes coffee breaks and lunch.

Free Online Registration

Each in-person participant is required to register for in-person participation and pay the respective registration fee. For each paper that will be presented online, one unique online registration and payment of the respective online fee is required.

Free registration will be granted to online participants who have not co-authored any paper, as well as to online participants each of whose co-authored papers either (a) will be presented in person by a paying registered co-author, or (b) will be presented online and is covered by one paid online registration by one of its co-authors.

The free access will be provided upon completion of a separate free online registration form that must be submitted by Sep 15, 2021.

DAO-XAI attendees might also be interested in registering for BAKS.

Terms and Conditions

Payments can be made by credit card or by wire transfer. In the latter case, payment details, including the registration number to be included in the payment message, will be sent by e-mail once registration is complete.

Modifications of registration and withdrawals are possible only until August 31 (CEST). Refunds will be made after deduction of a 10% handling surcharge.

See full Terms and Conditions for details.

Register now

Dates

  • July 5-10, 2021: Paper abstract registration
  • July 16, 2021: Paper submission deadline
  • August 13, 2021: Notification to authors
  • August 31, 2021: Early registration (before 23:59 CEST)
  • September 3, 2021: Camera-ready copies
  • September 15, 2021: Free registration (only eligible participants)

  • September 18-19, 2021: DAO-XAI 2021

Organisation

The workshop is organised by:
  • Roberto Confalonieri - Free University of Bozen-Bolzano, Faculty of Computer Science
  • Oliver Kutz - Free University of Bozen-Bolzano, Faculty of Computer Science
  • Diego Calvanese - Free University of Bozen-Bolzano, Faculty of Computer Science
  • Alessandro Mosca - Smart Data Factory, Faculty of Computer Science, Free University of Bozen-Bolzano

Programme Committee

  • Alberto J. Bugarín Diz, Universidad de Santiago de Compostela, Centro Singular de Investigación en Tecnoloxías Intelixentes
  • Bart Bogaerts, Vrije Universiteit Brussel, Belgium
  • Andreas Holzinger, Medical University Graz, Institute for Medical Informatics / Statistics
  • Daniele Porello, Laboratory for Applied Ontology, Institute of Cognitive Science and Technologies, National Research Council (CNR)
  • Enric Plaza, IIIA - Institut d’Investigació en Intel·ligència Artificial, CSIC - Spanish Council for Scientific Research
  • Franz Baader, Technische Universität Dresden - Fakultät Informatik, Institut für Theoretische Informatik
  • Ivan Donadello, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Jose Maria Alonso Moral, Universidad de Santiago de Compostela, Centro Singular de Investigación en Tecnoloxías Intelixentes
  • Loris Bozzato, Fondazione Bruno Kessler, DKM - Data and Knowledge Management Research Unit
  • Marco Schorlemmer, IIIA - Institut d’Investigació en Intel·ligència Artificial, CSIC - Spanish Council for Scientific Research
  • Michael Spranger, SONY AI Lab
  • Nicolas Troquard, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Pietro Galliani, Free University of Bozen-Bolzano, Faculty of Computer Science
  • Rafael Peñaloza, Università degli Studi di Milano-Bicocca, Information and Knowledge Representation, Retrieval, and Reasoning (IKR3)
  • Riccardo Guidotti, Knowledge Discovery and Data Mining Laboratory (KDDLab), Italian National Research Council
  • Stefan Schlobach, Vrije Universiteit Amsterdam, Faculty of Sciences, Department of Computer Science
...
Free University of Bozen-Bolzano
Faculty of Computer Science
Domenikanerplatz 3 - Piazza Domenicani
39100 Bozen-Bolzano, Italy