News

Winter Term 2022/23 Information

In the winter term 2022/23, we will offer the lecture AI-based Decision Support I, the scientific project Applications of Artificial Intelligence, and the bachelor's seminar in business and economics (Ethical, Legal, and Economic Aspects of Artificial Intelligence). As in every term, we also offer topics for master's theses.

Research paper published in IJIM

Together with colleagues from JMU Würzburg, we published a paper on the trade-off between explainability and accuracy in ML research.

Explaining AI system decision models to users is becoming ever more important. Mathematical and programmatic considerations alone, however, do not suffice to scrutinize applications involving human users.

We show that the tradeoff between performance and explainability should not be simplified as continuous, and that the data-driven interpretability of algorithms does not entail their explainability towards end users. Rather, algorithms currently fall into three groups of explainability that are somewhat distinct in their performance capabilities. Hence, we say: Stop Ordering Machine Learning Algorithms by Their Explainability.

The article is available here, open access, in the International Journal of Information Management, the #1 journal in the SJR ranking for Management Information Systems and for Information Systems and Management.

Article published and nominated for best paper at ECIS 2022

Together with colleagues at TU Dresden, we published a paper entitled "Where was COVID-19 first discovered? Designing a question-answering system for pandemic situations" in the ECIS 2022 proceedings, more specifically in the Design Science track. The paper won the Best Paper in Track award and was also nominated for Best Conference Paper.

The COVID-19 pandemic is accompanied by a massive "infodemic" that makes it hard to identify concise and credible information for COVID-19-related questions, like incubation time, infection rates, or the effectiveness of vaccines. As a novel solution, our paper is concerned with designing a question-answering system based on modern technologies from natural language processing to overcome information overload and misinformation in pandemic situations. To carry out our research, we followed a design science research approach and applied Ingwersen's cognitive model of information retrieval interaction to inform our design process from a socio-technical lens. On this basis, we derived prescriptive design knowledge in terms of design requirements and design principles, which we translated into the construction of a prototypical instantiation. Our implementation is based on the comprehensive CORD-19 dataset, and we demonstrate our artifact's usefulness by evaluating its answer quality based on a sample of COVID-19 questions labeled by biomedical experts.
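
The core retrieval step of such a question-answering pipeline can be illustrated with a minimal, purely illustrative sketch. The toy corpus, the bag-of-words cosine-similarity scoring, and all function names below are our own simplifications for exposition; the actual artifact is built on modern natural language processing models and the full CORD-19 dataset.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive tokenizer: lowercase and strip trailing punctuation.
    return [t.lower().strip(".,?") for t in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, passages):
    # Return the passage most similar to the question.
    q = Counter(tokenize(question))
    return max(passages, key=lambda p: cosine(q, Counter(tokenize(p))))

# Toy corpus of candidate answer passages (illustrative only).
passages = [
    "COVID-19 was first identified in Wuhan in December 2019.",
    "The incubation period of COVID-19 is typically 2 to 14 days.",
    "Vaccines reduce the risk of severe illness.",
]

print(retrieve("Where was COVID-19 first discovered?", passages))
```

A production system would replace the lexical matching with neural retrieval and add an answer-extraction stage, but the question-to-passage ranking shown here is the structural backbone.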

You can find a preprint of the article here.

Article published in the I3E 2021 conference proceedings

Together with colleagues at JMU Würzburg, we published a paper entitled "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability" in the I3E 2021 proceedings, covering topics of "Responsible AI and Analytics for an Ethical and Inclusive Digitized Society".

Numerous machine learning algorithms have been developed and applied in the field. Their application suggests a tradeoff between model performance and explainability: models with higher performance are often based on more complex algorithms and therefore lack interpretability or explainability, and vice versa. The true extent of this tradeoff remains unclear, although some theoretical assumptions exist. With our research, we explore this gap empirically in a user study. Using four distinct datasets, we measured the tradeoff for five common machine learning algorithms. Our two-factor factorial design covers low-stakes and high-stakes applications as well as classification and regression problems. Our results differ from the widespread assumption of a linear tradeoff and indicate that, from the end users' perspective, the tradeoff between model performance and model explainability is much less gradual. Moreover, we found it to be situational; hence, theory-based recommendations cannot be generalized across applications.

Last modified: 21.02.2024 - Contact: Webmaster