News

Article published in the I3E 2021 conference proceedings

Together with colleagues at JMU Würzburg, we published a paper entitled "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability" in the I3E 2021 proceedings on "Responsible AI and Analytics for an Ethical and Inclusive Digitized Society" (ISBN 978-3-030-85447-8).

Numerous machine learning algorithms have been developed and applied in the field. Their application indicates that there seems to be a tradeoff between model performance and explainability: machine learning models with higher performance are often based on more complex algorithms and therefore lack interpretability or explainability, and vice versa. While some theoretical assumptions exist, the true extent of this tradeoff remains unclear. With our research, we explore this gap empirically in a user study. Using four distinct datasets, we measured the tradeoff for five common machine learning algorithms. Our two-factor factorial design covers low-stake and high-stake applications as well as classification and regression problems. Our results differ from the widespread linear assumption and indicate that the tradeoff between model performance and model explainability is much less gradual when end user perception is taken into account. Further, we found the tradeoff to be situational. Hence, theory-based recommendations cannot be generalized across applications.
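
For readers who want a concrete sense of the performance side of such a comparison, here is a minimal sketch. The five algorithms and the dataset are our assumptions for illustration and not necessarily those used in the paper; the explainability side of the tradeoff was measured through end user ratings in the study and cannot be reproduced in code.

# Minimal sketch (assumptions ours, not the paper's setup): comparing the
# predictive performance of five common ML algorithms on one benchmark dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Five algorithm choices assumed purely for illustration.
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest": RandomForestClassifier(random_state=42),
    "support vector machine": SVC(),
    "neural network": MLPClassifier(max_iter=2000, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")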

Article published in the Journal of Business Analytics

Together with colleagues at JMU Würzburg, we published a paper entitled "A social evaluation of the perceived goodness of explainability in machine learning" in the Journal of Business Analytics. 

Machine learning in decision support systems already outperforms pre-existing statistical methods. However, its predictions face challenges, as calculations are often complex and not all model predictions are traceable. In fact, many well-performing models are black boxes to the user, who consequently cannot interpret and understand the rationale behind a model's prediction. Explainable artificial intelligence has emerged as a field of study to counteract this. However, current research often neglects the human factor. Against this backdrop, we derived and examined factors that influence the goodness of a model's explainability in a social evaluation by end users. We implemented six common machine learning algorithms for four different benchmark datasets in a two-factor factorial design and asked potential end users to rate different factors in a survey. Our results show that the perceived goodness of explainability is moderated by the problem type and strongly correlates with trustworthiness as the most important factor.
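
As a rough illustration of this kind of survey analysis, the sketch below correlates two simulated rating variables. The data are entirely hypothetical and only mimic the shape of such an evaluation; they are not the study's data and do not reproduce its results.

# Minimal sketch with simulated data (not the study's): correlating end users'
# perceived goodness of explainability with trustworthiness ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 120  # hypothetical number of survey participants

# Simulated 7-point Likert ratings; the positive association is built in
# purely to illustrate the analysis step, not to reproduce the findings.
trustworthiness = rng.integers(1, 8, size=n).astype(float)
explainability = np.clip(np.round(trustworthiness + rng.normal(0, 0.8, n)), 1, 7)

r, p = pearsonr(trustworthiness, explainability)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")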

Fundamentals article published in Electronic Markets

Artificial intelligence technology has started to shape how decisions are made and how intelligent information systems are implemented today. However, artificial intelligence should not be treated as an abstract system property; researchers and practitioners must also understand its inner workings, as these affect many socio-technical issues downstream.

In our Electronic Markets fundamentals article “Machine Learning and Deep Learning”, together with co-authors Christian Janiesch and Kai Heinrich, we distinguish approaches for shallow machine learning and deep learning and explain the process of analytical model building from a more technical information systems perspective. Further, we detail four overarching challenges that research and practice will have to manage going forward.
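
As a loose illustration of that distinction, the following sketch contrasts a shallow learner on preprocessed features with a multi-layer network on the same task. The concrete models and dataset are our choice for illustration and are not taken from the article.

# Minimal sketch (assumptions ours): shallow machine learning vs. a deeper,
# representation-learning model on the same classification task.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow machine learning: explicit preprocessing plus a kernel SVM.
shallow = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)

# Deep learning stand-in: a multi-layer perceptron that learns intermediate
# representations across several hidden layers.
deep = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500, random_state=0),
).fit(X_train, y_train)

print(f"shallow SVM accuracy: {shallow.score(X_test, y_test):.3f}")
print(f"deep MLP accuracy:    {deep.score(X_test, y_test):.3f}")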

The article aims to serve as a terminological baseline, a gentle introduction, and a pointer to relevant work, as well as a motivation to tackle these challenges.

The article is open access and available at: https://link.springer.com/article/10.1007%2Fs12525-021-00475-2

Research papers accepted at ECIS 2021

Together with colleagues from Würzburg, FAU, and KIT, we published two papers at the ECIS 2021 conference, which was held virtually in June.

The first paper, a joint effort with colleagues from KIT (Jannis Walk, Niklas Kühl, and Michael Vössing) and FAU (Patrick Zschech), proposes design principles and testable propositions for computer-vision-based hybrid intelligence systems. Besides technical aspects, we also focused on the often neglected socio-technical facets, such as trust, control, and autonomy.

The paper is available here: https://aisel.aisnet.org/ecis2021_rp/127/

The second paper, written together with colleagues from JMU Würzburg (Jonas Wanner, Laurell Popp, Kevin Fuchs, and Christian Janiesch), explores adoption barriers for AI-based systems in the context of predictive maintenance.

The paper is available here: https://aisel.aisnet.org/ecis2021_rip/40/

 
