Article published in the Journal of Business Analytics

Together with colleagues at JMU Würzburg, we published a paper entitled "A social evaluation of the perceived goodness of explainability in machine learning" in the Journal of Business Analytics. 

Machine learning models in decision support systems already outperform pre-existing statistical methods. However, their predictions face challenges: the underlying calculations are often complex, and not all model predictions are traceable. In fact, many well-performing models are black boxes to the user, who consequently cannot interpret or understand the rationale behind a model's prediction. Explainable artificial intelligence has emerged as a field of study to counteract this. However, current research often neglects the human factor. Against this backdrop, we derived and examined factors that influence the goodness of a model's explainability in a social evaluation of end users. We implemented six common ML algorithms for four different benchmark datasets in a two-factor factorial design and asked potential end users to rate different factors in a survey. Our results show that the perceived goodness of explainability is moderated by the problem type and strongly correlates with trustworthiness as the most important factor.
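To illustrate the kind of setup described above (pairing several common ML algorithms with several benchmark datasets), the following is a minimal sketch. The specific algorithms and datasets chosen here are illustrative assumptions, not necessarily the ones used in the paper:

```python
# Hypothetical sketch of a models x datasets study grid; the concrete
# classifier and dataset choices below are assumptions for illustration.
from sklearn.datasets import load_iris, load_wine, load_breast_cancer, load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

datasets = {
    "iris": load_iris(),
    "wine": load_wine(),
    "breast_cancer": load_breast_cancer(),
    "digits": load_digits(),
}
models = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
    "naive_bayes": GaussianNB(),
}

# Fit every model on every dataset and record test accuracy,
# yielding one score per (dataset, model) cell of the grid.
results = {}
for dname, data in datasets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        data.data, data.target, test_size=0.3, random_state=0)
    for mname, model in models.items():
        model.fit(X_tr, y_tr)
        results[(dname, mname)] = model.score(X_te, y_te)
```

Each fitted model could then be presented to survey participants together with an explanation of its predictions, so that perceived explainability can be rated per algorithm and problem type.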

Last modified: 20.09.2021 - Contact: Webmaster