Explainable AI for Clinical Decision Support

DSpace Repository (Manakin-based)


Files:

Citable link (URI): http://hdl.handle.net/10900/176377
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1763777
http://dx.doi.org/10.15496/publikation-117702
Document type: Dissertation
Date of publication: 2026-03-05
Language: English
Faculty: 7 Faculty of Mathematics and Natural Sciences
Department: Computer Science
Reviewer: Lensch, Hendrik (Prof. Dr.)
Date of oral examination: 2026-02-09
DDC classification: 004 - Computer Science
Keywords: Artificial intelligence, Decision support, Explainable artificial intelligence
License: https://creativecommons.org/licenses/by/4.0/legalcode.de https://creativecommons.org/licenses/by/4.0/legalcode.en http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order printed copy: Print-on-Demand

Abstract:

The rapid adoption of artificial intelligence (AI) in healthcare presents a core challenge: the design of explainable, trustworthy, and workflow-aware clinical decision support systems (CDSSs) whose benefits are validated through real-world impact rather than proxy measures. Although explanations frequently promote trust and acceptance, they do not automatically enhance human-AI team performance, and the determinants of effective collaboration remain insufficiently understood. This gap persists in large part because rigorous, application-grounded evaluations that follow best practices are still uncommon in the field. This dissertation comprises four publications that adopt an evaluation-driven approach, focusing on the practical challenge of AI-supported arousal detection from polysomnography (PSG) data in sleep medicine, a task that remains underexplored in real-world clinical contexts. First, a method is developed to assess and improve the semantic coherence of intrinsically interpretable prototype classification models. Next, a comprehensive PSG dataset is compiled from clinical practice and released to support subsequent research. Building on this, a framework for optimizing and evaluating machine learning (ML) models for temporal event detection is introduced and used to guide the development of a domain- and task-aligned ML model for arousal detection in clinical practice. Finally, an application-grounded user study involving clinicians investigates how explanation transparency and workflow timing influence both human-AI team performance and acceptance in a real-world environment. The broader discussion illustrates the connections among the individual publications, clarifies their distinct contributions, and argues for the continued importance of direct human involvement as interest in more autonomous AI decision-making continues to grow.
By placing clinical needs at the center of the design, development, and evaluation of an explainable AI-based CDSS, this work demonstrates significant improvements in human-AI collaboration compared to unaided human performance, as well as advantages of transparent explanations over black-box AI systems. In addition, the dissertation synthesizes relevant conceptual foundations of explainable artificial intelligence (XAI) and CDSSs and examines contextual factors, including regulation, adoption barriers, and systems in practice, to situate the findings within the contemporary scientific and practical context. It thereby offers guidance for future XAI research and supports the translation of research into clinical practice.
