Leveraging imperfection with MEDLEY: a multi-model approach harnessing bias in medical AI
2026 (English). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 9, article id 1701665. Article in journal (Refereed). Published.
Abstract [en]
Bias in medical artificial intelligence is conventionally viewed as a defect that requires elimination. However, human reasoning inherently incorporates biases shaped by education, culture, and experience, suggesting their presence may be inevitable and potentially valuable. We propose MEDLEY (Medical Ensemble Diagnostic system with Leveraged diversitY), a conceptual framework that orchestrates multiple AI models while preserving their diverse outputs rather than collapsing them into a consensus. Unlike traditional approaches that suppress disagreement, MEDLEY documents model-specific biases as potential strengths and treats hallucinations as provisional hypotheses for clinician verification. A proof-of-concept demonstrator for differential diagnosis was developed using over 30 large language models, preserving both consensus and minority views, rendering diagnostic uncertainty and latent biases transparent to support clinical oversight. While not yet a validated clinical tool, the demonstration illustrates how structured diversity can enhance medical reasoning under the supervision of clinicians. By reframing AI imperfection as a resource, MEDLEY offers a paradigm shift that opens new regulatory, ethical, and innovation pathways for developing trustworthy medical AI systems.
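The aggregation idea in the abstract — polling many models and preserving both consensus and minority diagnoses rather than collapsing them into a single answer — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `medley_aggregate`, the model names, and the diagnosis lists are all hypothetical, and the majority threshold is an assumed design choice.

```python
from collections import Counter

def medley_aggregate(model_outputs):
    """Aggregate per-model differential diagnoses, keeping both
    consensus and minority views visible for clinician review.

    model_outputs: dict mapping model name -> list of proposed diagnoses.
    """
    counts = Counter()
    provenance = {}
    for model, diagnoses in model_outputs.items():
        for dx in diagnoses:
            counts[dx] += 1
            # Record which model proposed which hypothesis, so latent
            # model-specific biases remain traceable rather than hidden.
            provenance.setdefault(dx, []).append(model)

    n_models = len(model_outputs)
    return {
        # Diagnoses proposed by a majority of models (assumed threshold).
        "consensus": [dx for dx, c in counts.items() if c > n_models / 2],
        # Minority views are preserved as provisional hypotheses,
        # not discarded as noise.
        "minority": [dx for dx, c in counts.items() if c <= n_models / 2],
        "provenance": provenance,
    }

# Illustrative (fabricated) outputs from three hypothetical models:
outputs = {
    "model_a": ["pneumonia", "pulmonary embolism"],
    "model_b": ["pneumonia"],
    "model_c": ["pneumonia", "heart failure"],
}
print(medley_aggregate(outputs))
```

In this sketch, "pneumonia" would surface as the consensus while the embolism and heart-failure hypotheses remain listed with their originating models, mirroring the paper's framing of disagreement as information for clinical oversight rather than error to suppress.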
Place, publisher, year, edition, pages
Frontiers Media S.A., 2026. Vol. 9, article id 1701665
Keywords [en]
AI regulation and governance, bias and fairness in AI, clinical decision support systems, diagnostic uncertainty, hallucination in large language models, human-in-the-loop AI, medical artificial intelligence, multi-model and ensemble learning
National Category
Other Medical Sciences not elsewhere specified; Other Computer and Information Science
Research subject
Textiles and Fashion (General)
Identifiers
URN: urn:nbn:se:hb:diva-35457
DOI: 10.3389/frai.2026.1701665
ISI: 001717350300001
PubMedID: 41858846
Scopus ID: 2-s2.0-105033261448
OAI: oai:DiVA.org:hb-35457
DiVA, id: diva2:2050878
2026-04-07. Bibliographically approved