Leveraging SHAP and LIME for Enhanced Explainability in AI-Driven Diagnostic Systems

Authors

  • Aravind Kumar Kalusivalingam
  • Amit Sharma
  • Neha Patel
  • Vikram Singh

Keywords:

Explainable AI, SHAP, LIME, AI, Model interpretability, Machine learning explainability, Healthcare AI, Diagnostic systems, Transparency in AI, Feature importance, Black-box models, Interpretability techniques, AI in healthcare, Explainability methods, SHAP vs. LIME, Model transparency, Clinical AI applications, Trust in AI systems, Decision support systems, Medical AI models, Deep learning interpretability, Fairness in AI diagnostics, User trust, AI model accountability, Enhancing AI trustworthiness, Explainability in healthcare, SHAP applications, LIME use cases, Interpreting AI predictions, Ethical AI in diagnostics, Integrating SHAP and LIME, AI model evaluation, Predictive analytics in medicine, Algorithm transparency, Improving AI reliability, Patient outcomes

Abstract

This research paper investigates the integration of SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to enhance the transparency and interpretability of AI-driven diagnostic systems, which are increasingly used in healthcare for predictive analytics and decision support. Given the black-box nature of many machine learning algorithms employed in these systems, there is a pressing need for interpretable models that engender trust among healthcare professionals. The study presents a comparative analysis of SHAP and LIME within various diagnostic contexts, assessing their effectiveness in elucidating model predictions. Methodologically, we applied both SHAP and LIME across multiple datasets from different clinical domains, taking into account factors such as model complexity, input feature importance, and contextual relevance of explanations. Our findings indicate that while both methods substantially improve model transparency, SHAP offers more consistent and globally coherent explanations, whereas LIME provides highly intuitive and context-specific insights at a local level. Additionally, the research evaluates user trust and acceptance through a survey of healthcare practitioners, highlighting their preference for explanations that align closely with medical knowledge. The paper concludes by discussing implications for the design of AI diagnostic tools, recommending a hybrid approach that leverages the strengths of both SHAP and LIME to achieve optimal explainability. This work contributes significantly to the field by providing a framework for integrating explanation models into AI systems, ultimately aiming to foster more informed clinical decision-making and improved patient outcomes.
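To make the comparison concrete, the sketch below illustrates how SHAP (global feature attributions) and LIME (local surrogate explanations) are commonly applied to a tabular diagnostic classifier. It is a minimal illustration only, not the authors' pipeline: the dataset (scikit-learn's breast-cancer data), the random-forest model, and all parameter choices are assumptions made for the example rather than details taken from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's experimental setup):
# SHAP for global feature importance, LIME for a single local explanation.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Public diagnostic-style dataset, used purely as a stand-in for clinical data.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global view: SHAP values summarise each feature's contribution across the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# Local view: LIME fits a sparse linear surrogate around one patient's prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_explanation.as_list())
```

Used together in this way, the SHAP output supports the globally coherent, dataset-level explanations the abstract attributes to SHAP, while the LIME call yields the case-specific, locally intuitive explanation preferred for individual predictions, mirroring the hybrid approach the paper recommends.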

Published

2021-02-15

How to Cite

Leveraging SHAP and LIME for Enhanced Explainability in AI-Driven Diagnostic Systems. (2021). International Journal of AI and ML, 2(3). https://www.cognitivecomputingjournal.com/index.php/IJAIML-V1/article/view/81