Comprehensive Evaluation of Feature Attribution Methods in Explainable AI via Input Perturbation

Explainable AI (XAI) has demonstrated its potential in deciphering discriminatory features in machine learning (ML) decision-making processes. Specifically, XAI’s feature attribution methods shed light on individual decisions made by ML models. However, despite their visual appeal, these attributions can be unfaithful. To ensure the faithfulness of feature attributions, it is …

Interpretable and Interactive Disease Diagnosis Using Collaborative Learning of Segmentation and Classification

Medical image analysis encompasses two crucial research areas: disease grading and fine-grained lesion segmentation. Although disease grading often relies on fine-grained lesion segmentation, they are usually studied separately. Disease severity grading can be approached as a classification problem, utilizing image-level annotations to determine the severity of a medical condition. On …