
PRISM: An Explainable Generative AI Model for Medical Imaging



What if radiologists could use AI systems as reliable diagnostic assistants? Our new paper brings this closer to reality: using generative AI, we can visually explore alternative scenarios based on real X-ray images, such as what a patient’s body would look like without a specific disease. This gives doctors safe and explainable AI tools for making better decisions about patient care.

PRISM: High-Resolution & Precise Counterfactual Medical Image Generation using Language-guided Stable Diffusion (accepted as an oral presentation at MIDL 2025) gives clinicians a model that not only explains the content of medical images, but also generates high-resolution, precise "counterfactual" versions of them to personalize patient care and build trust in the decision process.

By fine-tuning a vision-language foundation model (Stable Diffusion) on thousands of medical images, we developed an AI model that is both trustworthy and explainable enough for use in real-world clinical settings.

PRISM is:

  • Easy to use with natural language;
  • Trustworthy as it focuses on image-based explainability;
  • Adaptable to multiple medical imaging scenarios.

Opening the black box

A major hurdle for using AI in real-world medical scenarios is that most of the model’s decisions are hidden: often, the only information the doctor sees is the output (e.g., “the given medical image shows a healthy or a sick patient”).

Most current medical imaging AI models rely on classifiers: “black-box” models that output a disease diagnosis without revealing how a specific decision is made. This lack of explainability makes such solutions challenging to deploy in real medical settings, where the decision-making process is just as crucial as the diagnosis itself.

To increase trust in AI medical imaging tools, we investigated where the model looks to determine the patient’s condition and what informs its decisions. Our goal is to open the black box to see what is behind the curtain: if a patient is sick, we want to know why.

Trust in the process

We addressed the fundamental problem of black-box models by generating high-resolution counterfactual images — alternate scenarios where a specific attribute is changed, e.g., when a disease pathology or a medical device is removed from the original image.

The model performs precise editing while ignoring other confounders of the disease (hidden factors that can create a misleading association between an exposure and the disease). Subtracting the counterfactual image from the factual one shows exactly which areas had to change to create the generated image, highlighting the regions the model associates with the disease.
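
To illustrate the idea rather than reproduce the paper’s exact method, here is a minimal sketch of such a difference map, assuming the factual and counterfactual X-rays are already available as image files (file names are placeholders):

```python
import numpy as np
from PIL import Image

# Load a factual X-ray and its generated counterfactual (placeholder file names).
factual = np.asarray(Image.open("factual_xray.png").convert("L"), dtype=np.float32) / 255.0
counterfactual = np.asarray(Image.open("counterfactual_xray.png").convert("L"), dtype=np.float32) / 255.0

# The absolute pixel-wise difference highlights the regions the model changed,
# i.e., the areas it associates with the attribute being edited.
difference_map = np.abs(factual - counterfactual)

# Normalize to [0, 1] for visualization and save as a grayscale map.
if difference_map.max() > 0:
    difference_map /= difference_map.max()
Image.fromarray((difference_map * 255).astype(np.uint8)).save("difference_map.png")
```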

PRISM also ignores spurious correlations (or shortcuts) present in the dataset that could affect a model’s ability to generalize (i.e., adapt to unseen data): a model learning what a disease looks like by studying images of sick patients could falsely associate the disease with the device used to treat it, such as a chest tube or a pacemaker.

Increasing accessibility

To make the model even more accessible, we enabled language guidance: a doctor can prompt the model to generate a sick patient’s image from a healthy patient’s X-ray, and the model will synthesize the requested image.
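
As an illustration of what language guidance could look like in practice (not the paper’s released pipeline), here is a minimal sketch using the Hugging Face diffusers image-to-image API; the checkpoint path, file names, and prompt are hypothetical:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion image-to-image pipeline. The checkpoint path below is a
# placeholder for a model fine-tuned on chest X-rays, not the actual PRISM weights.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/finetuned-xray-checkpoint",
    torch_dtype=torch.float16,
).to("cuda")

# Start from a healthy patient's X-ray and describe the desired counterfactual in plain language.
healthy_xray = Image.open("healthy_xray.png").convert("RGB")
prompt = "chest X-ray showing pleural effusion"

# A low strength keeps the output close to the original so only the requested attribute changes.
counterfactual = pipe(
    prompt=prompt,
    image=healthy_xray,
    strength=0.4,
    guidance_scale=7.5,
).images[0]
counterfactual.save("counterfactual_xray.png")
```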

PRISM could be developed into back-end software for medical image analysis or integrated into existing tools, serving as an effective AI assistant for practicing radiologists or a reliable training tool for future doctors. Our open-source weights allow further fine-tuning to fit even more medical imaging scenarios in the future.

