The Interplay Between Explainability and Differential Privacy in Federated Healthcare
Sharing patient data across hospitals is essential for training powerful AI, but privacy laws demand strong safeguards. Differential Privacy (DP) is one such safeguard: it adds noise to protect individuals, yet this noise also weakens both model accuracy and interpretability. We asked: what happens when explainable AI meets DP in real federated healthcare? In a cross-silo federation combining BraTS 2025 clients with a real hospital dataset, we discovered a heterogeneity amplifier effect: DP noise hurts performance and explanations much more on hospitals with unique, diverse data than on standardized clients. Standard saliency methods (Grad-CAM, Seg-Grad-CAM) often collapsed under strong DP, leaving explanations unusable. To address this, we developed BID-CAM, a hybrid explanation method that disentangles tumor boundaries from tumor interiors and is explicitly DP-aware. BID-CAM preserved far more coherent explanations under privacy constraints, especially for heterogeneous clients, making private federated models locally interpretable again.
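For readers unfamiliar with the two mechanisms at play, the sketch below is illustrative only and is not our pipeline or BID-CAM: it shows a simplified DP-SGD-style update (gradient clipping plus Gaussian noise) and a basic Grad-CAM computation on a toy segmentation model, which together hint at why noisy training can wash out saliency maps. The model `TinySegNet`, the helper names, the clipping bound, and the noise multiplier are all assumptions chosen for the example.

```python
# Minimal, self-contained sketch (NOT the project's code):
# (1) a DP-SGD-like update that clips the gradient norm and adds Gaussian noise;
# (2) Grad-CAM on a toy segmentation model, deriving saliency from the gradient
#     of a target score with respect to a convolutional feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder standing in for a segmentation backbone (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(8, 1, 1)  # per-pixel tumor logit

    def forward(self, x):
        feats = self.features(x)
        return self.head(feats), feats

def dp_noisy_step(model, loss, max_grad_norm=1.0, noise_multiplier=1.0, lr=1e-3):
    """Simplified DP-SGD-style update: clip the gradient norm, then add noise.
    (Real DP-SGD clips per-sample gradients; this sketch clips the batch gradient.)"""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p.grad) * noise_multiplier * max_grad_norm
            p.data -= lr * (p.grad + noise)

def grad_cam(model, image, mask_region=None):
    """Grad-CAM for a segmentation score: weight feature maps by the gradient of
    the summed tumor logits, then ReLU (Seg-Grad-CAM restricts the sum to a region)."""
    image = image.requires_grad_(True)
    logits, feats = model(image)
    feats.retain_grad()
    score = logits.sum() if mask_region is None else (logits * mask_region).sum()
    score.backward()
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # channel-wise weights
    cam = F.relu((weights * feats).sum(dim=1))            # weighted combination
    return cam / (cam.max() + 1e-8)                       # normalise to [0, 1]

if __name__ == "__main__":
    model = TinySegNet()
    x = torch.randn(1, 1, 64, 64)                 # fake single-channel slice
    y = (torch.rand(1, 1, 64, 64) > 0.8).float()  # fake tumor mask
    logits, _ = model(x)
    loss = F.binary_cross_entropy_with_logits(logits, y)
    dp_noisy_step(model, loss)    # noisy, clipped update
    cam = grad_cam(model, x)      # saliency after the noisy step
    print(cam.shape)              # torch.Size([1, 64, 64])
```

Because the noise enters every parameter update, the gradients that Grad-CAM relies on become less informative, which is the failure mode the paragraph above describes.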
Our role in this project
We quantified how differential privacy noise degrades both accuracy and explanation fidelity in federated medical imaging. We also identified the heterogeneity amplifier effect, showing that clients with unique data suffer disproportionate degradation. Finally, we introduced BID-CAM, a DP-robust hybrid explanation method that outperforms existing saliency tools under privacy constraints.
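As a hedged illustration of what "quantifying explanation degradation" can look like in code (the paragraph above does not specify the fidelity metric we used), the snippet below compares a saliency map from a DP-trained model against one from a non-private model using two generic measures: Pearson correlation of the raw maps and IoU of the thresholded maps. The function names and the 0.5 threshold are assumptions for the example.

```python
# Illustrative comparison of a private vs. non-private saliency map;
# the metrics here are generic stand-ins, not the project's evaluation protocol.
import torch

def saliency_correlation(cam_private: torch.Tensor, cam_clean: torch.Tensor) -> float:
    """Pearson correlation between two saliency maps flattened to vectors."""
    a = cam_private.flatten() - cam_private.mean()
    b = cam_clean.flatten() - cam_clean.mean()
    return float((a @ b) / (a.norm() * b.norm() + 1e-8))

def saliency_iou(cam_private, cam_clean, threshold=0.5) -> float:
    """IoU of the highly-attributed regions after thresholding both maps."""
    p = cam_private > threshold
    c = cam_clean > threshold
    inter = (p & c).sum().item()
    union = (p | c).sum().item()
    return inter / union if union else 1.0

if __name__ == "__main__":
    cam_clean = torch.rand(64, 64)                                      # non-private map
    cam_private = (cam_clean + 0.3 * torch.randn(64, 64)).clamp(0, 1)   # noisier map
    print(saliency_correlation(cam_private, cam_clean),
          saliency_iou(cam_private, cam_clean))
```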
Collaborators

