Paper on gradient-based explanations for survival models presented at ICML 2025

Vancouver, July 2025 – At the 2025 International Conference on Machine Learning (ICML), researchers presented their work on improving the interpretability of deep learning models in survival analysis through gradient-based explanation methods.

The contribution, titled “Gradient-Based Explanations for Machine Learning Survival Models”, was among the works accepted at the conference, one of the world’s leading venues for machine learning research.

The study addresses a key challenge in survival analysis: making deep learning models interpretable. The researchers developed a framework that generalizes gradient-based explanation methods—such as Saliency, Integrated Gradients, and Gradient×Input—to the time-to-event setting, enabling time-dependent feature attributions. This makes it possible to understand how and when different features influence survival predictions.
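For readers who want a concrete picture, the sketch below illustrates the time-dependent Gradient×Input idea in PyTorch. It is not the authors’ code: the SurvivalNet architecture, the sigmoid output head, and the function names are illustrative assumptions about a network that outputs survival probabilities on a fixed grid of time points.

```python
import torch
import torch.nn as nn

# Hypothetical survival network: maps a feature vector to survival
# probabilities S(t | x) evaluated on a fixed grid of n_times time points.
class SurvivalNet(nn.Module):
    def __init__(self, n_features: int, n_times: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_times),
            nn.Sigmoid(),  # assumption: probabilities in [0, 1] per time point
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def gradient_x_input_t(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Time-dependent Gradient×Input: one attribution per feature and time.

    Returns an (n_times, n_features) matrix whose (t, j) entry is
    x_j * dS(t | x) / dx_j, i.e. the classical Gradient×Input attribution
    computed separately for every point on the time grid.
    """
    x = x.clone().requires_grad_(True)
    surv = model(x.unsqueeze(0)).squeeze(0)          # shape (n_times,)
    attributions = []
    for t in range(surv.shape[0]):
        grad, = torch.autograd.grad(surv[t], x, retain_graph=True)
        attributions.append(grad * x)                # Gradient×Input at time t
    return torch.stack(attributions).detach()

model = SurvivalNet(n_features=5, n_times=10)
x = torch.randn(5)
attr = gradient_x_input_t(model, x)  # (10, 5): rows index time, columns features
```

Reading the resulting matrix row by row shows how a feature’s attribution evolves over the follow-up period, which is exactly the kind of time-resolved insight the framework is designed to provide.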

A central result is GradSHAP(t), a new method that achieves accuracy close to SurvSHAP(t) while offering significantly higher computational efficiency. This improvement makes large-scale and multimodal applications more feasible.
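The paper’s exact estimator is not reproduced here; the following sketch only conveys the general expected-gradients idea that such a method can build on, extended over the time grid. The function name gradshap_t and the baseline-sampling scheme are assumptions for illustration, reusing the hypothetical model from the sketch above.

```python
def gradshap_t(model: nn.Module, x: torch.Tensor,
               background: torch.Tensor, n_samples: int = 50) -> torch.Tensor:
    """Expected-gradients sketch over a time grid (illustrative, not the paper's code).

    Averages (x - baseline) * dS(t | x~)/dx~ over baselines drawn from a
    background dataset and random interpolation points x~, separately for
    each time point, yielding an (n_times, n_features) attribution matrix.
    """
    n_times = model(x.unsqueeze(0)).shape[1]
    total = torch.zeros(n_times, x.shape[0])
    for _ in range(n_samples):
        baseline = background[torch.randint(len(background), (1,))].squeeze(0)
        alpha = torch.rand(1)  # random point on the baseline-to-input path
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        surv = model(point.unsqueeze(0)).squeeze(0)
        for t in range(n_times):
            grad, = torch.autograd.grad(surv[t], point, retain_graph=True)
            total[t] += (x - baseline) * grad
    return (total / n_samples).detach()

background = torch.randn(100, 5)       # e.g. a sample of training inputs
attr = gradshap_t(model, x, background)
```

Because each sample requires only a forward and backward pass instead of model re-evaluations over feature coalitions, gradient-based estimators of this kind scale far better than sampling-based Shapley approximations, which is the efficiency advantage the article describes.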

The work highlights new possibilities for interpretable AI in medicine and other fields, where understanding the reasoning behind model predictions can enhance trust and support informed decision-making.
