
Protecting patients' right to be forgotten (RtbF) in the digital era calls for algorithms that delete health records from deployed artificial intelligence (AI) models on request, a paradigm also known as machine unlearning. Such forgetting, however, alters an AI model's view of subpopulations and can yield discriminatory clinical decisions that undermine healthcare equality and erode patient trust, a phenomenon confirmed by our thorough investigations. This observation reveals an ethical dilemma in clinical AI: protecting the RtbF of certain patients compromises the delivery of equitable decisions and thus conflicts with the AI fairness principle, necessitating a safeguarding approach. To this end, we propose a fair unlearning strategy that effectively removes medical records from trained models while mitigating decision biases to improve algorithmic equality. This strategy avoids negative interference between potentially conflicting objectives by enforcing their gradient orthogonality. We perform extensive evaluations on real-world, multi-hospital datasets for rapid COVID-19 screening, in-hospital mortality and shock predictions. Compared with state-of-the-art approaches, our method teaches models to forget patient records more effectively with better predictive performance while, more importantly, mitigating demographic unfairness across subpopulations of ethnicities and sites (hospitals), providing the first generalised solution to the dilemma between RtbF and fairness.
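The abstract describes enforcing gradient orthogonality between conflicting objectives but does not give the update rule here. As an illustrative sketch only (function and variable names are assumptions, not the authors' code), one common way to avoid negative interference between two objectives is a PCGrad-style projection: when the unlearning gradient conflicts with the fairness gradient, project out the conflicting component so the two updates are orthogonal.

```python
import numpy as np

def project_orthogonal(g_unlearn, g_fair):
    """If the two task gradients conflict (negative dot product),
    project g_unlearn onto the plane orthogonal to g_fair,
    removing the component that would harm the fairness objective."""
    dot = np.dot(g_unlearn, g_fair)
    if dot < 0:
        g_unlearn = g_unlearn - (dot / np.dot(g_fair, g_fair)) * g_fair
    return g_unlearn

# Toy conflicting gradients: the unlearning step would push
# against the fairness direction.
g_u = np.array([1.0, -1.0])
g_f = np.array([0.0, 1.0])
g_u_adj = project_orthogonal(g_u, g_f)
print(g_u_adj)                 # [1. 0.]
print(np.dot(g_u_adj, g_f))    # 0.0 -> orthogonal to the fairness gradient
```

After projection, the adjusted unlearning update no longer opposes the fairness objective, which is the intuition behind avoiding negative interventions between the two goals.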

Original publication

DOI

10.1038/s41467-026-72601-7

Type

Journal article

Publication Date

2026-05-04