Zhaohui Su, VP of Biostatistics at Ontada, shared a post on LinkedIn:
“This study describes a privacy-preserving approach that combines Federated Learning (FL) with Differential Privacy (DP) in healthcare. FL allows multiple healthcare institutions to collaboratively train a shared model without exchanging raw patient data, thereby minimizing privacy risks and enhancing compliance with regulations such as HIPAA and GDPR. To further protect sensitive information, DP incorporates calibrated Gaussian noise into model updates, which prevents adversaries from reconstructing patient data.
Using the Breast Cancer Wisconsin Diagnostic dataset, the authors evaluate three methodologies: a centralized (non-FL) model, FL, and FL with DP. The results show that FL achieves the highest accuracy at 97.7%, surpassing the centralized model’s 96.0%. While incorporating DP leads to a slight accuracy decrease to 96.1%, it significantly bolsters privacy protection. The study also addresses key challenges such as data heterogeneity, communication overhead, and adversarial threats, concluding that DP is the most practical privacy technique due to its scalability. Overall, the FL-DP framework presents a robust balance between privacy protection and diagnostic performance in real-world clinical applications.”
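
As a rough illustration of the FL-with-DP recipe described in the post (not the authors' implementation, whose details the post does not give), the sketch below simulates a few hospital clients on the Breast Cancer Wisconsin Diagnostic dataset, clips each client's logistic-regression update, adds calibrated Gaussian noise, and averages the noised updates into a global model. The model choice, client count, clipping norm, and noise multiplier are illustrative assumptions, and no formal (epsilon, delta) accounting is performed, so the output is a sanity check rather than a reproduction of the reported 96-98% accuracies.

```python
# Minimal sketch of federated averaging with differentially private (clipped +
# Gaussian-noised) client updates on the Breast Cancer Wisconsin Diagnostic dataset.
# Illustrative only: hyperparameters are assumptions, no formal privacy accounting.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)        # WDBC: 569 samples, 30 features, binary labels
X = StandardScaler().fit_transform(X)
X = np.hstack([X, np.ones((len(X), 1))])          # append a bias column

NUM_CLIENTS, ROUNDS, LOCAL_EPOCHS = 5, 30, 5      # simulated hospitals and training schedule
LR, CLIP_NORM, NOISE_MULT = 0.1, 1.0, 0.1         # assumed learning rate and DP parameters

# Partition the data across simulated institutions (IID split for simplicity).
client_idx = np.array_split(rng.permutation(len(X)), NUM_CLIENTS)

def local_update(w, Xc, yc):
    """A few epochs of logistic-regression gradient descent on one client's data."""
    w = w.copy()
    for _ in range(LOCAL_EPOCHS):
        p = 1.0 / (1.0 + np.exp(-Xc @ w))         # sigmoid predictions
        w -= LR * Xc.T @ (p - yc) / len(yc)       # gradient step on the logistic loss
    return w

w_global = np.zeros(X.shape[1])
for _ in range(ROUNDS):
    noised_updates = []
    for ci in client_idx:
        delta = local_update(w_global, X[ci], y[ci]) - w_global
        # DP step: clip the client's update to CLIP_NORM, then add Gaussian noise
        # calibrated to the clip norm before the update leaves the client.
        delta *= min(1.0, CLIP_NORM / (np.linalg.norm(delta) + 1e-12))
        delta += rng.normal(0.0, NOISE_MULT * CLIP_NORM, size=delta.shape)
        noised_updates.append(delta)
    w_global += np.mean(noised_updates, axis=0)   # federated averaging of the noised updates

preds = (X @ w_global > 0).astype(int)
print("FL+DP training accuracy:", round((preds == y).mean(), 3))
```

The clip-then-noise step reflects the idea that no single institution's contribution can dominate or be reconstructed from the aggregated update; in a real deployment the noise scale would be set by a privacy accountant rather than hand-picked.
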
Title: Federated learning with differential privacy for breast cancer diagnosis enabling secure data sharing and model integrity
Authors: Shubhi Shukla, Suraksha Rajkumar, Aditi Sinha, Mohamed Esha, Konguvel Elango, Vidhya Sampath
Read the Full Article.
