Data Security and Privacy in Machine Unlearning:

Recent Advances, Challenges, and Future Perspectives

ICDM 2025 Tutorial | Washington, DC, USA

Aobo Chen, Wei Qian, Zheyuan Liu, Shagufta Mehnaz, Tianhao Wang, Mengdi Huai

Tutorial Date: TBD

Tutorial Abstract

Machine unlearning enables the removal of specific data or knowledge from trained models without retraining from scratch, thereby operationalizing the privacy principle of the right to be forgotten. It has gained significant attention in recent years. However, machine unlearning also introduces inherent vulnerabilities and threats, posing significant security and privacy challenges for researchers and practitioners. This tutorial focuses on three aspects: (1) providing a comprehensive review of security and privacy challenges in machine unlearning from the data mining perspective; (2) introducing cutting-edge techniques for mitigating security and privacy risks in machine unlearning from both the data and model perspectives; and (3) identifying open challenges and promising future research directions in robust unlearning. We believe this is an emerging and potentially high-impact topic that will attract researchers and practitioners from both academia and industry.
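To make the setting concrete, the sketch below illustrates one common baseline for approximate unlearning: gradient ascent on the forget set, anchored by gradient descent on retained data. This is a minimal illustration rather than any specific method covered in the tutorial; all names (model, forget_loader, retain_loader) are hypothetical placeholders, and practical methods add safeguards such as utility constraints or certified guarantees [30].

```python
# Minimal, illustrative sketch of approximate unlearning via gradient ascent
# on the forget set. All names are hypothetical placeholders; real methods
# add safeguards (utility constraints, influence estimates, certification).
import torch
import torch.nn.functional as F

def gradient_ascent_unlearn(model, forget_loader, retain_loader,
                            lr=1e-4, steps=100, retain_weight=1.0):
    """Push the model away from forget-set data while anchoring it on retained data."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    retain_iter = iter(retain_loader)
    for step, (x_f, y_f) in enumerate(forget_loader):
        if step >= steps:
            break
        # Gradient *ascent* on the forget batch: maximize its loss.
        forget_loss = -F.cross_entropy(model(x_f), y_f)
        # Gradient descent on a retain batch to preserve model utility.
        try:
            x_r, y_r = next(retain_iter)
        except StopIteration:
            retain_iter = iter(retain_loader)
            x_r, y_r = next(retain_iter)
        retain_loss = F.cross_entropy(model(x_r), y_r)
        loss = forget_loss + retain_weight * retain_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```

The tension visible in this sketch, between forgetting thoroughly and preserving utility, is exactly what adversaries exploit in the attacks surveyed in Parts 3 and 4.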

Target Audience and Prerequisites

Participants are expected to have basic knowledge of linear algebra, data mining, and machine learning. Audience members with experience in fields such as security, adversarial robustness, privacy, machine unlearning, foundation models, and large language models are especially encouraged to participate in the Q&A sessions during the tutorial or to engage in offline discussions. Participants will gain insights into preserving data privacy and security while harnessing the power of machine unlearning, a crucial skill for enforcing “the right to be forgotten”. We expect around 30 participants.

Tutorial Materials

All materials for this tutorial, including slides and a video recording, will be made available on this page.

Tutorial Modules

Part 1: Introduction (10 minutes)

Part 2: Security and Privacy (20 minutes)

Part 3: Security Attacks and Countermeasures in Machine Unlearning (30 minutes)

Part 4: Privacy Attacks and Countermeasures in Machine Unlearning (30 minutes)

Part 5: Summary and Future Work (10 minutes)
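As a concrete preview of the privacy attacks covered in Part 4, the following sketch illustrates the intuition behind unlearning-based membership inference in the spirit of [27]: an adversary with query access to both the original and the unlearned model compares their posteriors on a target sample, and a large divergence suggests the sample was in the deleted set. All names are hypothetical placeholders, and the actual attack in [27] trains a classifier on such posterior pairs rather than thresholding a single distance.

```python
# Illustrative sketch of unlearning-based membership inference in the
# spirit of [27]. Model and variable names are hypothetical placeholders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def unlearning_inference_score(original_model, unlearned_model, x):
    """Distance between the two models' posteriors on target samples.

    A large shift is evidence that a sample was in the deleted (forget) set.
    """
    original_model.eval()
    unlearned_model.eval()
    p_before = F.softmax(original_model(x), dim=-1)
    p_after = F.softmax(unlearned_model(x), dim=-1)
    # L2 distance between posterior vectors; [27] instead feeds the
    # concatenated posteriors to a learned attack classifier.
    return torch.norm(p_before - p_after, p=2, dim=-1)

# Hypothetical usage: flag samples whose posteriors moved the most.
# scores = unlearning_inference_score(model_v1, model_v2, candidate_batch)
# suspected_deleted = scores > threshold  # threshold tuned on shadow models
```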

Contributors

Aobo Chen

Aobo Chen (aobochen@iastate.edu) is a Ph.D. student in the Department of Computer Science at Iowa State University (ISU). His research focuses on data mining and machine learning, with particular emphasis on the security and privacy vulnerabilities of modern models. His recent work investigates adversarial risks, machine unlearning, privacy leakage, and robustness challenges in trustworthy AI. He received Publication Awards from the Computer Science department at ISU in 2024 and 2025 for his research contributions.

Wei Qian

Wei Qian (wqi@iastate.edu) is a Ph.D. student in the Department of Computer Science at ISU. His research interests are data mining and machine learning; more specifically, he works on adversarial robustness, security and privacy in machine unlearning, and trustworthy AI. He has earned multiple awards, including Publication Awards from the Computer Science department at ISU in 2024 and 2025 and the Graduate College Research Excellence Award at Iowa State University.

Zheyuan Liu

Zheyuan Liu (zliu29@nd.edu) is a Ph.D. student in Computer Science and Engineering at the University of Notre Dame. His research interests lie in post-training techniques, generative AI, and agentic AI. In particular, he focuses on developing methods such as machine unlearning to enhance the trustworthiness of generative AI, including its privacy, safety, and fairness. He has received multiple awards and fellowships, including the Professional Development Award from the Department of Computer Science and Engineering at Notre Dame in 2024, and one of his recent unlearning works has been granted a U.S. patent.

Shagufta Mehnaz

Dr. Shagufta Mehnaz (smehnaz@psu.edu) is an assistant professor in the Department of Computer Science and Engineering at the Pennsylvania State University. Her research interests lie at the intersection of security, privacy, and machine learning, with a focus on topics such as secure and privacy-preserving machine unlearning. Her research has been published in top-tier security/privacy and ML/AI/data mining conferences. She has earned multiple awards, including the NSF CAREER Award, the Best Paper Award at ACM CODASPY 2017, the Bilsland Dissertation Fellowship (Purdue University), and the Faculty for the Future Fellowship.

Tianhao Wang

Dr. Tianhao Wang (tianhao@virginia.edu) is a data privacy and security researcher who joined the University of Virginia (UVA) in 2022. He is an assistant professor in the Department of Computer Science and, by courtesy, at the School of Data Science. He has published extensively in top security and database conferences, and his work on differentially private synthetic data generation won multiple awards in NIST's competition. Prior to joining UVA, he obtained his Ph.D. from Purdue University in 2021 and held a postdoctoral position at Carnegie Mellon University.

Mengdi Huai

Dr. Mengdi Huai (mdhuai@iastate.edu) is an assistant professor in the Department of Computer Science at ISU. Her research interests lie in data mining and machine learning, with a current emphasis on security and robustness, machine unlearning, and privacy protection. She has received multiple awards, including the NSF CAREER Award, the AAAI New Faculty Highlights, the Departmental Excellence Award (ISU), the Rising Star in EECS at MIT, the Sture G. Olsson Fellowship in Engineering (UVA), and the Best Paper Runner-Up Award at KDD 2020.

Contact

For questions regarding the tutorial, please contact us at the emails above.

Acknowledgments

We sincerely thank all the contributors and the community for their continuous support and valuable feedback. This research is funded by NSF Grant #2350332, "Security and Privacy in Machine Unlearning", and NSF Grant #2442750, "CAREER: Enabling Reliable Uncertainty-Aware Decision Making with Unreliable Data".

References

[1] A. Chen, Y. Li, C. Zhao, and M. Huai, “A survey of security and privacy issues of machine unlearning,” 2025.

[2] S. Liu, Y. Liu, N. B. Angel, and E. Triantafillou, “Machine unlearning in computer vision: Foundations and applications (CVPR 2024 Tutorial),” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.

[3] V. Patil, M. Mazeika, W. Hodgkins, S. Basart, Y. Liu, K. Lee, M. Bansal, and B. Li, “Machine unlearning for generative AI (ICML 2025 Workshop),” in Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025.

[4] D. Alabi, S. Galhotra, S. Mehnaz, Z. Song, and E. Wu, “Privacy and security in distributed data markets,” in Companion of the 2025 International Conference on Management of Data, 2025, pp. 775–787.

[5] L. Li, K. Zhao, K. Ding, Y. Zhao, Y. Dong, and N. Gong, “Model extraction attack and defense for large language models: Recent advances, challenges, and future perspectives (KDD 2025 Tutorial),” in Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2025.

[6] M. Chen, Z. Zhang, T. Wang, M. Backes, M. Humbert, and Y. Zhang, “Graph unlearning,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022.

[7] C. Gong, K. Li, J. Yao, and T. Wang, “TrajDeleter: Enabling trajectory forgetting in offline reinforcement learning agents,” in NDSS, 2025.

[8] Z. Liu, G. Dou, E. Chien, C. Zhang, Y. Tian, and Z. Zhu, “Breaking the trilemma of privacy, utility, and efficiency via controllable machine unlearning,” in Proceedings of the ACM Web Conference 2024, 2024, pp. 1260–1271.

[9] Z. Liu, G. Dou, M. Jia, Z. Tan, Q. Zeng, Y. Yuan, and M. Jiang, “Protecting privacy in multimodal large language models with MLLMU-bench,” in Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Association for Computational Linguistics, 2025, pp. 4105–4135.

[10] K. Gu, M. R. U. Rashid, N. Sultana, and S. Mehnaz, “Robust unlearning for large language models,” in PAKDD (5), 2025, pp. 143–155.

[11] W. Qian, C. Zhao, W. Le, M. Ma, and M. Huai, “Towards understanding and enhancing robustness of deep learning models against malicious unlearning attacks,” in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023, pp. 1932–1942.

[12] C. Zhao, W. Qian, R. Ying, and M. Huai, “Static and sequential malicious attacks in the context of selective forgetting,” Advances in Neural Information Processing Systems, vol. 36, pp. 74966–74979, 2023.

[13] W. Qian, A. Chen, C. Zhao, Y. Li, and M. Huai, “Exploring fairness in educational data mining in the context of the right to be forgotten,” arXiv preprint arXiv:2405.16798, 2024.

[14] Y. Huang, D. Liu, L. Chua, B. Ghazi, P. Kamath, R. Kumar, P. Manurangsi, M. Nasr, A. Sinha, and C. Zhang, “Unlearn and burn: Adversarial machine unlearning requests destroy model accuracy,” arXiv preprint arXiv:2410.09591, 2024.

[15] B. Ma, T. Zheng, H. Hu, D. Wang, S. Wang, Z. Ba, Z. Qin, and K. Ren, “Releasing malevolence from benevolence: The menace of benign data on machine unlearning,” arXiv preprint arXiv:2407.05112, 2024.

[16] J. Z. Di, J. Douglas, J. Acharya, G. Kamath, and A. Sekhari, “Hidden poison: Machine unlearning enables camouflaged poisoning attacks,” in NeurIPS ML Safety Workshop, 2022.

[17] C. Zhao, W. Qian, Y. Li, A. Chen, and M. Huai, “Rethinking adversarial robustness in the context of the right to be forgotten,” in Proceedings of the 41st International Conference on Machine Learning, 2024, pp. 60927–60939.

[18] Z. Liu, T. Wang, M. Huai, and C. Miao, “Backdoor attacks via machine unlearning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, 2024, pp. 14115–14123.

[19] Z. Huang, Y. Mao, and S. Zhong, “UBA-Inf: Unlearning activated backdoor attack with influence-driven camouflage,” in 33rd USENIX Security Symposium (USENIX Security 24), 2024, pp. 4211–4228.

[20] M. Alam, H. Lamri, and M. Maniatakos, “Reveil: Unconstrained concealed backdoor attack on deep neural networks using machine unlearning,” arXiv preprint arXiv:2502.11687, 2025.

[21] J. Ji, Y. Liu, Y. Zhang, G. Liu, R. Kompella, S. Liu, and S. Chang, “Reversing the forget-retain objectives: An efficient LLM unlearning framework from logit difference,” Advances in Neural Information Processing Systems, vol. 37, pp. 12581–12611, 2024.

[22] Y. Hu, J. Lou, J. Liu, W. Ni, F. Lin, Z. Qin, and K. Ren, “Eraser: Machine unlearning in mlaas via an inference serving-aware approach,” in Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, 2024, pp. 3883–3897.

[23] M. R. U. Rashid, J. Liu, T. Koike-Akino, Y. Wang, and S. Mehnaz, “Forget to flourish: Leveraging machine-unlearning on pretrained language models for privacy leakage,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, 2025, pp. 20139–20147.

[24] J. Łucki, B. Wei, Y. Huang, P. Henderson, F. Tramèr, and J. Rando, “An adversarial perspective on machine unlearning for ai safety,” Transactions on Machine Learning Research, 2025.

[25] H. Yuan, Z. Jin, P. Cao, Y. Chen, K. Liu, and J. Zhao, “Towards robust knowledge unlearning: An adversarial framework for assessing and improving unlearning robustness in large language models,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 24, 2025, pp. 25769–25777.

[26] H. Hu, S. Wang, T. Dong, and M. Xue, “Learn what you want to unlearn: Unlearning inversion attacks against machine unlearning,” in 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024, pp. 3257–3275.

[27] M. Chen, Z. Zhang, T. Wang, M. Backes, M. Humbert, and Y. Zhang, “When machine unlearning jeopardizes privacy,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 896–911.

[28] W. Wang, C. Zhang, Z. Tian, S. Liu, and S. Yu, “Crfu: Compressive representation forgetting against privacy leakage on machine unlearning,” IEEE Transactions on Dependable and Secure Computing, 2025.

[29] C. Fan, J. Jia, Y. Zhang, A. Ramakrishna, M. Hong, and S. Liu, “Towards LLM unlearning resilient to relearning attacks: A sharpness-aware minimization perspective and beyond,” in International Conference on Machine Learning, 2025.

[30] B. Zhang, Y. Dong, T. Wang, and J. Li, “Towards certified unlearning for deep neural networks,” in Forty-first International Conference on Machine Learning, 2024.