
UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition

Yuyuan Li*

Chaochao Chen*

Yizhao Zhang*

Weiming Liu*

Lingjuan Lyu

Xiaolin Zheng*

Dan Meng*

Jun Wang*

* External authors

NeurIPS 2023

Abstract

With growing concerns regarding privacy in machine learning models, regulations have committed to granting individuals the right to be forgotten while mandating companies to develop non-discriminatory machine learning systems, thereby fueling the study of the machine unlearning problem. Our attention is directed toward a practical unlearning scenario, i.e., recommendation unlearning. As the state-of-the-art framework, i.e., RecEraser, naturally achieves full unlearning completeness, our objective is to enhance it in terms of model utility and unlearning efficiency. In this paper, we rethink RecEraser from an ensemble-based perspective and focus on its three potential losses, i.e., redundancy, relevance, and combination. Under the theoretical guidance of the above three losses, we propose a new framework named UltraRE, which simplifies and powers RecEraser for recommendation tasks. Specifically, for redundancy loss, we incorporate transport weights in the clustering algorithm to optimize the equilibrium between collaboration and balance while enhancing efficiency; for relevance loss, we ensure that sub-models reach convergence on their respective group data; for combination loss, we simplify the combination estimator without compromising its efficacy. Extensive experiments on three real-world datasets demonstrate the effectiveness of UltraRE.
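The abstract's treatment of the redundancy loss — incorporating transport weights into the clustering step to balance group sizes while preserving collaborative signal — can be illustrated with a toy sketch. The code below is not the paper's algorithm; it is a minimal, hypothetical example of the general idea of balanced clustering via entropic optimal transport (Sinkhorn iterations), where each point carries mass 1/n and each cluster is constrained to receive mass 1/k, so the transport plan yields approximately equal-sized groups. All names (`balanced_ot_clustering`, the regularization `eps`, iteration counts) are illustrative assumptions.

```python
import numpy as np

def balanced_ot_clustering(X, k, n_iters=20, sinkhorn_iters=100, eps=0.05, seed=0):
    """Toy balanced clustering via entropic optimal transport.

    Each point carries mass 1/n and each cluster must receive mass 1/k,
    so the transport plan assigns points to clusters while keeping the
    resulting group sizes approximately equal (a sketch of the
    "equilibrium between collaboration and balance" idea, not UltraRE).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centroids = X[rng.choice(n, size=k, replace=False)]
    a = np.full(n, 1.0 / n)   # mass of each data point
    b = np.full(k, 1.0 / k)   # balanced capacity of each cluster
    for _ in range(n_iters):
        # Pairwise squared distances form the transport cost matrix.
        C = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        # Normalize the cost before exponentiating to avoid underflow.
        K = np.exp(-C / (eps * C.max()))
        u = np.ones(n)
        for _ in range(sinkhorn_iters):
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]   # transport plan; rows are soft assignments
        # Update centroids as transport-weighted barycenters.
        centroids = (P.T @ X) / P.sum(0)[:, None]
    labels = P.argmax(1)
    return labels, centroids
```

On three well-separated 2-D blobs of 50 points each, the hard labels recovered from the plan split the data into three groups of roughly equal size, whereas plain k-means with an unlucky initialization can leave one centroid starved.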

Related Publications

Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?

NeurIPS, 2023
Xiaoxiao Sun*, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li*, Liang Zheng*

Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images that are determined to resemble the original one generally indicate more privacy leakage. Image…

Towards Personalized Federated Learning via Heterogeneous Model Reassembly

NeurIPS, 2023
Jiaqi Wang*, Xingyi Yang*, Suhan Cui*, Liwei Che*, Lingjuan Lyu, Dongkuan Xu*, Fenglong Ma*

This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneo…

Is Heterogeneity Notorious? Taming Heterogeneity to Handle Test-Time Shift in Federated Learning

NeurIPS, 2023
Yue Tan, Chen Chen, Weiming Zhuang, Xin Dong, Lingjuan Lyu, Guodong Long*

Federated learning (FL) is an effective machine learning paradigm where multiple clients can train models based on heterogeneous data in a decentralized manner without accessing their private data. However, existing FL systems undergo performance deterioration due to feature…

