Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

Xinyi Xu*

Lingjuan Lyu

Xingjun Ma*

Chenglin Miao*

Chuan-Sheng Foo*

Bryan Kian Hsiang Low*

* External authors

NeurIPS, 2021

Abstract

Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested rather than altruistic, they may be unwilling to share their data or models without adequate rewards. Furthermore, as the data/models the agents share may differ in quality, it is important to design rewards that are fair to them, so that they do not feel exploited and become discouraged from sharing. In this paper, we investigate this problem in gradient-based collaborative machine learning. We propose a novel cosine gradient Shapley value to evaluate the agents' contributions and design commensurate rewards in the form of better models. Compared to existing baselines, our method is more efficient and does not require a validation dataset. We provide theoretical fairness guarantees and empirically validate the effectiveness of our method.
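The core idea in the abstract — valuing each agent's contribution via a Shapley value whose characteristic function is the cosine similarity between a coalition's aggregated gradient and the grand-coalition gradient — can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names are hypothetical, the enumeration over coalitions is exponential in the number of agents, and the paper's full method additionally covers per-round accumulation and the model-reward mechanism, which are not shown here.

```python
import numpy as np
from itertools import combinations
from math import factorial

def cosine_sim(a, b):
    """Cosine similarity between two gradient vectors (0 if either is zero)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return float(a @ b / (na * nb))

def cosine_gradient_shapley(grads):
    """Shapley values for one round, given each agent's gradient vector.

    Characteristic function v(S): cosine similarity between the sum of
    the coalition's gradients and the grand-coalition (all-agent) gradient.
    """
    n = len(grads)
    grand = np.sum(grads, axis=0)  # aggregate gradient of all agents

    def v(coalition):
        if not coalition:
            return 0.0
        return cosine_sim(np.sum([grads[i] for i in coalition], axis=0), grand)

    shapley = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1 excluding agent i
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                shapley[i] += w * (v(S + (i,)) - v(S))
    return shapley
```

By efficiency of the Shapley value, the agents' shares sum to v(grand coalition) = 1, and two agents submitting identical gradients receive equal shares — the kind of fairness property the abstract refers to.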

Related Publications

Heterogeneous Graph Node Classification with Multi-Hops Relation Features

ICASSP, 2022
Xiaolong Xu*, Lingjuan Lyu, Hong Jin*, Weiqiang Wang*, Shuo Jia*

In recent years, knowledge graphs (KGs) have seen many achievements in both research and industrial fields. However, most KG algorithms compute node embeddings using only structure and node features, not relation features. In this paper, we propose a novel Heterogeneous …

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

ICLR, 2022
Zhiyuan Zhang*, Lingjuan Lyu, Weiqiang Wang*, Lichao Sun*, Xu Sun*

Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering model behaviors on the clean data. Previous work finds that backdoors can be i…

Decision Boundary-aware Data Augmentation for Adversarial Training

TDSC, 2022
Chen Chen, Jingfeng Zhang*, Xilie Xu*, Lingjuan Lyu, Chaochao Chen*, Tianlei Hu*, Gang Chen*

Adversarial training (AT) is a typical method to learn adversarially robust deep neural networks via training on the adversarial variants generated by their natural examples. However, as training progresses, the training data becomes less attackable, which may undermine the …

