Related Publications

Ethical Considerations for Responsible Data Curation

NeurIPS, 2023
Jerone Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Alice Xiang

Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models. HCCV datasets constructed through nonconsensual web scraping lack crucial metadata for comprehensive fairness and robustnes…

Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release

NeurIPS, 2023
Abhishek Singh*, Praneeth Vepakomma*, Vivek Sharma, Ramesh Raskar*

Cloud-based machine learning inference is an emerging paradigm where users query by sending their data through a service provider who runs an ML model on that data and returns the answer. Due to increased concerns over data privacy, recent works have proposed Collaborat…

STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events

NeurIPS, 2023
Kazuki Shimada, Archontis Politis*, Parthasaarathy Sudarsanam*, Daniel Krause*, Kengo Uchida, Sharath Adavanne*, Aapo Hakala*, Yuichiro Koyama*, Naoya Takahashi, Shusuke Takahashi*, Tuomas Virtanen*, Yuki Mitsufuji

While direction of arrival (DOA) of sound events is generally estimated from multichannel audio data recorded in a microphone array, sound events usually derive from visually perceptible source objects, e.g., sounds of footsteps come from the feet of a walker. This paper pro…
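
To make the DOA idea concrete, the classical two-microphone baseline estimates direction from the time difference of arrival (TDOA) via cross-correlation; this is a generic sketch of that baseline, not the paper's audio-visual method, and all names are ours:

```python
import numpy as np

def doa_from_two_mics(sig_a, sig_b, mic_distance, fs, c=343.0):
    """Estimate a single source's direction of arrival (degrees)
    from the time difference of arrival between two microphones,
    located at the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # delay in samples
    tdoa = lag / fs                           # delay in seconds
    # Clamp to the physically valid range before taking arcsin.
    ratio = np.clip(c * tdoa / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```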

Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?

NeurIPS, 2023
Xiaoxiao Sun*, Nidham Gazagnadou, Vivek Sharma, Lingjuan Lyu, Hongdong Li*, Liang Zheng*

Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used to evaluate model privacy risk under reconstruction attacks. Under these metrics, reconstructed images judged to resemble the originals generally indicate more privacy leakage. Image…
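
For reference, PSNR (one of the hand-crafted metrics the abstract questions) is a closed-form function of mean squared error; a minimal sketch for images scaled to [0, 1]:

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher values mean the
    reconstruction is numerically closer to the original image."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```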

LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

NeurIPS, 2023
Bo Liu*, Yifeng Zhu*, Chongkai Gao*, Yihao Feng*, Qiang Liu*, Yuke Zhu*, Peter Stone

Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan. Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and conce…

FAMO: Fast Adaptive Multitask Optimization

NeurIPS, 2023
Bo Liu*, Yihao Feng*, Peter Stone, Qiang Liu*

One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL). However, gradient descent (GD) on the average loss across all tasks may yield poor multitask performance due to severe…
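
Concretely, the baseline criticized here optimizes the uniform average of per-task losses (notation ours, not the paper's):

```latex
\min_{\theta} \; \frac{1}{K} \sum_{k=1}^{K} L_k(\theta)
```

Conflicting per-task gradients $\nabla L_k(\theta)$ can then cancel or dominate one another under plain gradient descent, which is the failure mode motivating adaptive task weighting.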

ELDEN: Exploration via Local Dependencies

NeurIPS, 2023
Zizhao Wang*, Jiaheng Hu*, Roberto Martin-Martin*, Peter Stone

Tasks with large state space and sparse reward present a longstanding challenge to reinforcement learning. In these tasks, an agent needs to explore the state space efficiently until it finds reward: the hard exploration problem. To deal with this problem, the community has …

f-Policy Gradients: A General Framework for Goal-Conditioned RL using f-Divergences

NeurIPS, 2023
Siddhant Agarwal*, Ishan Durugkar, Amy Zhang*, Peter Stone

Goal-Conditioned RL problems provide sparse rewards where the agent receives a reward signal only when it has achieved the goal, making exploration a difficult problem. Several works augment this sparse reward with a learned dense reward function, but this can lead to subopt…
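
As a concrete illustration of the sparsity problem, a goal-conditioned reward is typically an indicator of goal achievement; a hypothetical sketch:

```python
import numpy as np

def sparse_goal_reward(state, goal, tol=0.05):
    """Returns 1 only when the agent is within tol of the goal and
    0 otherwise, giving the agent no signal of partial progress
    until it actually reaches the goal."""
    return 1.0 if np.linalg.norm(state - goal) <= tol else 0.0
```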

Differentially Private Image Classification by Learning Priors from Random Processes

NeurIPS, 2023
Xinyu Tang*, Ashwinee Panda*, Vikash Sehwag, Prateek Mittal*

In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private da…
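
As a rough illustration of the mechanism the abstract refers to, here is a minimal DP-SGD-style aggregation step; clip_norm and noise_multiplier are illustrative hyperparameters, not values from the paper:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update direction: clip each example's gradient to
    clip_norm, sum, add calibrated Gaussian noise, then average."""
    clipped = []
    for g in per_sample_grads:  # one flattened gradient per example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

Both the clipping (which biases gradients) and the added noise are what cost DP-SGD accuracy relative to plain SGD.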

UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition

NeurIPS, 2023
Yuyuan Li*, Chaochao Chen*, Yizhao Zhang*, Weiming Liu*, Lingjuan Lyu, Xiaolin Zheng*, Dan Meng*, Jun Wang*

With growing concerns regarding privacy in machine learning models, regulations have committed to granting individuals the right to be forgotten while mandating companies to develop non-discriminatory machine learning systems, thereby fueling the study of the machine unlearn…

Towards Personalized Federated Learning via Heterogeneous Model Reassembly

NeurIPS, 2023
Jiaqi Wang*, Xingyi Yang*, Suhan Cui*, Liwei Che*, Lingjuan Lyu, Dongkuan Xu*, Fenglong Ma*

This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneo…

Is Heterogeneity Notorious? Taming Heterogeneity to Handle Test-Time Shift in Federated Learning

NeurIPS, 2023
Yue Tan, Chen Chen, Weiming Zhuang*, Xin Dong, Lingjuan Lyu, Guodong Long*

Federated learning (FL) is an effective machine learning paradigm where multiple clients can train models based on heterogeneous data in a decentralized manner without accessing their private data. However, existing FL systems undergo performance deterioration due to feature…
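
For background on the FL setting described here, the canonical FedAvg-style server step averages locally trained parameters weighted by client data size, without ever touching raw data; a minimal sketch with names of our choosing:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Server-side federated averaging: combine client parameter
    vectors in proportion to how much data each client holds."""
    total = float(sum(client_sizes))
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg
```

Heterogeneous (non-IID) client data is precisely what makes this simple average degrade, which is the tension the paper examines under test-time shift.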

Where Did I Come From? Origin Attribution of AI-Generated Images

NeurIPS, 2023
Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma*

Image generation techniques have been gaining increasing attention recently, but concerns have been raised about the potential misuse and intellectual property (IP) infringement associated with image generation models. It is, therefore, necessary to analyze the origin of ima…

Towards a fuller understanding of neurons with Clustered Compositional Explanations

NeurIPS, 2023
Biagio La Rosa*, Leilani H. Gilpin*, Roberto Capobianco

Compositional Explanations is a method for identifying logical formulas of concepts that approximate the neurons' behavior. However, these explanations are linked to the small spectrum of neuron activations used to check the alignment (i.e., the highest ones), thus lacking c…

Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors

NeurIPS, 2023
Swami Sankaranarayanan, Thomas Hartvigsen*, Hamid Palangi*, Yoon Kim*, Marzyeh Ghassemi*

Deployed models decay over time due to shifting inputs, changing user needs, or emergent knowledge gaps. When harmful behaviors are identified, targeted edits are required. However, current model editors, which adjust specific behaviors of pre-trained models, degrade model p…

FRUNI and FTREE synthetic knowledge graphs for evaluating explainability

NeurIPS, 2023
Pablo Sanchez Martin, Tarek Besold, Priyadarshini Kumari

Research on knowledge graph completion (KGC)---i.e., link prediction within incomplete KGs---is witnessing significant growth in popularity. Recently, KGC using KG embedding (KGE) models, primarily based on complex architectures (e.g., transformers), has achieved remarkable…

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach

NeurIPS, 2022
Bo Liu*, Mao Ye*, Stephen Wright*, Peter Stone, Qiang Liu*

Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the…
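
In standard form, the bilevel problem the abstract refers to reads (notation ours):

```latex
\min_{x} \; f\bigl(x, y^{*}(x)\bigr)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y} \; g(x, y)
```

In hyperparameter optimization, for instance, $x$ would be the hyperparameters and $f$ a validation loss, while $y$ holds the model weights minimizing the training loss $g$; the expensive step conventional methods face is differentiating through the inner argmin.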

Causality for Temporal Unfairness Evaluation and Mitigation

NeurIPS, 2022
Aida Rahmattalabi, Alice Xiang

Recent interest in causality for fair decision-making systems has been accompanied by great skepticism due to practical and epistemological challenges with applying existing causal fairness approaches. Existing works mainly seek to remove the causal effect of social categ…

Men Also Do Laundry: Multi-Attribute Bias Amplification

NeurIPS, 2022
Dora Zhao, Jerone T. A. Andrews, Alice Xiang

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of t…

A View From Somewhere: Human-Centric Face Representations

NeurIPS, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

We propose to implicitly learn a set of continuous face-varying dimensions, without ever asking an annotator to explicitly categorize a person. We uncover the dimensions by learning on a novel dataset of 638,180 human judgments of face similarity (FAX). We demonstrate the ut…

A View From Somewhere: Human-Centric Face Representations

NeurIPS, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

Biases in human-centric computer vision models are often attributed to a lack of sufficient data diversity, with many demographics insufficiently represented. However, auditing datasets for diversity can be difficult, due to an absence of ground-truth labels of relevant feat…

MocoSFL: enabling cross-client collaborative self-supervised learning

NeurIPS, 2022
Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti*, Michael Spranger

Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Fe…

Proppo: a Message Passing Framework for Customizable and Composable Learning Algorithms

NeurIPS, 2022
Paavo Parmas*, Takuma Seno

While existing automatic differentiation (AD) frameworks allow flexibly composing model architectures, they do not provide the same flexibility for composing learning algorithms---everything has to be implemented in terms of backpropagation. To address this gap, we invent A…

Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

NeurIPS, 2022
James MacGlashan, Evan Archer, Alisa Devlic, Takuma Seno, Craig Sherstan, Peter R. Wurman, Peter Stone

Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods offer too few tools to provide insight into the exact cause. In this paper, we show …

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

NeurIPS, 2022
Junyuan Hong, Lingjuan Lyu, Jiayu Zhou*, Michael Spranger

As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training on a low-power, cost-effective end device. Traditional outsourcing requires uploading device…

Calibrated Federated Adversarial Training with Label Skewness

NeurIPS, 2022
Chen Chen, Yuchen Liu*, Xingjun Ma*, Lingjuan Lyu

Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks. To improve the adversarial robustness of FL, a few federated adversarial training (FAT) methods have been proposed to apply adversarial training…

DENSE: Data-Free One-Shot Federated Learning

NeurIPS, 2022
Jie Zhang*, Chen Chen, Bo Li*, Lingjuan Lyu, Shuang Wu*, Shouhong Ding*, Chunhua Shen*, Chao Wu*

One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round. Despite the low communication cost, existing one-shot FL methods are mostly impractical or face inherent limitatio…

CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks

NeurIPS, 2022
Xuanli He*, Qiongkai Xu*, Yi Zeng, Lingjuan Lyu, Fangzhao Wu*, Jiwei Li*, Ruoxi Jia*

Previous works have validated that text generation APIs can be stolen through imitation attacks, causing IP violations. In order to protect the IP of text generation APIs, a recent work has introduced a watermarking algorithm and utilized the null-hypothesis test as a post-h…

Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization

NeurIPS, 2022
Zijie Zhang*, Xin Zhao*, Tianshi Che*, Yang Zhou*, Lingjuan Lyu

The right to be forgotten calls for efficient machine unlearning techniques that make trained machine learning models forget a cohort of data. The combination of training and unlearning operations in traditional machine unlearning methods often leads to the expensive computa…

FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning

NeurIPS, 2022
Tao Qi*, Fangzhao Wu*, Chuhan Wu*, Lingjuan Lyu, Tong Xu*, Hao Liao*, Zhongliang Yang*, Yongfeng Huang*, Xing Xie*

Vertical federated learning (VFL) is a privacy-preserving machine learning paradigm that can learn models from features distributed on different platforms in a privacy-preserving way. Since in real-world applications the data may contain bias on fairness-sensitive features (…

Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation

NeurIPS, 2021
Jamie Cui*, Chaochao Chen*, Lingjuan Lyu, Carl Yang*, Li Wang*

Social recommendation has shown promising improvements over traditional systems since it leverages social correlation data as an additional input. Most existing works assume that all data are available to the recommendation platform. However, in practice, user-item interacti…

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

NeurIPS, 2021
Yige Li*, Xixiang Lyu*, Nodens Koren*, Lingjuan Lyu, Bo Li*, Xingjun Ma*

Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results on detecting and erasing backdoor triggers, it is still not clear if measures can be taken to avoid the triggers from bein…

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

NeurIPS, 2021
Xu Xinyi*, Lingjuan Lyu, Xingjun Ma*, Chenglin Miao*, Chuan-Sheng Foo*, Bryan Kian Hsiang Low*

Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested and not altruistic, they may be unwilling to share data or model without…

Expert Human-Level Driving in Gran Turismo Sport Using Deep Reinforcement Learning with Image-based Representation

NeurIPS, 2021
Ryuji Imamura, Takuma Seno, Kenta Kawamoto, Michael Spranger

When humans play virtual racing games, they use visual environmental information on the game screen to understand the rules within the environments. In contrast, a state-of-the-art realistic racing game AI agent that outperforms human players does not use image-based environ…

d3rlpy: An Offline Deep Reinforcement Learning Library

NeurIPS, 2021
Takuma Seno, Michita Imai*

In this paper, we introduce d3rlpy, an open-sourced offline deep reinforcement learning (RL) library for Python. d3rlpy supports a number of offline deep RL algorithms as well as online algorithms via a user-friendly API. To assist deep RL research and development projects, …
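
A minimal usage sketch in the style of d3rlpy's v1 API; class and function names have changed across releases, so treat the specifics below as version-dependent assumptions rather than the library's current interface:

```python
import d3rlpy

# Load a bundled demonstration dataset together with its environment.
dataset, env = d3rlpy.datasets.get_cartpole()

# Train an offline deep RL algorithm through the scikit-learn-like API.
dqn = d3rlpy.algos.DQN()
dqn.fit(dataset, n_steps=10000, n_steps_per_epoch=1000)
```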

Assessing SATNet's Ability to Solve the Symbol Grounding Problem

NeurIPS, 2020
Michael Spranger, Oscar Chang*, Lampros Flokas*, Hod Lipson*

SATNet is an award-winning MAXSAT solver that can be used to infer logical rules and integrated as a differentiable layer in a deep neural network. It had been shown to solve Sudoku puzzles visually from examples of puzzle digit images, and was heralded as an impressive achi…

Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation

NeurIPS, 2020
Uchenna Akujuobi, Jun Chen*, Mohamed Elhoseiny*, Michael Spranger, Xiangliang Zhang*

Understanding the relationships between biomedical terms like viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce the use of machine learning to the scientific process of hypothesis generation (HG), which refers …

Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks

NeurIPS, 2020
Lemeng Wu*, Bo Liu*, Peter Stone, Qiang Liu*

We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures. Our method works in a steepest descent fashion, which iteratively finds the best netw…

An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch

NeurIPS, 2020
Siddharth Desai*, Ishan Durugkar, Haresh Karnan*, Garrett Warnell*, Josiah Hanna*, Peter Stone

We examine the problem of transferring a policy learned in a source environment to a target environment with different dynamics, particularly in the case where it is critical to reduce the amount of interaction with the target environment during learning. This problem is par…
