Lingjuan Lyu

Profile

Lingjuan is a senior research scientist and Privacy-Preserving Machine Learning (PPML) team leader at Sony AI. Prior to joining Sony AI, she spent more than six years working in academia and at industry research organizations. Lingjuan received her Ph.D. from the University of Melbourne. She was a recipient of the prestigious IBM PhD Fellowship Award in 2017 and has contributed to professional activities at conferences including ICML, NeurIPS, AAAI, and IJCAI. Lingjuan’s current research interests include federated learning, AI privacy and security, fairness, and edge intelligence. She has published more than 50 papers in top conferences and journals, including NeurIPS, ICML, ICLR, Nature, AAAI, and IJCAI. Her papers have won numerous awards and have been selected for oral presentations at top conferences.

Message

“The Sony AI Privacy-Preserving Machine Learning (PPML) team conducts cutting-edge research on trustworthy AI. Our team aims to integrate more privacy-preserving and robust AI solutions across Sony products. In the long term, I hope that we can make industrial AI systems privacy-compliant and robust for social good.”

Publications

Heterogeneous Graph Node Classification with Multi-Hops Relation Features

ICASSP, 2022
Xiaolong Xu*, Lingjuan Lyu, Hong Jin*, Weiqiang Wang*, Shuo Jia*

In recent years, knowledge graphs (KGs) have achieved considerable success in both research and industrial fields. However, most KG algorithms consider node embeddings with only structure and node features, but not relation features. In this paper, we propose a novel Heterogeneous …

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

ICLR, 2022
Zhiyuan Zhang*, Lingjuan Lyu, Weiqiang Wang*, Lichao Sun*, Xu Sun*

Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering model behaviors on the clean data. Previous work finds that backdoors can be i…

Decision Boundary-aware Data Augmentation for Adversarial Training

TDSC, 2022
Chen Chen, Jingfeng Zhang*, Xilie Xu*, Lingjuan Lyu, Chaochao Chen*, Tianlei Hu*, Gang Chen*

Adversarial training (AT) is a typical method to learn adversarially robust deep neural networks via training on adversarial variants generated from their natural examples. However, as training progresses, the training data becomes less attackable, which may undermine the …
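
As background to this abstract, the following is a minimal, generic sketch of the standard adversarial training loop it refers to: craft a PGD adversarial variant of each natural example, then train on it. This is an illustration only, not the decision boundary-aware augmentation proposed in the paper; names such as pgd_attack and the toy model are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    # Projected gradient ascent on the loss within an L-infinity ball of radius eps.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascent step
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # project back to the eps-ball
    return x_adv.detach()

# Toy model and data, purely for demonstration.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

for _ in range(5):                       # adversarial training: fit the PGD variants
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()

The projection step is what keeps each adversarial variant within a bounded perturbation of its natural example, which is the sense in which AT trains on "adversarial variants" of the original data.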

Communication-Efficient Federated Learning via Knowledge Distillation

Nature Communications, 2022
Yongfeng Huang*, Chuhan Wu*, Fangzhao Wu*, Lingjuan Lyu, Xing Xie*

Federated learning is a privacy-preserving machine learning technique for training intelligent models from decentralized data; it exploits private data by communicating local model updates in each iteration of model learning rather than the raw data. However, model …
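
The following is a minimal, generic sketch of the federated averaging communication pattern described above (clients share model updates, never raw data). It is an illustration only, not the knowledge distillation method proposed in the paper; names such as local_step and fedavg_round are hypothetical.

import numpy as np

def local_step(weights, X, y, lr=0.1):
    # One local gradient step of linear regression on a client's private data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)    # MSE gradient
    return weights - lr * grad                      # updated local weights

def fedavg_round(global_weights, client_data):
    # Each client trains locally; the server only sees the resulting updates.
    updates = [local_step(global_weights.copy(), X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)  # size-weighted average

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):                                  # ten communication rounds
    w = fedavg_round(w, clients)
print(w)

Weighting the average by local dataset size mirrors the usual FedAvg convention; the communication cost per round scales with the size of the model update, which is the overhead the paper targets.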

Practical Attribute Reconstruction Attack Against Federated Learning

IEEE Transactions on Big Data, 2022
Chen Chen, Lingjuan Lyu, Han Yu*, Gang Chen*

Existing federated learning (FL) designs have been shown to exhibit vulnerabilities which can be exploited by adversaries to compromise data privacy. However, most current works conduct attacks by leveraging gradients calculated on a small batch of data. This setting is not …

Traffic Anomaly Prediction Based on Joint Static-Dynamic Spatio-Temporal Evolutionary Learning

TKDE, 2022
Xiaoming Liu*, Zhanwei Zhang*, Lingjuan Lyu, Zhaohan Zhang*, Shuai Xiao*, Chao Shen*, Philip Yu*

Accurate traffic anomaly prediction offers an opportunity to save the wounded at the right location in time. However, the complex process of traffic anomaly is affected by both various static factors and dynamic interactions. The recent evolving representation learning provi…

FedCTR: Federated Native Ad CTR Prediction with Cross Platform User Behavior Data

ACM TIST, 2022
Chuhan Wu*, Fangzhao Wu*, Lingjuan Lyu, Yongfeng Huang*, Xing Xie*

A native ad is a popular type of online advertisement that has a similar form to the native content displayed on websites. Native ad CTR prediction is useful for improving user experience and platform revenue. However, it is challenging due to the lack of explicit user inten…

FedBERT: When Federated Learning Meets Pre-Training

ACM TIST, 2022
Yuanyishu Tian*, Yao Wan*, Lingjuan Lyu, Dezhong Yao*, Hai Jin*, Lichao Sun*

The fast growth of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era, and PTMs have become a dominant technique for various NLP applications. Every user can download the weights of PTMs, then fine-tune the weights on a task on …

Byzantine-resilient Federated Learning via Gradient Memorization

AAAI, 2022
Chen Chen, Lingjuan Lyu, Yuchen Liu*, Fangzhao Wu*, Chaochao Chen*, Gang Chen*

Federated learning (FL) provides a privacy-aware learning framework by enabling a multitude of participants to jointly construct models without collecting their private training data. However, federated learning has exhibited vulnerabilities to Byzantine attacks. Many existi…

GEAR: A Margin-based Federated Adversarial Training Approach

AAAI, 2022
Chen Chen, Jie Zhang, Lingjuan Lyu

Previous studies have shown that federated learning (FL) is vulnerable to well-crafted adversarial examples. Some recent efforts tried to combine adversarial training with FL, i.e., federated adversarial training (FAT), in order to achieve adversarial robustness in FL. Howev…

Differential Private Knowledge Transfer for Privacy-Preserving Cross-Domain Recommendation

WWW, 2022
Chaochao Chen*, Huiwen Wu*, Jiajie Su*, Lingjuan Lyu, Xiaolin Zheng*, Li Wang*

Cross-Domain Recommendation (CDR) has been widely studied to alleviate the cold-start and data sparsity problems that commonly exist in recommender systems. CDR models can improve the recommendation performance of a target domain by leveraging the data of other source domains…

DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation

AAAI, 2022
Yu Guo*, Wen Liu*, Jiangtian Nie*, Lingjuan Lyu, Zehui Xiong*, Jiawen Kang*, Han Yu*, Dusit Niyato*

Visual surveillance technology is an indispensable functional component of advanced traffic management systems. It has been applied to perform traffic supervision tasks, such as object detection, tracking and recognition. However, adverse weather conditions, e.g., fog, haze …

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

AAAI, 2022
Xuanli He*, Qiongkai Xu*, Lingjuan Lyu, Fangzhao Wu*, Chenguang Wang*

Nowadays, due to breakthroughs in natural language generation (NLG), including machine translation, document summarization, image captioning, etc., NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred bil…

Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation

NeurIPS, 2021
Jamie Cui*, Chaochao Chen*, Lingjuan Lyu, Carl Yang*, Li Wang*

Social recommendation has shown promising improvements over traditional systems since it leverages social correlation data as an additional input. Most existing works assume that all data are available to the recommendation platform. However, in practice, user-item interacti…

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

NeurIPS, 2021
Yige Li*, Xixiang Lyu*, Nodens Koren*, Lingjuan Lyu, Bo Li*, Xingjun Ma*

Backdoor attack has emerged as a major security threat to deep neural networks (DNNs). While existing defense methods have demonstrated promising results on detecting and erasing backdoor triggers, it is still not clear if measures can be taken to avoid the triggers from bein…

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

NeurIPS, 2021
Xu Xinyi*, Lingjuan Lyu, Xingjun Ma*, Chenglin Miao*, Chuan-Sheng Foo*, Bryan Kian Hsiang Low*

Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested and not altruistic, they may be unwilling to share data or model without…

Data Poisoning Attacks on Federated Machine Learning

IEEE IoT-J, 2021
Gan Sun*, Yang Cong*, Jiahua Dong*, Qiang Wang*, Lingjuan Lyu, Ji Liu*

Federated machine learning, which enables resource-constrained node devices (e.g., Internet of Things (IoT) devices, smartphones) to establish a knowledge-shared model while keeping the raw data local, could provide privacy preservation and economic benefit by designing an ef…

Joint Stance and Rumor Detection in Hierarchical Heterogeneous Graph

IEEE TNNLS, 2021
Chen Li*, Hao Peng*, Jianxin Li*, Lichao Sun*, Lingjuan Lyu, Lihong Wang*, Philip Yu*, Lifang He*

Recently, large volumes of false or unverified information (e.g., fake news and rumors) have appeared frequently on emerging social media, where they are often discussed on a large scale and widely disseminated, causing harmful consequences. Many studies on rumor detection indicate that the…

FLEAM: A Federated Learning Empowered Architecture to Mitigate DDoS in Industrial IoT

IEEE TII, 2021
Jianhua Li*, Lingjuan Lyu, Ximeng Liu*, Xuyun Zhang*, Xixiang Lyu*

A Novel Attribute Reconstruction Attack in Federated Learning

IJCAI, 2021
Lingjuan Lyu, Chen Chen

Federated learning (FL) emerged as a promising learning paradigm to enable a multitude of participants to construct a joint ML model without exposing their private training data. Existing FL designs have been shown to exhibit vulnerabilities which can be exploited by adv…

Blog

November 29, 2021 | Sony AI

Meet the Team #2: Lingjuan, Jerone and Roberto

What do privacy, pattern recognition, and percussion all have in common? They are concepts and creative endeavors that have inspired Sony AI team members Lingjuan, Jerone and Roberto. Read on to learn more about these three Sony…

