Authors
- Xiaoming Liu*
- Zhanwei Zhang*
- Lingjuan Lyu
- Zhaohan Zhang*
- Shuai Xiao*
- Chao Shen*
- Philip Yu*
* External authors
Venue
- TKDE
Date
- 2022
Traffic Anomaly Prediction Based on Joint Static-Dynamic Spatio-Temporal Evolutionary Learning
Abstract
Accurate traffic anomaly prediction offers an opportunity to reach the injured at the right location in time. However, the complex process of traffic anomaly formation is shaped by both diverse static factors and dynamic interactions. Recent advances in evolving representation learning offer a new way to understand this complicated process, but face the challenges of imbalanced data distribution and feature heterogeneity. To tackle these problems, this paper proposes a spatio-temporal evolution model named SNIPER that learns intricate feature interactions to predict traffic anomalies. Specifically, we design spatio-temporal encoders that transform spatio-temporal information into a vector space reflecting its natural relationships. We then propose a temporally dynamic evolving embedding method that pays more attention to rare traffic anomalies, and develop an effective attention-based multi-graph convolutional network that formulates mutual spatial influence from three different perspectives. An FC-LSTM aggregates the heterogeneous features while accounting for spatio-temporal influences. Finally, we design a loss function that mitigates 'over-smoothing' and addresses the imbalanced data problem. Extensive experiments show that SNIPER outperforms state-of-the-art methods by an average of 3.9%, 0.9%, 1.9%, and 1.6% on Chicago datasets, and 2.4%, 0.6%, 2.6%, and 1.3% on New York City datasets, in terms of AUC-PR, AUC-ROC, F1 score, and accuracy, respectively.
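To illustrate the attention-based multi-graph aggregation idea described above, the following is a minimal sketch in plain Python. It is not the paper's implementation: the function name, shapes, and the two toy graph "views" are assumptions for illustration only. The core idea shown is that several adjacency matrices (one per spatial perspective) each propagate node features, and a softmax over learned attention logits weights the views' contributions.

```python
import math

def multi_graph_attention_conv(x, adjs, att_logits):
    """Attention-weighted aggregation over multiple graph views (illustrative).

    x          : node features, a list of N vectors of length F
    adjs       : V row-normalized adjacency matrices, each N x N,
                 one per spatial perspective
    att_logits : V raw attention scores, softmax-normalized across views
    """
    # softmax over the graph views (subtract max for numerical stability)
    m = max(att_logits)
    exps = [math.exp(a - m) for a in att_logits]
    att = [e / sum(exps) for e in exps]

    n, f = len(x), len(x[0])
    out = [[0.0] * f for _ in range(n)]
    for weight, adj in zip(att, adjs):
        # one graph-convolution step per view: out += weight * (adj @ x)
        for i in range(n):
            for j in range(n):
                for k in range(f):
                    out[i][k] += weight * adj[i][j] * x[j][k]
    return out

# toy example: 3 nodes, 2 features, two hypothetical spatial views
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj_self = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]                    # self-loop view
adj_mean = [[1/3] * 3 for _ in range(3)]        # fully connected view
out = multi_graph_attention_conv(x, [adj_self, adj_mean], [0.0, 0.0])
```

With equal attention logits the two views are weighted 0.5 each, so every node's output blends its own features with the graph-wide mean. In the full model, the attention logits would be learned jointly with the rest of the network rather than fixed.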