SEMINAR

Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms

Speaker
Nghia Hoang
Affiliation
Washington State University
Timeline
Fri, Oct 27 2023 - 10:00 am (GMT + 7)
About Speaker

Dr. Hoang is an Assistant Professor at Washington State University. He received his Ph.D. in 2015 from the National University of Singapore (NUS), worked as a research fellow at NUS (2015-2017), and then as a postdoctoral research associate at MIT (2017-2018). Following that, he joined the MIT-IBM Watson AI Lab as a research staff member and principal investigator, and later moved to Amazon's AWS AI Labs as a Senior Research Scientist (2020-2022). In early 2023, he joined the faculty of the School of Electrical Engineering and Computer Science at Washington State University. Dr. Hoang publishes actively in key ML/AI venues such as ICML, NeurIPS, ICLR, AAAI, IJCAI, UAI, and ECAI, and serves as a PC member, senior PC member, or Area Chair for them. He is also an editorial board member of the Machine Learning journal and an action editor of Neural Networks. His current research interests broadly span probabilistic machine learning, with a specific focus on distributed and federated learning.

Abstract

In this talk, I will discuss the threat of adversarial attacks on multivariate probabilistic forecasting models, along with viable defense mechanisms. Our studies uncover a new attack pattern that degrades the forecast of a target time series by making strategic, sparse (and hence imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we have developed two defense strategies. First, we extend randomized smoothing, a technique previously developed for classification, to the multivariate forecasting setting. Second, we develop an adversarial training algorithm that learns to create adversarial examples and, at the same time, optimizes the forecasting model to be robust against them. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms. This talk is based on our recently published work at ICLR 2023.
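For readers less familiar with the two defenses mentioned in the abstract, the sketch below illustrates their general ideas in PyTorch. It is a minimal illustration under assumed interfaces, not the ICLR 2023 implementation: the `model`, `loss_fn`, noise level `sigma`, attack budget `eps`, and PGD hyperparameters are all hypothetical placeholders.

```python
import torch

# A forecaster here maps a history window x of shape
# (batch, num_series, T_past) to point forecasts of shape
# (batch, num_series, T_future). The talk concerns probabilistic
# models; point forecasts are used below only to keep the sketch short.


def smoothed_forecast(model, x, sigma=0.1, n_samples=32):
    """Defense 1 (randomized smoothing, adapted from classification):
    average the model's forecasts over Gaussian perturbations of the
    observed history, which damps the effect of small input changes."""
    samples = [model(x + sigma * torch.randn_like(x)) for _ in range(n_samples)]
    return torch.stack(samples).mean(dim=0)


def adversarial_training_step(model, optimizer, x, y, loss_fn,
                              eps=0.05, pgd_steps=5, step_size=0.02):
    """Defense 2 (adversarial training): an inner loop crafts a bounded
    perturbation of the past observations that maximizes forecast error,
    then the outer step updates the model on the perturbed input.
    (The attack in the paper is sparse across series; a dense
    norm-bounded perturbation is used here only for simplicity.)"""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):  # inner maximization: simulate the attacker
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)                 # keep the change small
        delta.grad.zero_()
    optimizer.zero_grad()  # outer minimization: harden the forecaster
    loss = loss_fn(model(x + delta.detach()), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Both routines only assume the forecaster is differentiable and callable on a history tensor; they can wrap any standard training loop.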

Related seminars

Coming soon
Niranjan Balasubramanian

Stony Brook University

Towards Reliable Multi-step Reasoning in Question Answering
Fri, Nov 03 2023 - 10:00 am (GMT + 7)
Jey Han Lau

University of Melbourne

Rumour and Disinformation Detection in Online Conversations
Thu, Sep 14 2023 - 10:00 am (GMT + 7)
Tan Nguyen

National University of Singapore

Principled Frameworks for Designing Deep Learning Models: Efficiency, Robustness, and Expressivity
Mon, Aug 28 2023 - 10:00 am (GMT + 7)