Vulnerabilities in Data-Centered Decision Making
Thanh Nguyen is an Assistant Professor in the Computer and Information Science department at the University of Oregon (UO). Prior to UO, she was a postdoc at the University of Michigan and earned her PhD in Computer Science from the University of Southern California. Thanh’s work in the field of Artificial Intelligence is motivated by real-world societal problems, particularly in the areas of Public Safety and Security, Cybersecurity, and Sustainability. She brings together techniques from multi-agent systems, machine learning, and optimization to solve problems in those areas, with a focus on deception in security and decision-focused adversarial learning. Thanh’s work has been recognized by multiple awards, including the IAAI-16 Deployed Application Award and runner-up for the AAMAS-16 Best Innovative Application Paper Award. Her work in wildlife protection, in particular, has contributed to building PAWS, a well-known AI application for wildlife security that has been deployed in multiple national parks around the world.
Many real-world problems require Artificial Intelligence (AI) models that include both learning (i.e., training a predictive model from data) and planning (i.e., producing high-quality decisions based on the learned model). However, such AI models face increasing threats from attacks on the learning component, which exploit vulnerabilities of machine learning algorithms and ultimately result in ineffective decisions. In this talk, I will discuss the security of machine learning in a decision-focused multi-agent environment in which agents aim to make effective action plans based on learning outcomes. In particular, I will describe algorithms that draw on techniques from optimization research to directly optimize such attacks with respect to the agents' decision goals while accounting for the intermediate learning layer.
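To give a flavor of the general idea, the sketch below is a minimal, hypothetical illustration (not the speaker's actual algorithm) of a decision-focused poisoning attack: an attacker perturbs training labels, differentiates through the learning step (here a closed-form ridge regression chosen purely for simplicity), and shapes the perturbation according to the defender's downstream decision objective rather than the prediction error alone. The synthetic setup, names, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch of a decision-focused data-poisoning attack.
# The attacker optimizes a bounded label perturbation end-to-end through
# the learning layer into the defender's decision-quality objective.
import torch

torch.manual_seed(0)

# Synthetic setup (illustrative): the defender learns value estimates for
# n targets from features X, then allocates one resource to the target
# with the highest predicted value.
n, d = 20, 5
X = torch.randn(n, d)
w_true = torch.randn(d)
y = X @ w_true + 0.1 * torch.randn(n)

delta = torch.zeros_like(y, requires_grad=True)   # attacker's label poison
opt = torch.optim.Adam([delta], lr=0.05)
eps, lam, tau = 0.5, 1e-2, 0.1                    # budget, ridge reg., softmax temperature

for _ in range(200):
    y_poisoned = y + eps * torch.tanh(delta)      # bounded perturbation
    # Learning layer: ridge regression on poisoned data (differentiable).
    A = X.T @ X + lam * torch.eye(d)
    w_hat = torch.linalg.solve(A, X.T @ y_poisoned)
    # Decision layer: defender (softly) picks the target with the highest
    # predicted value; the attacker minimizes the defender's *true* realized value.
    pick = torch.softmax((X @ w_hat) / tau, dim=0)
    defender_value = pick @ y
    loss = defender_value                         # attacker minimizes this
    opt.zero_grad(); loss.backward(); opt.step()

print("defender value after attack:", defender_value.item())
```

The key design choice, in line with the abstract, is that the attack objective is the decision outcome: gradients flow from the defender's realized value back through the prediction model to the poisoned data, rather than simply maximizing prediction error.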