Welcome

This year the AICS workshop will focus on adversarial learning. The workshop will address AI technologies and their applications, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction. The emphasis will be on research and applications of techniques for attacking and defending machine learning systems, especially in the context of cyber security.

Machine learning capabilities have recently been shown to offer a remarkable ability to automatically analyze and classify large amounts of data in complex scenarios, in many cases matching or surpassing human performance. However, it has also been widely shown that these same algorithms are vulnerable to attacks, collectively known as adversarial learning attacks, which can cause the algorithms to misbehave or reveal information about their inner workings. In general, these attacks take three forms: 1) data poisoning attacks inject incorrectly or maliciously labeled data points into the training set so that the algorithm learns the wrong mapping; 2) evasion attacks perturb correctly classified input samples just enough to cause errors in classification; and 3) inference attacks repeatedly probe the trained algorithm with edge-case inputs to reveal its previously hidden decision boundaries. As machine learning based AI capabilities are incorporated into facets of everyday life, including the protection of cyber assets, the need to understand and address adversarial learning becomes clear.
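To make the second attack form concrete, the following is a minimal sketch of an evasion attack against a linear classifier, where the fast-gradient-sign direction has a closed form. The synthetic dataset, the model choice, and the perturbation budget `eps` are all illustrative assumptions, not drawn from any workshop material.

```python
# Illustrative evasion attack on a linear classifier (FGSM-style sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression().fit(X, y)

# Take a sample the model correctly classifies as class 0 ("benign").
x = X[(y == 0) & (clf.predict(X) == 0)][0]

# For a linear model, sign(w) is the direction that raises the class-1
# score fastest per unit of L-infinity perturbation (the FGSM direction).
w = clf.coef_[0]
s = clf.decision_function([x])[0]   # negative, since x is classed as 0
eps = (-s + 0.1) / np.abs(w).sum()  # budget just large enough to flip

x_adv = x + eps * np.sign(w)

print(clf.predict([x])[0])      # 0 -- original prediction
print(clf.predict([x_adv])[0])  # 1 -- small perturbation flips the label
```

Here `eps` is solved for explicitly to show the mechanism; a real attacker works under a fixed budget and, for non-linear models, estimates the gradient direction by querying or differentiating the model.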

The above discussion of adversarial learning highlights one of the main challenges in applying AI to problems in cyber security. Poisoning attacks that inject incorrectly labeled malicious traffic or data can be leveraged by an adversary to let their attacks go undetected, while evasion attacks can be used to cause benign traffic to be falsely classified as malicious, thereby eliciting a defensive response. If AI is to succeed in helping cyber security, it must itself be secure and robust against attack.
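As a rough illustration of the first point, the sketch below flips a fraction of the "malicious" labels to "benign" in a synthetic training set and compares the detector's recall on truly malicious test samples. The dataset and the 40% flip rate are assumptions made purely for demonstration.

```python
# Illustrative label-flipping poisoning attack on a synthetic detector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Class 1 plays the role of malicious traffic, class 0 benign.
X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = LogisticRegression().fit(X_tr, y_tr)

# The attacker relabels roughly 40% of malicious training samples as benign.
rng = np.random.default_rng(1)
y_pois = y_tr.copy()
flip = (y_tr == 1) & (rng.random(len(y_tr)) < 0.4)
y_pois[flip] = 0

poisoned = LogisticRegression().fit(X_tr, y_pois)

# Detection rate (recall) on truly malicious test traffic; the poisoned
# model typically detects noticeably less of it.
mal = y_te == 1
print("clean recall:   ", clean.predict(X_te[mal]).mean())
print("poisoned recall:", poisoned.predict(X_te[mal]).mean())
```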

This year we are asking the AI for cyber security community to submit solutions to a challenge problem, focused on an adversarial attack scenario based on redacted data.

Understanding and addressing the challenges associated with adversarial learning requires collaboration among several research and development communities, including the artificial intelligence, cyber security, game theory, machine learning, and formal reasoning communities. This workshop is structured to encourage a lively exchange of ideas among researchers in these communities from the academic, public, and commercial sectors.