The human in the loop has long been recognized by the cyber security community as a common point of weakness in the defense of cyber systems. One promising approach to mitigating this problem is to build models of human behavior and enable formal reasoning about how human beings interact with cyber systems. Game theory, in particular, has been used to model human behavior and, by leveraging game-theoretic models of defense developed for physical security, to suggest incentives and punishments that can induce more secure behavior. Another promising approach is to design automated tools that perform the task of the human in the loop. This approach mitigates security issues arising from sub-optimal security decisions made by humans. Further, automated tools can be formally specified more precisely, and with stronger guarantees, than human behavior; this makes it possible to state and prove strong formal guarantees about the security of the overall cyber system.
Both of the above techniques for understanding how humans and systems interact have been extensively studied in AI, but not in the context of cyber security. Addressing these challenges requires collaboration among several research and development communities, including the artificial intelligence, cyber security, game theory, machine learning, and formal reasoning communities.
The workshop will focus on research and applications of artificial intelligence to cyber security, including machine learning, game theory, natural language processing, knowledge representation, and automated and assistive reasoning. It will emphasize cyber systems and techniques for enabling resilience in cyber security systems augmented by human-machine interactions.