Adversarial detection games in network security applications with imperfect and incomplete information

  1. Parras Moral, Juan
Supervisor:
  1. Santiago Zazo Bello

Defense university: Universidad Politécnica de Madrid

Date of defense: 06 February 2020

Committee:
  1. José Ramón Casar Corredera (Chair)
  2. Pedro José Zufiria Zatarain (Secretary)
  3. José María Barceló Ordinas (Member)
  4. Gustavo Bergantiños Cid (Member)
  5. Víctor Elvira Arregui (Member)

Type: Thesis

Abstract

This Ph.D. thesis deals with security problems in Wireless Sensor Networks. As the number of interconnected devices grows, so does the number of threats and vulnerabilities. Specifically, this thesis focuses on two families of attacks: the backoff attack, which affects multiple access to a shared wireless channel, and the spectrum sensing data falsification attack, which arises in networks that cooperatively decide on the state of a spectrum channel.

First, we use game-theoretic tools to model backoff attacks. We begin by introducing two algorithms that can be used to learn in discounted repeated games. We then motivate the importance of the backoff attack by analytically showing its effect on network resources, which are no longer shared evenly because the attacking sensors obtain a larger share of the network throughput. We show that, under certain assumptions, the backoff attack can be modeled with game-theoretic tools, namely static and repeated games, and we provide analytical solutions as well as algorithms to learn these solutions.

A problem that arises for the defense mechanism is that the attacking agent may be able to adapt to it. We therefore explore what happens when the agent knows the defense mechanism and acts so as to exploit it without being discovered. As we show, this poses a significant threat for both attacks studied in this work, since the agent successfully exploits the defense mechanism; to alleviate this threat, we propose a novel detection framework that is effective against such attacks. Moreover, we develop attack strategies that do not require the agent to know the defense mechanism: by means of reinforcement learning tools, the agent can exploit a possibly unknown mechanism simply by interacting with it. These attack strategies are therefore a significant threat to current defense mechanisms. Finally, we develop a defense mechanism against such intelligent attackers, based on inverse reinforcement learning tools, which successfully mitigates the effects of the attack.
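The following Python sketch is only an illustration of the throughput-sharing claim above; it is not code from the thesis. The node counts, contention-window sizes, and the simplified slotted contention model are assumptions chosen to show why a sensor that draws shorter backoffs captures a disproportionate share of the channel.

import random
from collections import Counter

def simulate(n_honest=4, cw_honest=32, cw_attack=4, n_rounds=100_000, seed=0):
    """Count how often each node wins the channel under uniform backoff draws."""
    rng = random.Random(seed)
    wins = Counter()
    for _ in range(n_rounds):
        # Each honest node draws its backoff from [0, cw_honest);
        # the attacker draws from the much smaller window [0, cw_attack).
        backoffs = {f"honest_{i}": rng.randrange(cw_honest) for i in range(n_honest)}
        backoffs["attacker"] = rng.randrange(cw_attack)
        smallest = min(backoffs.values())
        winners = [node for node, b in backoffs.items() if b == smallest]
        if len(winners) == 1:  # a tie models a collision: nobody transmits
            wins[winners[0]] += 1
    total = sum(wins.values())
    return {node: count / total for node, count in wins.items()}

if __name__ == "__main__":
    for node, share in sorted(simulate().items()):
        print(f"{node:10s} {share:.2%}")

With these illustrative parameters, the attacker typically wins the large majority of successful slots, mirroring the uneven sharing of network throughput that motivates the backoff attack analysis in the thesis.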