Praveen Paruchuri, Jonathan Pearce, Janusz Marecki, Milind Tambe, Fernando Ordóñez, Sarit Kraus
In Proceedings of the Nectar Paper Track of the 23rd AAAI Conference on Artificial Intelligence, 2008
Abstract
In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the adversary/follower) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the type of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and an adversary (follower) can observe this strategy over time before choosing where to attack. We present here two different MIP formulations for Bayesian Stackelberg games: DOBSS, which provides optimal policies, and ASAP, which provides approximate policies. DOBSS is currently the fastest optimal procedure for Bayesian Stackelberg games and is in use by police at Los Angeles International Airport (LAX) to schedule their activities.
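The setting described above can be illustrated with a small sketch: the leader commits to a mixed strategy, each adversary type observes it and best-responds to its own payoffs, and the leader's value is the expectation over its prior on types. The payoff matrices and type names below are invented for illustration (they are not from the paper), and a simple grid search stands in for the DOBSS/ASAP MIP formulations.

```python
# Toy Bayesian Stackelberg game: 2 leader actions, 2 follower actions,
# 2 follower types. All numbers here are hypothetical illustrations.

# leader_payoff[t][i][j]: leader's payoff against follower type t when
# the leader plays action i and the follower plays action j.
leader_payoff = {
    "t1": [[2, 4], [1, 3]],
    "t2": [[1, 0], [0, 2]],
}
follower_payoff = {
    "t1": [[1, 0], [0, 2]],
    "t2": [[0, 1], [1, 0]],
}
prior = {"t1": 0.6, "t2": 0.4}  # leader's belief over follower types

def best_response(x, fp):
    """The follower observes the leader's mixed strategy x and plays the
    action maximizing its own expected payoff."""
    vals = [x[0] * fp[0][j] + x[1] * fp[1][j] for j in range(2)]
    return max(range(2), key=lambda j: vals[j])

def leader_value(x):
    """Leader's expected payoff when each type best-responds to x."""
    total = 0.0
    for t, p in prior.items():
        j = best_response(x, follower_payoff[t])
        total += p * (x[0] * leader_payoff[t][0][j]
                      + x[1] * leader_payoff[t][1][j])
    return total

# Grid search over leader mixed strategies [q, 1-q] -- a brute-force
# stand-in for solving the problem exactly (DOBSS) or approximately (ASAP).
best_x, best_v = max(
    (([q, 1 - q], leader_value([q, 1 - q]))
     for q in (k / 100 for k in range(101))),
    key=lambda xv: xv[1],
)
print(best_x, round(best_v, 3))
```

Note that committing to a mixed strategy can strictly outperform any pure commitment here, which is why the leader optimizes over the full simplex rather than over pure strategies alone.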