Vidal's library
Title: AWESOME: A General Multiagent Learning Algorithm that Converges in Self-Play and Learns a Best Response Against Stationary Opponents
Author: Vincent Conitzer and Tuomas Sandholm
Book Title: Proceedings of the Twentieth International Conference on Machine Learning
Year: 2003
Abstract: A satisfactory multiagent learning algorithm should, at a minimum, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action repeated games, assuming that the opponent's (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in all repeated (finite) games. It requires only that the other players' actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing other multiagent learning algorithms also.
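The abstract's two-hypothesis idea can be illustrated with a short, simplified sketch. The Python below is not the paper's exact algorithm (AWESOME carefully schedules epoch lengths and decreasing hypothesis-test thresholds so both tests become reliable in the limit, and handles any number of players); it only illustrates the control flow for a single opponent, with illustrative names such as awesome_like, eps_eq, and eps_stat, and fixed thresholds chosen for brevity.

import random
from collections import Counter


def sample(strategy):
    # Draw an action index from a mixed strategy given as a list of probabilities.
    r, acc = random.random(), 0.0
    for action, p in enumerate(strategy):
        acc += p
        if r < acc:
            return action
    return len(strategy) - 1


def empirical(counts, n_actions):
    # Empirical distribution of the opponent's actions during one epoch.
    total = sum(counts.values())
    return [counts[a] / total for a in range(n_actions)]


def total_variation(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))


def pure_best_response(payoffs, opp_strategy):
    # Index of a pure best response to the opponent's (estimated) mixed strategy.
    expected = [sum(payoffs[a][b] * q for b, q in enumerate(opp_strategy))
                for a in range(len(payoffs))]
    return max(range(len(expected)), key=expected.__getitem__)


def awesome_like(payoffs, my_eq, opp_eq, opponent_action,
                 rounds=10000, epoch_len=200, eps_eq=0.1, eps_stat=0.1):
    # payoffs[a][b]: our payoff when we play a and the opponent plays b.
    # my_eq, opp_eq: a precomputed equilibrium strategy profile.
    # opponent_action(t): the opponent's observed action in round t.
    history = []
    t = 0
    while t < rounds:
        # Restart: hypothesize that everyone plays the precomputed equilibrium.
        playing_eq = True
        prev_est = None
        current = list(my_eq)
        while t < rounds:
            counts = Counter()
            for _ in range(min(epoch_len, rounds - t)):
                history.append(sample(current))
                counts[opponent_action(t)] += 1
                t += 1
            est = empirical(counts, len(opp_eq))
            if playing_eq:
                # Hypothesis 1: the opponent plays its equilibrium strategy.
                if total_variation(est, opp_eq) > eps_eq:
                    playing_eq = False            # rejected: start adapting
                    prev_est = est
                    br = pure_best_response(payoffs, est)
                    current = [float(a == br) for a in range(len(payoffs))]
            else:
                # Hypothesis 2: the opponent is stationary (small epoch-to-epoch drift).
                if total_variation(est, prev_est) > eps_stat:
                    break                         # rejected: retreat to equilibrium
                prev_est = est
                br = pure_best_response(payoffs, est)
                current = [float(a == br) for a in range(len(payoffs))]
    return history

For example, against an opponent that always plays one action of a 2x2 game, the sketch eventually rejects the equilibrium hypothesis and settles into a pure best response to that action; against an opponent whose play keeps drifting, the stationarity test fails and play retreats to the precomputed equilibrium, mirroring the behavior described in the abstract.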

Cited by 34  -  Google Scholar

@InProceedings{conitzer03a,
  author =	 {Vincent Conitzer and Tuomas Sandholm},
  title =	 {{AWESOME}: A General Multiagent Learning Algorithm
                  that Converges in Self-Play and Learns a Best
                  Response Against Stationary Opponents},
  booktitle =	 {Proceedings of the Twentieth International
                  Conference on Machine Learning},
  year =	 2003,
  googleid =	 {MWEPFhVK8dEJ:scholar.google.com/},
  abstract =	 {A satisfactory multiagent learning algorithm should,
                  at a minimum, learn to play optimally against
                  stationary opponents and converge to a Nash
                  equilibrium in self-play. The algorithm that has
                  come closest, WoLF-IGA, has been proven to have
                  these two properties in 2-player 2-action repeated
                  games, assuming that the opponent's (mixed) strategy
                  is observable. In this paper we present AWESOME, the
                  first algorithm that is guaranteed to have these two
                  properties in all repeated (finite) games. It
                  requires only that the other players' actual actions
                  (not their strategies) can be observed at each
                  step. It also learns to play optimally against
                  opponents that eventually become stationary. The
                  basic idea behind AWESOME (Adapt When Everybody is
                  Stationary, Otherwise Move to Equilibrium) is to try
                  to adapt to the others' strategies when they appear
                  stationary, but otherwise to retreat to a
                  precomputed equilibrium strategy. The techniques
                  used to prove the properties of AWESOME are
                  fundamentally different from those used for previous
                  algorithms, and may help in analyzing other
                  multiagent learning algorithms also.},
  keywords =	 {multiagent learning},
  url =		 {http://jmvidal.cse.sc.edu/library/conitzer03a.pdf},
  cluster =	 {15127954077739082033}
}