Vidal's library
Title: Computational Agents That Learn About Agents: Algorithms for Their Design and a Predictive Theory of Their Behavior
Author: José M. Vidal
Year: 1998
Abstract: We show how to build agents that learn about agents in Multi-Agent Systems (MAS) composed of similar learning agents. The problem is divided into the two subproblems of deciding how much an agent should think about what others think about what others think... and the problem raised by the fact that if the other agents are learning and changing their behavior, an agent's model of them might never be accurate. We start by presenting a framework that can formally describe a MAS and the agents that inhabit it, along with their behavior and a measure of the correctness of this behavior. The framework is used to develop an algorithm (LR-RMM) that tells an agent when to stop thinking about other agents. The algorithm is implemented and its results verified. The framework is then extended to capture the agents' learning abilities and the degree to which they impact each other's behavior. This extended framework (CLRI) is used to predict the expected behavior of learning agents in MASs. Theoretical predictions from this framework are confirmed with experimental results from our research and with experimental results from the research literature. Finally, a specific market-based MAS is studied in detail. We confirm results predicted by the CLRI framework and present other findings specific to market-based MAS. These findings include the fact that learning agents make the system more robust to the presence of malicious agents, and the fact that agents can expect decreasing returns for increasing levels of strategic thinking.

Cited by 11 - Google Scholar

@PhdThesis{vidal:thesis,
  author = 	 {Jos\'{e} M. Vidal},
  title = 	 {Computational Agents That Learn About Agents:
                  Algorithms for Their Design and a Predictive Theory
                  of Their Behavior},
  school = 	 {University of Michigan},
  year = 	 1998,
  url = 	 {http://jmvidal.cse.sc.edu/papers/diss/diss.pdf},
  postscript = 	 {http://jmvidal.cse.sc.edu/papers/diss/diss.ps},
  abstract = 	 {We show how to build agents that learn about agents
                  in Multi-Agent Systems (MAS) composed of similar
                  learning agents. The problem is divided into the two
                  subproblems of deciding how much an agent should
                  think about what others think about what others
                  think\ldots{} and the problem raised by the fact
                  that if the other agents are learning and changing
                  their behavior, an agent's model of them might never
                  be accurate.  We start by presenting a framework
                  that can formally describe a MAS and the agents that
                  inhabit it, along with their behavior and a measure
                  of the correctness of this behavior. The framework
                  is used to develop an algorithm (LR-RMM) that tells
                  an agent when to stop thinking about other
                  agents. The algorithm is implemented and its results
                  verified. The framework is then extended to capture
                  the agents' learning abilities and the degree to
                  which they impact each other's behavior. This
                  extended framework (CLRI) is used to predict the
                  expected behavior of learning agents in MASs.
                  Theoretical predictions from this framework are
                  confirmed with experimental results from our
                  research and with experimental results from the
                  research literature.  Finally, a specific
                  market-based MAS is studied in detail. We confirm
                  results predicted by the CLRI framework and present
                  other findings specific to market-based MAS. These
                  findings include the fact that learning agents make
                  the system more robust to the presence of malicious
                  agents, and the fact that agents can expect
                  decreasing returns for increasing levels of
                  strategic thinking.},
  googleid = 	 {1ufIQTcQBW0J:scholar.google.com/},
  keywords = 	 {multiagent learning},
  cluster = 	 {7855702954530629590}
}
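
For reference, the entry above can be used from a LaTeX document as follows. This is a minimal sketch: the file name `vidal.bib` and the `plain` bibliography style are assumptions, not part of the entry itself; only the citation key `vidal:thesis` comes from the entry.

```latex
\documentclass{article}
\begin{document}
% Cite the dissertation by its BibTeX key.
Vidal's dissertation~\cite{vidal:thesis} develops the CLRI framework
for predicting the behavior of learning agents in multi-agent systems.

\bibliographystyle{plain}
\bibliography{vidal}  % assumes the entry above is saved in vidal.bib
\end{document}
```

Note that standard BibTeX styles such as `plain` ignore nonstandard fields like `postscript`, `googleid`, and `cluster`; a style aware of the `url` field (or the `biblatex` package) is needed for the link to appear in the bibliography.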
Last modified: Wed Mar 9 10:14:20 EST 2011