A designer of a learning agent in a multi-agent system (MAS) must
decide how much his agent will know about other agents. He can choose
to either implement this knowledge directly into the agent, or to let
the agent learn this knowledge. For example, he might decide to start
the agent with no knowledge and let it learn which actions to take
based on its experience, or he might give it knowledge about what to
do given the actions of other agents and then have it learn which
actions the others take, or he might give it deeper knowledge about
the others (if available), etc. It is not clear which one of the many
options is better, especially if the other agents are also learning.
In this paper we provide a framework for describing the MAS and the
different types of knowledge in the agent. We characterize the
knowledge as nested agent models and analyze the complexity of
learning these models. We then study how the fact that other agents
are also learning, and thereby changing their behavior, affects the
effectiveness of learning the different agent models. Our framework
and analysis can be used by the agent designer to help predict how
well his agent will perform within a given MAS.
An example the reader can keep in mind is a market economy where
agents are buying and selling from each other. Some agents might
choose to simply remember the value they got when they
bought good x for y dollars, or how much profit they made when offering
price z (remember, no sale equals zero profit). Others might choose
to remember who they bought/sold from and the value/profit
they received. Still others might choose to model how the other agents
think about everyone else, and so on. It is clear that more deeply
nested models require more computation; what is not so clear is how
and when these deeper models will benefit the agent.
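To make the distinction between modeling depths concrete, the example above can be sketched as simple agent classes. This is only an illustrative sketch, not the paper's formalism: the class names, the action sets, and the `sale_profit` function are hypothetical. A zero-level agent remembers only the profit its own actions earned, while a one-level agent additionally models each opponent's observed behavior.

```python
class Level0Agent:
    """Remembers only the average profit earned by each of its own actions
    (e.g., each asking price), ignoring who the other agents are."""

    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def observe(self, action, profit):
        self.totals[action] += profit
        self.counts[action] += 1

    def choose(self):
        # Pick the action with the best average profit seen so far.
        def avg(a):
            return self.totals[a] / self.counts[a] if self.counts[a] else 0.0
        return max(self.totals, key=avg)


class Level1Agent:
    """Additionally remembers, per opponent, the actions that opponent took,
    predicts its next action as its most frequent past action, and then
    best-responds to that prediction."""

    def __init__(self, actions, payoff):
        self.actions = actions
        self.payoff = payoff      # payoff(my_action, their_action) -> profit
        self.history = {}         # opponent id -> {their_action: count}

    def observe(self, opponent, their_action):
        counts = self.history.setdefault(opponent, {})
        counts[their_action] = counts.get(their_action, 0) + 1

    def choose(self, opponent):
        counts = self.history.get(opponent)
        if not counts:
            return self.actions[0]
        predicted = max(counts, key=counts.get)
        # Best response to this particular opponent's predicted action.
        return max(self.actions, key=lambda a: self.payoff(a, predicted))


# A hypothetical seller's profit: the asking price if the buyer's
# reservation limit covers it, otherwise zero (no sale, zero profit).
def sale_profit(my_price, their_limit):
    return my_price if my_price <= their_limit else 0
```

A two-level agent would, in the same spirit, keep a `Level1Agent`-style model of each opponent and ask what that opponent's model predicts about everyone else; each added level multiplies the bookkeeping, which is the computational cost the text refers to.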
Different research communities have encountered the problem of agents
learning in a society of learning agents. The work of [5]
focuses on very simple but numerous agents and emphasizes their
emergent behavior. The work on agent-based modeling
[3, 1] of complex systems studies slightly more
complex agents that are meant as stand-ins for real world agents (e.g.
insects, communities, corporations, people). Finally within the MAS
community some work [4, 6, 7] has focused on
how AI-based learning agents would fare in communities of
similar agents. We believe that our research will bring to the
foreground some of the common observations seen in these research
areas.
Jose M. Vidal
jmvidal@umich.edu
Thu Apr 24 15:00:31 EDT 1997