A real economic system provides monetary incentives to its
participants and dynamically allocates the available system resources
in part because the human beings and corporations that take part in it
are smart. They can recognize when they are charging too little or too
much, or when they are not getting the quality they expected. They
also do not spend all their time thinking about the economy, but only
do as much strategic thinking as is needed.
If we want our agent economy to be as robust as the real economy we
will need to have at least marginally intelligent agents. These agents
will need to know what to bid, both when the price has reached an
equilibrium (which is easy) and when it is fluctuating. If an
agent is the only seller of a service then it should be able to take
advantage of its monopoly, while if buyers find that a seller's
prices are too high for the service it sells, they should be able to
avoid buying from that seller. Agents will also need to retain enough
computational power left over to actually deliver their service.
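The marginally intelligent bidding behavior described above can be sketched with a simple price-adjusting rule. This is an illustration only: the class name, the multiplicative step size, and the win/lose update rule are all assumptions, not part of UMDL.

```python
# A toy price-adjusting seller (illustrative only: the class name, the
# multiplicative step, and the win/lose update rule are all assumptions,
# not part of UMDL).

class AdaptiveSeller:
    """Raises its price while it keeps selling; backs off when it does not."""

    def __init__(self, price=10.0, step=0.1):
        self.price = price  # current asking price
        self.step = step    # fractional adjustment per auction round

    def observe_sale(self, sold):
        """Update the asking price after observing one auction round."""
        if sold:
            # Demand supported the price: a lone seller can push it higher.
            self.price *= (1 + self.step)
        else:
            # Buyers went elsewhere: the price was too high, so back off.
            self.price *= (1 - self.step)

seller = AdaptiveSeller()
for sold in [True, True, False, True]:
    seller.observe_sale(sold)
```

Under such a rule a monopolist's price drifts upward until buyers start refusing to buy, at which point it settles near what the market will bear.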
Using learning agents frees us from having to implement some sort of
centralized ``police'' agents. While the UMDL ontology provides a way
for agents to characterize the services they sell, there is no
guarantee that all the goods sold at an auction for service x are
indeed instances of service x. This guarantee could be provided by
police agents that periodically check all the goods sold at all the
auctions. Unfortunately, this would a) impose a computationally taxing
load on the system, b) give rise to thorny political problems, since
the police agents would have special rights over other agents, and c)
fail to capture the specific subjective preferences some agents might
have over a good, preferences which cannot be expressed in the
ontology (and therefore cannot be recognized by other
agents). For example, while
all agents might agree that QPA1 does sell service x, one agent
might think that QPA1's service is faster, better, or more thorough
than the same service x as provided by QPA2. Since this is the
agent's subjective opinion, the other agents are unlikely to share
it, but the agent might still be willing to pay more for
service x from QPA1 than from QPA2. Giving agents the ability to
learn is also a first step towards the implementation of
``recommender'' agents that gather together agents of similar tastes.
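The subjective-preference situation above can be made concrete with a toy buyer that keeps its own quality estimate for each provider. QPA1 and QPA2 are the providers named in the text; the quality numbers, the prices, and the linear utility function are assumptions made purely for illustration.

```python
# A toy buyer with private quality estimates for the two providers named
# in the text. The quality numbers, prices, and the linear utility
# function are assumptions made for illustration.

quality = {"QPA1": 0.9, "QPA2": 0.6}   # this buyer's subjective estimates
prices = {"QPA1": 12.0, "QPA2": 10.0}  # current asking prices

def utility(seller):
    """Subjective value of service x from this seller, net of its price."""
    return 20.0 * quality[seller] - prices[seller]  # weight 20.0 is assumed

# This buyer picks QPA1 despite its higher price, because its private
# quality estimate more than compensates.
best = max(prices, key=utility)
```

Another agent with different quality estimates would rank the same two sellers differently, which is precisely why no centralized police agent could enforce a single correct valuation.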
In essence, learning provides a way for agents to build trust
among each other. For a market system to work well, an agent needs to
be able to trust that another agent's view of good x is the same
as its own view of x. Similarly, an agent that uses recommender agents
needs to trust the recommendations they give. This trust can be
acquired through repeated interactions with the agents in question. Once the
trust is acquired the learning is no longer needed, that is, until the
trust is broken. This is why we argue that agents need the
capability of learning, even if this capability is not always
exercised.
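One simple way to model this gradual acquisition (and loss) of trust is an exponentially weighted average over the outcomes of repeated interactions. The prior, the learning rate, and the update rule below are assumptions for illustration, not the UMDL mechanism.

```python
# A toy trust model: trust is an exponentially weighted average of
# whether each interaction delivered the promised good. The prior,
# the learning rate, and the update rule are assumptions.

def update_trust(trust, delivered_as_promised, rate=0.3):
    """Move trust toward 1 after a good interaction, toward 0 after a bad one."""
    outcome = 1.0 if delivered_as_promised else 0.0
    return (1 - rate) * trust + rate * outcome

trust = 0.5                       # neutral prior about an unknown agent
for ok in [True, True, True]:     # repeated good interactions build trust
    trust = update_trust(trust, ok)
high = trust
trust = update_trust(trust, False)  # one broken promise erodes the trust
```

Once trust is high, an agent can stop learning and simply reuse its estimate; a single broken promise lowers the estimate and signals that learning should resume, matching the argument above.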
Lastly, we propose that learning agents are not only useful but also
inevitable. In a society of selfish agents, we can expect that the
designers will use every technique available to enhance the profits
of their agents. Learning is one such technique. By implementing
learning agents ourselves we can determine how much of an advantage
they will have and how they will affect the system.
Jose M. Vidal
jmvidal@umich.edu
Tue Sep 30 14:35:40 EDT 1997