## The Recursive Modeling Method

The basic modeling primitives we use are based on the Recursive Modeling Method (RMM) (see  in this Volume). RMM provides a theoretical framework for representing and using the knowledge an agent has about its own expected payoffs and those of others. To use RMM, an agent must have a payoff matrix in which each entry represents the payoff the agent expects to receive given the combination of actions chosen by all the agents. Typically, each dimension of the matrix corresponds to one agent, and the entries along it to all the actions that agent can take. The agent can (but need not) recursively model others as similarly having payoff matrices, as modeling others the same way, and so on. The recursion ends when the agent has no deeper knowledge. At that point, a Zero Knowledge (ZK) strategy can be attributed to the agent in question, which says that, since there is no way of knowing whether any of the agent's actions are more likely than others, all of its actions are equally probable. If an agent does have reason to believe that some actions are more likely than others, that probability distribution can be used instead. RMM provides a method for propagating strategies from the leaf nodes to the root; the strategy derived at the root node is what the agent performing the reasoning should do.
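The leaf-to-root propagation can be sketched in code. The following is a minimal illustration for two agents, not an implementation from the text: the node representation, the function names, and the payoff matrices are all assumptions made for the example. A leaf with no deeper model falls back to the ZK strategy; an interior node mixes the strategies predicted by each model of the other agent, weighted by the probability attached to that branch, and then best-responds.

```python
def zero_knowledge(n_actions):
    """ZK strategy: with no deeper knowledge, all actions are equally probable."""
    return [1.0 / n_actions] * n_actions

def best_response(payoffs, opponent_strategy):
    """Given this agent's payoff matrix (rows = own actions, columns = the
    other agent's actions) and a predicted strategy for the other agent,
    return a pure strategy maximizing expected payoff, as a probability vector."""
    expected = [sum(p * q for p, q in zip(row, opponent_strategy))
                for row in payoffs]
    best = max(range(len(expected)), key=expected.__getitem__)
    strategy = [0.0] * len(expected)
    strategy[best] = 1.0
    return strategy

def solve(node):
    """Propagate strategies from the leaves of an RMM hierarchy to its root.

    A node is a dict holding the agent's payoff matrix and, optionally, a
    list of (probability, child) models of the other agent. A node with no
    models is a leaf, where the ZK strategy is attributed to the other agent."""
    n_other = len(node["payoffs"][0])
    if not node.get("models"):
        other = zero_knowledge(n_other)          # no deeper knowledge: ZK
    else:
        # Mix each branch's predicted strategy, weighted by its probability.
        other = [0.0] * n_other
        for prob, child in node["models"]:
            predicted = solve(child)
            other = [o + prob * s for o, s in zip(other, predicted)]
    return best_response(node["payoffs"], other)

# Hypothetical matrices: A1 models A2, who has no model of A1 (a ZK leaf).
a2 = {"payoffs": [[3, 0], [1, 3]]}               # rows = A2's actions
a1 = {"payoffs": [[2, 1], [0, 3]],               # rows = A1's actions
      "models": [(1.0, a2)]}
print(solve(a1))
```

Here A2, attributing the ZK strategy to A1, best-responds with its second action; A1, predicting this, chooses its own second action at the root.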

Figure 1: An example RMM hierarchy for two agents, A1 and A2. The leaves of the tree are either a Zero Knowledge (ZK) strategy or a sub-intentional model. Note that we consider the ZK strategy and the NO-INFO model in  equivalent.

An example payoff matrix, along with its RMM hierarchy, is shown in Figure 1. This RMM hierarchy represents a situation from agent A1's point of view and, therefore, has her payoff matrix at the root. Assuming that agent A1 knows something about how A2 represents the situation, A1 will model A2 in order to predict his strategies, which will allow A1 to generate better strategies for herself. These models take the form of payoff matrices and are placed below the root node. The probability associated with each branch captures the uncertainty A1 has about A2. If A1 similarly knows something about what A2 might know about how A1 represents the situation, this can be further captured as more deeply nested payoff matrices, as implied in the left branch of the figure. If A1 knows something about what A2 expects A1 to do in the situation, but not how A2 represents A1's thinking, then A1 could associate with A2 a sub-intentional model of A1 that summarizes A1's likely actions. Finally, if A1 believes that A2 has no knowledge of A1, this can be captured in a ZK strategy, as shown by the rightmost branch.
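The role of the branch probabilities can be shown with a small worked example. The numbers below are hypothetical, chosen only for illustration: A1 attaches probability 0.7 to a model of A2 that predicts A2's first action, and 0.3 to a ZK model, then combines the two predictions before computing her own expected payoffs.

```python
p_left, p_right = 0.7, 0.3       # branch probabilities (assumed values)
pred_left  = [1.0, 0.0]          # left branch predicts A2's first action
pred_right = [0.5, 0.5]          # right branch: ZK, all actions equally likely

# A1's overall prediction of A2 is the probability-weighted mixture.
a2_strategy = [p_left * l + p_right * r for l, r in zip(pred_left, pred_right)]
# -> [0.85, 0.15]

a1_payoffs = [[2, 1], [0, 3]]    # rows = A1's actions, columns = A2's actions
expected = [sum(p * q for p, q in zip(row, a2_strategy)) for row in a1_payoffs]
# -> [1.85, 0.45], so A1 chooses her first action
```

Note that the mixture is over the predicted strategies, not the payoff matrices: each model is solved first, and only the resulting strategies are weighted by the branch probabilities.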

Jose M. Vidal
jmvidal@umich.edu
Sun Mar 10 12:52:06 EST 1996