
Results

  Our tests were run on a 20 by 20 grid with four predators placed randomly and one prey that always started in the middle of the grid. We started with tests where the predators used a simple Breadth First Search of the situation hierarchy. When using BFS to one level, the predators could not predict the actions of the others and so never managed to get next to the prey. As we increased the number of levels, the predators could better predict what the others would do, their actions became more coordinated, and they managed to capture the prey. The goal for our algorithm was to keep these good results while expanding as few nodes as possible. For comparison purposes, we also tested a very simple greedy algorithm in which each predator simply tries to minimize its own distance to the prey, irrespective of where the other predators are.
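
The greedy baseline can be stated in a few lines. The Python sketch below is only a minimal illustration, assuming a bounded grid, a four-way move set plus "stay", and Manhattan distance; it is not the implementation used in the experiments.

GRID = 20
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # north, south, east, west, stay

def manhattan(a, b):
    # Distance metric assumed here for illustration.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_move(predator, prey):
    # Pick the legal move that minimizes this predator's own distance to the
    # prey, ignoring where the other predators are.
    candidates = []
    for dx, dy in MOVES:
        x, y = predator[0] + dx, predator[1] + dy
        if 0 <= x < GRID and 0 <= y < GRID:          # stay on the board
            candidates.append((x, y))
    return min(candidates, key=lambda pos: manhattan(pos, prey))

print(greedy_move((3, 4), (10, 10)))                 # -> (3, 5)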

We then ran our Limited Rationality RMM algorithm on the same setup. We had previously compiled a hash table containing several entries (usually around 5, and never more than 10) for almost all situations. This was done by generating random situations and running RMM on them to 3 or 4 levels to determine the strategy. The results, as seen in Table 1, show that our algorithm maintained the performance of a BFS to 4 levels while expanding only a little more than four nodes on average. All results are averages over 20 or more runs.
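
The off-line compilation step can be pictured as follows. This Python sketch is only schematic: random_situation(), situation_key(), and run_rmm() are hypothetical stand-ins for the real situation generator, the situation abstraction used as the hash key, and the RMM solver.

import random

def random_situation():
    # Stand-in: four predator positions plus the prey on a 20 by 20 grid.
    return tuple((random.randrange(20), random.randrange(20)) for _ in range(5))

def situation_key(situation):
    # Stand-in for the abstraction used to index the hash table.
    return situation

def run_rmm(situation, levels):
    # Stand-in for running RMM on the situation to the given number of levels.
    return ("strategy", situation_key(situation), levels)

strategy_table = {}   # situation key -> list of cached strategies

def compile_table(samples=1000, max_entries=10):
    # Populate the table off-line by solving randomly generated situations.
    for _ in range(samples):
        s = random_situation()
        entries = strategy_table.setdefault(situation_key(s), [])
        if len(entries) < max_entries:   # usually around 5, never more than 10
            entries.append(run_rmm(s, levels=random.choice([3, 4])))

compile_table()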

Another way to view these results is to plot the total time it takes the agents, on average, to surround the prey as a function of the time it takes to expand a node. If we assume that the time for an agent to make a move is 1 unit, let the time to expand a node be x, let the average number of turns before surrounding the prey be m, and let the average number of nodes expanded per move be N, then the total time T the agents spend before surrounding the prey is approximately T = m(1 + Nx), as shown in Figure 6. LR RMM is expected to perform better than all the others except when the time to expand a node is much smaller (around .002 times) than the time to perform an action.
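
Under this cost model each algorithm's total time is linear in x, so the crossover point between two algorithms can be computed directly. The Python sketch below simply evaluates T = m(1 + Nx) and solves for the break-even x; the sample numbers are placeholders, not the figures from Table 1 or Figure 6.

def total_time(turns, nodes_per_move, x):
    # T = m * (1 + N * x): each of the m moves costs one time unit plus
    # N node expansions at x time units apiece.
    return turns * (1 + nodes_per_move * x)

def breakeven(m1, n1, m2, n2):
    # The node-expansion cost x at which m1*(1 + n1*x) = m2*(1 + n2*x).
    return (m2 - m1) / (m1 * n1 - m2 * n2)

# Placeholder numbers only: an algorithm that needs fewer turns but expands
# more nodes per move eventually loses as x grows.
print(total_time(turns=20, nodes_per_move=4, x=0.1))   # 28.0
print(breakeven(20, 4, 30, 1))                         # 0.2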

