The answers are in boldface.

  1. Decentralized learning means that:

  2. The credit assignment problem is made especially difficult in a MAS because:

  3. Which one of the following choices is not one of the assumptions made by the reinforcement learning problem formulation:

  4. In reinforcement learning, a policy for an agent describes

  5. The Q-learning update formula is:
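     For reference, the standard tabular Q-learning update is
     Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
     A minimal sketch in Python; all names and parameter values here
     are illustrative, not part of the original question:

```python
# Minimal sketch of one tabular Q-learning step (Watkins-style update).
# Names (q_update, alpha, gamma) are illustrative.
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Apply Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
    return Q

# Tiny usage example: one update from an all-zero table.
Q = defaultdict(float)
q_update(Q, 's0', 'a0', reward=1.0, next_state='s1',
         actions=['a0', 'a1'])
```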

  6. The explore versus exploit dilemma happens when an agent

  7. In Q-learning the learning rate parameter

  8. Classifiers in a classifier system are divided into

  9. The bucket brigade algorithm used in classifier systems works by

  10. A classifier system uses a genetic algorithm in order to:

  11. A 1-level learning agent models other agents as 0-level. This means that:

  12. Experiments using 0, 1, and 2-level buyer and seller agents in a market simulation lead to the conclusion that

  13. The nodes in Usenet avoid sending messages that other nodes have already seen by

  14. If all the root DNS servers went down, what would happen?

  15. The tragedy of the commons is that

  16. The TCP protocol implements explicit cooperation by:

  17. Which one of the following choices is not an existing limitation of the Gnutella protocol?

  18. In Gnutella, the Time To Live (TTL) parameter is used to

  19. Which one of the following is not a true statement about the Freenet search protocol:

  20. Which one of the following methods is not a reasonable way of subverting the Gnutella network?

  21. As software programs become larger and our need to control their complexity increases, the prevailing strategy used to handle complexity is (as explained in the "Go To the Ant" paper):

  22. Ants are able to sort the larvae, eggs, and food in their nest by:

  23. Termites are able to build their nests by:

  24. The fact that wasps achieve task differentiation is especially surprising since

  25. The basic flocking behavior of birds, fish, and RoboCup players can be achieved by using:
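     The stem above alludes to Reynolds' three local steering rules
     (separation, alignment, cohesion). A minimal 1-D sketch, assuming
     every agent sees every other agent; the function name and weights
     are illustrative:

```python
# Sketch of the three boids rules for one agent, in 1-D for brevity
# (real flocking uses 2-D/3-D vectors and a limited neighborhood).
def flock_step(positions, velocities, i,
               w_sep=1.0, w_ali=0.5, w_coh=0.01):
    """Return agent i's new velocity from its neighbors' state."""
    others = [j for j in range(len(positions)) if j != i]
    # Cohesion: steer toward the neighbors' center of mass.
    center = sum(positions[j] for j in others) / len(others)
    coh = (center - positions[i]) * w_coh
    # Alignment: match the neighbors' average velocity.
    avg_v = sum(velocities[j] for j in others) / len(others)
    ali = (avg_v - velocities[i]) * w_ali
    # Separation: steer away from neighbors closer than 1.0 unit.
    sep = sum(positions[i] - positions[j]
              for j in others
              if abs(positions[i] - positions[j]) < 1.0) * w_sep
    return velocities[i] + coh + ali + sep
```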

  26. If you are implementing a MAS that does some function F, you should (according to the "Go To the Ant" paper):

  27. Keeping agents small in scope (local sensing and action) is a good technique because:

  28. Which one of the following is not necessarily a way of achieving agent diversity?

    The architectures could implement the same behavior.

  29. Task differentiation in wasps is achieved by

  30. The following are all good reasons to keep your agents small, except which one?

  31. The El Farol Bar problem is one where a number of agents try to determine which night to attend the bar. The agents decide which night of the week to attend:

  32. The COIN framework applies only to groups of agents:

  33. In the COIN framework, macrolearning refers to

  34. According to the COIN framework, a constraint-aligned system is one:

  35. The wonderful life utility defined by COIN can be stated as:

  36. The COIN framework says that we should use the wonderful life utility

    There is no such thing as a "learning function".

  37. Let U(N) be the utility of N agents attending the El Farol Bar on any given night, let S(N) be the number of agents that attended on night N, and let n be the night that agent i attended. What is the wonderful life reward for i?
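     In the COIN literature (Wolpert and Tumer), the wonderful life
     utility is the world utility minus the world utility obtained when
     the agent is "clamped" out. A hedged sketch in LaTeX, using the
     question's notation and assuming the world utility is the sum of
     the per-night utilities:

```latex
% General form: world utility minus the world utility when agent i's
% action is clamped to the null action.
\mathrm{WLU}_i(\zeta) = G(\zeta) - G(\zeta_{-i})

% If G is the sum of per-night utilities, removing agent i only
% changes night n, the night i attended, so this reduces to
\mathrm{WLU}_i = U\bigl(S(n)\bigr) - U\bigl(S(n) - 1\bigr)
```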

  38. In the COIN experiments it was clear that the wonderful life utility performed the best. Which one came in second?

  39. In the El Farol Bar problem, in some cases (depending on which reward function was used) the optimal allocation was never reached. This was because:

  40. In the COIN experiments with the leader-follower problem the performance was greatly improved by using macrolearning. In this case macrolearning had the effect of:


Copyright © 2001 José M. Vidal. All Rights Reserved.