Value Iteration   

A demonstration of the value iteration algorithm as applied to a 2D world where a robot can move North, South, East, or West.
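
As a rough illustration of what the model computes (in Python rather than NetLogo), here is a minimal value iteration sketch; the grid representation, the reward function, and the assumption that stepping into a wall leaves the robot in place are illustrative choices, not the model's exact setup.

    def value_iteration(rewards, gamma=0.9, theta=1e-6):
        """rewards maps each grid cell (x, y) to its immediate reward."""
        V = {s: 0.0 for s in rewards}
        moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # North, South, East, West
        while True:
            delta = 0.0
            for (x, y), v in list(V.items()):
                # Bellman backup with deterministic moves; stepping off the
                # grid (into a missing cell) leaves the robot where it is.
                best = max(rewards[(x, y)] + gamma * V.get((x + dx, y + dy), v)
                           for dx, dy in moves)
                delta = max(delta, abs(best - v))
                V[(x, y)] = best
            if delta < theta:
                return V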

These are NetLogo models that demonstrate various well-known multiagent algorithms and other related techniques. They have been developed by me, my students, and others over the years.

Many of these models implement algorithms described in detail in my free textbook Fundamentals of Multiagent Systems.

If you need a consultant or just some help in building a NetLogo model for your research, let me know. Some of these models, and many others not shown here, are the results of collaborations with other researchers in various science and engineering disciplines.

Beernet

This is an implementation of an automated beer distribution game in a large tree-like network. We can observe the bullwhip effect in action. We test the survivability and dynamic behavior of the supply network under various types of attacks or failures. More information on this model can be found in

Jose M. Vidal and Anand Nair

Container Port Simulation

A simulation of a container port. We model the movement of the cranes using various utility functions to determine the best behavior for the crane operators. The trucks have random arrival times. More details can be found at

  1. Nathan Huynh and Jose M. Vidal. An Agent-Based Approach to Modeling Yard Cranes at Seaport Container Terminals. In Proceedings of the Symposium on Theory of Modeling and Simulation, 2010.

  2. Jose M Vidal and Nathan Huynh. Building Agent-Based Models of Seaport Container Terminals. In Proceedings of 6th Workshop on Agents in Traffic and Transportation, 2010.

Jose M Vidal and Nathan Huynh

Asynchronous Backtracking

Generates and solves graph coloring problems using ABT.

Gary Fredericks

Graph Coloring using Asynchronous Weak-Commitment

This is an implementation of Asynchronous Weak-Commitment Search (AWCS) for the graph coloring problem, using the algorithm from

  1. Makoto Yokoo and Katsutoshi Hirayama. Algorithms for Distributed Constraint Satisfaction: A Review. Autonomous Agents and Multi-Agent Systems, 3(2):185--207, 2000.

Ionel Muscalagiu, Jose M. Vidal

Bayes-Ball Algorithm

This model implements the Bayes-Ball algorithm. The purpose of this simulation is to provide a visualization that readers of Shachter's article can explore in order to better understand the process of the algorithm as well as the results it elicits.

  1. Ross D. Shachter. Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams). In Proceedings of the Fourteenth Conference in Uncertainty in Artificial Intelligence, p. 480--487, 1998.

Alicia Ruvinsky

Combinatorial Auction

Generates and solves a combinatorial auction. The yellow circles represent the items for sale and the green squares represent the bids. Each bid is connected to the items it is over. The bids with squares drawn around them are the ones in the revenue-maximizing solution, which we find upon "setup" by running branch and bound. We implement a branch and bound algorithm on both the branch-on-items search tree and the branch-on-bids search tree.
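
The model implements its own search in NetLogo; purely as an illustration of the branch-on-bids idea with the simplest possible upper bound (the sum of the prices of the bids not yet considered), a Python sketch might look like this:

    def branch_on_bids(bids):
        """bids is a list of (price, frozenset_of_items);
        returns (revenue, indices of the winning bids)."""
        best = [0.0, []]                                  # incumbent revenue and allocation

        def search(i, taken, revenue, chosen):
            # bound: even accepting every remaining bid cannot beat the incumbent
            if revenue + sum(p for p, _ in bids[i:]) <= best[0]:
                return
            if i == len(bids):
                best[0], best[1] = revenue, list(chosen)
                return
            price, items = bids[i]
            if not (items & taken):                       # branch 1: accept bid i
                search(i + 1, taken | items, revenue + price, chosen + [i])
            search(i + 1, taken, revenue, chosen)         # branch 2: reject bid i

        search(0, frozenset(), 0.0, [])
        return best[0], best[1]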

The simple branch and bound algorithms implemented here are described in

If you want to learn more about combinatorial auctions I recommend

Jose M Vidal

Circle

A very simple application where the turtles move around in a circle.
It serves as a fun example of emergent behavior. It also raises
interesting questions: What is the most robust way to implement this? What if
different people implement different turtles? What if we add some
limitations to the environment? etc.

Jose M Vidal

COIN

This model implements the COllective INtelligence framework and attempts to reproduce the results from

It implements the wonderful life utility (WLU), clamped down to 0 (no attendance) and 1 (attend every day), and the aristocratic utility function for the El Farol Bar problem. Since the paper does not mention the exact parameter values used, I have been unable to exactly reproduce their results. In particular, this simulation seems to take longer to converge than theirs.
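
As a sketch of how a wonderful life utility can be computed (Python rather than NetLogo; the nightly reward x * exp(-x / c) is a common choice in the bar-problem literature and an assumption here, not necessarily the one this model uses):

    import math

    def world_utility(attendance_per_night, c=6.0):
        # assumed nightly reward: crowded nights are worth less
        return sum(x * math.exp(-x / c) for x in attendance_per_night)

    def wonderful_life_utility(agent, schedules, clamp):
        """schedules maps each agent to a list of 7 zeros/ones (1 = attend that night);
        clamp is the fixed schedule the agent is clamped to, e.g. [0]*7 or [1]*7."""
        actual = [sum(s[n] for s in schedules.values()) for n in range(7)]
        clamped = [actual[n] - schedules[agent][n] + clamp[n] for n in range(7)]
        # WLU_i = G(joint schedule) - G(joint schedule with agent i clamped)
        return world_utility(actual) - world_utility(clamped)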

Each row in the main output window represents one agent and each of the 7 columns is a day of the week. The patch is set to blue when the agent attends that particular night. Note that the agents often converge if you let the model run over 1000 steps.

Optimal global utility is achieved, for num-nights=1, if 3 agents attend every night except one night when all the other agents attend.

Jose M Vidal

Congregating in Market Systems

We implement the congregating algorithm from

  • Christopher H. Brooks and Edmund H. Durfee. Congregating and market formation. In Proceedings of the 1st International Joint Conference on Autonomous Agents and MultiAgent Systems, p. 96--103, 2002.

Jose M Vidal

The Coordination Game

This is the binary coordination game. Each agent can be either red
or blue. At each time step some of the agents change their color simultaneously.
The goal is to find out how long it takes, under different conditions, for the
population to converge to one color. This problem was presented in

  1. Yoav Shoham and Moshe Tennenholtz. On the emergence of social conventions: modeling, analysis, and simulations. Artificial Intelligence, 94:139--166, 1997.

  2. Jordi Delgado. Emergence of social conventions in complex networks. Artificial Intelligence, 141:171--185, 2002.
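
The update rule the agents use is in the model itself; purely as an illustration of the setup, a simultaneous update in which each chosen agent adopts the color it sees most often among its neighbors might look like the sketch below (the cited papers study rules such as highest cumulative reward, which behave differently):

    import random

    def step(colors, neighbors, p_update=0.5):
        """colors maps agent -> 0 or 1; neighbors maps agent -> the agents it observes."""
        updates = {}
        for a, nbrs in neighbors.items():
            if nbrs and random.random() < p_update:
                ones = sum(colors[b] for b in nbrs)
                if 2 * ones == len(nbrs):                   # tie: pick at random
                    updates[a] = random.randint(0, 1)
                else:
                    updates[a] = int(2 * ones > len(nbrs))  # adopt the majority color
        colors.update(updates)                              # everyone switches at once
        return colors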

Jose M Vidal

Graph Coloring using Distributed Breakout

An implementation of the distributed breakout algorithm from

  1. Makoto Yokoo and Katsutoshi Hirayama. Algorithms for Distributed Constraint Satisfaction: A Review. Autonomous Agents and Multi-Agent Systems, 3(2):185--207, 2000.

  2. Makoto Yokoo and Katsutoshi Hirayama. Distributed Breakout Algorithm for Solving Distributed Constraint Satisfaction Problems. In Proceedings of the Second International Conference on Multiagent Systems, p. 401--408, 1996.

Jose M Vidal

Distributed Recommender System

A prototype implementation of a distributed recommender system as seen in

Jose M Vidal

Evolutionary Game Theory

A visualization of a population of agents playing repeated games under the standard evolutionary game theory assumptions. We use a simplex to show how these populations evolve over time. Click the forever-go button and then click on the simplex to show how a population at that spot would evolve.

Jose M Vidal

Task Allocation

Solves a task allocation problem with the coalition formation algorithm from

William Wiles

Mailmen

This is the classic MAS mailmen problem where a set of mailmen
have to deliver a set of letters. Each letter must be delivered to
a different place and mailmen can exchange letters. The goal is to
minimize the total distance travelled by the mailmen. It is
an instance of a distributed resource allocation problem. The solution
provided here is only the most obvious one; better solutions are
possible. AFAIK, there is no agreement on what is the best
algorithm for solving this problem. This problem was first popularized in

  • Jeffrey S. Rosenschein and Gilad Zlotkin. Rules of Encounter. The MIT Press, Cambridge, MA, 1994.

Jose M Vidal

Marriage Problem

There are 64 single men and 64 single women, all presumed heterosexual, who want to get married. Each one ranks all possible mates. The problem is to find a set of marriages such that no two people (one man, one woman) who are not married to each other in fact prefer each other over their assigned partners.

The solution we implement is the deferred acceptance algorithm from

In this model the women are the pink circles and the men are blue (well, cyan). When a man connects to a woman with an edge it means he has proposed to her. The numbers in the nodes represent each node's preference for its current partner. The algorithm works as follows:

  1. while there are women without a proposal
  2. every man proposes to his most preferred woman who has not rejected him
  3. every woman with multiple proposals rejects all but the one she prefers the most

When done, all women have one proposal which they accept and live happily ever after. The resulting marriages are stable.
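
A minimal sketch of this man-proposing deferred acceptance loop (Python rather than NetLogo, iterating over the unmatched men, which is equivalent under the assumption of complete preference lists and equal numbers of men and women):

    def deferred_acceptance(men_prefs, women_prefs):
        """men_prefs[m] and women_prefs[w] list potential partners, most preferred first.
        Returns a dict mapping each woman to the man whose proposal she ends up holding."""
        rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
        next_choice = {m: 0 for m in men_prefs}      # next woman each man will try
        holds = {}                                   # woman -> proposal she currently holds
        free_men = list(men_prefs)
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]         # his best woman who has not rejected him
            next_choice[m] += 1
            if w not in holds:
                holds[w] = m
            elif rank[w][m] < rank[w][holds[w]]:     # she prefers the new proposer
                free_men.append(holds[w])            # and rejects the one she was holding
                holds[w] = m
            else:
                free_men.append(m)                   # she rejects m
        return holds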

The scatter plot shows the marriages that we find, where the x coordinate is the man's preference for the woman and the y coordinate is the woman's preference for the man (1 means her most preferred). Notice that even though women turn down men, the men do much better.

Jose M Vidal

NQueens Adopt

This program solves the n-queens problem using the Adopt algorithm.
The Adopt algorithm appears in

Jose M Vidal

Reciprocity in Package Delivery

In the package delivery problem the agents must deliver a set of packages from a central depot. The destinations lie along one of several spokes emanating from the central depot. The agents can exchange packages only at the depot. The cost to an agent is proportional to the number of packages it carries.

The optimal solution to this problem is to choose one agent to go down each spoke and deliver all the packages scheduled for it. However, this would require completely selfless agents. In this program we study the dynamics that emerge in different populations of selfish, philanthropic, reciprocal, and individual agents.

These results were first presented in

  1. Sandip Sen. Reciprocity: a foundational principle for promoting cooperative behavior among self-interested agents. In Proceedings of the Second International Conference on Multiagent Systems, p. 322--329, AAAI Press, Menlo Park, CA, 1996.

  2. Sandip Sen and Partha Sarathi Dutta. The evolution and stability of cooperative traits. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, p. 1114--1120, ACM Press, NY, NY, 2002.

  3. Sandip Sen. Believing others: Pros and cons. Artificial Intelligence, 142(2):179--203, December 2002.

Jose M Vidal

Pareto Learning

An implementation of conditional joint action learners (CJAL) in the prisoner's dilemma as described in:

These agents reach the Pareto optimal (cooperate,cooperate) solution in repeated play.

Jose M Vidal

Path-Finding Using Pheromones

In this problem a series of drones tries to find a path from the source
to the target while staying away from the obstacles. They use pheromones.
I tried to implement something similar to

Jose M Vidal

Q-Learning

An implementation of the Q-learning algorithm for a simple path-finding domain.

  • Christopher J. C. H. Watkins and Peter Dayan. Q-Learning. Machine Learning, 8(3-4):279--292, 1992.
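
For reference, a minimal tabular Q-learning loop in Python; the environment interface (reset, step, and a list of actions) is assumed here for illustration and is not the model's actual code:

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        """env is assumed to offer reset() -> state, step(state, action) ->
        (next_state, reward, done), and a list of actions env.actions."""
        Q = defaultdict(float)                          # (state, action) -> value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < epsilon:           # explore
                    a = random.choice(env.actions)
                else:                                   # exploit the greedy action
                    a = max(env.actions, key=lambda act: Q[(s, act)])
                s2, r, done = env.step(s, a)
                # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
                target = r + gamma * max(Q[(s2, act)] for act in env.actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2
        return Q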

Seang Chan Ryu

Replicator Dynamics

A visualization of the dynamics of a population of agents using replicator dynamics. The players are of one of three types, each type playing a pure strategy in a 3x3 game. The user can change the payoffs of the game (this still needs some work; we also need to add arrows that show the direction of movement. You interested?). The results are displayed in a simplex plot where each corner corresponds to one of the strategies. Dark areas correspond to fast movement of the population while light areas represent slow movement.
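
The population shares evolve according to the replicator equation; a small Euler-step sketch in Python (the step size and the renormalization are illustrative choices, not the model's code):

    def replicator_step(x, A, dt=0.01):
        """x: shares of the three strategies (sums to 1); A[i][j]: payoff to i against j.
        One Euler step of dx_i/dt = x_i * (f_i - f_bar), with f_i = (A x)_i and f_bar = x.A.x."""
        f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        f_bar = sum(x[i] * f[i] for i in range(3))
        x = [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(3)]
        total = sum(x)                                  # guard against numerical drift
        return [xi / total for xi in x]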

Scott Langevin

Shapebugs

This is an implementation of the Shapebugs algorithm from

My program begins (via the setup button) by randomly dispersing the agents throughout the display. All agents start in the lost state and assume that they are outside of the desired shape. An agent that is inside the shape is considered to be in the found state.

Each agent's movement is as follows:

  • If a neighbor comes within a minimum distance, the agent repels that neighbor.
  • If an agent finds itself inside of the desired shape, it will follow the above rules as long as such movement does not take it outside of the shape.
  • If an agent is inside of the shape, and it determines that its next move will take it outside of the shape, it will instead do one of two things (sketched in the code after this list):
      • Stay still: 90% probability
      • Move slightly to the right, left, up, or down: 10% probability
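
A rough sketch of the stay-inside rule above (Python rather than NetLogo; the shape test and the proposed move are assumed to come from the rest of the model):

    import random

    def constrained_move(pos, proposed, inside_shape):
        """pos and proposed are (x, y) tuples; inside_shape is a predicate (x, y) -> bool."""
        if not inside_shape(*pos) or inside_shape(*proposed):
            return proposed                              # lost agents, and safe moves, proceed
        if random.random() < 0.9:
            return pos                                   # stay still (90%)
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        return (pos[0] + dx, pos[1] + dy)                # small random nudge (10%)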

Agent colors:
  • A red agent is an agent outside of the desired shape
  • A blue agent is an agent inside of the desired shape
  • A green agent is an agent that has been displaced

Agents can be added or removed dynamically via the slider. The shake button chooses a random square from the screen and moves all agents within that square in random directions away from their starting positions.

Note: I found that using 1,000+ agents seems to work best for modeling the shape.

Matt Baker

Supply Network Survivability

This model shows how various supply-chain network topologies fare under attack. The original model was developed, as part of the Ultralog project, to study the military supply chain's vulnerability to terrorist or military attacks.

Jose M Vidal

Tileworld

This is the classic tileworld problem. There are empty holes and tiles.
The agents must push the tiles so that they cover the empty holes. Agents
can push each other or more than one tile at once. The solution implemented
here is the obvious one. AFAIK, there is no consensus on what is the
best algorithm for solving this problem. The Tileworld was first introduced in

Jose M Vidal

Top Trading Cycle Algorithm

In this problem every node represents an agent with a house that he wants to trade. Every agent has an ordered preference list over the other agents' houses (the list also includes the agent's own house). The trick then is to get all the agents to trade houses so that they all end up in a house that is at least as good, in their own eyes, as the one they currently have.

The Top Trading Cycle Algorithm (TTCA) solves this problem using a very simple greedy method. It was first proposed in:
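
A rough sketch of that greedy method, the standard top-trading-cycle loop (Python rather than NetLogo, assuming each agent's preference list covers every house, including their own):

    def top_trading_cycle(prefs):
        """prefs[a] lists the agents whose houses a prefers, best first (a itself included).
        Returns a dict mapping each agent to the agent whose house they receive."""
        remaining = set(prefs)
        assignment = {}
        while remaining:
            # each remaining agent points at the owner of its favorite remaining house
            points_to = {a: next(b for b in prefs[a] if b in remaining) for a in remaining}
            # follow the pointers until an agent repeats: that closes a cycle
            a, seen = next(iter(remaining)), []
            while a not in seen:
                seen.append(a)
                a = points_to[a]
            for b in seen[seen.index(a):]:               # everyone on the cycle trades and leaves
                assignment[b] = points_to[b]
                remaining.discard(b)
        return assignment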

Jose M Vidal

Ant System

This model is an implementation of the Ant System algorithm, as described here, applied to solving the Traveling Salesman Problem.
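
For reference, a compact Python sketch of the Ant System loop on a symmetric distance matrix; the parameter values are illustrative, the pheromone deposit of Q divided by tour length follows the usual Ant System formulation, and none of this is the model's NetLogo code:

    import random

    def ant_system(dist, n_ants=20, iters=100, alpha=1.0, beta=5.0, rho=0.5, Q=100.0):
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]              # pheromone on each edge
        best_tour, best_len = None, float("inf")
        for _ in range(iters):
            tours = []
            for _ in range(n_ants):
                start = random.randrange(n)
                tour, todo = [start], set(range(n)) - {start}
                while todo:
                    i = tour[-1]
                    # edge attractiveness: tau^alpha * (1 / distance)^beta
                    cities = list(todo)
                    w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cities]
                    j = random.choices(cities, weights=w)[0]
                    tour.append(j)
                    todo.remove(j)
                tours.append(tour)
            for i in range(n):                           # evaporation
                for j in range(n):
                    tau[i][j] *= (1.0 - rho)
            for tour in tours:                           # each ant deposits pheromone
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                if length < best_len:
                    best_tour, best_len = tour, length
                for k in range(n):
                    a, b = tour[k], tour[(k + 1) % n]
                    tau[a][b] += Q / length
                    tau[b][a] += Q / length
        return best_tour, best_len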

Christopher Roach and Benito Mendoza

NQueens Asynchronous Weak-Commitment

This is an implementation of Asynchronous Weak-Commitment Search for the n-queens problem.

It also shows the number of messages sent by each agent. It behaves differently from the Adopt algorithm, in which a few agents end up sending the great majority of the messages. Here, the messages sent are better distributed among the agents.

Rosa L. Zavala Gutierrez