RadSim Challenge Problem

Screen dump of the RadSim simulator with our changes. The red circles are the targets; the green circles are our guesses at the targets' locations.

1. Problem Description

RadSim is a simulator which implements the Challenge Problem. The simplest Challenge Problem consists of a field with radars and moving targets. Each radar can be positioned in one of three orientations. The radars return an amplitude and a frequency (Doppler shift) for each target in their field of view. From these values we can determine an arc; the target can be anywhere on this arc.

The challenge is to determine the exact location of all targets as they move in the field while using the least amount of resources (i.e. battery power) possible.

2. General Approach

We divide the problem into three subproblems:
  1. Create one and only one Target agent for each mobile object in the field. See target creation and death.
  2. Target agent determines where it is using the returns from the sensors. See locating the target.
  3. Allocate the available sensors so that they cover as many targets as possible, with overlap if possible. This problem can be subdivided into two interrelated subproblems:
    1. Static problem: a service-allocation problem. Assign each target agent a number of sensors to cover it, subject to the constraint that each sensor can only look at one sector at a time (in the future, sensors might also have a limited range or obstructed views). See the static problem below for a formalization of this problem.
    2. Dynamic problem: we do not want to redo the previous service allocation every time there is a change, especially since we expect the mobile objects to move very little between measurements.
    Both of these problems are solved by our Dynamic Constrained Service-Allocation with Teams protocol. Although the protocol solves both, we still separate the static and dynamic concerns because mathematical tools are available for analyzing the static problem, while the dynamic problem can only be studied via simulation (which we will do).

2.1 Target Creation and Death

A target agent's sole purpose is to keep track of the location of the mobile object it represents. It embodies our desire to know the location of a target, as well as our know-how for determining this location from all the available data.

2.2 Locating the Target

Graphical representation of the radar data. Each line (path) represents one radar return. In this case, the target is located at the intersection of three paths (one from each sensor).

After using our exchange protocol, each target will receive a number of returns (i.e., readings) from the sensor agents, from which it must extract its location. For our initial implementation we make simplifying assumptions, notably that the returns are noise-free and that the target's speed is known.

Given these assumptions, we can use an equation to determine the target's location from two readings taken by different sensors. The procedure is:
  1. Find all intersection points. That is, each amplitude implies a path; find all the points where these paths intersect.
  2. Give each point a score equal to two times the number of times it appears as an intersection, minus the distance between it and the target's last known location.
  3. Pick the point with the highest score.
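The scoring step above can be sketched in a few lines of Java. This is a minimal illustration with class and field names of our own choosing (they are not taken from Target.java): each candidate intersection point is scored as twice its multiplicity minus its distance to the last known location, and the highest-scoring point wins.

```java
// Illustrative sketch of the intersection-scoring procedure.
// Names here are ours, not the project's.
public class IntersectionScore {

    /** A candidate intersection point and how many times it appears. */
    public static final class Candidate {
        public final double x, y;
        public final int multiplicity;
        public Candidate(double x, double y, int multiplicity) {
            this.x = x;
            this.y = y;
            this.multiplicity = multiplicity;
        }
    }

    /** score = 2 * (# times the point appears as an intersection) - (distance to last known location). */
    public static double score(Candidate c, double lastX, double lastY) {
        double dist = Math.hypot(c.x - lastX, c.y - lastY);
        return 2.0 * c.multiplicity - dist;
    }

    /** Pick the candidate with the highest score, or null if there are none. */
    public static Candidate best(Candidate[] candidates, double lastX, double lastY) {
        Candidate bestSoFar = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Candidate c : candidates) {
            double s = score(c, lastX, lastY);
            if (s > bestScore) {
                bestScore = s;
                bestSoFar = c;
            }
        }
        return bestSoFar;
    }
}
```

Note how the two terms trade off against each other: a point seen by three sensors can still lose to a point seen by two if it is much farther from the target's last known position.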

In the future, we plan to relax some of the assumptions above. The target will then use Bayesian networks to determine a probability distribution over the set of possible locations it could be in.

2.3 The Static Resource Allocation Problem

Assuming that we have one target agent associated with each mobile object, we can formalize this problem using some notation. Let S be the set of sensors, with s1, s2, ..., s|S| the individual sensors. Similarly, let T be the set of target agents, with t1, t2, ..., t|T| the individual targets. The problem is to find an allocation A = {{si, sj, ...}, {sl, sm, ...}, ...}, where the first subset is the set of sensors looking at t1, and so on.

The set of possible allocations is constrained by the locations of the mobile objects and by the fact that a sensor can only look at one sector at a time. This set of constraints is given by C; C(A) is true if and only if the allocation A does not violate any constraint. Notice that the global set of constraints is satisfied if each sensor satisfies its own constraints: if no sensor finds that A makes it violate a constraint (e.g., looking at two sectors at the same time), then the global constraints are satisfied.
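The locality property just described can be made concrete with a small sketch (the representation of an allocation as a list of sector assignments per sensor is ours, for illustration): the global check C(A) reduces to a per-sensor check that no sensor is asked to watch more than one sector at a time.

```java
// Sketch of the locality of the constraint check C(A).
// The allocation representation here is our own, not the project's.
public class ConstraintCheck {

    /** Local check: a sensor satisfies its constraints if it watches at most one sector. */
    public static boolean localOk(int[] sectorsAssignedToSensor) {
        return sectorsAssignedToSensor.length <= 1;
    }

    /** Global check C(A): true iff every sensor passes its own local check. */
    public static boolean globalOk(int[][] sectorsPerSensor) {
        for (int[] sectors : sectorsPerSensor) {
            if (!localOk(sectors)) {
                return false;
            }
        }
        return true;
    }
}
```

Because the check decomposes this way, no central authority is needed: each sensor can veto an allocation that overcommits it.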

The solution is given by a global utility function G(A), which reaches its peak when every target is viewed by at least two sensors (we need at least two sensors to determine the location of a target; see locating the target) and decreases as more targets are left without cover. This function also assigns more value to an even allocation: it is better to assign 5 sensors to each of two targets than to give 2 to the first and 8 to the second (the target agents can exploit this redundancy to update their positions more frequently and reduce errors).
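To make these two properties concrete, here is one hypothetical instance of G(A). The text only fixes that a target needs at least two sensors to be located and that an even split (5 and 5) beats an uneven one (2 and 8); any concave per-target reward yields the second property. The square root used below is our own choice for illustration, not the project's actual function.

```java
// A hypothetical concrete G(A), for illustration only.
public class GlobalUtility {

    /** Per-target reward for being covered by k sensors. */
    public static double targetValue(int k) {
        if (k < 2) {
            return 0.0; // fewer than two sensors: the target cannot be located
        }
        return Math.sqrt(k); // concave: extra sensors help, but with diminishing returns
    }

    /** G(A): the sum of per-target rewards, given each target's sensor count. */
    public static double g(int[] sensorsPerTarget) {
        double total = 0.0;
        for (int k : sensorsPerTarget) {
            total += targetValue(k);
        }
        return total;
    }
}
```

With this choice, g(new int[]{5, 5}) exceeds g(new int[]{2, 8}), capturing the stated preference for even allocations.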

Our task, as multiagent systems engineers, is to develop a protocol the agents can use to reach the A that maximizes G(A). Our initial decision to implement actual target agents makes this task much simpler: we simply assign each target agent a utility function Ut(A) that grows with T, the time that has passed since the target last updated its location. For the first round, we set T to some constant.

Given these individual utilities, we now need to define a protocol the agents can use to change the current allocation so as to increase their individual utility without violating any constraints. We present that protocol in the Dynamic Constrained Service-Allocation with Teams section below.

A great deal of insight can be gained by using the "fitness landscape" metaphor: picture each A as a point in a plane whose height is given by G(A). Furthermore, the fact that our G(A) equals the sum of all the Ut(A) means that this fitness landscape is very similar to Kauffman's NK landscapes, where it has been shown that a global optimum can be found faster if groups of agents act selfishly as a group. It remains to be seen whether these results will hold in our domain, which differs from the NK model because of the set of constraints C that we must impose on the agents.

2.4 Dynamic Constrained Service-Allocation with Teams

In order for the sensor agents to determine which target they will look at, we have developed an extension to the basic contract-net protocol that makes it more suitable for dynamic service domains such as this one.

In our protocol, the target agents constantly send bids to the sensor agents. A bid is simply the marginal utility the target expects to receive if the sensor agrees to take a reading in the target's direction: the longer it has been since a target agent refreshed its location, the higher the utility. Each sensor picks the bid with the highest utility (if any) and takes that measurement. It then sends the results to all the target agents that could benefit from them. Since this is a cooperative system, we want the agents to share as much information as would be useful.
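One round of this bidding can be sketched as follows. This is an illustration only, with names of our own (not from Sensor.java or Target.java): a target's bid is the time elapsed since its last location fix, and the sensor serves the highest bidder.

```java
// Illustrative sketch of one bidding round in the protocol.
// Names here are ours, not the project's.
public class BidRound {

    /** A target's bid: the time elapsed since its location was last updated. */
    public static double bid(double now, double lastUpdateTime) {
        return now - lastUpdateTime;
    }

    /** The sensor serves the highest bidder; returns its index, or -1 if there are no bids. */
    public static int winner(double[] bids) {
        int best = -1;
        double bestBid = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < bids.length; i++) {
            if (bids[i] > bestBid) {
                bestBid = bids[i];
                best = i;
            }
        }
        return best;
    }
}
```

Because bids grow with waiting time, a target that has gone unserved keeps raising its bid until some sensor attends to it, which is what keeps every target's location estimate fresh.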

The agents' current algorithms are:

Algorithm: Sensor agent (Sensor.java)

Targets; //vector of known targets, includes location, bid,
         // and initial observation (if known)

while (true) {
    Foreach amplitude returned by the radar
        If one of the known Targets is on/near the amplitude path
            then continue
        If the amplitude is similar to that of a target I just created
        but which does not know its position yet
            then continue
        Create a new Target agent,
            start it running,
            tell it about the observation (sensor location, amplitude),
            tell it about the locations of all sensors.

    Foreach msg received from a Target agent
        If the target is unknown,
            add it to the list of known targets
        If it's a request,
            change the agent's current request to this one
        If it's a location,
            change the agent's current location to this one
        If it's a dead message,
            remove the target from the list of known targets.

    If ready to send a new request for measurement
        pick the request with the highest utility and scan that sector
        If no utilities > -1, pick a sector randomly.

}

Algorithm: Target agent (Target.java)

Location;     // where I think I am
Observations; // vector of observations received from sensors, timestamped;
              // they expire when old, and new observations from a sensor override old ones.

Tell all the sensors not in Observations to send me data

While (true) {

    Foreach msg received from a sensor
        If msg is an observation then add it to Observations

    Expire old Observations;

    If it's time to refresh my location
        Send a bid to all sensors to look my way. The bid is the amount of
        time since my location was last updated.

    Determine Location based on Observations;
        Given that we know the speed of the target, we can determine its
        location from amplitude and frequency readings from two sensors.
        There will be many intersections. Pick the one with the highest
        score = 2*(# times this point comes up as an intersection) -
            (distance from this point to the old location).

    Tell all sensors where I am.

    If I have not been able to figure out where I am for a while,
        tell all the sensor agents that I'm dead.
        die.

}

One problem with this solution is that it might lead to "flip-flops". That is, in a situation where two targets each need one of two available sensors, the two sensors could both decide to handle the same target. To avoid this and other similar problems, we propose that targets change their utility function to be the sum of the utilities of the targets in their team, where a team is defined by physical proximity or some other suitable heuristic.

The team extension has not been fully developed yet.

FAQ

  1. What are you doing as preparation for reporting the basic real-time baseline? (What are you going to measure, how do you measure it, and how do you address the scale-up?) We will measure the error rate and resource consumption. The error rate is the sum of the errors in the predictions over a series of runs. The resource consumption is the number of measurements the sensors take. We will measure these variables in experiments that vary the ratio of sensors to targets, as well as the total number of sensors and targets. We do not expect scale to be a problem, since we can instruct our agents to communicate only with others near them, localizing all communications. Still, it is possible that limiting agents to a very small neighborhood will decrease the solution quality.
  2. What are the emerging demo plans? How do you plan to incorporate the challenge problem? The service-allocation protocol is being implemented and tested on the RadSim simulator. Since the simulator does not yet simulate noise, we have chosen to start developing the Bayes net separately. We expect to demo the service-allocation protocol in the RadSim simulator.
  3. How do you address dynamics and stability issues? This is an area that is under study. We believe that our protocol is stable under almost all circumstances. However, we have managed to isolate a few pathological cases which lead to flip-flop behavior. Further adjustments might be needed in the future.
  4. Any interesting findings to report? No, we have not carried out extensive tests yet. Preliminary tests are very encouraging.

Current Status

The latest bug/feature list for the code can be found in the package description (first page) of our
API, in case you are interested in the gritty details.
Jose M. Vidal $Id: index.html,v 1.6 2000/03/04 15:22:21 jmvidal Exp jmvidal $
Last modified: Sun Mar 5 16:02:34 EST 2000