Implementing the domain knowledge of the TPA raises a lot of questions. First of all, it is clear that the choice of the agents' architecture is closely tied to the choice of a domain knowledge representation. We believe that our choice of a BDI architecture will allow us to use various domain knowledge architectures (e.g. the agent can spend its idle time running learning algorithms on cached data while still remaining reactive to incoming messages, and it can create KAs that are opportunistic and take over when appropriate), and still have the agent behave in a rational way (i.e. act when appropriate). But there are still many open questions about the nature of the domain knowledge that we are working on: How specialized or general should the agent's knowledge be? What are the appropriate areas for specialization? How adaptive should the agent be? What kinds of knowledge and learning mechanisms (e.g. reinforcement learning, genetic algorithms, Bayesian networks) will work best? We do not expect to answer all of these questions (in fact, we expect that the economic market will have the last word on what works and what does not), but we hope to investigate some of them.
Currently, the domain knowledge is implemented directly in the agent's KAs. We assume that a query is answered by the achievement of successive goals; that is, we wrote a repertoire of plans for achieving the task of answering a query under various sets of task parameters. A subset of the goal structure formed by our TPA is shown in Figure 3. The different goals and associated KAs have been gathered from continuing meetings with librarians and other Information and Library Sciences faculty. From these meetings, it has become clear that the knowledge a librarian uses to guide a student to the appropriate collections is sometimes elusive. That is, even though there does seem to be a process or plan that can reliably map queries to the appropriate collections, executing this plan often requires common-sense knowledge or shallow domain knowledge. We believe that the amount of common-sense knowledge required for our limited domain is small enough that it can be encoded in the TPA's KAs, and we hope that the other ``specialist'' agents will provide the needed domain knowledge.
Figure 3: A subset of the goal structure that is formed when a request to recommend-all comes in. The arrows represent the goal/sub-goal relationships. The first names in the nodes are the goals, while the others are the names of KAs that can achieve these goals.
Our basic strategy consists of iterating over a three-step loop. The first step takes the current query and delivers it to the RA (in the future it might instead be delivered to specialized domain agents). The second step ascertains the quality of the results gathered so far: we count the CIA names that have been gathered, along with any relevant information about them, and use a function to determine how well they match the task parameters. If the quality is deemed high enough, the results are returned. Otherwise, we proceed to the third step, where the query is modified in an effort to produce a new query that will yield the desired or missing results. A few simple modification techniques have been implemented for this purpose.
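The three-step loop can be sketched as follows. This is a minimal illustration, not the system's implementation: the helper names (`send_to_ra`, `assess_quality`, `modify_query`), the stub behaviors, and the `wanted`/`threshold` parameters are all our own assumptions.

```python
def send_to_ra(query):
    # Stub standing in for the Registry Agent (RA): pretend each query
    # term matches exactly one collection (CIA) of the same name.
    return ["cia:" + term for term in query]

def assess_quality(results, task_params):
    # Toy quality measure: fraction of the desired number of distinct
    # collections gathered so far (the real function matches results
    # against the full set of task parameters).
    return min(1.0, len(set(results)) / task_params["wanted"])

def modify_query(query):
    # Toy modification: broaden the query by adding a related term.
    return query + ["related:" + query[-1]]

def answer_query(query, task_params, threshold=1.0, max_rounds=5):
    results = []
    for _ in range(max_rounds):
        results.extend(send_to_ra(query))              # step 1: ask the RA
        if assess_quality(results, task_params) >= threshold:
            return sorted(set(results))                # step 2: good enough
        query = modify_query(query)                    # step 3: reformulate
    return sorted(set(results))                        # best effort
```

With the stubs above, a query for one term that needs two collections forces one round of reformulation before the quality check passes.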
A common technique is to find a topic broader than, or related to, the one(s) given and search on it as well. For example, if the query was about ``clouds'', this modification technique might also add ``water vapor'', ``storms'', or both. The semantic knowledge of which terms are related, and which terms are broader or narrower than others, is provided by the BSOA. It is important to note that these relations can only be trusted within a specific knowledge domain; that is, our BSOA only deals with themes related to ``space and earth sciences.'' If the user were interested in literature and asked about ``clouds'', then a BSOA specializing in literature might give related terms such as ``obscures'' or ``vague.'' The TPA will need to possess enough domain knowledge and profile information on the user to determine the general area to which the query probably refers.
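A sketch of this broadening step, using hypothetical in-memory tables in place of the BSOA (which the TPA would actually query at run time); the specific relations shown are illustrative only:

```python
# Hypothetical stand-ins for the BSOA's semantic relations within the
# "space and earth sciences" domain; the real BSOA answers such queries.
BROADER = {"cirrus clouds": "clouds", "clouds": "atmosphere"}
RELATED = {"clouds": ["water vapor", "storms"]}

def broaden(terms):
    """Expand a query with broader and related terms, keeping the originals."""
    expanded = list(terms)
    for t in terms:
        if t in BROADER:
            expanded.append(BROADER[t])
        expanded.extend(RELATED.get(t, []))
    return expanded
```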
Another, similar technique is to ask the Thesaurus Agent for synonyms of, or words related to, the ones in the query. The TA can also be specialized in some subject area, and its content is considered to be much more extensive and flexible than the BSOA's. For example, the TA contains terms like ``cirrus clouds'' and ``dust clouds'', while the BSOA is limited to much more general terms. This technique can be used when the UIA provides terms that are too specific to be subjects but are likely to be words in the actual documents (i.e. keywords). Another way of achieving the same effect is to ask the collections themselves whether they contain the specified words; however, this is only feasible once we have trimmed the number of applicable collections.
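The keyword-probing variant might look like the following sketch. The guard against probing too many collections reflects the feasibility caveat above; the function name, the `max_probe` cutoff, and the collection record shape are our assumptions.

```python
def probe_collections(collections, terms, max_probe=10):
    """Ask collections directly whether they contain any of the given
    keywords; only worthwhile once the candidate set has been trimmed."""
    if len(collections) > max_probe:
        return collections  # too many to probe; leave the set unchanged
    return [c for c in collections
            if any(t in c["keywords"] for t in terms)]
```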
We can also apply similar techniques to other fields in the query, such as ``conceptual level,'' ``audience,'' and ``language.'' Eventually, these techniques will be combined so that they work together. For example, if a high-school student asks about the moon, then the process will first go through general references and introductory texts (or collections of such) about the moon and planetary bodies in general, and only afterward try more advanced or scholarly articles on the moon and astrophysics.
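The escalation in the moon example can be sketched as an ordered list of search tiers, tried until one yields results. The tier structure, field names, and `search` callback are hypothetical scaffolding, not the system's actual interface.

```python
# Hypothetical escalation for a high-school query about the moon: try
# introductory material first, advance to scholarly sources only if needed.
TIERS = [
    {"level": "introductory", "subjects": ["moon", "planetary bodies"]},
    {"level": "scholarly", "subjects": ["moon", "astrophysics"]},
]

def escalate(tiers, search):
    """Run each tier in order, stopping at the first that yields results."""
    for tier in tiers:
        results = search(tier)
        if results:
            return tier["level"], results
    return None, []
```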
Finally, if all else fails, we can eliminate terms from the specified query. Note that we are not allowed to eliminate terms from other parts of the task description, only from the part that contains the query.
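A sketch of this last-resort relaxation, with the task represented as a hypothetical dictionary; the choice to drop the final term (as presumably the most specific) is our assumption.

```python
def relax_query(task):
    """Last resort: drop the final (presumably most specific) query term.
    Only the query portion of the task description may be relaxed; all
    other fields are left untouched."""
    if len(task["query"]) <= 1:
        return task  # nothing left to safely drop
    relaxed = dict(task)  # shallow copy so the original task is preserved
    relaxed["query"] = task["query"][:-1]
    return relaxed
```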
The TPA's world knowledge consists of the knowledge the agent needs for achieving its goals of effectively communicating with the other agents, forming deals and, eventually, turning a profit. Specifically, the TPA needs to know the following:
The first item is a first step towards the modeling of other agents. The TPA might find that simply having a mapping between agent types and capabilities is not enough and that it needs more detailed models of other agents (e.g. a TA might be designed primarily for grade-school students, a BSOA might always take a long time to answer, a TPA might always deliver results of poor quality). In these cases the TPA will need to expand its knowledge base to include models of the other agents. These models should include the agents' specialty, how long they take to do their task, the quality of their results, and the price they charge for their services. The TPA will probably start with some basic generic models, which it will refine and specialize using its observations. Of course, if we assume that the other agents are just as complex, then we must also assume that they are building similar models. This situation leads to potentially infinite recursive modeling. The TPA will need to be able to establish tradeoffs between modeling and taking action: it must determine whom to model, when, and how deeply, and when to use this knowledge.
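One plausible shape for such a model is a record per agent whose statistics are refined incrementally from each observed interaction. This is a sketch under our own assumptions (the class name, fields, and incremental-average update are ours), not the TPA's actual representation.

```python
class AgentModel:
    """Hypothetical running model of another agent, refined from a
    generic starting point with each observed interaction."""

    def __init__(self, specialty=None):
        self.specialty = specialty
        self.n = 0                 # number of observations so far
        self.avg_time = 0.0        # how long the agent takes
        self.avg_quality = 0.0     # quality of its results
        self.avg_price = 0.0       # price it charges

    def observe(self, time_taken, quality, price):
        # Incremental running averages: each interaction nudges the
        # generic model toward the agent's observed behavior.
        self.n += 1
        self.avg_time += (time_taken - self.avg_time) / self.n
        self.avg_quality += (quality - self.avg_quality) / self.n
        self.avg_price += (price - self.avg_price) / self.n
```

Keeping only running averages sidesteps unbounded storage, but says nothing about the recursive-modeling tradeoff, which remains an open question.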