Vidal's library
Title: Believing Others: Pros and Cons
Author: Sandip Sen, Anish Biswas, and Sandip Debnath
Book Title: Proceedings of the Fourth International Conference on Multiagent Systems
Pages: 279--286
Year: 2000
Abstract: In open environments there is no central control over agent behaviors. On the contrary, agents in such systems can be assumed to be primarily driven by self-interest. Under the assumption that agents remain in the system for significant time periods, or that the agent composition changes only slowly, we have previously presented a prescriptive strategy for promoting and sustaining cooperation among self-interested agents. The adaptive, probabilistic policy we have prescribed promotes reciprocative cooperation that improves both individual and group performance in the long run. In the short run, however, selfish agents could still exploit reciprocative agents. In this paper, we evaluate the hypothesis that the exploitative tendencies of selfish agents can be effectively curbed if reciprocative agents share their opinions of other agents. Since the true nature of agents is not known a priori and must be learned from experience, believing others can also pose hazards. We provide a learned trust-based evaluation function that is shown to resist both individual and concerted deception on the part of selfish agents.

Cited by 45 - Google Scholar
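
The abstract describes the mechanism only at a high level, so the following is a minimal illustrative sketch (in Python) of how a reciprocative agent might blend first-hand experience with trust-weighted opinions reported by other agents. This is not the evaluation function from the paper; the class name TrustModel and parameters such as direct_weight and learning_rate are assumptions made purely for illustration.

# Hypothetical sketch of a trust-weighted belief update for a reciprocative agent.
# This is NOT the evaluation function of Sen, Biswas and Debnath (2000); it only
# illustrates the general idea of discounting others' opinions by learned trust.

class TrustModel:
    def __init__(self, direct_weight=0.7, learning_rate=0.1):
        self.direct = {}          # agent id -> rating from our own interactions (0..1)
        self.reported = {}        # agent id -> {reporter id: reported rating}
        self.reporter_trust = {}  # reporter id -> learned reliability of their reports
        self.direct_weight = direct_weight
        self.lr = learning_rate

    def observe_interaction(self, agent, cooperated):
        """Update our first-hand rating of `agent` after an exchange."""
        old = self.direct.get(agent, 0.5)
        self.direct[agent] = old + self.lr * ((1.0 if cooperated else 0.0) - old)

    def receive_opinion(self, reporter, agent, rating):
        """Record another agent's reported opinion of `agent`."""
        self.reported.setdefault(agent, {})[reporter] = rating

    def update_reporter_trust(self, reporter, agent):
        """Lower trust in reporters whose claims contradict our own experience."""
        if agent in self.direct and reporter in self.reported.get(agent, {}):
            error = abs(self.reported[agent][reporter] - self.direct[agent])
            old = self.reporter_trust.get(reporter, 0.5)
            self.reporter_trust[reporter] = old + self.lr * ((1.0 - error) - old)

    def evaluate(self, agent):
        """Blend first-hand experience with trust-weighted reported opinions."""
        own = self.direct.get(agent, 0.5)
        reports = self.reported.get(agent, {})
        if not reports:
            return own
        weighted = [(self.reporter_trust.get(r, 0.5), v) for r, v in reports.items()]
        total = sum(w for w, _ in weighted)
        pooled = sum(w * v for w, v in weighted) / total if total > 0 else 0.5
        return self.direct_weight * own + (1 - self.direct_weight) * pooled

A reciprocative agent could then agree to cooperate only when evaluate() exceeds some threshold; concerted deception is blunted because reporters whose claims repeatedly contradict the agent's own experience lose weight in the pooled opinion.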

@InProceedings{sen00a,
  author =	 {Sandip Sen and Anish Biswas and Sandip Debnath},
  title =	 {Believing Others: Pros and Cons},
  googleid =	 {9o0Ox8094xoJ:scholar.google.com/},
  booktitle =	 {Proceedings of the Fourth International Conference
                  on Multiagent Systems},
  pages =	 {279--286},
  year =	 2000,
  abstract =	 {In open environments there is no central control
                  over agent behaviors. On the contrary, agents in
                  such systems can be assumed to be primarily driven
                  by self-interest. Under the assumption that agents
                  remain in the system for significant time periods,
                  or that the agent composition changes only slowly,
                  we have previously presented a prescriptive strategy
                  for promoting and sustaining cooperation among
                  self-interested agents. The adaptive, probabilistic
                  policy we have prescribed promotes reciprocative
                  cooperation that improves both individual and group
                  performance in the long run. In the short run,
                  however, selfish agents could still exploit
                  reciprocative agents. In this paper, we evaluate the
                  hypothesis that the exploitative tendencies of
                  selfish agents can be effectively curbed if
                  reciprocative agents share their opinions of other
                  agents. Since the true nature of agents is not
                  known a priori and must be learned from experience,
                  believing others can also pose other hazards. We
                  provide a learned trust-based evaluation function
                  that is shown to resist both individual and
                  concerted deception on the part of selfish agents.},
  keywords =     {multiagent learning game-theory},
  url =		 {http://jmvidal.cse.sc.edu/library/sen00a.pdf},
  cluster = 	 {1937460218716655094}
}
Last modified: Wed Mar 9 10:14:57 EST 2011