Vidal's library
Title: Believing others: Pros and cons
Author: Sandip Sen
Journal: Artificial Intelligence
Volume: 142
Number: 2
Pages: 179--203
Month: December
Year: 2002
DOI: 10.1016/S0004-3702(02)00289-8
Abstract: In open environments there is no central control over agent behaviors. On the contrary, agents in such systems can be assumed to be primarily driven by self-interest. Under the assumption that agents remain in the system for significant time periods, or that the agent composition changes only slowly, we have previously presented a prescriptive strategy for promoting and sustaining cooperation among self-interested agents. The adaptive, probabilistic policy we have prescribed promotes reciprocative cooperation that improves both individual and group performance in the long run. In the short run, however, selfish agents could still exploit reciprocative agents. In this paper, we evaluate the hypothesis that the exploitative tendencies of selfish agents can be effectively curbed if reciprocative agents share their opinions of other agents. Since the true nature of agents is not known a priori and is learned from experience, believing others can also pose its own hazards. We provide a learned trust-based evaluation function that is shown to resist both individual and concerted deception on the part of selfish agents in a package delivery domain.
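The abstract describes two mechanisms: an adaptive, probabilistic reciprocity policy for deciding whether to help, and a learned trust weighting that tempers how much an agent believes opinions shared by others. The Python sketch below illustrates one way such a scheme could fit together; it is a minimal illustration under assumed names and parameters (ReciprocativeAgent, tau, beta, lr are all hypothetical), not the paper's actual algorithm.

import math
import random


class ReciprocativeAgent:
    """Illustrative sketch only; not the paper's actual formulation."""

    def __init__(self, beta=1.0, tau=0.5):
        self.beta = beta      # steepness of the help-probability curve (assumed)
        self.tau = tau        # learned trust in peers' shared opinions (assumed)
        self.balance = {}     # net cost/benefit history per partner agent
        self.opinions = {}    # latest opinion shared by peers about each agent

    def record_interaction(self, agent_id, net_benefit):
        # Accumulate the net cost/benefit of an exchange with agent_id.
        self.balance[agent_id] = self.balance.get(agent_id, 0.0) + net_benefit

    def trust_weighted_estimate(self, agent_id):
        # Blend direct experience with peers' shared opinion, weighted by the
        # learned trust tau; with no shared opinion, use experience alone.
        own = self.balance.get(agent_id, 0.0)
        shared = self.opinions.get(agent_id)
        if shared is None:
            return own
        return (1 - self.tau) * own + self.tau * shared

    def prob_help(self, agent_id, cost):
        # Probabilistic reciprocity: the better the (trust-weighted) history
        # with the requester, and the lower the cost, the likelier we help.
        x = self.beta * (self.trust_weighted_estimate(agent_id) - cost)
        return 1.0 / (1.0 + math.exp(-x))

    def decide(self, agent_id, cost):
        return random.random() < self.prob_help(agent_id, cost)

    def update_trust(self, observed, predicted_by_peers, lr=0.1):
        # Learn how much to believe others: shrink tau when shared opinions
        # mispredict observed behavior, which is what lets reciprocative
        # agents resist individual or concerted deception by selfish agents.
        error = abs(observed - predicted_by_peers)
        self.tau = max(0.0, min(1.0, self.tau + lr * (0.5 - error)))


if __name__ == "__main__":
    a = ReciprocativeAgent()
    a.record_interaction("b", 2.0)      # b has helped us before
    a.opinions["b"] = 1.5               # peers report that b is cooperative
    print(a.prob_help("b", cost=1.0))   # high probability of helping b

The one design point mirrored directly from the abstract is that the trust placed in others' opinions is itself learned from experience rather than fixed, so agents that spread false opinions, alone or in concert, gradually lose their influence.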


@Article{sen02b,
  author =	 {Sandip Sen},
  title =	 {Believing others: Pros and cons},
  googleid =	 {9o0Ox8094xoJ:scholar.google.com/},
  journal =	 {Artificial Intelligence},
  year =	 2002,
  volume =	 142,
  number =	 2,
  pages =	 {179--203},
  month =	 {December},
  abstract =	 {In open environments there is no central control
                  over agent behaviors. On the contrary, agents in
                  such systems can be assumed to be primarily driven
                  by self-interest. Under the assumption that agents
                  remain in the system for significant time periods,
                  or that the agent composition changes only slowly,
                  we have previously presented a prescriptive strategy
                  for promoting and sustaining cooperation among
                  self-interested agents. The adaptive, probabilistic
                  policy we have prescribed promotes reciprocative
                  cooperation that improves both individual and group
                  performance in the long run. In the short run,
                  however, selfish agents could still exploit
                  reciprocative agents. In this paper, we evaluate the
                  hypothesis that the exploitative tendencies of
                  selfish agents can be effectively curbed if
                  reciprocative agents share their opinions of other
                  agents. Since the true nature of agents is not known
                  a priori and is learned from experience, believing
                  others can also pose its own hazards. We provide a
                  learned trust-based evaluation function that is
                  shown to resist both individual and concerted
                  deception on the part of selfish agents in a package
                  delivery domain.},
  keywords =     {game-theory multiagent learning},
  url =		 {http://jmvidal.cse.sc.edu/library/sen02b.pdf},
  doi =		 {10.1016/S0004-3702(02)00289-8},
  comment =	 {masrg},
  cluster = 	 {1937460218716655094}
}