Vidal's library
Title: Q-Learning
Author: Christopher J. C. H. Watkins and Peter Dayan
Journal: Machine Learning
Volume: 8
Number: 3-4
Pages: 279--292
Year: 1992
DOI: 10.1023/A:1022676722315
Abstract: Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
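
A minimal illustration of the one-step update the abstract describes: the sketch below implements tabular Q-learning in Python. The environment interface (env.reset() returning a state, env.step(action) returning (next_state, reward, done)) and the hyperparameter names alpha, gamma, and epsilon are assumptions for illustration only, not taken from the paper.

    import random
    from collections import defaultdict

    def q_learning(env, n_actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        """One-step tabular Q-learning:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        `env` is an assumed interface, not something defined in the paper."""
        Q = defaultdict(float)  # discrete (state, action) -> estimated action-value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # epsilon-greedy exploration keeps every action repeatedly sampled
                # in every state, the condition the convergence theorem requires
                if random.random() < epsilon:
                    action = random.randrange(n_actions)
                else:
                    action = max(range(n_actions), key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # incremental dynamic-programming backup toward the sampled target
                best_next = max(Q[(next_state, a)] for a in range(n_actions))
                target = reward + (0.0 if done else gamma * best_next)
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state = next_state
        return Q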

@Article{	  watkins92a,
  author =	 {Christopher J. C. H. Watkins and Peter Dayan},
  title =	 {Q-Learning},
  googleid =	 {3Y3Leyb3Hs8J:scholar.google.com/},
  journal =	 {Machine Learning},
  volume =	 8,
  number =	 {3-4},
  pages =	 {279--292},
  year =	 1992,
  abstract =	 {Q-learning (Watkins, 1989) is a simple way for
                  agents to learn how to act optimally in controlled
                  Markovian domains. It amounts to an incremental
                  method for dynamic programming which imposes limited
                  computational demands. It works by successively
                  improving its evaluations of the quality of
                  particular actions at particular states. This paper
                  presents and proves in detail a convergence theorem
                  for Q-learning based on that outlined in Watkins
                  (1989). We show that Q-learning converges to the
                  optimum action-values with probability 1 so long as
                  all actions are repeatedly sampled in all states and
                  the action-values are represented discretely. We
                  also sketch extensions to the cases of
                  non-discounted, but absorbing, Markov environments,
                  and where many Q values can be changed each
                  iteration, rather than just one.},
  keywords =     {ai reinforcement learning},
  url =		 {http://jmvidal.cse.sc.edu/library/watkins92a.pdf},
  doi =		 {10.1023/A:1022676722315},
  cluster = 	 {14924637959810158045}
}
Last modified: Wed Mar 9 10:13:48 EST 2011