Behavior in the Limit
- Let $p(x)$ denote the probability that instance $x$ is labeled 1 (positive) rather than 0 (negative).
- Nearest neighbor: as the number of training examples goes to $\infty$, it approaches the Gibbs algorithm (i.e., with probability $p(x)$ predict 1, else predict 0).
- k-Nearest neighbor: as the number of training examples goes to $\infty$ and $k$ grows large, it approaches the Bayes optimal classifier (i.e., if $p(x) > 0.5$ predict 1, else predict 0); see the sketch below.
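These two limits can be checked empirically. Below is a minimal Python sketch (not from the slides) that simulates both limiting predictors on instances with a known $p(x)$; the function name `compare_limits` and the choice of $p(x)$ uniform on $[0,1]$ are illustrative assumptions. The Gibbs predictor's error converges to $E[2p(x)(1-p(x))]$, which is at most twice the Bayes optimal error $E[\min(p(x), 1-p(x))]$.

```python
import random

def compare_limits(n=100_000, seed=0):
    """Simulate the two limiting classifiers on n instances with known p(x).

    Illustrative sketch: p(x) is drawn Uniform(0, 1) for each instance,
    an assumption made here for the demo, not something from the slides.
    """
    rng = random.Random(seed)
    gibbs_errors = bayes_errors = 0
    for _ in range(n):
        p = rng.random()                      # p(x) for a random instance x
        label = 1 if rng.random() < p else 0  # true label drawn from p(x)
        gibbs = 1 if rng.random() < p else 0  # Gibbs: predict 1 with probability p(x)
        bayes = 1 if p > 0.5 else 0           # Bayes optimal: threshold p(x) at 0.5
        gibbs_errors += (gibbs != label)
        bayes_errors += (bayes != label)
    return gibbs_errors / n, bayes_errors / n

gibbs_err, bayes_err = compare_limits()
print(f"Gibbs error rate:         {gibbs_err:.3f}")  # ~ E[2p(1-p)] = 1/3 here
print(f"Bayes optimal error rate: {bayes_err:.3f}")  # ~ E[min(p, 1-p)] = 1/4 here
```

With $p(x)$ uniform, the simulation should print roughly 0.333 for Gibbs and 0.250 for Bayes optimal, consistent with the factor-of-two bound.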