Instance-Based Learning

This talk is based on Chapter 8 of Tom Mitchell's Machine Learning (see the URLs at the end).

1 Introduction

2 k-Nearest Neighbor Learning

Figure: near neighbors.
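
A minimal sketch of the basic algorithm in plain Python (the function names and toy data are illustrative, not from the talk): store all training examples, and classify a query by a majority vote among its $k$ nearest neighbors.

    import math
    from collections import Counter

    def euclidean(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def knn_classify(query, examples, k=3):
        """examples: a list of (feature_vector, label) pairs."""
        nearest = sorted(examples, key=lambda ex: euclidean(ex[0], query))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]   # majority vote among the k nearest

    # Toy usage: two clusters in the plane.
    data = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'),
            ((1.0, 1.0), 'b'), ((0.9, 1.1), 'b')]
    print(knn_classify((0.2, 0.1), data, k=3))   # -> 'a'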

2.1 When to Use k-Nearest

2.2 Voronoi Diagram

Figure: a Voronoi diagram.
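
As an illustration (assuming SciPy is available; the points are arbitrary), the diagram can be computed directly, and its cells are exactly the decision regions of 1-nearest neighbor:

    import numpy as np
    from scipy.spatial import Voronoi

    points = np.array([[0, 0], [1, 0], [0, 1], [2, 2], [1, 2]])
    vor = Voronoi(points)   # each cell = the region closest to one training point,
    print(vor.vertices)     # i.e., the decision region of 1-NN for that point
    print(vor.regions)      # vertex indices bounding each cell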

2.3 Behavior in the Limit

2.4 Distance-Weighted kNN
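
A sketch of distance-weighted kNN for a real-valued target, using the common inverse-square weighting $w_i = 1/d(x_q, x_i)^2$ (the helper names are illustrative):

    import math

    def euclidean(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def weighted_knn(query, examples, k=5):
        """examples: a list of (feature_vector, target) pairs."""
        nearest = sorted(examples, key=lambda ex: euclidean(ex[0], query))[:k]
        num = den = 0.0
        for x, f_x in nearest:
            d = euclidean(x, query)
            if d == 0:
                return f_x            # exact match: return its target directly
            w = 1.0 / d ** 2          # inverse-square distance weight
            num += w * f_x
            den += w
        return num / den              # distance-weighted mean of the targets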

2.5 Curse of Dimensionality
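
A quick numerical illustration of the problem (random uniform data; the sample sizes are arbitrary): as the dimension grows, the nearest and farthest of a fixed set of points become nearly equidistant from the query, so "nearest" carries little information.

    import math, random

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    random.seed(0)
    for dim in (2, 10, 100, 1000):
        pts = [[random.random() for _ in range(dim)] for _ in range(200)]
        q = [random.random() for _ in range(dim)]
        ds = sorted(dist(p, q) for p in pts)
        print(dim, round(ds[0] / ds[-1], 3))   # nearest/farthest ratio -> 1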

3 Notation

4 Locally Weighted Regression

Given a query instance $x_q$, there are several error criteria we could minimize when fitting the local approximation $\hat{f}$ (a sketch of the third follows this list):

  1. We can minimize the squared error over just the $k$ nearest neighbors: \[E_{1}(x_q) \equiv \frac{1}{2} \sum_{x \in \text{$k$ nearest nbrs of } x_q} (f(x) - \hat{f}(x))^2 \]
  2. We can minimize the distance-weighted squared error over the entire training set $D$, where the kernel $K$ is a decreasing function of the distance $d(x_q, x)$: \[E_{2}(x_q) \equiv \frac{1}{2} \sum_{x \in D} (f(x) - \hat{f}(x))^2 K(d(x_{q}, x)) \]
  3. Or we can combine the two: \[E_{3}(x_q) \equiv \frac{1}{2} \sum_{x \in \text{$k$ nearest nbrs of } x_q} (f(x) - \hat{f}(x))^2 K(d(x_{q}, x)) \]
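
A sketch of the third criterion in practice: fit a linear model to the $k$ nearest neighbors by weighted least squares, using a Gaussian kernel $K(d) = e^{-d^2/2\tau^2}$ (NumPy and the bandwidth $\tau$ are assumptions here, not from the talk):

    import numpy as np

    def lwr_predict(x_q, X, y, k=10, tau=0.5):
        """X: (n, m) instances; y: (n,) targets; x_q: (m,) query. Returns f_hat(x_q)."""
        d = np.linalg.norm(X - x_q, axis=1)
        idx = np.argsort(d)[:k]                          # the k nearest neighbors
        w = np.exp(-d[idx] ** 2 / (2 * tau ** 2))        # kernel weights K(d(x_q, x))
        A = np.hstack([np.ones((len(idx), 1)), X[idx]])  # intercept + attributes
        sw = np.sqrt(w)                                  # weighted least squares
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        return np.concatenate(([1.0], x_q)) @ beta       # evaluate the local fit at x_q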

4.1 Local Regression Descent
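
Taking the local approximation to be a linear unit, $\hat{f}(x) = w_0 + w_1 a_1(x) + \cdots + w_n a_n(x)$ where $a_j(x)$ is the $j$th attribute of $x$, the usual gradient-descent derivation (a standard result, not spelled out on this slide) gives the update rule \[\Delta w_j = \eta \sum_{x \in \text{$k$ nearest nbrs of } x_q} K(d(x_{q}, x)) \left(f(x) - \hat{f}(x)\right) a_j(x)\] where $\eta$ is a small learning rate.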

5 Radial Basis Functions

5.1 Radial Basis Function Networks

An RBF network is typically trained in two stages (a sketch follows this list):
  1. The number $k$ of hidden units is determined, and each hidden unit $u$ is defined by choosing the center $x_u$ and variance $\sigma_u^2$ of its kernel function $K_u(d(x_u, x))$ (e.g., with EM).
  2. The weights $w_u$ are then trained to maximize the fit of the network to the training data, using the global squared error criterion.
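
A sketch of this two-stage procedure (random center selection stands in for EM here, and the shared $\sigma$ is an assumption for brevity; stage 2 reduces to linear least squares because the output is linear in the $w_u$):

    import numpy as np

    def train_rbf(X, y, k=10, sigma=1.0, seed=0):
        """X: (n, m) instances; y: (n,) targets. Returns (centers, weights)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]   # stage 1: pick x_u
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        Phi = np.exp(-d ** 2 / (2 * sigma ** 2))                 # K_u(d(x_u, x))
        Phi = np.hstack([np.ones((len(X), 1)), Phi])             # bias unit w_0
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # stage 2: fit w_u
        return centers, w

    def rbf_predict(x, centers, w, sigma=1.0):
        d = np.linalg.norm(centers - x, axis=1)
        phi = np.concatenate(([1.0], np.exp(-d ** 2 / (2 * sigma ** 2))))
        return phi @ w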

6 Case-Based Reasoning

6.1 CADET

6.2 CADET Answers

6.3 CADET vs. k-Nearest

6.4 CBR Summary

7 Lazy and Eager Learning

URLs

  1. Machine Learning book at Amazon, http://www.amazon.com/exec/obidos/ASIN/0070428077/multiagentcom/
  2. Slides by Tom Mitchell on Machine Learning, http://www-2.cs.cmu.edu/~tom/mlbook-chapter-slides.html
  3. CADET Homepage, http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/cadet/ftp/docs/CADET.html
  4. CBR Web, http://www.cbr-web.org/

This talk available at http://jmvidal.cse.sc.edu/talks/instancelearning/
Copyright © 2009 José M. Vidal. All rights reserved.

04 March 2003, 11:13AM