Locally Weighted Regression
- kNN approximates the target function only at a single
query point $x_q$.
- Locally Weighted Regression generalizes this approach:
it constructs an explicit approximation $\hat{f}(x)$ to
$f$ over a local region surrounding $x_q$.
- Assume a linear approximation, where $a_i(x)$ denotes the $i$th attribute of instance $x$:
\[
\hat{f}(x) \equiv w_0 + w_1 a_1(x) + \cdots + w_n a_n(x)
\]
- We can minimize the squared error over the $k$ nearest neighbors:
\[
E_{1}(x_q) \equiv \frac{1}{2} \sum_{x \in \text{$k$ nearest nbrs of } x_q} (f(x) - \hat{f}(x))^2
\]
- We can minimize the distance-weighted squared error over all training instances, where $K$ is a kernel that decreases with the distance $d(x_q, x)$:
\[
E_{2}(x_q) \equiv \frac{1}{2} \sum_{x \in D} (f(x) - \hat{f}(x))^2 K(d(x_{q}, x))
\]
- Or, we can combine the two:
\[
E_{3}(x_q) \equiv \frac{1}{2} \sum_{x \in \text{$k$ nearest nbrs of } x_q} (f(x) - \hat{f}(x))^2 K(d(x_{q}, x))
\]
- Eq. 2 is more aesthetically pleasing, but Eq. 3 is a good
approximation whose cost depends only on $k$, not on the size of $D$.
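As a concrete sketch of the third criterion: minimizing $E_3$ amounts to a kernel-weighted least-squares fit over the $k$ nearest neighbors. The function names, the Gaussian choice of $K$, and the bandwidth parameter `tau` below are illustrative assumptions, not part of the slides.

```python
import numpy as np

def gaussian_kernel(d, tau):
    # K(d(x_q, x)): a decreasing function of distance; tau is an
    # assumed bandwidth parameter controlling how fast weights decay.
    return np.exp(-d**2 / (2 * tau**2))

def lwr_predict(X, y, x_q, k=10, tau=0.3):
    """Predict f(x_q) by fitting w_0 + w_1 a_1(x) + ... locally."""
    # Distances from the query point to every training instance.
    d = np.linalg.norm(X - x_q, axis=1)
    # Restrict to the k nearest neighbors (the E_3 criterion).
    idx = np.argsort(d)[:k]
    Xk, yk, dk = X[idx], y[idx], d[idx]
    # Kernel weights K(d(x_q, x)) for each neighbor.
    w = gaussian_kernel(dk, tau)
    # Design matrix with a leading column of ones for the bias w_0.
    A = np.hstack([np.ones((len(Xk), 1)), Xk])
    # Weighted least squares: scaling rows by sqrt(w) makes ordinary
    # lstsq minimize sum_i w_i (y_i - A_i beta)^2, i.e. E_3.
    s = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(A * s, yk * s.ravel(), rcond=None)
    # Evaluate the local linear model at the query point.
    return np.array([1.0, *x_q]) @ beta
```

On data that is exactly linear, the local fit recovers the target function at the query point regardless of the kernel weights, which is a quick sanity check for the implementation.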
José M. Vidal