Computational Learning Theory
This talk is based on Tom M. Mitchell, Machine Learning, McGraw Hill, 1997, Chapter 7, and on his slides.
1 Introduction
2 Probably Approximately Correct
   2.1 Concept Learning
      2.1.1 Sample Complexity *
   2.2 Problem Setting
   2.3 The Error of a Hypothesis
   2.4 PAC Learnability
3 Sample Complexity for Finite Hypothesis Spaces
   3.1 How Many Examples Are Needed to Exhaust the Version Space?
   3.2 Agnostic Learning
   3.3 Conjunctions of Boolean Literals Are PAC-Learnable
   3.4 The Unbiased Concept Class Is Not PAC-Learnable
4 Sample Complexity for Infinite H
   4.1 Shattering a Set of Instances
   4.2 The Vapnik-Chervonenkis Dimension
   4.3 VC Dimension of Linear Decision Surfaces
   4.4 Sample Complexity from VC Dimension
   4.5 VC Dimension for Neural Nets
5 Mistake Bound Model of Learning
   5.1 Mistake Bound Model: Find-S *
   5.2 Mistake Bound Model: Halving Algorithm *
   5.3 Optimal Mistake Bounds
6 Summary
Copyright © 2009 José M. Vidal. All rights reserved.
27 February 2003, 01:30PM