Computational Learning Theory
This talk is based on Tom M. Mitchell, Machine Learning, McGraw Hill, 1997, Chapter 7, and his slides.
Introduction
Probably Approximately Correct
Concept Learning
Sample Complexity
Problem Setting
The Error of a Hypothesis
PAC Learnability
Sample Complexity for Finite Hypothesis Spaces
How Many Examples Needed To Exhaust VS?
Agnostic Learning
Conjunctions of Boolean Literals are PAC-Learnable
The Unbiased Concept Class is Not PAC-Learnable
Sample Complexity for Infinite H
Shattering a Set of Instances
The Vapnik-Chervonenkis Dimension
VC Dimension of Linear Decision Surfaces
Sample Complexity from VC Dimension
VC Dimension for Neural Nets
Mistake Bound Model of Learning
Mistake Bound Model: Find-S
Mistake Bound Model: Halving Algorithm
Optimal Mistake Bounds
Summary
Copyright © 2003 José M. Vidal. All rights reserved. 01 April 2003, 11:18AM.