Computational Learning Theory

This talk is based on

  1. Introduction
  2. Probably Approximately Correct
    1. Concept Learning
      1. Sample Complexity*
    2. Problem Setting
    3. The Error of a Hypothesis
    4. PAC Learnability
  3. Sample Complexity for Finite Hypothesis Spaces
    1. How Many Examples Are Needed To Exhaust the Version Space?
    2. Agnostic Learning
    3. Conjunctions of Boolean Literals are PAC-Learnable
    4. The Unbiased Concept Class is Not PAC-Learnable
  4. Sample Complexity for Infinite H
    1. Shattering a Set of Instances
    2. The Vapnik-Chervonenkis Dimension
    3. VC Dimension of Linear Decision Surfaces
    4. Sample Complexity from VC Dimension
    5. VC Dimension for Neural Nets
  5. Mistake Bound Model of Learning
    1. Mistake Bound Model: Find-S*
    2. Mistake Bound Model: Halving Algorithm*
    3. Optimal Mistake Bounds
  6. Summary