Bayesian Learning
This talk is based on Tom M. Mitchell. Machine Learning. McGraw Hill, 1997. Chapter 6, and his slides.
1 Introduction
2 Bayes Theorem
2.1 Choosing a Hypothesis
2.2 Example
2.3 Probability Formulas
3 Brute Force Bayes Concept Learning
3.1 Relation to Concept Learning
3.2 MAP Hypothesis and Consistent Learners
4 Learning A Real-Valued Function
5 Learning To Predict Probabilities
6 Minimum Description Length Principle
7 Bayes Optimal Classifier
7.1 Bayes Optimal Classification
8 Gibbs Algorithm
9 Naive Bayes Classifier
9.1 Naive Bayes Classifier
9.2 Naive Bayes Algorithm
9.3 Naive Bayes Example
9.4 Naive Bayes Issues
10 Learning to Classify Text
10.1 Text Attributes
10.2 Learn Naive Bayes Text
10.3 Classify Naive Bayes Text
11 Bayesian Belief Networks
11.1 Conditional Independence
11.2 Bayesian Belief Network
11.3 Inference in Bayesian Networks
11.4 Learning Bayesian Networks
11.4.1 Learning Bayesian Networks
11.4.2 Gradient Ascent for Bayes Nets
11.5 The Expectation Maximization Algorithm
11.5.1 When To Use EM Algorithm
11.5.2 EM Example: Generating Data from k Gaussians
11.5.3 EM for Estimating k Means
11.5.4 EM Algorithm
11.5.5 General EM Problem
11.5.6 General EM Method
11.6 Summary of Bayes Nets
Entire Presentation with Notes
Copyright © 2009 José M. Vidal. All rights reserved.
25 February 2003, 02:16PM