Artificial Neural Networks
This talk is based on Tom M. Mitchell, Machine Learning, McGraw Hill, 1997, Chapter 4, and his slides.
1 Introduction
1.1 The Human Brain
1.2 Neural Network Representation
2 When to Use Neural Networks
2.1 ALVINN
3 Perceptrons
3.1 Representational Power of Perceptrons
3.2 Perceptron Training
3.3 Perceptron Training Rule Convergence
3.4 Gradient Descent
3.4.1 Gradient Descent Landscape
3.4.2 Calculating the Gradient Descent
3.4.3 Gradient Descent Algorithm
3.5 Perceptron Learning Summary, so far
3.6 Incremental (Stochastic) Gradient Descent
3.6.1 Stochastic versus Batch Gradient Descent
4 Multilayer Networks
4.1 Sigmoid Unit
4.2 Error Gradient for Sigmoid Unit
4.3 Backpropagation
4.3.1 Backpropagation Details
4.3.2 Hidden Layer Representation
4.3.3 8-3-8 Plots
4.3.4 Backpropagation Convergence
4.4 Representational Power of ANNs
4.5 Overfitting ANNs
5 Face Recognition Example
6 Alternative Error Functions
7 Recurrent Networks
8 Dynamically Modifying Network Structure
9 Summary
Entire Presentation with Notes
Copyright © 2009 José M. Vidal. All rights reserved.
01 February 2003, 09:41AM