Combining Inductive and Analytical Learning

This talk is based on Chapter 12 of Tom Mitchell's Machine Learning (see the references at the end).

1 Motivation

               Inductive Learning                Analytical Learning
Goal           Hypothesis fits data              Hypothesis fits domain theory
Justification  Statistical inference             Deductive inference
Advantages     Requires little prior knowledge   Learns from scarce data
Pitfalls       Scarce data, incorrect bias       Imperfect domain theory

1.1 What We Want

We want a learning method such that:
  1. Given no domain theory it should be as good as purely inductive methods.
  2. Given a perfect domain theory it should be as good as analytical methods.
  3. Given an imperfect domain theory and imperfect data, it should combine the two and do better than either the inductive or the analytical method alone.
  4. Accommodate an unknown level of error in training data.
  5. Accommodate an unknown level of error in domain theory.

2 The Learning Problem

The learning problem: Given a hypothesis space H, a set of training examples D, and a domain theory B, determine a hypothesis h from H that best fits both the training examples and the domain theory.

2.1 Best Fits?

So, what does "best fits" mean? One natural answer is to choose the hypothesis that minimizes a weighted sum of its error over the training data and its error with respect to the domain theory:

  h = argmin_{h in H}  k_D * error_D(h) + k_B * error_B(h)

where k_D and k_B set the relative importance of fitting the data versus fitting the theory. The right balance depends on how much we trust each source, which is exactly what we usually do not know in advance.
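One common reading of "best fits" is a weighted combination of the hypothesis's error over the training data and its disagreement with the domain theory. The sketch below makes that tradeoff concrete; the function and parameter names (`combined_error`, `k_d`, `k_b`) are illustrative, not a standard API.

```python
def combined_error(h, data, domain_theory, k_d=1.0, k_b=1.0):
    """Weighted combination of fit to data and fit to domain theory.

    h: hypothesis, a callable x -> prediction
    data: list of (x, y) training examples
    domain_theory: callable x -> prediction derived from prior knowledge
    k_d, k_b: relative trust in the data and in the theory (hypothetical
        names; the tradeoff they express is the point, not the exact API)
    """
    # error_D: mean squared error of h on the labeled examples
    error_d = sum((h(x) - y) ** 2 for x, y in data) / len(data)
    # error_B: mean squared disagreement between h and the domain theory
    error_b = sum((h(x) - domain_theory(x)) ** 2 for x, _ in data) / len(data)
    return k_d * error_d + k_b * error_b
```

Setting k_b = 0 recovers purely inductive learning; a very large k_b approaches purely analytical learning.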

2.2 Hypothesis Space Search

As always, we view the problem as a search over H, now constrained by both the training data and the domain theory. We have several possible approaches: use the prior knowledge to (1) derive an initial hypothesis from which to begin the search (KBANN), (2) alter the search objective (TangentProp, EBNN), or (3) alter the available search steps (FOCL).


3 KBANN

3.1 KBANN Algorithm

KBANN(domainTheory, trainingExamples)
domainTheory: a set of propositional, non-recursive Horn clauses
  1. For each instance attribute, create a network input.
  2. For each Horn clause in domainTheory, create a network unit:
    1. Connect its inputs to the attributes tested by the clause's antecedents.
    2. Each non-negated antecedent gets a weight W.
    3. Each negated antecedent gets a weight -W.
    4. Set the threshold weight to -(n - 0.5)W, where n is the number of non-negated antecedents.
  3. Add all remaining connections between layers, giving them near-zero weights.
  4. Apply Backpropagation using trainingExamples.
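Step 2 above can be sketched as follows: one sigmoid unit per Horn clause, initialized so that it approximates the clause's Boolean behavior before any training. The clause and attribute names are made up for illustration; W = 4.0 is an arbitrary "large" weight.

```python
import math

W = 4.0  # a large weight so the sigmoid approximates the Boolean clause

def make_unit(antecedents):
    """Build an initial KBANN unit for one Horn clause.

    antecedents: list of (attribute_name, negated) pairs.
    Returns (weights, threshold): weights maps attribute -> weight W or -W,
    and threshold is -(n - 0.5) * W for n non-negated antecedents.
    """
    weights = {attr: (-W if negated else W) for attr, negated in antecedents}
    n = sum(1 for _, negated in antecedents if not negated)
    threshold = -(n - 0.5) * W  # unit fires only when the clause body holds
    return weights, threshold

def unit_output(weights, threshold, example):
    """Sigmoid output of a unit for a {attribute: 0/1} example."""
    net = threshold + sum(w * example.get(a, 0) for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

# Hypothetical clause: Cup :- Holds_Liquid, not Has_Hole
weights, th = make_unit([("Holds_Liquid", False), ("Has_Hole", True)])
print(unit_output(weights, th, {"Holds_Liquid": 1, "Has_Hole": 0}))  # high
print(unit_output(weights, th, {"Holds_Liquid": 1, "Has_Hole": 1}))  # low
```

After this initialization, Backpropagation (step 4) refines the weights, so the final network can deviate from the domain theory wherever the data demands it.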

3.2 KBANN Example

3.3 KBANN Example Network


3.4 After Training


3.5 KBANN Results

3.6 Hypothesis Space Search


4 TangentProp
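TangentProp expresses prior knowledge as known derivatives of the target function (for instance, invariance of a classifier under small input translations) and adds to the usual squared error a penalty on the mismatch between the model's directional derivative and the prescribed one. A minimal sketch, with the directional derivative approximated by finite differences; all names here (`tangent_prop_loss`, `mu`, etc.) are illustrative:

```python
def tangent_prop_loss(f, df, examples, mu=0.1, eps=1e-4):
    """TangentProp-style objective: squared error plus a penalty on the
    mismatch between the model's directional derivative along a known
    transformation and the prescribed derivative (0 for a true invariance).

    f: model, callable taking a list of floats and returning a float
    df: prescribed directional derivative, callable (x, tangent) -> float
    examples: list of (x, y, tangent), where tangent is the direction in
        input space generated by the transformation
    mu: relative weight of the derivative penalty
    """
    loss = 0.0
    for x, y, t in examples:
        loss += (f(x) - y) ** 2
        # finite-difference estimate of the directional derivative of f at x
        x_plus = [xi + eps * ti for xi, ti in zip(x, t)]
        dir_deriv = (f(x_plus) - f(x)) / eps
        loss += mu * (dir_deriv - df(x, t)) ** 2
    return loss
```

With mu = 0 this reduces to ordinary squared-error training; larger mu forces the model to respect the known derivatives more strictly.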

4.1 TangentProp Example

4.2 TangentProp Search


5 EBNN

5.1 EBNN Example


5.2 EBNN Summary


6 FOCL

6.1 FOCL Example


6.2 FOCL Search



References

  1. Machine Learning, the textbook by Tom Mitchell
  2. Slides by Tom Mitchell on Machine Learning
  3. KBANN paper
  4. TangentProp paper
  5. EBNN paper
  6. FOCL paper (citeseer:pazzani92utility)

This talk available at
Copyright © 2009 José M. Vidal . All rights reserved.

17 April 2003, 12:26PM