The growth in the number of required training examples
with problem size is called the sample complexity
of the learning problem.
We will consider only consistent learners,
i.e., learners that always output a hypothesis with zero error on the training examples.
We can derive a bound on the number of training examples
required by any consistent learner!
Fact: Every consistent learner outputs a hypothesis
belonging to the version space.
Therefore, it suffices to bound the number of examples
needed to ensure that the version space contains no unacceptable
hypotheses.
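To see this fact in miniature, here is a minimal Python sketch (the threshold hypothesis class, the target concept, and the training sample are all illustrative assumptions, not taken from the text): it enumerates the hypotheses consistent with the data, and any consistent learner run on this data must output one of them.
\begin{verbatim}
# Toy setup (assumed for illustration): hypotheses are thresholds on 0..9,
# h_t(x) = 1 iff x >= t, so the class H is finite and can be enumerated.
def make_hypothesis(t):
    return lambda x: int(x >= t)

H = {t: make_hypothesis(t) for t in range(11)}   # |H| = 11
c = make_hypothesis(4)                           # assumed target concept

# Assumed training sample D, labeled by the target concept.
D = [(x, c(x)) for x in (1, 3, 5, 8)]

# Version space VS_{H,D}: hypotheses consistent with every example in D.
VS = {t for t, h in H.items() if all(h(x) == y for x, y in D)}
print(VS)   # {4, 5}: a consistent learner must return one of these
\end{verbatim}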
The version space $VS_{H,D}$ is said to be
$\epsilon$-exhausted with respect to $c$ and
$\cal{D}$ if every hypothesis $h$ in $VS_{H,D}$ has error
less than $\epsilon$ with respect to $c$ and $\cal{D}$:
\[(\forall h \in VS_{H,D})\quad error_{\cal{D}}(h) < \epsilon \]
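Under the same kind of toy assumptions, the definition can be checked directly when the instance space is finite: the sketch below (self-contained; the hypothesis class, target, training sample, and a uniform choice of the distribution $\cal{D}$ are all assumptions made for illustration) computes $error_{\cal{D}}(h)$ for every hypothesis in the version space and tests whether all of them fall below $\epsilon$.
\begin{verbatim}
# Assumed toy setup, repeated so this snippet runs on its own:
# thresholds h_t(x) = 1 iff x >= t on instances 0..9, target c = h_4,
# a fixed training sample D, and the distribution (the calligraphic D
# of the text) taken to be uniform over the instances.
instances = range(10)
H = {t: (lambda x, t=t: int(x >= t)) for t in range(11)}
c = H[4]
D = [(x, c(x)) for x in (1, 3, 5, 8)]
VS = {t for t, h in H.items() if all(h(x) == y for x, y in D)}

def true_error(h):
    # error of h w.r.t. c under the assumed uniform distribution
    return sum(h(x) != c(x) for x in instances) / len(instances)

epsilon = 0.2
print(all(true_error(H[t]) < epsilon for t in VS))
# True: every hypothesis in VS has error < 0.2, so VS is eps-exhausted
\end{verbatim}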