If the hypothesis space $H$ is finite, and $D$ is a sequence of $m \geq 1$
independent random examples of some target concept $c$, then for any $0 \leq
\epsilon \leq 1$, the probability that the version space with respect to $H$
and $D$ is not $\epsilon$-exhausted (with respect to $c$) is less than or equal to
\[|H| e^{-\epsilon m} \]
...see book for proof.
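A brief sketch of the standard union-bound argument behind this bound: a single
hypothesis with $error(h) \geq \epsilon$ agrees with the target on one random
example with probability at most $1 - \epsilon$, so it remains consistent with
all $m$ independent examples with probability at most $(1 - \epsilon)^m$. There
are at most $|H|$ such hypotheses, so the probability that any of them survives
in the version space is at most
\[|H|(1 - \epsilon)^m \leq |H| e^{-\epsilon m}, \]
using the inequality $1 - \epsilon \leq e^{-\epsilon}$.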
This bounds the probability that any consistent learner
will output a hypothesis $h$ with $error(h) \geq \epsilon$. If
we want this probability to be below $\delta$,
\[|H|e^{- \epsilon m} \leq \delta \]
then taking logarithms of both sides and solving for $m$ gives
\[m \geq \frac{1}{\epsilon}(\ln|H| + \ln(\frac{1}{\delta})) \]
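As an illustration with made-up numbers (not from the text): for $|H| = 1000$,
$\epsilon = 0.1$, and $\delta = 0.05$,
\[m \geq \frac{1}{0.1}\left(\ln 1000 + \ln 20\right) \approx 10\,(6.91 + 3.00) \approx 99, \]
so $m = 100$ training examples suffice.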
So, any consistent learner can PAC-learn any
target concept in $H$ using a number of training examples $m$ that satisfies
this bound, where PAC-learn means that the output hypothesis is probably
($1 - \delta$) approximately (within error $\epsilon$) correct.
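In symbols, this guarantee is just a restatement of the bound above:
\[\Pr[\, error(h) \geq \epsilon \,] \leq \delta, \]
i.e., with probability at least $1 - \delta$ over the random draw of the $m$
training examples, the output hypothesis has error less than $\epsilon$.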