A learner is called weak if it frequently induces models whose performance is only slightly better than random guessing. Boosting is based on the observation that
finding many rough rules of thumb (i.e., weak learning) can be a lot easier than finding a single, highly accurate prediction rule (i.e., strong learning).
The boosting process then exploits the fact that a weak learner can be made strong by repeatedly running it on various distributions D_i over the training data T (i.e., varying the focus of the learner), and then combining the resulting weak classifiers into a single composite classifier, as illustrated below:
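As a concrete instance of this reweight-and-combine loop, the following is a minimal sketch of AdaBoost with one-dimensional decision stumps as the weak learner. The helper names (`train_stump`, `adaboost`, `predict`) are illustrative, not from the text; each round trains a stump on the current distribution D_i, then shifts weight toward the examples that stump misclassified.

```python
import math

def train_stump(X, y, w):
    """Weak learner: pick the threshold/polarity stump with lowest weighted error."""
    best = None
    for thr in sorted(set(X)):
        for pol in (1, -1):
            preds = [pol if x >= thr else -pol for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                # D_1: uniform distribution over T
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)        # avoid log(0) / division by zero
        if err >= 0.5:               # no longer better than random: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)   # weight of this weak classifier
        preds = [pol if x >= thr else -pol for x in X]
        # Build D_{i+1}: upweight misclassified points, downweight correct ones
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        z = sum(w)
        w = [wi / z for wi in w]     # renormalize to a distribution
        ensemble.append((alpha, thr, pol))
    return ensemble

def predict(ensemble, x):
    """Composite classifier: sign of the alpha-weighted vote of the stumps."""
    s = sum(alpha * (pol if x >= thr else -pol) for alpha, thr, pol in ensemble)
    return 1 if s >= 0 else -1
```

On a toy set such as X = [0, 1, 2, 3, 4, 5] with labels [+1, +1, -1, -1, +1, +1], no single stump classifies every point, but three boosted stumps already fit the data exactly, which is the sense in which the weak learner is "made strong."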