What is Ensemble Machine Learning? — using stories and pictures
--
“Storytelling is the most powerful way to put ideas into the world.” — Robert McKee
In this article, I will try to explain the concepts of ensemble machine learning using small stories.
Lately, I haven’t come across a single Kaggle competition-winning solution that doesn’t use ensemble machine learning. So it is worth understanding the basic concepts of ensemble machine learning through a few examples.
Ensemble machine learning
Suppose you want to buy a house. To decide whether this is the perfect house for you, you will ask your friends who have bought a house, real-estate brokers, neighbors, colleagues, and your parents. You will give a weight to each of their answers and try to arrive at a final answer to your question. This is exactly ensemble learning.
Ensemble machine learning is the art of creating a model by combining different categories of learners to obtain better predictions and stability.
Naive ensemble machine learning techniques are:
- Max voting — Continuing the previous example, suppose you asked 10 people about the house and 7 of them told you not to buy it. Based on max voting, your answer is not to buy the house.
- Averaging — If each of these people instead gives you a probability that you should buy the house (e.g., your parents say this house is 70% suitable for you), you take the average of all these probabilities and decide based on that.
- Weighted averaging — Suppose you trust your parents and close friends more than anyone else. You give higher weights (say 60% in total) to the probabilities given by these people and lower weights (40%) to the others, then take the weighted average as the final probability. A small numeric sketch of all three techniques follows this list.
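To make these three techniques concrete, here is a minimal sketch in Python. The ten advisors, their votes, their probabilities, and the 60/40 trust split are made-up numbers taken from the story above, not real data.

```python
import numpy as np

# Ten advisors: 1 = "buy the house", 0 = "don't buy" (made-up votes)
votes = np.array([0, 1, 0, 0, 1, 0, 1, 0, 0, 0])

# Max voting: go with the majority answer -- 7 of 10 say "don't buy"
majority = np.bincount(votes).argmax()
print("Max voting:", "buy" if majority == 1 else "don't buy")

# Averaging: each advisor gives a probability that the house suits you
# (the parents' 70% is from the story; the other numbers are made up)
probs = np.array([0.70, 0.40, 0.55, 0.30, 0.65, 0.45, 0.80, 0.35, 0.50, 0.40])
print("Averaging:", probs.mean())

# Weighted averaging: the first three advisors (parents and close friends)
# share 60% of the total weight; the other seven share the remaining 40%
weights = np.array([0.60 / 3] * 3 + [0.40 / 7] * 7)
print("Weighted averaging:", np.average(probs, weights=weights))
```

You would then compare the averaged probability against your own threshold (say 50%) to make the final buy/don't-buy decision.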
Advanced ensemble machine learning techniques are:
- Bagging — Also known as bootstrap aggregating; a minimal sketch follows below.
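As a rough illustration, here is a minimal bagging sketch, assuming scikit-learn is available. Bagging trains many copies of the same base learner on bootstrap samples (random samples drawn with replacement) of the training data and aggregates their predictions. The synthetic dataset, the choice of decision trees as the base learner, and the 50-estimator setting are all illustrative assumptions, not part of the story above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy binary-classification data (a stand-in for any real dataset)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 50 decision trees, each fit on a bootstrap sample (drawn with
# replacement) of the training set; their predictions are aggregated
bagging = BaggingClassifier(
    DecisionTreeClassifier(),  # base learner
    n_estimators=50,
    bootstrap=True,
    random_state=42,
)
bagging.fit(X_train, y_train)
print("Bagging test accuracy:", bagging.score(X_test, y_test))
```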