Decision Trees - Optimization

Overfitting is one of the key challenges faced while modeling decision trees. If no limit is set on a decision tree's size, it will give you 100% accuracy on the training set because, in the worst case, it will end up making one leaf for each observation. Thus, preventing overfitting is pivotal while modeling a decision tree, and it can be done in two ways:

  • Setting constraints on tree size
  • Tree pruning

Source: Analytics Vidhya

Setting Constraints on Tree Size

This can be done by using the various parameters that define a tree. First, let's look at the general structure of a decision tree:

[Figure: general structure of a decision tree]

The parameters used to define a tree are explained below. They are tool-agnostic: it is important to understand their role in tree modeling, and equivalents are available in both R & Python.

Minimum samples for a node split

  • Defines the minimum number of samples (or observations) which are required in a node to be considered for splitting.
  • Used to control over-fitting. Higher values prevent a model from learning relations which might be highly specific to the particular sample selected for a tree.
  • Values that are too high can lead to under-fitting; hence, this parameter should be tuned using CV (see the sketch below).
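
A minimal sketch of tuning this parameter with CV, assuming scikit-learn's DecisionTreeClassifier (where it is exposed as min_samples_split) and synthetic data from make_classification; the grid values are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data for illustration only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Too low risks over-fitting, too high risks under-fitting, so let CV decide.
param_grid = {"min_samples_split": [2, 10, 50, 100, 200]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```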

Minimum samples for a terminal node (leaf)

  • Defines the minimum samples (or observations) required in a terminal node or leaf.
  • Used to control over-fitting similar to min_samples_split.
  • Generally, lower values should be chosen for imbalanced class problems, because the regions in which the minority class is in the majority will be very small (see the sketch below).
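
To illustrate the imbalanced-class point, a small sketch assuming scikit-learn and a synthetic 95/5 class split (the specific leaf-size values are hypothetical):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Imbalanced problem: roughly a 95% / 5% class split.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

strict = DecisionTreeClassifier(min_samples_leaf=100, random_state=0).fit(X, y)
lenient = DecisionTreeClassifier(min_samples_leaf=5, random_state=0).fit(X, y)

# The stricter leaf size tends to wash out the small minority-class regions.
print(strict.get_n_leaves(), lenient.get_n_leaves())
```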

Maximum depth of tree (vertical depth)

  • The maximum depth of a tree.
  • Used to control over-fitting, as higher depth allows the model to learn relations very specific to a particular sample.
  • Should be tuned using CV (see the sketch below).
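
A minimal sketch of picking max_depth by cross-validation, again assuming scikit-learn and synthetic data (the candidate depths are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for depth in [2, 4, 6, 8, None]:          # None = grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    score = cross_val_score(tree, X, y, cv=5).mean()
    print(depth, round(score, 3))
```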

Maximum number of terminal nodes

  • The maximum number of terminal nodes or leaves in a tree.
  • Can be defined in place of max_depth. Since binary trees are created, a depth of 'n' would produce a maximum of 2^n leaves (see the sketch below).
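
A small sketch of the depth / leaf-count relationship, assuming scikit-learn's max_leaf_nodes parameter and synthetic data: a binary tree of depth n has at most 2^n leaves, so capping the leaf count can stand in for capping the depth.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Leaf count is capped at 2**4 = 16; depth must be at least log2(leaf count).
tree = DecisionTreeClassifier(max_leaf_nodes=2 ** 4, random_state=0).fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```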

Maximum features to consider for split

  • The number of features to consider while searching for the best split. These will be randomly selected.
  • As a rule of thumb, the square root of the total number of features works well, but it is worth checking up to 30-40% of the total number of features.
  • Higher values can lead to over-fitting, but this varies from case to case (see the sketch below).
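
A minimal sketch of the square-root rule, assuming scikit-learn's max_features parameter and synthetic data with 25 features (the fraction in the comment is the alternative mentioned above, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=25, random_state=0)

# "sqrt" considers 5 of the 25 features at each split; an explicit
# fraction (e.g. 0.3-0.4 of the features) can be passed instead.
tree = DecisionTreeClassifier(max_features="sqrt", random_state=0).fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```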

Tree Pruning

As discussed earlier, setting constraints is a greedy approach: the tree checks for the best split immediately available and moves forward until one of the specified stopping conditions is reached.

Tree Pruning:

  • We first grow the decision tree to a large depth.
  • Then we start at the bottom and remove splits that give us negative returns when seen from the top. Suppose a split gives us a gain of -10 (a loss of 10) and the next split on that node gives a gain of 20. A simple decision tree would stop at step 1, but with pruning we see that the overall gain is +10 and keep both splits (as sketched below). Note that, at the time of writing, sklearn's decision tree classifier does not support pruning; advanced packages like xgboost have adopted tree pruning in their implementation.
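
A toy sketch of that bottom-up decision in plain Python, using hypothetical split gains (this is the pruning idea in miniature, not any library's implementation): keep a subtree if its own gain plus what its children keep is positive, otherwise cut it.

```python
def prune(node):
    """node = (gain, children); returns the total gain kept, or 0 if pruned."""
    gain, children = node
    kept_below = sum(prune(child) for child in children)
    total = gain + kept_below
    return total if total > 0 else 0

# A split that loses 10 followed by one that gains 20: greedy growth would
# stop before the first split, pruning keeps both (net gain of +10).
subtree = (-10, [(20, [])])
print(prune(subtree))   # 10
```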

Are tree based models better than linear models?

"If I can use logistic regression for classification problems and linear regression for regression problems, why is there a need to use trees?" Many of us have this question. And it is a valid one too.

Actually, you can use any algorithm. It is dependent on the type of problem you are solving. Let’s look at some key factors which will help you to decide which algorithm to use:

  • If the relationship between the dependent & independent variables is well approximated by a linear model, linear regression will outperform a tree-based model.
  • If there is high non-linearity & a complex relationship between the dependent & independent variables, a tree model will outperform a classical regression method (see the sketch after this list).
  • If you need to build a model which is easy to explain to people, a decision tree model will always do better than a linear model. Decision tree models are even simpler to interpret than linear regression!
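
A minimal sketch of the first two points, assuming scikit-learn and two synthetic targets (one truly linear, one strongly non-linear); the exact scores depend on the data, so treat the output as illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))

linear_y = 2 * X[:, 0] + rng.normal(scale=0.1, size=500)
nonlinear_y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=500)

# Linear regression tends to win on the linear target; the tree tends to
# win on the non-linear one (scores are cross-validated R^2).
for name, y in [("linear target", linear_y), ("non-linear target", nonlinear_y)]:
    for model in (LinearRegression(),
                  DecisionTreeRegressor(max_depth=5, random_state=0)):
        score = cross_val_score(model, X, y, cv=5).mean()
        print(name, type(model).__name__, round(score, 3))
```
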
Written on January 5, 2018