Thursday, June 22, 2017

The Success of Decision Trees

The success of decision trees is explained by several factors that make them quite attractive in practice:
- Decision trees are non-parametric: they can model arbitrarily complex relations between inputs and outputs without any a priori assumption;
- Decision trees handle heterogeneous data (ordered or categorical variables, or a mix of both);
- Decision trees intrinsically implement feature selection, which makes them robust to irrelevant or noisy variables (at least to some extent);
- Decision trees are robust to outliers and to errors in the labels;
- Decision trees are easily interpretable, even for non-statistically oriented users (a short sketch illustrating this point follows the list).
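To make the feature-selection and interpretability points concrete, here is a minimal sketch using scikit-learn and the Iris dataset (both are my choice of illustration, not something prescribed by the post): the fitted tree reports per-feature importances, and its splits can be printed as human-readable rules.

```python
# Minimal sketch (assuming scikit-learn is installed) of two properties listed above:
# intrinsic feature selection via feature importances, and interpretability via rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# Features that are never used in a split receive an importance of 0,
# which is how irrelevant variables end up being ignored.
for name, importance in zip(iris.feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")

# The learned tree printed as a set of readable if/then rules.
print(export_text(clf, feature_names=iris.feature_names))
```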