Decision Tree

Binary classification using recursive binary splitting, with entropy or Gini impurity as the split criterion

Plot legend: points are labeled Class 0 / Class 1; shaded areas show the predicted Region 0 / Region 1.

Tree Parameters

  • Max depth: 1 (Simple) to 8 (Complex)
  • Min samples per split: 2 (Precise) to 20 (General)

About Decision Trees

Time Complexity:

  • Training: O(n × m × log n), where n is the number of samples and m the number of features
  • Prediction: O(log n) for a reasonably balanced tree (proportional to tree depth)

Space Complexity: O(n)

Interpretable: Yes

How It Works

Decision Trees recursively partition the feature space by choosing the best split at each node based on information gain or Gini impurity reduction.
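
As a rough illustration of that greedy split search, here is a minimal Python sketch; the function names, the use of NumPy, and Gini as the criterion are assumptions for this example, not a description of the demo's internals.

```python
# Minimal sketch of greedy split selection, assuming a NumPy feature
# matrix X (n samples x m features) and binary labels y in {0, 1}.
import numpy as np

def gini(y):
    """Gini impurity of a label array: 1 - sum(p_k^2)."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Scan every feature/threshold pair and return the split
    with the largest weighted impurity reduction."""
    n, m = X.shape
    parent = gini(y)
    best = (None, None, 0.0)  # (feature, threshold, gain)
    for j in range(m):
        for t in np.unique(X[:, j]):
            left = y[X[:, j] <= t]
            right = y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue  # degenerate split, skip
            # Weighted average impurity of the two children
            child = (len(left) * gini(left) + len(right) * gini(right)) / n
            gain = parent - child
            if gain > best[2]:
                best = (j, t, gain)
    return best
```

A real tree builder applies this search recursively to each child node until a stopping rule (max depth, minimum samples, or a pure node) is hit.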

Gini Impurity: Measures how often a randomly chosen element would be incorrectly labeled if it were labeled randomly according to the node's class distribution. Lower is better (0 means a pure node).

Entropy: Measures the uncertainty (randomness) in the labels. Splits are chosen to maximize information gain, i.e., the reduction in entropy.
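
To make the two criteria concrete, here is a minimal sketch of both measures for binary labels; the example array is an assumption for illustration, not data from the demo.

```python
# Both impurity measures for binary labels encoded as 0/1.
import numpy as np

def gini(y):
    """Gini impurity: 1 - sum(p_k^2). 0 = pure, 0.5 = worst (binary)."""
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - np.sum(p ** 2)

def entropy(y):
    """Shannon entropy: -sum(p_k * log2(p_k)). 0 = pure, 1 = worst (binary)."""
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]  # avoid log2(0)
    return -np.sum(p * np.log2(p))

labels = np.array([0, 0, 1, 1])  # perfectly mixed node
print(gini(labels))     # 0.5
print(entropy(labels))  # 1.0
```

In practice the two criteria usually produce very similar trees; Gini is marginally cheaper to compute because it avoids the logarithm.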

💡 Tip: Try different max depths to see how tree complexity affects decision boundaries!
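
To reproduce the effect outside the demo, here is a short scikit-learn sketch (the demo itself may use a different implementation; the dataset and parameter values below are arbitrary choices for illustration):

```python
# Compare shallow vs. deep trees on a toy two-class dataset.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 3, 8):
    clf = DecisionTreeClassifier(max_depth=depth, min_samples_split=2,
                                 criterion="gini", random_state=0)
    clf.fit(X_train, y_train)
    print(f"max_depth={depth}: train={clf.score(X_train, y_train):.2f}, "
          f"test={clf.score(X_test, y_test):.2f}")
```

A depth-1 tree (a "decision stump") underfits with a single axis-aligned boundary, while a deep tree carves out increasingly intricate regions and can overfit the training points.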