Decision Trees recursively partition the feature space by choosing the best split at each node based on information gain or Gini impurity reduction.
Gini Impurity: The probability that a randomly chosen element from a node would be mislabeled if it were labeled randomly according to the node's class distribution. Lower is better; 0 means the node is pure.
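As a minimal sketch, Gini impurity for a node with class proportions p_i is 1 − Σ p_i². The helper name `gini_impurity` below is illustrative, not from any particular library:

```python
import numpy as np

def gini_impurity(labels: np.ndarray) -> float:
    """Gini impurity: 1 - sum(p_i^2) over the node's class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity(np.array([0, 0, 1, 1])))  # 0.5 -> maximally mixed two-class node
print(gini_impurity(np.array([0, 0, 0, 0])))  # 0.0 -> pure node
```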
Entropy: Measures the uncertainty in a node's label distribution. A split's information gain is the parent's entropy minus the size-weighted entropy of its children; the tree chooses the split that maximizes this gain.
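To make that concrete, here is a short sketch of entropy, H = −Σ p_i log₂ p_i, and of information gain for a binary split (the function names are assumptions for illustration):

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """Shannon entropy: -sum(p_i * log2(p_i)) over class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent: np.ndarray, left: np.ndarray, right: np.ndarray) -> float:
    """Parent entropy minus the size-weighted average entropy of the two children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

parent = np.array([0, 0, 1, 1])
print(information_gain(parent, parent[:2], parent[2:]))  # 1.0 -> a perfect split
```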
💡 Tip: Try different max depths to see how tree complexity affects decision boundaries!
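One way to try this, assuming scikit-learn is the library in use here, is to sweep `max_depth` on a simple synthetic dataset and compare train versus test accuracy; the dataset and parameter values are just illustrative:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Deeper trees carve more complex decision boundaries but risk overfitting the noise:
# watch the train score climb while the test score stalls or drops.
for depth in (1, 3, 5, 10, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=42)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```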