Random Forest combines multiple decision trees, each trained on a different random subset of the data (bootstrap sampling). Each tree casts a vote, and the majority class becomes the final prediction.
Bagging (bootstrap aggregating): Each tree is trained on a random sample drawn with replacement from the training set, so no two trees see exactly the same data.
Diversity: Measures how much the trees disagree with one another. Higher diversity often improves ensemble accuracy, because trees that make uncorrelated errors tend to cancel each other out when their votes are aggregated.
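The sketch below shows these ideas with scikit-learn, which exposes the number of trees as `n_estimators` and the bootstrap sample ratio as `max_samples` (the latter requires scikit-learn 0.22 or newer). The synthetic dataset and parameter values are illustrative assumptions, not settings taken from this demo.

```python
# A minimal bagging sketch with scikit-learn; the dataset and
# hyperparameter values below are illustrative, not tuned.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=50,   # number of trees that will vote
    max_samples=0.8,   # each tree trains on a bootstrap sample of 80% of rows
    bootstrap=True,    # sample with replacement (bagging)
    random_state=0,
)
forest.fit(X_train, y_train)

# Accuracy of the aggregated prediction across all trees
print(f"Test accuracy: {forest.score(X_test, y_test):.3f}")
```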
💡 Tip: Try adjusting the number of trees and sample ratio to see how ensemble diversity affects accuracy!
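Continuing from the fitted forest above, one simple way to put a number on diversity is the average pairwise disagreement between trees' predictions on held-out data. This is just one common measure, assumed here for illustration; it is not necessarily the metric this demo reports.

```python
# Quantify diversity as the mean pairwise disagreement rate between
# trees; reuses `forest` and `X_test` from the previous sketch.
import numpy as np
from itertools import combinations

# Each fitted tree is exposed via forest.estimators_; the trees share
# the forest's internal class encoding, so predictions are comparable.
preds = np.array([tree.predict(X_test) for tree in forest.estimators_])

disagreement = np.mean([
    np.mean(preds[i] != preds[j])
    for i, j in combinations(range(len(preds)), 2)
])
print(f"Mean pairwise disagreement: {disagreement:.3f}")
```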