Random Forests
Review

Nice work! Here are some of the major takeaways about random forests:

  • A random forest is an ensemble machine learning model. It makes a classification by aggregating the classifications of many decision trees (see the scikit-learn sketch after this list).
  • Random forests are used to avoid overfitting. Because the final classification aggregates the votes of many trees, a few overfitted trees have little impact on the overall result.
  • Every decision tree in a random forest is created by using a different subset of data points from the training set. Those data points are chosen at random with replacement, which means a single data point can be chosen more than once. This process is known as bagging (a minimal sketch follows this list).
  • When creating a tree in a random forest, a randomly selected subset of features is considered as candidates for the best splitting feature. If your dataset has n features, it is common practice to randomly select the square root of n features (see the feature-selection sketch below).
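
To make bagging concrete, here is a minimal sketch in NumPy. The arrays X and y are made up purely for illustration; they are not data from the lesson:

```python
import numpy as np

# A tiny, made-up training set: 6 data points with 2 features each.
X = np.array([[2.0, 1.5], [3.1, 0.7], [1.2, 2.2],
              [0.5, 1.9], [2.8, 0.3], [1.7, 1.1]])
y = np.array([0, 1, 0, 0, 1, 1])

rng = np.random.default_rng(42)

# Bagging: draw a bootstrap sample the same size as the training set,
# with replacement, so the same data point can be picked more than once.
indices = rng.choice(len(X), size=len(X), replace=True)
X_bagged, y_bagged = X[indices], y[indices]

print(indices)  # some indices will typically repeat
```

Each tree in the forest would be trained on its own bootstrap sample like this one, which is what makes the trees differ from each other.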
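In the same spirit, here is a sketch of the feature-selection step, assuming a hypothetical dataset with 9 features (chosen so the square root works out to a whole number):

```python
import numpy as np

n_features = 9  # hypothetical dataset with 9 features
rng = np.random.default_rng(0)

# At each split, only a random subset of sqrt(n) features is considered
# as candidates for the best splitting feature.
n_candidates = int(np.sqrt(n_features))  # sqrt(9) = 3
candidate_features = rng.choice(n_features, size=n_candidates, replace=False)

print(candidate_features)  # three feature indices, chosen without replacement
```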
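Finally, scikit-learn's RandomForestClassifier combines both ideas for you. The dataset below is synthetic, generated only so the example runs end to end:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic dataset stands in for real training data here.
X, y = make_classification(n_samples=200, n_features=9, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,     # number of bagged decision trees to aggregate
    max_features="sqrt",  # random subset of features considered at each split
    random_state=0,
)
forest.fit(X_train, y_train)

# Each prediction is the majority vote of the individual trees.
print(forest.score(X_test, y_test))
```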