in Machine Learning by (33.1k points)

I'd like to ask how correlated features (variables) affect the classification accuracy of machine learning algorithms. By correlated features I mean features that are correlated with each other, not with the target class (e.g., the perimeter and the area of a geometric figure, or level of education and average income). In my opinion, correlated features negatively affect the accuracy of a classification algorithm, because the correlation makes one of them redundant. Is that really the case? Does the answer depend on the type of classification algorithm? Any suggestions for papers and lectures are very welcome. Thanks!

1 Answer

by (33.1k points)

The xgb.cv function only performs k-fold cross-validation. I think you have misunderstood this function: it evaluates the parameters you pass in, but it does not change the value of any parameter.
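To illustrate, here is a minimal sketch of how xgb.cv is typically called; `X` (a numeric feature matrix) and `y` (a 0/1 label vector) are placeholders, not names from the original post. Note that the parameter list goes in fixed and comes back unchanged; only an evaluation log is produced.

```r
library(xgboost)

# Fixed parameters: xgb.cv evaluates these, it does not tune them
params <- list(
  objective = "binary:logistic",
  eta = 0.1,        # learning rate, chosen by hand
  max_depth = 4
)

cv <- xgb.cv(
  params = params,
  data = xgb.DMatrix(X, label = y),  # X, y are placeholder inputs
  nrounds = 100,
  nfold = 5,                         # 5-fold cross-validation
  early_stopping_rounds = 10,
  verbose = FALSE
)

# The result is a cross-validation log; `params` itself is untouched
print(cv$evaluation_log)
```

The log can tell you how good a fixed parameter set is, which is useful inside a tuning loop, but the loop itself has to come from somewhere else.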

If you want to find the best parameters for XGBoost in R, there are two main approaches:

  1. Use the mlr package: mlr wraps XGBoost as a learner and can search for the optimal hyperparameters of your model automatically.

  2. Tune the parameters manually: try different parameter settings for your model and keep the combination that gives the best accuracy. This approach is more time-consuming, but it sometimes finds a good configuration quickly.
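Approach 1 can be sketched as follows with the mlr package. This is a hedged sketch, not the original poster's code: `df` (a data frame with a factor column named "label") is a placeholder, and the search space shown covers just two common XGBoost hyperparameters.

```r
library(mlr)

# Define the task and the XGBoost learner (df and "label" are placeholders)
task <- makeClassifTask(data = df, target = "label")
learner <- makeLearner("classif.xgboost", predict.type = "prob")

# Search space over two common hyperparameters
ps <- makeParamSet(
  makeIntegerParam("max_depth", lower = 2, upper = 8),
  makeNumericParam("eta", lower = 0.01, upper = 0.3)
)

ctrl <- makeTuneControlRandom(maxit = 20)   # try 20 random configurations
rdesc <- makeResampleDesc("CV", iters = 5)  # 5-fold CV for each configuration

tuned <- tuneParams(learner, task,
                    resampling = rdesc,
                    par.set = ps,
                    control = ctrl)

print(tuned$x)  # the best hyperparameter combination found
```

Random search is used here because it is cheap to set up; mlr also offers grid search and other tuning controls if you want an exhaustive sweep.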

Hope this answer helps.

If you wish to learn more about R programming, visit this R programming Tutorial.
