
Before scikit-learn 0.20, we could use result.grid_scores_[result.best_index_] to get the standard deviation of the best score. (It returned, for example: mean: 0.76172, std: 0.05225, params: {'n_neighbors': 21}.)

What's the best way in scikit-learn 0.20 to get the standard deviation of the best score?

1 Answer


grid_scores_ was renamed to cv_results_ in scikit-learn 0.20.

So, according to the documentation, what you need is:

best_index_ : int

  The index (of the cv_results_ arrays) which corresponds to the best
  candidate parameter setting.

  The dict at search.cv_results_['params'][search.best_index_] gives the
  parameter setting for the best model, that gives the highest mean
  score (search.best_score_).

So in your case, you should use:

Best params: result.cv_results_['params'][result.best_index_] or result.best_params_

Best mean score: result.cv_results_['mean_test_score'][result.best_index_] or result.best_score_

Best std: result.cv_results_['std_test_score'][result.best_index_]
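
For illustration, here is a minimal runnable sketch. It assumes a KNeighborsClassifier tuned over n_neighbors (to match the example output above) and the iris dataset; neither comes from your original code, so swap in your own estimator and data:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Assumed setup: iris data and a k-NN grid, purely for illustration
X, y = load_iris(return_X_y=True)
result = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': [5, 11, 21]}, cv=5)
result.fit(X, y)

# Best parameter setting and its mean/std test score across the CV folds
print(result.best_params_)                                        # e.g. {'n_neighbors': 21}
print(result.cv_results_['mean_test_score'][result.best_index_])  # same value as result.best_score_
print(result.cv_results_['std_test_score'][result.best_index_])   # std of the best score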

