
Before scikit-learn 0.20 we could use result.grid_scores_[result.best_index_] to get the standard deviation. (It returned, for example: mean: 0.76172, std: 0.05225, params: {'n_neighbors': 21}.)

What's the best way in scikit-learn 0.20 to get the standard deviation of the best score?

1 Answer


grid_scores_ was renamed to cv_results_ in scikit-learn 0.18 and later.

So, according to the documentation, what you need is:

best_index_ : int

  The index (of the cv_results_ arrays) which corresponds to the best candidate parameter setting.

  The dict at search.cv_results_['params'][search.best_index_] gives the parameter setting for the best model, that gives the highest mean score (search.best_score_).

So in your case, you should use:

Best params: result.cv_results_['params'][result.best_index_] or result.best_params_

Best mean score: result.cv_results_['mean_test_score'][result.best_index_] or result.best_score_

Best std: result.cv_results_['std_test_score'][result.best_index_]
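Putting it together, here is a minimal runnable sketch. The dataset (iris), the estimator (KNeighborsClassifier), and the n_neighbors grid are just illustrative choices, not from your original code:

```python
# Illustrative example: grid search over n_neighbors, then read the best
# parameters, the best mean test score, and its standard deviation
# from cv_results_ using best_index_.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
result = GridSearchCV(KNeighborsClassifier(),
                      param_grid={'n_neighbors': [1, 5, 11, 21]},
                      cv=5).fit(X, y)

best = result.best_index_
print(result.cv_results_['params'][best])           # best parameter setting
print(result.cv_results_['mean_test_score'][best])  # same as result.best_score_
print(result.cv_results_['std_test_score'][best])   # std of the best score
```

Note that mean_test_score and std_test_score are computed across the cross-validation folds, so the std here is the fold-to-fold spread of the best candidate's test score.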

