The sklearn package has many metrics for evaluating classification quality.

Can a specific task require a metric that is not in the package?

Is this situation handled by the package itself?

Closed because the question is too general by the participants Wiktor Stribiżew, aleksandr barakin, 0xdb, AK, user192664, 6 Oct '18 at 4:47.

Please edit the question so that it describes a specific problem in enough detail to determine an appropriate answer. Do not ask several questions at once. See "How to ask a good question?" for guidance. If the question can be reworded to follow the rules set out in the Help Center, edit it.

    1 answer

    The question is formulated in such a way that it is difficult to give an unequivocal answer.

    On the one hand, the scikit-learn library offers many different metrics for classification tasks; on the other hand, you may still need your own metric, one that is most useful for your particular task.

    In this case, you can use sklearn.metrics.make_scorer():

        >>> from sklearn.metrics import fbeta_score, make_scorer
        >>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
        >>> ftwo_scorer
        make_scorer(fbeta_score, beta=2)
        >>> from sklearn.model_selection import GridSearchCV
        >>> from sklearn.svm import LinearSVC
        >>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},
        ...                     scoring=ftwo_scorer)
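
    If the metric you need does not exist in scikit-learn at all, you can write your own scoring function and wrap it with make_scorer in the same way. Below is a minimal sketch (an illustrative example, not part of the original answer); the function cost_sensitive_error and its 5x penalty for false negatives are assumptions chosen only for demonstration:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import make_scorer
        from sklearn.model_selection import cross_val_score

        def cost_sensitive_error(y_true, y_pred):
            # Hypothetical metric: false negatives are penalised 5x more
            # than false positives (an assumption for this sketch).
            fn = np.sum((y_true == 1) & (y_pred == 0))
            fp = np.sum((y_true == 0) & (y_pred == 1))
            return (5 * fn + fp) / len(y_true)

        # greater_is_better=False tells scikit-learn that lower values are better,
        # so the resulting scorer returns the negated metric during model selection.
        cost_scorer = make_scorer(cost_sensitive_error, greater_is_better=False)

        X, y = make_classification(n_samples=200, random_state=0)
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 scoring=cost_scorer, cv=5)
        print(scores)

    The resulting scorer can be passed anywhere scikit-learn accepts a scoring argument, for example GridSearchCV or cross_val_score.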

    Another usage example can be found on the English-language version of SO.