In default mode, ClassifierAttributeEval actually shows the *improvement* in merit obtained by building a classifier on the selected predictor attribute, compared to using no predictor attributes at all (e.g., in regression problems, simply predicting the mean target value observed in the training data). That is why the "average merit" in your case is greater than one for the best predictor, even though the correlation coefficient itself is always <= 1.
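To see how an improvement can exceed 1, here is a minimal arithmetic sketch. All numbers are hypothetical (Weka does not print these intermediate values):

```python
# Suppose the classifier built on the best attribute reaches a correlation
# of 0.99 on held-out data, while the ZeroR baseline (predicting the
# training mean) scores slightly below zero, e.g. -0.05 (hypothetical).
classifier_score = 0.99
zeror_score = -0.05

# ClassifierAttributeEval reports the *improvement* over the baseline:
merit = classifier_score - zeror_score
print(merit)  # 1.04 -- greater than 1, although each correlation is <= 1
```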
> On 13/11/2019, at 8:57 AM, Jovani T. de Souza <[hidden email]> wrote:
> Could you help me with the following question:
> I used ClassifierAttributeEval with evaluation measure: Correlation Coefficient (my class attribute is numeric) and search method: Ranker, and got the following results:
> average merit    average rank   attribute
> 1.042 +- 0.018   1   +- 0       124 Land area (sq. km)
> 0.969 +- 0.026   2   +- 0       161 Population, total
> 0.932 +- 0.02    3.2 +- 0.6     122 Labor force, total
> I would like to understand this average merit. Are there "force" (strength) ranges for merit? For example, 0 to 0.5 is strong and 0.5 to 1 very strong.
> The image of the results obtained is attached.
> Thank you very much!
> Wekalist mailing list -- [hidden email]
> https://list.waikato.ac.nz/postorius/lists/wekalist.list.waikato.ac.nz
> List etiquette: http://www.cs.waikato.ac.nz/~ml/weka/mailinglist_etiquette.html
But can you tell me whether there are such "force" (strength) ranges for merit?
For example, in general, the Pearson's correlation coefficient is classified
as: negligible correlation (0.00-0.10), weak correlation (0.10-0.39),
moderate correlation (0.40-0.69), strong correlation (0.70-0.89), and very
strong correlation (0.90-1.00).
Wouldn't there be similar "force" ranges for merit too?
As discussed, the "merit" in ClassifierAttributeEval (when run in default mode with leaveOneAttributeOut=false) is the improvement in score (e.g., correlation coefficient) compared to the score obtained with ZeroR (the regressor that does not use any predictors). Thus, if we had ZeroR's score, we could just add it to the "merit" to get the raw score of the classifier (e.g., the raw correlation coefficient, so that you can apply your "force" rules).
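A minimal sketch of that adjustment, with hypothetical numbers (in practice you would read the baseline off the WrapperSubsetEval output described below):

```python
zeror_score = -0.05   # hypothetical correlation obtained by ZeroR
merit = 1.042         # "average merit" reported for the best attribute

# Adding the baseline back recovers the classifier's raw correlation:
raw_correlation = merit + zeror_score
print(round(raw_correlation, 3))  # 0.992 -- within [-1, 1] again
```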
I believe you can get ZeroR's score by running WrapperSubsetEval with the desired scoring measure (i.e., "evaluationMeasure") and applying GreedyStepwise as the search method (setting "generateRanking" to true). In this case, in the ranking, every attribute will get the same merit because ZeroR ignores all attributes anyway. Now, the key is that WrapperSubsetEval does *not* subtract ZeroR's score to get this merit, in contrast to ClassifierAttributeEval, so the merit that is output is the score of ZeroR that we need for the above adjustment.
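Once the raw correlation is recovered, the "force" bands you quoted can be encoded directly. A minimal sketch (the function name and thresholds follow your quoted classification; they are a convention, not something Weka outputs):

```python
def correlation_strength(r: float) -> str:
    """Map |r| to the descriptive bands quoted earlier in the thread."""
    a = abs(r)
    if a < 0.10:
        return "negligible"
    elif a < 0.40:
        return "weak"
    elif a < 0.70:
        return "moderate"
    elif a < 0.90:
        return "strong"
    else:
        return "very strong"

print(correlation_strength(0.25))   # weak
print(correlation_strength(-0.95))  # very strong
```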