Attribute Selection (ClassifierAttributeEval)


Jovani Souza
Friends,

Could you help me with the following question:

I used ClassifierAttributeEval with evaluation measure Correlation Coefficient (my class attribute is numeric) and search method Ranker, and got the following results:

average merit      average rank   attribute
1.042 +- 0.018     1 +- 0         124 Land area (sq. km)
0.969 +- 0.026     2 +- 0         161 Population, total
0.932 +- 0.02      3.2 +- 0.6     122 Labor force, total
 .........

I would like to understand this average merit. Do "strength" ranges for merit exist? For example, from 0 to 0.5 is strong and from 0.5 to 1 very strong.

The image of the results obtained is attached.

Thank you very much!





[Attachment: Result.png]

Re: Attribute Selection (ClassifierAttributeEval)

Eibe Frank-2
Administrator
In default mode, ClassifierAttributeEval actually shows the *improvement* in merit obtained by building a classifier based on the selected predictor attribute when compared to not using any predictor attributes at all (e.g., in regression problems, using the mean target value observed in the training data as the prediction).

That is why the "average merit" in your case is greater than one for the best predictor even though the correlation coefficient is always <= 1.
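
If you want to reproduce such a ranking from the Java API, here is a minimal sketch (the file name "mydata.arff" and the choice of LinearRegression are placeholders; the evaluation measure is left at its default here, so set it as you did in the GUI):

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.ClassifierAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.functions.LinearRegression;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RankAttributes {
  public static void main(String[] args) throws Exception {
    // Load a dataset with a numeric class ("mydata.arff" is a placeholder).
    Instances data = DataSource.read("mydata.arff");
    data.setClassIndex(data.numAttributes() - 1);

    ClassifierAttributeEval evaluator = new ClassifierAttributeEval();
    // The scheme used to score each attribute; LinearRegression is just an
    // example choice for a numeric class.
    evaluator.setClassifier(new LinearRegression());

    AttributeSelection selection = new AttributeSelection();
    selection.setEvaluator(evaluator);
    selection.setSearch(new Ranker());
    // Cross-validated attribute selection produces the
    // "average merit +- stddev" output shown above.
    selection.setXval(true);
    selection.setFolds(10);
    selection.setSeed(1);
    selection.SelectAttributes(data);

    System.out.println(selection.toResultsString());
  }
}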

Cheers,
Eibe


Re: Attribute Selection (ClassifierAttributeEval)

Jovani Souza
Hello,

Thanks Eibe for the answer.

But can you tell me whether there are "strength" ranges for merit?

For example, Pearson's correlation coefficient is commonly classified as:
negligible correlation (0.00-0.10), weak correlation (0.10-0.39), moderate
correlation (0.40-0.69), strong correlation (0.70-0.89), and very strong
correlation (0.90-1.00).

Wouldn't there be such "strength" ranges for merit too?





Re: Attribute Selection (ClassifierAttributeEval)

Jovani Souza
In reply to this post by Eibe Frank-2
Hi

Are there any answers to my question about whether "strength" ranges for merit exist?


Thank you again.




Re: Attribute Selection (ClassifierAttributeEval)

Eibe Frank-2
Administrator
As discussed, the "merit" in ClassifierAttributeEval (when run in default mode with leaveOneAttributeOut=false) is the improvement in score (e.g., correlation coefficient) compared to the score obtained with ZeroR (the regressor that does not use any predictors). Thus, if we had ZeroR's score, we could just add it to the "merit" to get the raw score of the classifier (e.g., the raw correlation coefficient, so that you can apply your "strength" rules).

I believe you can get ZeroR's score by running WrapperSubsetEval with the desired scoring measure (i.e., "evaluationMeasure") and applying GreedyStepwise as the search method (setting "generateRanking" to true). In this case, in the ranking, every attribute will get the same merit because ZeroR ignores all attributes anyway. Now, the key is that WrapperSubsetEval does *not* subtract ZeroR's score to get this merit, in contrast to ClassifierAttributeEval, so the merit that is output is the score of ZeroR that we need for the above adjustment.
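
Alternatively, here is a minimal sketch that approximates the adjustment directly with the Evaluation API rather than via WrapperSubsetEval (placeholder file name; the folds and seed may not match the evaluator's internal cross-validation, so the baseline is only approximate):

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.rules.ZeroR;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ZeroRBaseline {
  public static void main(String[] args) throws Exception {
    // Same dataset as used for the ranking ("mydata.arff" is a placeholder).
    Instances data = DataSource.read("mydata.arff");
    data.setClassIndex(data.numAttributes() - 1);

    // Cross-validate ZeroR, which predicts the mean of a numeric class.
    Evaluation eval = new Evaluation(data);
    eval.crossValidateModel(new ZeroR(), data, 10, new Random(1));
    double baseline = eval.correlationCoefficient();

    // Raw score of an attribute ~= reported merit + ZeroR's score.
    double merit = 1.042; // top-ranked attribute from the original post
    System.out.println("ZeroR correlation coefficient: " + baseline);
    System.out.println("Approximate raw correlation: " + (merit + baseline));
  }
}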

Cheers,
Eibe


Re: Attribute Selection (ClassifierAttributeEval)

Jovani Souza
Perfect, Eibe. Now I understand it better.

Thank you very much!

Best regards.


