Ranking Attributes Using ClassifierAttributeEval

Ranking Attributes Using ClassifierAttributeEval

Asim
I find it difficult to understand the approach ClassifierAttributeEval follows to rank attributes.

In my case I used SMOreg as the classifier, since my target output is continuous, and RMSE as the evaluation measure.

Were the attributes ranked/evaluated based on the classifier's performance as measured by RMSE? Does that mean each attribute was evaluated according to its influence on the predictive model, in terms of decreasing or increasing the RMSE? The description of this evaluation method (ClassifierAttributeEval) says 'Evaluates the worth of an attribute by using a user-specified classifier.' What does that mean? What formula is used to compute each attribute's worth?

Thank you




Re: Ranking Attributes Using ClassifierAttributeEval

Eibe Frank
In ClassifierAttributeEval's default mode, the data is reduced to contain just the attribute being evaluated and the class attribute; then (potentially repeated) k-fold cross-validation is run on this reduced data with the selected classifier to estimate the chosen performance measure. This is implemented by running WrapperSubsetEval on the reduced data. WrapperSubsetEval returns the worth (aka merit) of the predictor attribute subset (in this case a 1-element set) based on the chosen performance measure. If the performance measure is an error measure, WrapperSubsetEval will return the additive inverse of this measure to turn the error value into a value of "merit"; otherwise, e.g., if the measure is classification accuracy, it will simply return the value of the performance measure as the merit.

The final merit that is output by ClassifierAttributeEval in default mode is obtained by subtracting from this merit the merit obtained when not using any predictor attributes at all to estimate the class attribute's value.
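
For concreteness, here is a minimal sketch of running this ranking from the Java API with the setup from your question (SMOreg as the wrapped classifier, RMSE as the measure). The file name data.arff and the class index are placeholders, and I am assuming here that ClassifierAttributeEval exposes the evaluation measure via setEvaluationMeasure() using the same tags as WrapperSubsetEval:

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.ClassifierAttributeEval;
import weka.attributeSelection.Ranker;
import weka.attributeSelection.WrapperSubsetEval;
import weka.classifiers.functions.SMOreg;
import weka.core.Instances;
import weka.core.SelectedTag;
import weka.core.converters.ConverterUtils.DataSource;

public class RankAttributes {
  public static void main(String[] args) throws Exception {
    // Placeholder data set; the last attribute is assumed to be the numeric class.
    Instances data = DataSource.read("data.arff");
    data.setClassIndex(data.numAttributes() - 1);

    ClassifierAttributeEval eval = new ClassifierAttributeEval();
    eval.setClassifier(new SMOreg()); // the user-specified classifier
    // Use RMSE as the performance measure (assumption: tags shared with WrapperSubsetEval).
    eval.setEvaluationMeasure(new SelectedTag(
        WrapperSubsetEval.EVAL_RMSE, WrapperSubsetEval.TAGS_EVALUATION));

    AttributeSelection selection = new AttributeSelection();
    selection.setEvaluator(eval);
    selection.setSearch(new Ranker()); // sorts attributes by decreasing merit
    selection.SelectAttributes(data);

    // In default mode, merit(a) = -RMSE(using only a) - (-RMSE(using no predictors)),
    // i.e., how much attribute a reduces the RMSE relative to the no-predictor baseline.
    for (double[] ranked : selection.rankedAttributes()) {
      System.out.printf("attribute %d: merit %.4f%n", (int) ranked[0] + 1, ranked[1]);
    }
  }
}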

However, we often want to know how much an attribute contributes to predictive accuracy when used in conjunction with the other attributes in the data: how much does performance degrade if we remove the chosen attribute from the model? This can be measured by setting the leaveOneAttributeOut property to true, which switches ClassifierAttributeEval into a different mode. In this mode, only the chosen attribute is removed from the data, and (potentially repeated) k-fold cross-validation is run on this reduced data to estimate performance. As above, this is done by running WrapperSubsetEval on the reduced data, which returns an estimate of merit. The final merit that is output by ClassifierAttributeEval in leaveOneAttributeOut mode is obtained by subtracting this merit from the merit obtained when using all predictor attributes in the data to estimate the class attribute's value.
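
In code, switching modes is a one-line change to the sketch above:

eval.setLeaveOneAttributeOut(true);
// Now merit(a) = merit(all attributes) - merit(all attributes except a),
// so a large positive merit means performance degrades noticeably when a is removed.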

Cheers,
Eibe


Re: Ranking Attributes Using ClassifierAttributeEval

Asim
Thank you very much, Eibe.


