Alternating Model Tree parameters

Alternating Model Tree parameters

forky
In the paper describing the AMT algorithm

https://www.cs.waikato.ac.nz/~eibe/pubs/alternating_model_trees_sac.pdf

it says: "It has three arguments: the base regression learner to use, the
number of iterations to perform, and the value of the shrinkage parameter
used to control overfitting". However, in the implemented Weka package, we
can only tune the number of iterations (I) and the shrinkage (H). Why is it
not possible to select the base regression learner? Will it be available soon?

Thanks.



--
Sent from: https://weka.8497.n7.nabble.com/
_______________________________________________
Wekalist mailing list -- [hidden email]
Send posts to [hidden email]
To unsubscribe send an email to [hidden email]
To subscribe, unsubscribe, etc., visit https://list.waikato.ac.nz/postorius/lists/wekalist.list.waikato.ac.nz
List etiquette: http://www.cs.waikato.ac.nz/~ml/weka/mailinglist_etiquette.html
Re: Alternating Model Tree parameters

Eibe Frank-2
Administrator
That part of the paper is referring to the basic algorithm for additive regression using forward stagewise additive modeling. It is available in WEKA as weka.classifiers.meta.AdditiveRegression.

The alternating model tree uses simple linear regression as the "base learner" so that a simple linear model is obtained at each prediction node.
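For reference, here is a hedged command-line sketch of the difference. It assumes weka.jar (plus the installed alternatingModelTrees package) is on the classpath and uses data.arff as a placeholder dataset; the flag names are taken from the two schemes' option lists, so double-check them against your installed versions:

```shell
# AdditiveRegression: the base learner IS selectable, via -W
# (-S = shrinkage, -I = number of iterations)
java -cp weka.jar weka.classifiers.meta.AdditiveRegression \
    -t data.arff -S 0.5 -I 50 \
    -W weka.classifiers.trees.DecisionStump

# AlternatingModelTree: only -I (iterations) and -H (shrinkage) are
# exposed; the base learner is fixed to simple linear regression
java -cp weka.jar weka.classifiers.trees.AlternatingModelTree \
    -t data.arff -I 50 -H 0.5
```

So the "choose your base learner" flexibility described in the paper lives in AdditiveRegression, not in the AMT package itself.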

Cheers,
Eibe

On Fri, Feb 26, 2021 at 12:11 PM forky <[hidden email]> wrote:
Re: Alternating Model Tree parameters

forky
Understood. I found AdditiveRegression in Weka. I've run AMT with good
results (in both accuracy and run time), and I'm wondering whether it makes
sense to try all possible base learners within AdditiveRegression, or
whether AMT is somehow already optimized?




Re: Alternating Model Tree parameters

Eibe Frank
You could try to use weka.classifiers.trees.M5P as the base learner in AdditiveRegression. This normally works at least as well as AMT in terms of accuracy (although the model will be larger and less interpretable).

In both AMT and AdditiveRegression, tuning the shrinkage parameter is normally required for best results. Of course, the number of iterations also matters: with smaller values of the shrinkage parameter, more iterations are normally required.

The IterativeClassifierOptimizer is the best tool to automatically tune the number of iterations for AMT and AdditiveRegression.
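A hedged sketch of how these pieces might fit together from the command line. It assumes weka.jar is on the classpath and data.arff is a placeholder dataset, and that the -W flag plus option pass-through after "--" follow Weka's usual wrapper convention; note that IterativeClassifierOptimizer only accepts schemes implementing IterativeClassifier, which AdditiveRegression does in recent Weka versions:

```shell
# Automatically tune the number of iterations of AdditiveRegression,
# with M5P as the base learner and a fixed shrinkage of 0.5.
# Everything after "--" is passed to the wrapped AdditiveRegression.
java -cp weka.jar weka.classifiers.meta.IterativeClassifierOptimizer \
    -t data.arff \
    -W weka.classifiers.meta.AdditiveRegression \
    -- -S 0.5 -W weka.classifiers.trees.M5P
```

You would still grid-search the shrinkage value (-S) yourself, e.g. with CVParameterSelection, while the optimizer handles the iteration count.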

Cheers,
Eibe

On Fri, Feb 26, 2021 at 11:41 PM forky <[hidden email]> wrote: