The permutation feature importance measurement was introduced by Breiman (2001) for random forests. Based on this idea, Fisher, Rudin, and Dominici (2018) proposed a model-agnostic version of the feature importance and called it model reliance. They also introduced more advanced ideas about feature importance, for example a version that takes into account that many prediction models may predict the data well.

We measure the importance of a feature by calculating the increase in the model’s prediction error after permuting the feature. A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature for the prediction.
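The procedure described above can be sketched in a few lines of NumPy. This is a minimal, model-agnostic illustration, not Breiman's or Fisher et al.'s exact implementation: the function name `permutation_importance`, the toy model, and the error function are all assumptions made for the example.

```python
import numpy as np

def permutation_importance(predict, X, y, error_fn, n_repeats=5, seed=0):
    """Importance of each feature = increase in error after shuffling that column."""
    rng = np.random.default_rng(seed)
    base_error = error_fn(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the association between feature j and y
            errors.append(error_fn(y, predict(Xp)))
        # average increase in error over the repeated shuffles
        importances[j] = np.mean(errors) - base_error
    return importances

# Toy data: the target depends only on the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=500)

predict = lambda X: 3 * X[:, 0]                 # stand-in for a fitted model
mse = lambda y, yhat: np.mean((y - yhat) ** 2)  # prediction error

imp = permutation_importance(predict, X, y, mse)
# imp[0] is large: shuffling the used feature raises the error.
# imp[1] is zero: the model ignores the second feature entirely.
```

Shuffling several times and averaging (`n_repeats`) makes the estimate less dependent on a single random permutation, at the cost of more model evaluations.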