Preprint / Working Paper, Year: 2020

Interpretable Random Forests via Rule Extraction

Abstract

We introduce SIRUS (Stable and Interpretable RUle Set) for regression, a stable rule learning algorithm whose output takes the form of a short and simple list of rules. State-of-the-art learning algorithms are often referred to as "black boxes" because of the high number of operations involved in their prediction process. Despite their strong predictive performance, this lack of interpretability can be highly restrictive in applications where critical decisions are at stake. On the other hand, algorithms with a simple structure (typically decision trees, rule algorithms, or sparse linear models) are well known for their instability. This undesirable feature makes the conclusions of the data analysis unreliable and turns out to be a strong operational limitation. This motivates the design of SIRUS, which combines a simple structure with remarkably stable behavior when the data are perturbed. The algorithm is based on random forests, whose predictive accuracy is preserved. We demonstrate the efficiency of the method both empirically (through experiments) and theoretically (with a proof of its asymptotic stability). Our R/C++ software implementation sirus is available from CRAN.
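A minimal usage sketch of the sirus package is given below. It assumes the functions sirus.fit, sirus.print, and sirus.predict exposed by the CRAN package, and it uses the built-in mtcars dataset purely for illustration; none of this comes from the abstract itself.

```r
## Minimal sketch, assuming the CRAN package 'sirus' exposes
## sirus.fit / sirus.print / sirus.predict.
## install.packages("sirus")
library(sirus)

## Illustrative regression task: predict mpg from the other mtcars variables.
X <- mtcars[, -1]
y <- mtcars$mpg

## Fit SIRUS: extract a short, stable list of rules from a random forest.
model <- sirus.fit(X, y)

## Display the extracted rule list (the interpretable output of the method).
sirus.print(model)

## Predict with the rule set (here on the training data, for illustration).
pred <- sirus.predict(model, X)
head(pred)
```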
Main file: sirus_reg_arxiv.pdf (623.14 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02557113 , version 1 (28-04-2020)
hal-02557113 , version 2 (08-06-2020)
hal-02557113 , version 3 (06-10-2020)
hal-02557113 , version 4 (08-02-2021)

Identifiers

hal-02557113

Cite

Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet. Interpretable Random Forests via Rule Extraction. 2020. ⟨hal-02557113v3⟩
528 Views
624 Downloads

