Preprint / working paper. Year: 2021

On Explaining Decision Trees

Abstract

Decision trees (DTs) epitomize what have come to be known as interpretable machine learning (ML) models. This view is informally motivated by the fact that paths in a DT are often much shorter than the total number of features. This paper shows that, in some settings, DTs can hardly be deemed interpretable, since a path in a DT can be arbitrarily larger than a PI-explanation, i.e., a subset-minimal set of feature values that entails the prediction. As a result, the paper proposes a novel model for computing PI-explanations of DTs, which enables computing one PI-explanation in polynomial time. Moreover, it is shown that the enumeration of PI-explanations can be reduced to the enumeration of minimal hitting sets. Experimental results, obtained on a wide range of publicly available datasets with well-known DT-learning tools, confirm that in most cases DTs have paths that are proper supersets of PI-explanations.
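
To make the notion concrete, the following Python sketch illustrates what a subset-minimal (PI-) explanation of a DT prediction looks like. It is not the model or algorithm proposed in the paper: it is a simple deletion-based search over a scikit-learn DecisionTreeClassifier, and the helper names (entails, explain_instance) as well as the use of the iris dataset are illustrative assumptions.

# Illustrative sketch only; not the paper's algorithm.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier


def entails(tree, fixed, instance, target):
    # True iff fixing only the features in `fixed` to the instance's values
    # forces every reachable leaf of the tree to predict `target`.
    t = tree.tree_

    def ok(node):
        if t.children_left[node] == -1:  # leaf node
            return t.value[node].argmax() == target
        f, thr = t.feature[node], t.threshold[node]
        if f in fixed:  # feature fixed: follow the single consistent branch
            nxt = t.children_left[node] if instance[f] <= thr else t.children_right[node]
            return ok(nxt)
        # feature free: both branches must still yield the target class
        return ok(t.children_left[node]) and ok(t.children_right[node])

    return ok(0)


def explain_instance(tree, instance, target):
    # Deletion-based search: drop any feature whose value is not needed
    # for the remaining set of feature values to entail the prediction.
    expl = set(range(len(instance)))
    for f in range(len(instance)):
        if entails(tree, expl - {f}, instance, target):
            expl.remove(f)
    return sorted(expl)


X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
pred = int(clf.predict(X[:1])[0])
print("subset-minimal explanation (feature indices):",
      explain_instance(clf, X[0], pred))

Each entailment check traverses the tree once, so the whole loop runs in polynomial time, consistent with the abstract's claim; the paper's own model and the hitting-set-based enumeration of all PI-explanations are not reproduced here.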
Main file: paper.pdf (256.04 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03312480 , version 1 (02-08-2021)

License

Attribution (CC BY)

Identifiers

hal-03312480

Cite as

Yacine Izza, Alexey Ignatiev, Joao Marques-Silva. On Explaining Decision Trees. 2021. ⟨hal-03312480⟩