Knowing the class-distinguishing abilities of the features to build better decision-making models
Payel Sadhukhan, Kausik Sengupta, Sarbani Palit, Tanujit Chakraborty
Explainability allows end-users to form a transparent, human-centred understanding of an ML scheme's capability and utility. An ML model's modus operandi can be explained via the features on which it was trained. To this end, we found no prior work that explains feature importance in terms of class-distinguishing ability. In a given dataset, a feature is not equally good at distinguishing between the data points' possible categorizations (or classes). This work explains the features based on their class-distinguishing (category-distinguishing) capabilities. We estimate each feature's class-distinguishing capability (score) for every pairwise class combination, utilize these scores in a missing-feature context, and propose a novel decision-making protocol. A key novelty of this work lies in the refusal to render a decision when a missing feature of the test point has high class-distinguishing potential for the likely classes. We empirically validate the explainability of our scheme on two real-world datasets.
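The protocol in the abstract can be sketched roughly as follows. This is an illustrative reading only: the pairwise score used here (absolute difference of class means scaled by the pooled standard deviation) and the threshold `tau` are placeholder assumptions, not the paper's actual estimator; the function names are hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_scores(X, y):
    """Illustrative class-distinguishing score for each (class pair, feature):
    |difference of class means| / pooled std. A stand-in for the paper's
    estimator, which is not specified in the abstract."""
    scores = {}
    for a, b in combinations(np.unique(y), 2):
        Xa, Xb = X[y == a], X[y == b]
        pooled = np.sqrt((Xa.var(axis=0) + Xb.var(axis=0)) / 2) + 1e-12
        scores[(a, b)] = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / pooled
    return scores

def decide_or_abstain(probs, missing_feats, scores, classes, tau=1.0):
    """Refuse to render a decision when a missing feature strongly
    distinguishes the two most likely classes; otherwise predict."""
    top2 = np.argsort(probs)[-2:][::-1]           # two most likely classes
    a, b = sorted((classes[top2[0]], classes[top2[1]]))
    for f in missing_feats:
        if scores[(a, b)][f] > tau:
            return None                           # abstain: feature was critical
    return classes[top2[0]]

# Toy example: feature 0 separates the classes, feature 1 does not.
X = np.array([[0.0, 5.0], [0.1, 5.1], [10.0, 5.0], [10.1, 5.1]])
y = np.array([0, 0, 1, 1])
classes = np.array([0, 1])
scores = pairwise_scores(X, y)
probs = np.array([0.6, 0.4])                      # hypothetical model output
decide_or_abstain(probs, [0], scores, classes)    # abstains (None)
decide_or_abstain(probs, [1], scores, classes)    # predicts class 0
```

The abstention branch mirrors the stated novelty: when the feature that best separates the likely classes is unavailable for the test point, no decision is rendered.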