Evaluating Interpretable Models for Financial Fraud Detection
Victoria Gonzalez
Financial fraud involves the deceitful manipulation of financial records, posing significant challenges for auditors and regulators. Although recent efforts have applied predictive analytics to automate financial audits, deep learning models lack transparency, which hinders their adoption for fraud detection. To address this, this study implements two widely used explainability methods, LIME and SHAP, and assesses their effectiveness in fraud detection. LIME fits interpretable surrogate models around individual predictions, while SHAP assigns importance values to input features, enhancing transparency. This research aims to improve understanding of deep learning techniques in fraud detection, offering insights to regulators, investors, and auditors. By applying these interpretable methods, the study seeks to produce findings that bolster confidence in predictions generated by deep learning models.
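To illustrate the kind of feature-importance values SHAP produces, the sketch below computes exact Shapley values for a toy model in pure Python. This is not the abstract's implementation or the `shap` library itself, just the underlying attribution formula: each feature's value is a weighted average of its marginal contribution over all feature subsets, with absent features filled in from a baseline input. The model, inputs, and baseline here are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x) relative to a baseline.

    Features outside a coalition are replaced by their baseline values;
    feasible only for small feature counts (2^n coalitions).
    """
    n = len(x)

    def v(subset):
        # Coalition value: features in `subset` take x's values, others the baseline's.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Hypothetical linear "fraud score" over three features.
model = lambda z: 3 * z[0] + 2 * z[1] + z[2]
print(shapley_values(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
# [3.0, 2.0, 1.0]
```

For a linear model the attributions reduce to weight times deviation from baseline, which is why the output matches the coefficients here; for a deep network, SHAP approximates these same quantities without enumerating every coalition.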