Description
In credit scoring, machine learning (ML) algorithms are employed to distinguish between class-zero borrowers, who will fully repay their loans, and class-one borrowers, who will default. However, these algorithms are complex and often introduce discrimination by treating individuals who share a protected attribute (such as gender or nationality) differently from the rest of the population. Therefore, to make users trust these methods, it is necessary to provide fair and explainable models. To address this issue, this paper focuses on fairness and explainability in credit scoring, using data from a P2P lending platform in the US. From a methodological viewpoint, we combine ensemble tree models with SHAP to achieve explainability, and we compare the resulting Shapley values with fairness metrics based on the confusion matrix.
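As a rough illustration of this pipeline, the sketch below trains a gradient-boosted tree ensemble on synthetic data (a stand-in for the P2P lending dataset, which is not reproduced here), computes Shapley values with SHAP's `TreeExplainer`, and derives two confusion-matrix-based fairness metrics (TPR and FPR gaps across a binary protected group). The data-generating process, model choice, and specific metrics are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the P2P lending data (hypothetical features)
rng = np.random.default_rng(0)
n = 2000
features = rng.normal(size=(n, 4))
protected = rng.integers(0, 2, size=n)             # binary protected attribute
X = np.column_stack([features, protected])
logits = X[:, 0] + 0.5 * X[:, 1] + 0.4 * protected
y = (logits + rng.normal(size=n) > 0).astype(int)  # 1 = default, 0 = fully repaid

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, y, protected, test_size=0.3, random_state=0)

# Ensemble tree model for credit scoring
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

# Explainability: Shapley values via SHAP's TreeExplainer
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)          # shape (n_samples, n_features)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))

# Fairness: confusion-matrix rates per protected group
def rates(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), fp / (fp + tn)          # (TPR, FPR)

tpr0, fpr0 = rates(y_te[p_te == 0], y_hat[p_te == 0])
tpr1, fpr1 = rates(y_te[p_te == 1], y_hat[p_te == 1])
print(f"equal-opportunity gap |TPR0 - TPR1| = {abs(tpr0 - tpr1):.3f}")
print(f"predictive-equality gap |FPR0 - FPR1| = {abs(fpr0 - fpr1):.3f}")
```

Comparing the per-feature mean absolute SHAP values (including the protected attribute's) against the group-wise rate gaps is one simple way to relate the explanation side to the fairness side, in the spirit of the comparison described above.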
| Classification | Both methodology and application |
| --- | --- |
| Keywords | Fairness; Explainable Artificial Intelligence; Credit Scoring |