
A Gaussian process‐based approach toward credit risk modeling using stationary activations

Bibliographic Details
Published in: Concurrency and Computation 2022-02, Vol. 34 (5), p. n/a
Main Authors: Mahajan, Shubham, Nayyar, Anand, Raina, Akshay, Singh, Samreen J., Vashishtha, Ashutosh, Pandit, Amit Kant
Format: Article
Language:English
Description
Summary: The task of predicting the risk of loan default using AI tools is an emerging one and in growing demand, given the revolutionary potential of AI. Various attributes such as income, acquired property, educational status, and many other socioeconomic factors can be used to train a model to predict the likelihood of nonrepayment of a loan. Most of the techniques and algorithms previously used in this regard pay no attention to the uncertainty of predictions for out-of-distribution (OOD) points in a dataset, which contributes to overfitting and leads to relatively lower accuracy on these data points. For credit risk classification in particular, this is a serious concern, given the structure of the available datasets and the trends they follow. With a focus on this issue, we propose a more robust methodology that uses a recent and efficient family of nonlinear neural network activation functions mimicking the properties induced by the widely used Matérn family of kernels in Gaussian process (GP) models. We evaluated classification performance on three openly available datasets after preprocessing, achieving a high mean classification accuracy of 87.4% and a low mean negative log predictive density (NLPD) loss of 0.405.
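
To make the GP connection concrete, the following is a minimal sketch of a conceptually related baseline, not the authors' activation-based architecture: a Gaussian process classifier with a Matérn kernel applied to a synthetic tabular stand-in for a credit dataset, reporting the same two metrics quoted in the summary (classification accuracy and mean negative log predictive density). The use of scikit-learn, the synthetic data, and all parameter choices below are assumptions for illustration only.

# Illustrative sketch only: GP classification with a Matern kernel on synthetic
# tabular data, evaluated with accuracy and NLPD (log loss). The paper instead
# uses neural-network activations that mimic Matern-kernel behavior.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern
from sklearn.metrics import accuracy_score, log_loss

# Synthetic stand-in for a tabular credit dataset (income, assets, etc.).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Standardize features; GP kernels are sensitive to input scale.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Matern kernel with smoothness nu = 3/2 (an arbitrary but common choice).
gpc = GaussianProcessClassifier(kernel=Matern(length_scale=1.0, nu=1.5),
                                random_state=0)
gpc.fit(X_train, y_train)

# Predictive probabilities give both hard-label accuracy and NLPD.
proba = gpc.predict_proba(X_test)
print("accuracy:", accuracy_score(y_test, proba.argmax(axis=1)))
print("mean NLPD (log loss):", log_loss(y_test, proba))
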
ISSN: 1532-0626; 1532-0634
DOI: 10.1002/cpe.6692