Abstract

Model risk is one of the three largest risk types affecting the financial services industry, alongside credit risk and market risk. It is particularly relevant for models based on Artificial Intelligence (AI) techniques, in contrast to deterministic statistical, mechanical, and engineering models (or freely interpretable rule-based models). The latter types of models have long been established and accepted for risk and capital measurement; the interpretation of their mechanisms is more or less clear and comprehensible, and goes beyond mere probability density estimation. Machine learning-based models, by contrast, might not have learned what the human mind believes is worth predicting, nor how predictions should be made so that, for regulatory purposes, they can be interpreted, scrutinized, explained, and validated.

Comprehensibility/explainability/interpretability of inherently black-box AI and machine learning models, especially deep learning-based ones, is frequently considered an essential constraint in the model approval process. Here, the black-box issue must be considered in terms of the number of parameters, since even the degrees of freedom of stochastic optimal control and voting factor models might induce non-comprehensibility (especially in the case of many dummy variables). Given sufficient self-extraction effort, propensity score-based non-comprehensibility might be successfully addressed.

This work analyzes how comprehensibility/explainability/interpretability constraints are currently regulated. Against this backdrop, it may seem astonishing that, for AI techniques, these points are mere sideshows, focused more on documentation than on process, and on praise rather than well-defined rigor. It therefore develops a concept to improve this situation significantly. Last but not least, it addresses the vexed question of who is qualified to develop and approve such models.

Keywords

  • Financial risk management
  • intelligent risk management
  • financial services
  • explainable artificial intelligence
  • active monitoring
  • risk prediction
  • risk estimation
  • supervised machine learning
  • cloud computing
  • cloud services
  • scalable machine learning platforms
  • financial regulation
  • financial oversight
