Explainable AI is at the heart of the current debate on the responsible use of AI in the financial sector. As financial firms increasingly deploy advanced data analytics (such as deep neural networks) in their business processes, the need to understand what is going on ‘under the hood’ naturally becomes stronger. After all, while techniques such as deep learning can achieve extremely high levels of accuracy, they are also at the root of the ‘black box’ problem. And while it is easy to agree on the importance of explainable AI, agreeing on what explainability entails can be more challenging.
In a collaborative project, De Nederlandsche Bank, together with the Autoriteit Financiële Markten, Hogeschool Utrecht and four major Dutch banks, explored the issue of explainable AI in banking by developing a conceptual framework and applying it to three practical use cases: consumer credit, anti-money laundering and credit risk management.
During this talk, Joost van der Burgt, senior policy advisor at De Nederlandsche Bank, will present the main insights and takeaways from this project and share a regulatory perspective on explainable AI in banking.