Machine learning models are designed to predict event outcomes based on input data. As the scale of that data increases, it becomes increasingly difficult for humans to understand the logic behind any given decision.
As research progresses, machine learning models are becoming ever more complex and playing bigger roles in our financial lives. Financial institutions will therefore need to be able to honestly and transparently weigh the evidence behind any system's predictions.
In this video, Dr David Sutton explains what model explainability is, why it is important, and how the next generation of explainable AI needs to evolve to keep pace with the growth and evolution of models themselves.
Read more:
Deep learning and the new frontiers of model explainability
Path Integrals for the Attribution of Model Uncertainties
About the speaker
David is responsible for Featurespace's research and innovation programme. Prior to joining the company in 2015, David was a Research Associate in observational cosmology at the Kavli Institute for Cosmology at the University of Cambridge, where he developed algorithms to study the Cosmic Microwave Background.