Automated decisioning now pervades our daily lives. Nowhere is this more true than in our financial matters, from applying for a loan to tapping your smartphone at the metro turnstile. But imagine finding out that your transaction was refused by an automated decisioning system on the basis of your gender or ethnicity. Our society, and our laws, would not stand for it. Automated systems should not be allowed to reflect the worst human biases that society has tried so hard to move on from.
In the past, bias was an issue of analysts knowingly writing discriminatory rule logic.
I occasionally saw examples of this early in my career, where legacy rules inherited from customers were sometimes blatantly discriminatory, for example scoring people based on the letters in their surname. In one case, Polish-sounding names were scored as very risky. A data scientist colleague of mine, who was Polish, was understandably very annoyed.
Organizations have traditionally dealt with this by committing the business to non-discriminatory practices, maintaining good oversight, and hiring a diverse team.
Today, a major concern is that bias can arise unintentionally, because models are the result of a machine learning process. Even a diverse and unbiased team can unintentionally produce biased models, in large part because the data we use to train them can itself be biased.
For example, Amazon trained a recruitment AI to scan CVs and rank them by the likelihood of an offer being made. Unfortunately, Amazon’s hiring had been male-dominated for over a decade, so a bias against women was encoded into the data. Amazon ended up scrapping the tool and disbanding the team that created it.
One (discredited) approach, sometimes called ‘fairness through unawareness’, is to make sure your data doesn’t contain any protected attributes (race, gender, age, etc.), the logic being that if your model doesn’t know someone’s race then it can’t be racist.
The reason this approach has been discredited is simple: it doesn’t work. Other variables can be correlated with the protected attributes (e.g. name, address), and if your data is innately biased, your model will find a way to use these proxies to reproduce that bias. That is reportedly what the Amazon model did: with no explicit gender field, it learned to penalise proxy signals such as CVs mentioning the word ‘women’s’.
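To make the proxy problem concrete, here is a minimal sketch in Python using synthetic data and invented feature names: the protected attribute is never shown to the model, yet a correlated proxy lets it reproduce the historical bias anyway.

```python
# Illustrative sketch (synthetic data, hypothetical features): dropping the
# protected attribute does not remove bias when a correlated proxy remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g. gender), never given to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate strongly with the group,
# e.g. a hobby keyword or postcode encoded as a score.
proxy = group + rng.normal(0, 0.3, size=n)

# A genuinely relevant feature, independent of group.
skill = rng.normal(0, 1, size=n)

# Historical labels are biased: group 1 was favoured beyond what skill justifies.
y = (skill + 1.5 * group + rng.normal(0, 1, size=n) > 1).astype(int)

# Train only on the "non-protected" features -- the proxy sneaks the bias back in.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
# The large gap shows the model reproduces the historical bias without ever
# seeing the protected attribute directly.
```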
It is our view that you need a twofold approach to tackling model bias:
- explainable models, so that you can vet decisioning for prohibited logic
- fairness testing, both as you develop the model and on an ongoing basis once it’s live, to make sure it stays fair and does not impact any group unfairly.
There is a wide variety of fairness measures. The one that seems to me most closely aligned with fair finance legislation is ‘equality of opportunity’ (Hardt et al. 2016), which roughly requires that the people who deserve a favourable decision receive one at the same rate regardless of group. However, not everyone would agree with me: choosing the right fairness measure is a matter of ethics and law rather than science.
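As a sketch of what ongoing fairness testing might look like in practice, the snippet below (hypothetical variable names, two groups assumed) measures the equality-of-opportunity gap as the difference in true positive rates between groups; a monitoring job could track this number over time and alert if it drifts.

```python
# Minimal sketch of a fairness test: 'equality of opportunity' asks that
# true positive rates be equal across groups.
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rate between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])

# Example: decisions from a model on a labelled hold-out set.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
# In production you would compute this on live decisions and alert on drift.
```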
There are other, more advanced, techniques that can be used to encourage models to be fair. In our view, these need to build on top of the foundation provided by the twofold approach. They include:
- pre-processing your data to forcibly remove bias before you train (generative AI can help here, for example by synthesising more balanced training data).
- changing the model training process to penalise unfair models (i.e. forcing them to be fairer, potentially at some cost to performance).
- post-processing your model outputs to remove or mitigate bias (see the sketch after this list).
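As one illustration of the post-processing idea, here is a hedged sketch (synthetic data, invented names) in the spirit of Hardt et al. (2016): choose a separate decision threshold for each group so that true positive rates line up, trading a little raw performance for fairness.

```python
# Hedged sketch of post-processing (synthetic data, hypothetical names): pick a
# separate decision threshold per group so true positive rates match.
import numpy as np

def threshold_for_tpr(scores, y_true, target_tpr):
    """Return the score threshold that yields roughly the target true positive rate."""
    pos_scores = scores[y_true == 1]
    # Thresholding at the (1 - target) quantile of positive-class scores means
    # about target_tpr of true positives score at or above the threshold.
    return np.quantile(pos_scores, 1 - target_tpr)

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)
y_true = rng.binomial(1, 0.3, size=n)
# Biased scores: true positives in group 1 systematically score higher.
scores = y_true * (1 + 0.5 * group) + rng.normal(0, 1, size=n)

target = 0.8  # desired true positive rate for every group
thresholds = {g: threshold_for_tpr(scores[group == g], y_true[group == g], target)
              for g in (0, 1)}

# Apply each row's group-specific threshold.
per_row_threshold = np.where(group == 1, thresholds[1], thresholds[0])
decisions = scores >= per_row_threshold

for g in (0, 1):
    positives = (group == g) & (y_true == 1)
    print(f"group {g}: TPR = {decisions[positives].mean():.2f}, "
          f"threshold = {thresholds[g]:.2f}")
# Both groups now receive a positive decision at roughly the same rate among
# true positives, at some cost to overall accuracy.
```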
Approaches like these are gaining traction in the machine learning industry, and a new wave of regulation is targeting model bias in automated decisioning systems. Legislation like the EU AI Act requires builders of high-risk systems, such as credit scoring, to examine their data for bias, mitigate it, and monitor their models once they are live. This is a good thing. Well-designed machine learning systems make the world a safer place to transact, and this shift puts the onus on model builders to design their systems well.
Learn more about Model Fairness in Fraud Prevention:
- Model bias 101
- Human Intervention and Practical Solutions
- Put your money where your ML is: building trust in business-critical AI