What is algorithmic bias?
A systematic error in a computer algorithm that leads to unfair outcomes, privileging one group of users over others.
An algorithm is a set of instructions designed to perform a specific task. This can be a simple process, such as multiplying two numbers, or a complex operation, such as playing a compressed video file or recognising an image. Programmers usually seek to create the most efficient algorithms possible [source].
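To make "efficiency" concrete, here is a small Python sketch (my own illustration, not from the text) of two algorithms that solve the same simple task at very different costs:

```python
# Two algorithms for the same task: summing the integers 1..n.

def sum_loop(n: int) -> int:
    # O(n): add the numbers one at a time.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    # O(1): Gauss's closed-form formula, constant time for any n.
    return n * (n + 1) // 2

print(sum_loop(1000), sum_formula(1000))  # 500500 500500
```

Both are correct, but only one scales well, which is why the choice of algorithm matters.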
An example of this is the algorithm for the New Zealand passport system telling an applicant of Asian descent to open their eyes when checking whether the photo met the criteria for an online application. Now that this bias has been found, the algorithm can be changed to better accommodate different eye shapes.
There are different types of algorithmic bias:
- Pre-existing. Re-introduces existing social injustices, i.e. prejudice, into the computer program.
- Technical. Appears due to limitations of a program, such as the output order, or other system constraints, e.g. relying on the fairness of a random number generator that isn't truly random.
- Emergent. The result of the use of, and reliance on, algorithms in new or unanticipated contexts. Types of emergent bias are:
  - Correlations. Machines find correlations that are discriminatory, or misinterpret correlations.
  - Unanticipated users. When programs are designed for a certain group of users, on the assumption that their abilities and skills allow them to interpret the outcomes correctly, while the programs remain inaccessible to others.
  - Feedback loop. When real-world responses to a system's outcomes are fed back into the algorithm.
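A feedback loop can be sketched with a toy, deterministic model (the area names, crime rate, and patrol counts below are all invented for illustration). Two areas have the same underlying crime rate, but one starts with slightly more recorded arrests; because patrols are allocated in proportion to recorded arrests, the extra patrols produce extra recorded arrests, which win even more patrols next round:

```python
# Hypothetical feedback-loop sketch using expected values only (no randomness).
TRUE_CRIME_RATE = 0.1                 # identical in both areas
PATROLS_PER_ROUND = 100
arrests = {"A": 12.0, "B": 10.0}      # small initial recording imbalance

for _ in range(20):
    total = arrests["A"] + arrests["B"]
    for area in arrests:
        # patrols allocated in proportion to recorded arrests so far
        patrols = PATROLS_PER_ROUND * arrests[area] / total
        # every patrol records crimes at the same true rate
        arrests[area] += patrols * TRUE_CRIME_RATE

print(round(arrests["A"] - arrests["B"], 1))  # 20.2 — the initial gap of 2 has grown tenfold
```

The underlying behaviour in the two areas never differed; the algorithm amplified a small artefact of its own data.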
How do you detect a bias in an algorithm?
Not all unequal outcomes are unfair; however, you can detect biases by:
- Comparing outcomes for different groups. This can be done through simulations before applying the algorithm to real-life scenarios.
- Looking at the equality of error rates, and whether there are more mistakes for one group of people than another.
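The error-rate check above can be sketched in a few lines of Python (the data and helper function are invented for illustration):

```python
from collections import defaultdict

def error_rates(records):
    """records: (group, true_label, predicted_label) tuples.
    Returns the fraction of wrong predictions per group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += truth != pred
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the model is wrong far more often for group "B".
data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
        ("B", 1, 0), ("B", 0, 1), ("B", 1, 0), ("B", 0, 0)]
print(error_rates(data))  # {'A': 0.25, 'B': 0.75}
```

A large gap between groups, as here, is not proof of unfairness on its own, but it is exactly the kind of signal worth investigating.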
How can you mitigate algorithmic bias?
- Fix bugs
- Add training data where the initial data set is found to under-represent certain groups
- Handling of sensitive information should be transparent
- Operators of algorithms should determine whether the social costs of the trade-offs are justified, whether the stakeholders involved are amenable to a solution through algorithms, or whether human decision-makers are needed to frame the solution.
Why should you care about algorithmic bias?
Just as we don't want unethical bias in the real world, we don't want it in technology either. We do, however, need to be aware that where there is bias in the data (or the real world), it will also be reflected in the algorithm, like for like.
Bias can be present in anything from online recruitment tools (take Amazon's abandoned project to build an AI recruitment tool, which engineers found was discriminating against women) to systems that shape society itself (e.g. if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests).
Algorithmic bias could well be having an effect on your life. You should be aware of it, hold companies accountable for fairer outcomes, and join the discussions happening around Artificial Intelligence. Just like discrimination in any other aspect of society, call it out when you see it; that's one way to train the algorithm to do better.
Sources:
- 4 stages of ethical AI: Algorithmic bias is not the problem but part of the solution
- This is how AI bias really happens – and why it’s so hard to fix
- Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
- Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women
- Ethics guidelines for trustworthy AI
Herding all the nerds,