What is algorithmic bias and why should you care?

By
Bronwyn Scott
February 20, 2020

What is algorithmic bias?

A systematic error in computer algorithms that leads to unfair outcomes, privileging one group of users over others.

An algorithm is a set of instructions designed to perform a specific task. This can be a simple process, such as multiplying two numbers, or a complex operation, such as playing a compressed video file or recognising an image. Because these tasks can be complex and resource-intensive, programmers usually seek to create the most efficient algorithms possible [source].

An example of this is the New Zealand passport photo-checking algorithm telling an applicant of Asian descent to open their eyes when checking whether the photo met the criteria for an online application. Now that this bias has been found, the algorithm can be changed to better accommodate different eye shapes.

There are different types of algorithmic bias:

  1. Pre-existing.  Re-introduces existing social injustices, i.e. prejudice, into the computer program.
  2. Technical.  Arises from the limitations of a program, such as the order in which results are presented, or other system constraints, e.g. relying on the fairness of a random number generator that isn't truly random.
  3. Emergent.  Results from the use of, and reliance on, algorithms in new or unanticipated contexts. Types of emergent bias include:
       • Correlations.  Machines find correlations that are discriminatory, or misinterpret correlations.
       • Unanticipated users.  Programs are designed for a certain group of users, based on assumptions about their abilities and skills, which let them interpret the outcomes correctly while leaving the program inaccessible to others.
       • Feedback loops.  Real-world responses to a system's outcomes are fed back into the algorithm (the sketch after this list shows how that can compound over time).
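To make the feedback-loop idea concrete, here is a minimal sketch in Python. The areas, the starting numbers, and the assumption that both areas have an identical true arrest rate per patrol are all made up for the example; the point is only to show how a small difference in historical data can keep compounding once it drives the allocation decision.

```python
# A toy simulation of a feedback loop, using made-up numbers.
# Patrols are allocated in proportion to recorded arrests, but more
# patrols in an area also means more arrests get recorded there.

recorded_arrests = {"Area 1": 12, "Area 2": 8}  # hypothetical starting data
arrests_per_patrol = 0.5                        # identical true rate in both areas

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # Allocate 100 patrols in proportion to each area's arrest history.
    patrols = {area: 100 * n / total for area, n in recorded_arrests.items()}
    # Every patrol records the same number of arrests regardless of area,
    # yet the area that started with more arrests keeps getting more patrols,
    # so the gap in recorded arrests grows every year.
    for area, p in patrols.items():
        recorded_arrests[area] += p * arrests_per_patrol
    print(f"Year {year}: patrols {patrols}, recorded arrests {recorded_arrests}")
```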

How do you detect bias in an algorithm?

Not all unequal outcomes are unfair, but you can detect biases by:

  • Comparing outcomes for different groups.  This could be done through simulations before applying the algorithm to real-life scenarios.
  • Looking at the equality of error rates, and whether there are more mistakes for one group of people than another (a simple error-rate comparison is sketched after this list).
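As a rough illustration of the second point, a minimal error-rate comparison might look like the sketch below. The records, group labels, and outcomes are invented for the example, not taken from any real system.

```python
from collections import defaultdict

# Hypothetical records: each holds a protected attribute ("group"),
# the true outcome, and the algorithm's prediction.
records = [
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "A", "actual": 0, "predicted": 0},
    {"group": "A", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 0, "predicted": 1},
    {"group": "B", "actual": 1, "predicted": 0},
]

# Count errors and totals per group.
errors = defaultdict(int)
totals = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    if r["actual"] != r["predicted"]:
        errors[r["group"]] += 1

# A markedly higher error rate for one group is a signal that the
# algorithm may be treating that group unfairly.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"Group {group}: error rate {rate:.0%}")
```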

How can you mitigate algorithmic bias?

  • Fix bugs
  • Add training data if the initial data set is found to under-represent certain groups (a simple representation check is sketched after this list)
  • Handling of sensitive information should be transparent
  • Operators of algorithms should determine whether the social costs of these trade-offs are justified, whether the stakeholders involved are amenable to a solution through algorithms, or whether human decision-makers are needed to frame the solution.
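As a minimal sketch of the training-data point above (with invented group sizes and an arbitrary threshold), a quick check of how well each group is represented could look like this:

```python
from collections import Counter

# Hypothetical training set: the group each training example belongs to.
training_groups = ["A"] * 900 + ["B"] * 80 + ["C"] * 20

counts = Counter(training_groups)
total = sum(counts.values())

# Flag groups that make up only a small share of the training data.
# The 10% threshold is an arbitrary assumption for the example.
THRESHOLD = 0.10
for group, n in sorted(counts.items()):
    share = n / total
    flag = " <-- under-represented" if share < THRESHOLD else ""
    print(f"Group {group}: {n} examples ({share:.0%}){flag}")
```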

Why should you care about algorithmic bias?

Just as we don't want unethical bias in the real world, we don't want it in technology either.  We do, however, need to make sure that where there is bias in the data (or the real world), it is reflected in the algorithm like for like.

Bias can be present in anything from online recruitment tools (take Amazon's abandoned project to build an AI recruitment tool, which engineers found was discriminating against women) to systems that shape society itself (e.g. if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests).

Algorithmic bias could well be having an effect on your life.  You should be aware of it, hold companies accountable for fairer outcomes, and take part in the discussions happening around Artificial Intelligence.  Just like discrimination in any other aspect of society, call it out when you see it; that's one way to train the algorithm to do better.


Herding all the nerds,

Bronnie

Bronnie is a woman of many talents; you'll find her managing contracts, people and our marketing, and she also flies planes and manages events.

Connect with Bronnie on LinkedIn or read some of her other blogs here.
