Bias – a capability challenge

By
Victoria Maclennan
September 28, 2018

It seems bias in algorithms is a very hot topic at the moment. In a surprisingly honest move, Twitter has admitted its algorithms aren’t always impartial:

Twitter chief executive Jack Dorsey has told US lawmakers the company’s algorithms have not always been “impartial”. He said the platform “unfairly” reduced the visibility of 600,000 accounts, including some members of Congress.

Then, in quick succession last week, both Google and IBM announced new tools to expose bias in algorithms:


  • Google’s What-If Tool will provide analysis of performance and algorithmic fairness via code-free probing of machine learning models.
  • IBM’s AI Fairness 360 Kit will analyse why and how algorithms make decisions in real time, scanning for signs of bias and recommending adjustments.

Both of these have the potential to really assist in diagnosing this challenge – that said, we haven’t played with either tool yet, so watch this space for our findings soon.
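For a sense of what that probing might look like in practice, here is a minimal Python sketch using IBM’s open-source aif360 package to compute a couple of common group-fairness metrics on the UCI Adult dataset. This is our assumption of typical usage, not something we have validated against the released kit yet:

```python
# Minimal sketch (untested by us) of probing a dataset for bias with IBM's
# open-source aif360 package. Assumes `pip install aif360` and that the raw
# UCI Adult data files have been downloaded where aif360 expects to find them.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric

dataset = AdultDataset()  # income data with 'sex' as a protected attribute

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],    # encoded as 1 = male in this loader
    unprivileged_groups=[{"sex": 0}])  # encoded as 0 = female

# Ratio of favourable outcome rates; values well below 1 suggest disparate impact.
print("Disparate impact:", metric.disparate_impact())
# Difference in favourable outcome rates between the groups; 0 means parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
```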

The Capability Gap

In “Bias – a very wicked problem” we explored how extensively programmatic bias has entered our lives, and some of the approaches we as a society, and those producing algorithms, can take to prevent bias – both conscious and unconscious – from creeping into automated decision making.
One of the major contributing factors the OptimalBI team observe on consulting assignments is the capability (or lack of capability) of the people writing, replicating and deploying algorithms – that may seem harsh, so bear with me. Not everyone is trained as a data scientist, statistician or other specialist, so lifting the capability and maturity of the organisation, the data teams and the individuals involved in developing programmes and algorithms is a really effective way to reduce or eliminate bias.
One typical scenario involves a programme that works for a specific function; the original developer is long gone (or long forgotten, thanks to the lack of a version control system) and left no inline comments in the code and no documentation. There is also no change control process wrapped around these programmes, as they are maintained by “analysts” in “the business” rather than within an IT structure. This scenario is common in, but not limited to, SAS environments, many of which have been in place for a decade or two.
Along comes our untrained or self-trained Analyst. This Analyst knows a great deal about the organisation and its processes, and will often dig around in data and pull out nuggets of results that impress management – all with great intentions, as she is very good at her job. Our Analyst is asked to modify that typical-scenario programme to augment the results, so she copies the programme and makes some “tweaks”. She doesn’t really understand the code, but she knows it produces results that management like.
The reality of this scenario is that she doesn’t know enough to even consider that the programme could be flawed – it might contain assumptions, bias, logic flaws, hard-coded result sets or any number of other limitations that mean the results are wrong. All of those possibilities go untested by the untrained Analyst’s eye. Remember, too, that she is unsupported in terms of process, documentation and governance when making this change.
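To make that concrete, here is an entirely invented fragment of the kind of inherited logic that can slip through unnoticed – written in Python rather than SAS, with hypothetical function and field names:

```python
# Hypothetical illustration only: a hard-coded assumption buried in an
# inherited programme that quietly skews the results it produces.
def monthly_eligibility_report(customers):
    # Original author's undocumented assumption: only urban branches count.
    # Copied forward without review, this silently excludes every rural
    # customer from the figures management sees.
    eligible = [c for c in customers if c["branch_type"] == "urban"]
    approved = [c for c in eligible if c["score"] >= 600]  # hard-coded cut-off
    return len(approved) / max(len(eligible), 1)
```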

Lifting Capability is a team sport

As with any maturity programme, the first step is understanding your current state from a range of data and information perspectives, including:

  • Governance
  • Analysis and collection
  • Lineage
  • Infrastructure
  • Customer service
  • Integration
  • Reporting
  • Version and release controls
  • Processes and methods (e.g. testing)

OptimalBI work with customers to assess these and other quadrants, develop a maturity roadmap and provide coaching at one, two or all of these levels:

  • at an organisational level
  • at a data and information team level
  • at an individual level

A maturity hierarchy drawn on one of our office walls

In our experience, lifting capability across the board is key to avoiding our typical scenario, which can propagate and entrench bias. The work involved is always well received: by individual team members, who gladly embrace the coaching and training opportunities; by the data and information team, whose engagements across all aspects of the organisation are enhanced; and at the executive level, through the trust it introduces. IT even like this approach, as they can see clear change, release and testing processes taking place.
Taking an organisation-level approach does require good sponsorship and ownership at all levels. Becoming conscious of what our algorithms and programmes do with data is as fundamental a shift as the one in the ’90s, when organisations moved from “information belongs to the individual staff member, in silos” to “information is an organisational asset”. Next chapter soon, Vic.

Victoria spends much of her time focusing on Digital Inclusion, Digital Literacy and Digital Rights.  

You can read her OptimalBI blogs here, or connect with her on LinkedIn.
