Reducing Bias in AI-powered Supply Chain applications

Roger Oakden


Is AI good or bad?

Many articles portray Artificial Intelligence (AI) as either heralding a new chapter in the development of civilisation or as doom-laden. Although in development for more than 70 years, its potential is only now being broadly discussed.

It was recently reported that a software firm demonstrated a Non-disclosure Agreement negotiation, completed in minutes, between the software company and a supplier. The negotiation was between two computers loaded with AI software that contained legal domain data and information. The role of the lawyers was to sign the completed agreement – an indication of possible change in Procurement.

On the same day it was reported that, over a three-week period, a person had 13 appointments made on their behalf to fix an appliance at their house and received six visits from technicians. However, the customer did not have a problem with the appliance and had not contacted the supplier. The supplying business blamed a ‘technical malfunction’, which reads like code for an AI fault.

Bias is a ‘core’ challenge

The heart of an organisation’s supply chains is Operations Planning, which the industry research firm Gartner stated in 2019 faces ‘four evils’ working against a successful function: uncertainty, bias, data and model. The ‘evil’ that can influence the other three is bias, because it is human derived. Bias in a supply chain is the interpretation of data by groups or individuals in a way that aligns with their preferred position.

It is therefore no coincidence that bias is a concern with AI, because software structures and algorithms are also human derived. As stated in a recommended 2017 article from the consulting firm McKinsey: ‘If biases affect human intelligence, then what about artificial intelligence? Is the software biased? The answer, of course, is yes, for some basic reasons. Machine learning can perpetuate and amplify behavioural biases. Machine-learning algorithms are prone to incorporate the biases of their human creators.’

The article also notes that ‘Algorithmic bias is one of the biggest risks because it compromises the very purpose of machine learning. This often-overlooked defect can trigger costly errors and, left unchecked, can pull projects and organizations in entirely wrong directions.’ Examples of biases within algorithms that can affect or influence the outputs from AI applications are:

  • Expectation – belief that an output decision is correct, without analysing, measuring or testing the conclusion
  • Incomplete data – limitations in the data set will bias outcomes, which can influence business decisions (see the sketch after this list)
  • Anchoring – giving extra weight to particular outcomes, based on personal experience
  • Availability – using familiar assumptions that have served adequately in the past, but may not serve in new situations
  • Confirmation – selecting evidence that supports pre-conceived beliefs, which provides a (usually false) feeling of ‘knowing’ the future. People tend to ‘skim’ read, or over-trust the correctness of, specifications used for system validation because they ‘know’ what they are about
  • Parameters – algorithms formalise corporate or department parameters – ‘the way things are done here’
  • Peer pressure – building algorithms the same way as the peer group makes decisions. These algorithms can perpetuate the corporate ‘way’ or ‘group think’
  • ‘Bandwagon’ effect – assuming an output decision is correct because ‘everyone’ is doing it
  • Stability – not considering the possibility of significant change in an uncertain environment. Also, preferring the same data sets that human decision makers use to predict outcomes
  • Self-fulfilling prophecy – an algorithm’s output drives actions that force an incorrect outcome for a business
  • Stereotype – holding poorly informed views about age, gender and ethnicity, e.g. using one (good or bad) attribute to form an overall view, or preferring cars over pedestrians. This is linked to:
    • Gender bias – an example from Canada, where an AI algorithm allocated snow-plough machines to first clear roads for cars (often driven by men), pushing the snow onto pavements along which children’s prams were pushed (often by women)
  • Loss aversion – placing a higher value on avoiding a loss than on making a gain
  • Sunk costs – outcomes influenced by the amount of losses already incurred
  • Selective effect – allowing poor-performing operations to continue in the belief that they will ‘come good’ in time
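
Several of the biases above can be surfaced with simple checks before a model is ever trained. As a minimal sketch of testing for the ‘incomplete data’ item, the Python below flags segments of a historical order file that are too thinly represented for a model to learn from. The column names, the made-up order data and the 5% threshold are illustrative assumptions, not drawn from any particular application.

    # A minimal sketch of a pre-training check for 'incomplete data' bias:
    # before fitting a forecasting model, compare how well each segment of
    # the business is represented in the training set. Column names
    # ('region') and the 5% threshold are illustrative assumptions.
    import pandas as pd

    def representation_report(orders: pd.DataFrame, segment_col: str,
                              min_share: float = 0.05) -> pd.DataFrame:
        """Flag segments whose share of the training data is below min_share."""
        shares = orders[segment_col].value_counts(normalize=True)
        report = shares.rename("share").to_frame()
        report["under_represented"] = report["share"] < min_share
        return report

    # Example with made-up data: a forecast trained on this history would
    # effectively ignore the 'APAC' region.
    orders = pd.DataFrame({
        "region": ["EMEA"] * 60 + ["Americas"] * 37 + ["APAC"] * 3,
        "units": range(100),
    })
    print(representation_report(orders, "region"))

A report like this makes the gap explicit before it becomes a biased business decision.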

Reducing bias in algorithms

Training AI development specialists to understand and detect bias would appear to be an essential pre-requisite for AI solutions. Also, question the dependency on historical data for domain-specific AI. In the supply chains domain there are instances of people using incorrect analysis and techniques because they have learnt from others (‘over the shoulder’ learning). To upload this historical data without question would be of little value to the improvement of an organisation’s supply chains.
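
One way to question that dependency is to test whether the historical records still resemble current behaviour before they are uploaded. The sketch below, assuming SciPy is available, compares the distribution of weekly demand in a legacy data set against a recent reference period using a two-sample Kolmogorov–Smirnov test; the synthetic data and the 0.05 threshold are illustrative assumptions.

    # A minimal sketch of questioning historical data before it is uploaded
    # into a domain-specific AI application: compare the distribution of a
    # key variable (here, weekly demand) between the historical period and
    # a recent reference period. A low p-value suggests the history no
    # longer reflects current behaviour and should not be used unexamined.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=7)
    historical_demand = rng.normal(loc=100, scale=15, size=500)  # legacy records
    recent_demand = rng.normal(loc=130, scale=25, size=100)      # last quarter

    statistic, p_value = ks_2samp(historical_demand, recent_demand)
    if p_value < 0.05:
        print(f"Distributions differ (p={p_value:.4f}): review before training.")
    else:
        print(f"No significant shift detected (p={p_value:.4f}).")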

To even start the process of change, employees using the AI system must trust the output generated by the algorithm and feel confident making decisions that in the past would have required the sign-off of a manager. Although AI solutions can appear to be a ‘black box’ with unknown functionality, users must understand the reasoning and construction of the algorithms they depend on.
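
Where the model can be inspected at all, one way to open the ‘black box’ is to measure how much each input actually drives its predictions. The sketch below uses permutation importance from scikit-learn on a synthetic forecasting example; the feature names and data are illustrative assumptions, not taken from any specific supply chain application.

    # A minimal sketch of inspecting a 'black box' model, assuming
    # scikit-learn is available. Permutation importance shows which inputs
    # the model's predictions actually depend on, supporting the trust and
    # understanding discussed above. Feature names and data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(seed=42)
    features = ["order_backlog", "lead_time_days", "promo_flag"]
    X = rng.normal(size=(300, 3))
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, importance in zip(features, result.importances_mean):
        print(f"{name}: {importance:.3f}")

A summary like this gives users a concrete reason to trust, or to challenge, the output, rather than accepting it on faith.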

However, this can be difficult when the AI engine is a purchased part built into a supply chain application and the AI developer does not allow access to third parties. The user is then trusting the application developer that the AI performs to requirements. In this case, it is best to identify the legal liability for an application that is required to be ‘fit for purpose’.

For senior management, the lure of videos, brochures and PowerPoint slides can be overwhelming. Instead, take a step back and ask what AI/ML is expected to achieve through the organisation’s supply chains that current technology and techniques (correctly used) cannot.

AI is considered most effective when applied to problems that have myriad sources of data, spread across disciplines. For supply chains, the Tactical level is the obvious area, commencing with Sales & Operations Planning (S&OP). The aim should be to move an organisation’s structure and motivation from teams working in silos to interdisciplinary and collaborative work – a Flow-based structure.


About the Author

Roger Oakden


With my background as a practitioner, consultant and educator, I am uniquely qualified to provide practical learning in supply chains and logistics. I have co-authored a book on these subjects, published by McGraw-Hill. As the Program Manager at RMIT University in Melbourne, Australia, I developed and presented the largest supply chain post-graduate program in the Asia Pacific region, with centres in Melbourne, Singapore and Hong Kong.
