AI is growing: how do we create fair technology we can trust in, and minimise the risks of gender bias?

Published 6 May 2021
  • Thoughts & Opinions

This year marks 20 years since Spielberg’s sci-fi drama A.I. Artificial Intelligence came to our screens – and in 2021, AI is no longer the stuff of science fiction. Over the last 20 years, AI has been put to use for human benefit across numerous industries. It has played a key role in developing vaccines against COVID-19. It powers safety features in modern connected cars, such as automatic braking and lane assistance, and more besides. It automates a number of functions on your smartphone – from predictive text (which can learn the words you commonly use) to voice-activated assistants like Siri and Alexa. And AI also plays a significant role in the finance and banking industries.

A recent World Economic Forum article referenced a SAS Institute survey in which two-thirds of banks said they use AI chatbots and almost 63% said they use AI for fraud detection. The McKinsey Global Institute believes AI could add around 16%, or $13 trillion, to global output by 2030 – and COVID-19 has since further accelerated AI adoption.

AI has tremendous potential. So how do we build it to minimise the risks of gender bias?

AI is important. It can take on, and complete more efficiently, work that would otherwise take many hours of manual effort. It can also be perceived to do so objectively – after all, as a machine it simply follows the instructions it is given, without the risk of human prejudice or subjective opinion. Right?

Not quite. As it turns out, unintended bias can be designed in, if we are not careful.

In 2018, Reuters published the story of Amazon’s AI recruiting tool – and the discovery by its machine-learning specialists of a big problem: the new recruiting engine did not like women. The team had been building computer programs since 2014, with the aim of automating the review of applicants’ resumes to identify top talent. But by 2015, it realised the new system was not rating candidates for software development and other technical posts in a gender-neutral way. This was because the models were trained to vet applicants – to determine what “good” looks like – by observing patterns in resumes submitted over the previous 10 years, and the majority of these came from men. In effect, as an unintended consequence of male dominance across the tech industry, Amazon’s system had taught itself to prefer male candidates.
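To make that mechanism concrete, here is a minimal, purely illustrative sketch – synthetic data, a hypothetical “hired” label and the open-source scikit-learn library, not Amazon’s actual system – of how a model trained on historically imbalanced hiring data can reproduce that imbalance, and how a simple selection-rate comparison can surface it:

    # Illustrative sketch only: a model trained on historically imbalanced
    # hiring data can reproduce that imbalance. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical historical data: ~80% of past applicants were men, and
    # past hiring decisions favoured them, so "hired" correlates with gender.
    gender = rng.binomial(1, 0.8, n)   # 1 = male, 0 = female
    skill = rng.normal(0, 1, n)        # genuinely job-relevant signal
    hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 1.0

    # Train on features that include gender (directly here; in real systems
    # often indirectly, via proxies such as word choices on a resume).
    X = np.column_stack([skill, gender])
    model = LogisticRegression().fit(X, hired)

    # Fairness check: compare selection rates for candidates whose
    # job-relevant skills are identical and only the gender feature differs.
    skills = rng.normal(0, 1, 1000)
    men = np.column_stack([skills, np.ones(1000)])
    women = np.column_stack([skills, np.zeros(1000)])
    print("selection rate, men:  ", model.predict(men).mean())
    print("selection rate, women:", model.predict(women).mean())

A large gap between the two selection rates shows that the model has learned the historical pattern rather than judging on skill alone – exactly the kind of check that fairness reviews of automated recruiting tools call for.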

This problem has not gone away. According to the Alan Turing Institute, women in the UK represent 47% of the workforce, yet hold less than 17% of available tech jobs. This imbalance also contributes to wider diversity and inclusion losses in the marketplace. Many academic researchers have underlined the benefits of workplace gender equality: Woolley et al. (2010) found that the presence of women increases a group’s problem-solving abilities, and Sastre (2014) that it drives innovation. According to Herring (2009), gender diversity is also associated with higher sales revenues, larger numbers of customers and greater relative profits.

As AI continues to grow, it is helpful that Government, regulators and industry are continuing their work to assess the issue. In the UK, the Government last year released its Review into Bias in Algorithmic Decision-Making, with recommendations to address the risk of bias and to support algorithmic fairness and equality. The ICO (the UK’s data protection regulator) just last month closed its consultation on its AI and data protection risk mitigation and management toolkit. And just two weeks ago, the EU published its proposal for a Regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act). This is a significant development in the promotion of responsible AI, and a proposal heralded by the EU Commission as the first-ever legal framework on AI.

Join us to continue the discussion!

Against this backdrop, the subject is more relevant than ever. To learn more and join in the discussion, we welcome you to the WIBF Hot Topic event on 18 May – on fair AI and the value of trust: how did we get here, and where do we go next? Joining information can be found here.

The event is free for WIBF members.

We look forward to welcoming you to an inclusive and engaged discussion!