
How to avoid building a biased AI


WEBWIRE

As artificial intelligence becomes a bigger presence in all of our lives, one of the age-old issues – that of ethics and bias within AI – remains a pertinent concern. The world we inhabit is filled with biases. Diversity and inclusion in the workplace remain a live topic – one that still requires a great deal of work and progress, especially in parts of the tech sector. Just this year, Tech Nation released a Diversity and Inclusion Toolkit to help tech companies become more inclusive.

But what are the AI companies of today doing to make sure their systems, algorithms and ways of working aren’t replicating – and therefore deepening – the unconscious biases and assumptions that many of us hold? Much of our world is built on historical and systemic racism and patriarchal ideals. Our world tends to favour the neurotypical over the neurodiverse, and large amounts of wealth are controlled by a small minority. So it follows that our world can’t be the example that teaches a computer how to think. How do you build an AI that is fairer than the world in which it exists?

We asked some of the founders from the Applied AI 1.0 and 2.0 programmes – Dr Anil Bandhakavi of Logically, Dr Zara Nanu of Gapsquare, Georgina Kirby from Vinehealth, Dhruv Ghulati of Factmata, Sakthy Selvakumaran from Bkwai and Maya Pindeus from Humanising Autonomy – for their take on the issue.

Learning from the past

The problem of bias within AI isn’t an imagined one. There have been several high-profile examples of AI causing problems for the companies that use it.

Dr Anil Bandhakavi from Logically noted that recruitment has been a particular culprit, especially when it comes to age and race. “Studies have highlighted discriminatory algorithms being applied in recruitment, where applicants with white-sounding names were more likely to be called for an interview than those with black-sounding names. Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond”. 

Companies such as Facebook and Amazon are also at fault, as Dr Zara Nanu points out: “Amazon had to scrap an AI recruitment tool that discriminated against women. The tool actively downgraded CVs with the word “woman”, such as if someone had “women’s chess club” on their CV. Gender and racial discrimination have also been found in Facebook’s ad algorithm, including ads for employment and housing, which negatively affected women and people of colour”. The think tank Diversity AI found further cases of AI discriminating against people of colour, such as wrinkle-tracking software not working on people with darker skin.

When these sorts of biases are carried over to healthcare, the implications can be life-threatening. An algorithm sold by the healthcare company Optum to manage the care of millions of Americans showed bias against black patients. Georgina Kirby from Vinehealth told us, “Due to an imbalanced dataset, the algorithm ended up triaging white patients over black patients; an example of racial bias in AI that could have affected millions of people in the US each year had it not been found”.

Black box algorithms

If an issue with an algorithm is undetectable, tackling it becomes impossible. One problem raised time and again when we spoke to our AI founders was black box algorithms: algorithms whose decisions neither the computer nor the researcher can really explain – meaning there is no means of recourse if an issue arises. This also raises questions of client and consumer trust. Where the data contains traces of discrimination, bias or inequity, black box algorithms become deeply problematic, as well as potentially dangerous.

Highly publicised incidents, such as Instagram blocking specific types of pictures and not others, are often brushed under the carpet by big corporations, and it is often unclear whether they themselves understand why the problem occurred. Dhruv Ghulati of Factmata believes we could all benefit from a more open dialogue: “If big organisations revealed those biases to the public and made them open to scrutiny, users would have more control and there could be a more informed public debate”.

A related problem is currently occurring in the USA, where AI algorithms are being used to govern credit and benefit decisions, affecting access to private goods and services such as cars, homes and employment, as well as public benefits like Medicaid, healthcare, unemployment support and child support. There is currently no system of recourse if a negative decision is made, and no explanation is available. Often these situations would benefit from a nuanced, holistic approach that takes into account factors like mental health, domestically coerced debt and more. The whole ‘computer says no’ approach could be interpreted as inhumane.

Sakthy Selvakumaran from Bkwai said: “The problem if you don’t understand fundamentally what is going on is that you lose the critical spirit that you must keep to properly assess the results. It is outsourcing the intelligence to the machine. AI tools can bring an intelligence in terms of crunching volume (given a set of pre-defined rules), but it still needs to be combined with “consciousness” – that is not something people can programme into machines”.

Who builds the algorithms?

Some 80% of AI academics are men, only 10% of AI researchers at Google are women, and people of colour make up only 25% of staff at major tech companies. It is for these reasons that diversity becomes a particularly potent issue when it comes to the teams building AI algorithms. As Dr Zara Nanu from Gapsquare points out, “the field is still extremely white and male, and while some AI proponents believe that machines are less biased than humans, the machine itself is not separate from its programmer – and programmers are inherently biased”.

Nanu continues, “A homogenous workforce tends to develop solutions that work best for its own demographic”. Researchers at Columbia University found that the less diverse a team is, the more likely it is to make a prediction error – and all-male teams were especially likely to make the same prediction error as one another. The study provides sound evidence for diverse developer teams: it isn’t just a ‘nice to have’ or the ‘right’ thing to do, it actually makes AI algorithms more accurate.

Analysing the data sets

When we look a little deeper into bias within AI algorithms, it becomes clear that the devil is in the data. Whilst an AI algorithm itself will often be neutral, if the data used to train it is flawed or insufficiently representative, then the conclusions drawn will be biased and discriminatory. As Dr Zara Nanu points out, “While it’s impossible to have a completely unbiased dataset, the importance of treating biased data and knowing what data to use cannot be overstated”.
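One practical starting point is to audit the training data before any model is built. The short Python sketch below is purely illustrative – the recruitment-style columns, the toy records and the 80% threshold are assumptions made up for the example, not any founder’s actual pipeline – but it shows how a few lines of code can surface the kind of skew Nanu describes.

    import pandas as pd

    # Toy recruitment dataset standing in for real historical records (hypothetical).
    data = pd.DataFrame({
        "gender": ["female", "male", "male", "female", "male", "male", "female", "male"],
        "hired":  [0, 1, 1, 0, 1, 0, 1, 1],
    })

    # Positive-outcome rate per group: how often each group was hired historically.
    rates = data.groupby("gender")["hired"].mean()
    print(rates)

    # Disparate impact ratio: lowest group rate divided by highest group rate.
    # Values well below ~0.8 (the "four-fifths" rule of thumb used in US hiring
    # guidance) suggest the labels themselves encode bias.
    ratio = rates.min() / rates.max()
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Warning: historical labels look skewed against one group.")

If the historical labels are already skewed, any model trained on them will simply learn to repeat that skew – which is exactly what happened in the recruitment and healthcare examples above.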

When we spoke to Sakthy Selvakumaran from Bkwai, she pointed out the historical issues within the science sector in general: “Science has a terrible historic relationship with race and gender. Scientists in the past often skewed simple statistics with bias. With AI we run the risks of the same narratives, and people making poor decisions because they mistakenly think that the data ‘proves it’s true’”.

Analysing data sets is another area where hiring diverse programmers – or programmers who know how to spot bias effectively – is key. Although training in diversity and inclusion might not be the most obvious trait to look for when hiring a programmer, hiring someone who can spot discriminatory bias could end up being the difference between an effective AI algorithm and a disastrous one. If the team is diverse enough in the first place, different perspectives can be brought to bear when the data is analysed. As Sakthy Selvakumaran said, “Diversity within programmers is not enough to tackle discrimination, but it provides a good starting point in creating fairer AI”.

Prioritising explainability

As the ethical issues with black box algorithms become clearer, AI companies are being more thoughtful about how they create algorithms in the first place.

Maya Pindeus from Humanising Autonomy told us how her company has developed what they call a ‘white box’ approach: “We have built-in explainability within our artificial intelligence systems. This means that there’s a clear understanding of how decisions are made. We are able to extract key human actions from video footage in real time. Each of those actions is explainable, and then these are used to create higher level, more complex behavioural models”.

Having an AI algorithm that can be explained is crucial to building public understanding and trust. It allows the public and AI users alike to understand the capabilities and limitations of the system they are using. Building in interpretability by design allows for safer and more ethical decisions.
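To make the idea concrete, here is a minimal, hypothetical sketch of interpretability by design. It is not Humanising Autonomy’s actual system – the pedestrian-style features and toy labels are invented for illustration – but it shows why a model whose full decision logic can be printed and audited is easier to trust than a black box.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy features for each observed person: [distance_to_kerb_m, walking_speed_m_s]
    X = [[0.2, 1.4], [0.3, 1.6], [2.5, 0.4], [3.0, 0.2], [0.5, 1.2], [2.8, 0.3]]
    y = [1, 1, 0, 0, 1, 0]  # 1 = likely to step into the road, 0 = not

    # A shallow decision tree: every prediction follows explicit, readable rules.
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Print the complete decision logic so it can be inspected and challenged.
    print(export_text(model, feature_names=["distance_to_kerb_m", "walking_speed_m_s"]))

    # Any individual decision can then be traced back along the same rules.
    print("Prediction for [1.0, 1.0]:", model.predict([[1.0, 1.0]])[0])

In practice, explainable AI ranges from inherently interpretable models like this one to post-hoc explanation tools layered over more complex systems, but the principle is the same: every decision should be traceable to reasons a human can inspect.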

What more can the AI community do?

While bias within AI algorithms might seem like an issue for the AI community alone to tackle, it’s not that simple: the fundamental decisions about what is fair and what is right can’t be left to the AI community alone.

“We have a duty to engage with society as a whole. There are many different ideas, concepts and definitions of fairness. Introducing AI (multiple layers of intricate decision-making systems) only adds to that complexity. Unfortunately there is no formula or algorithm for a correct answer” – Sakthy Selvakumaran from Bkwai

It’s neither ethical nor sensible to leave it up to the AI community to decide what is ‘fair’; the decision needs the backing and understanding of broader society behind it. The people building the algorithms need a deep understanding of the assumptions and trade-offs involved in producing a specific algorithm. The decision-making process should also be transparent enough that users and customers can make informed decisions about using the results, and about the balance of risk and opportunity in doing so.


( Press Release Image: https://photos.webwire.com/prmedia/7/277677/277677-1.jpg )


WebWireID277677




