AI Ethics and Engineering


Recently, Witekio took part in a panel on AI ethics and engineering at this year’s Engineering Design Show.

We are not specialists in Artificial Intelligence, nor do we play a huge role in the development of that field, but we are embedded software experts, meaning we work with the devices that use the technology.

We see first-hand the highs and lows of the ever-evolving AI world and work hard to ensure end-users get the best experience possible.

In this article, we outline those highs and lows and discuss some of the ways you can make sure your approach to AI ethics is crystal clear.

It’s important to note that when we talk about AI in this article, we also include subsets such as machine learning and computer algorithms.

What is the discussion around AI and Ethics?

It is all about bias. Whether purposeful or unintended, bias can have a massive impact on a user’s experience.

AI bias occurs when an algorithm produces results that are systemically prejudiced in one way or another.

A perfect example is your chosen voice-activated virtual personal assistant and how it responds to you. Research has found that even the most accurate devices still show a 13% disparity between how well they recognise and understand male voices and female voices.
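To make a figure like that concrete, here is a minimal sketch of how a per-group accuracy gap can be measured, assuming you have evaluation results labelled with each speaker's group. The data and field names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation results: each entry records the speaker's group
# and whether the assistant understood the utterance correctly.
results = [
    {"group": "male", "correct": True},
    {"group": "male", "correct": True},
    {"group": "female", "correct": True},
    {"group": "female", "correct": False},
    # ... in practice, thousands of labelled utterances per group
]

totals = defaultdict(int)
hits = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    hits[r["group"]] += r["correct"]

accuracy = {group: hits[group] / totals[group] for group in totals}
print(accuracy)
# A persistent gap between the groups (say 0.95 vs 0.82) is exactly
# the kind of disparity the research above describes.
```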

The bias could have happened for multiple reasons including unintended cognitive biases or real-life prejudices from the people who are designing or training the AI.

In this case, the bias arose because voice recognition software learns to encode voices from a database, and those databases often contain far more male voices than female ones.
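One cheap safeguard is to audit the make-up of the training database before training anything at all. A minimal sketch, assuming each recording carries a speaker-group label (the labels and numbers here are hypothetical):

```python
from collections import Counter

# Hypothetical training-set metadata: one group label per voice sample.
sample_labels = ["male", "male", "male", "female", "male", "female", "male"]

counts = Counter(sample_labels)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} samples ({n / total:.0%})")
# male: 5 samples (71%)
# female: 2 samples (29%)
# A skew like this is a warning sign before any model is trained.
```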

Types of AI Bias

  • Algorithm – the model itself, or the way it processes the data, is flawed.
  • Exclusion – important data is left out for some reason, sometimes because you do not realise it is significant.
  • Measurement – the data is not accurate enough; for example, always rounding values up introduces a systematic skew (see the sketch just after this list).
  • Prejudice – the data or algorithms reflect existing prejudices, stereotypes and/or faulty societal assumptions, thereby introducing those same real-world biases into the machine learning itself.
  • Sample – the data is not large enough or representative enough.
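The measurement point is easy to demonstrate: rounding every value in the same direction shifts all of them the same way, so the error never averages out. A small, self-contained illustration:

```python
import math
import random

random.seed(0)
true_values = [random.uniform(0, 10) for _ in range(10_000)]

always_up = [math.ceil(v) for v in true_values]  # always rounds up: biased
nearest = [round(v) for v in true_values]        # rounds to nearest: roughly unbiased


def mean(xs):
    return sum(xs) / len(xs)


print(f"true mean:         {mean(true_values):.3f}")
print(f"always rounded up: {mean(always_up):.3f}")  # shifted up by about 0.5
print(f"round to nearest:  {mean(nearest):.3f}")    # close to the true mean
```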

The other major problem with AI bias stems from the often-cited concern that “no one knows how AI works”.

The workings of a neural network are, in fact, well understood. What is really meant by these statements is that, in any given trained AI system, it is unclear which aspects of the data the AI has identified as the patterns it uses for matching.

That means unwanted correlations in the data can be picked up and used for decision-making when they should not be. You would never write a filter that excludes people of colour from job applications, but your AI could learn to exclude the CVs of minorities because those CVs correlate with historically unsuccessful ones.
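This proxy effect is easy to reproduce. In the sketch below, which uses entirely fabricated synthetic data, the protected attribute is never given to the model, yet the model rediscovers the historical gap through a correlated "postcode" feature:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Synthetic hiring data. The protected attribute is NOT a model input,
# but "postcode_score" is strongly correlated with it (a proxy feature).
protected = rng.integers(0, 2, n)                   # group 0 or group 1
postcode_score = protected + rng.normal(0, 0.3, n)  # proxy for the group
skill = rng.normal(0, 1, n)                         # legitimate feature

# Historical hiring outcomes were biased against group 1.
hired = ((skill - 0.8 * protected + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([postcode_score, skill])        # protected attr excluded
model = LogisticRegression().fit(X, hired)

predictions = model.predict(X)
for group in (0, 1):
    rate = predictions[protected == group].mean()
    print(f"group {group}: predicted hire rate {rate:.0%}")
# The model reproduces the historical gap via the proxy feature,
# even though the protected attribute was never an input.
```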

Why is bias an issue and why does it need to be addressed?

AI is increasingly relied on in life-changing situations, such as facial recognition technology in law enforcement systems.

If an AI is biased in use cases such as mugshot identification, or even ID authentication, it will have a negative impact on the group the AI is biased against.

What do organisations need to do to understand and fix bias?

1. You need to acknowledge that AI systems are trained on historical data, which reflects mistreatment, imbalances, and biases rooted in complex societal problems, and that an AI given this data will, by default, keep replicating those unwanted results into the future.

2. To fix this, you will need to curate the data used to train recommendation and filtering systems; it sounds obvious when you step back and spell it out, but that data needs to be representative of the type of results you want (a minimal rebalancing sketch appears below).

3. When using AI for live applications that affect people's lives, ensure a lengthy trial period before going to market, during which humans check all results. The AI should be phased in to replace this human check, with periodic reviews to evaluate whether to continue or to stop and rework the solution (a simple gating sketch appears below).

4. Invest in independent testers, in providers of good training and test data, and in involving high-level product owners and stakeholders to discuss, test, and fix any bias found.

5. Increase diversity in training data. You need a diverse base to ensure that a single, homogeneous way of thinking does not dominate.

For example, IBM took note of negative results about bias in its facial recognition system, went away and reworked its algorithm, and came back with closer to 99% accuracy for people of all genders and races. It did that by increasing the diversity of its training data.
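On the curation point in step 2: in practice, curating often starts with something as simple as rebalancing. A minimal sketch using random oversampling of under-represented groups (the groups and sizes are hypothetical, and oversampling is a crude first step, not a substitute for collecting genuinely diverse data):

```python
import random

random.seed(0)

# Hypothetical training set: (features, group) pairs, skewed 80/20.
dataset = [("sample", "a")] * 800 + [("sample", "b")] * 200

by_group = {}
for item in dataset:
    by_group.setdefault(item[1], []).append(item)

# Oversample every group up to the size of the largest one.
target = max(len(items) for items in by_group.values())
balanced = []
for items in by_group.values():
    balanced.extend(items)
    balanced.extend(random.choices(items, k=target - len(items)))

print({g: sum(1 for _, grp in balanced if grp == g) for g in by_group})
# {'a': 800, 'b': 800}
```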
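And on the phased rollout in step 3: one way to phase an AI in is confidence-based gating, where low-confidence decisions are routed to a human reviewer and the threshold is only lowered as periodic reviews build trust. A simplified sketch; the threshold and the review function are placeholders:

```python
def human_review(case):
    """Placeholder: in a real system this queues the case for a person."""
    print(f"escalating {case!r} to a human reviewer")
    return "human decision"


def decide(case, model_prediction, confidence, threshold=0.95):
    """Accept the model's answer only above a confidence threshold.

    During the trial period the threshold is set so high that humans
    effectively check every result; it is lowered gradually after each
    periodic review.
    """
    if confidence >= threshold:
        return model_prediction
    return human_review(case)


print(decide("application-001", "approve", confidence=0.99))  # model decides
print(decide("application-002", "reject", confidence=0.60))   # human decides
```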

Conclusion

To move forward and get the real benefits of AI, without any of the ethical question marks, you need transparency.

You need to be able to explain AI to your stakeholders and end-users.

That means explaining how it generates its predictions and which features of the data it uses to make decisions, and assessing the results to show that they stand up.
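Tooling for this exists today. For classical models, permutation importance is one standard way to report which features actually drive predictions; the minimal sketch below uses scikit-learn on synthetic data, with invented feature names:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic data: only the first feature actually carries signal.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + rng.normal(0, 0.2, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["signal", "noise_1", "noise_2"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# A report like this is one concrete way to show stakeholders which
# features the model relies on when it makes decisions.
```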

It needs to be designed with inclusivity in mind and trained with the most complete and representative data possible. Then tested thoroughly and often.

 

** Author’s note: Thank you to Ed Langley – Witekio UK & US Spinoff Engineering Manager – for his support and help with this topic **

Have an AI-enabled device that you need help with?
Patricia Fieldhouse - Senior Project Manager at Witekio
28 October 2022