Who's going to keep an eye on AI?

Artificial intelligence (AI) is in the news as it begins to be used in the systems we encounter every day, with many people undecided whether it will be good or bad for society. IBM has announced a tool that will examine AI decisions - could that help acceptance?


Computing power, programming techniques and internet speeds have now reached a point where AI systems are routinely used to support decision-making.

Speech recognition is one area where AI has brought great improvements, and we're seeing that in the growing number of voice-activated systems on smartphones, in home control devices and in call centres.

Successes in health

Another success has been medical diagnosis: in one experiment, AI systems made faster and more accurate decisions than clinicians when analysing heart scans. Doctors miss heart disorders in around one in five cases, but a system being tested by researchers in Oxford picks up details that doctors can't reliably spot.

The savings from spotting problems that would otherwise be missed are huge, and in these cases the test results are passed on to a specialist for further checks, so a human still oversees the process.

What are AI systems deciding?

The problems come when there is no clear visibility of the rules an AI system has learned for itself. Many operate as a 'black box': the rules they have created can't be inspected directly and can only be inferred from the results they deliver.
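To make that concrete, the only way to study a true black box is from the outside: feed it inputs and compare its outputs across groups. The sketch below is purely illustrative - the "model", its weights and the data are all invented, not taken from any real system.

```python
import numpy as np

# Hypothetical black-box scoring function: we can only observe its
# outputs, not the rules it has learned internally.
def black_box_approve(applicants: np.ndarray) -> np.ndarray:
    weights = np.array([0.8, -0.5, 0.3])  # hidden from the auditor
    return (applicants @ weights > 0.5).astype(int)

rng = np.random.default_rng(0)
n = 10_000
features = rng.normal(size=(n, 3))       # unknown input features
group = rng.integers(0, 2, size=n)       # protected attribute (0 or 1)
features[:, 0] += 0.8 * group            # hidden correlation with the group

decisions = black_box_approve(features)

# Infer behaviour from outputs alone: compare approval rates per group.
rate_0 = decisions[group == 0].mean()
rate_1 = decisions[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.3f}")
print(f"approval rate, group 1: {rate_1:.3f}")

# A large gap ('disparate impact') suggests the hidden rules are
# correlated with the protected attribute, even if it isn't an input.
print(f"disparate impact ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.3f}")
```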

High-profile examples include face recognition systems that work well only on white faces; in 2015, Google apologised after one of its systems labelled photos of African-Americans as gorillas. Last year UK police forces were warned about using systems that predicted whether people were likely to commit crimes based on gender, postcode and other personal data.

IBM offers open tool

The AI Fairness 360 toolkit being launched by IBM will analyse how and why systems make decisions, look for signs of bias and make recommendations for adjustment. It will be open-source, which means other companies will be able to use and extend it.
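For a flavour of the approach, here is a minimal sketch using the open-source aif360 Python package. The tiny loan dataset is invented for illustration, and the class and metric names reflect the package as published, which may change between releases.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Invented toy data: past loan decisions, with 'sex' as the protected
# attribute (1 = privileged group, 0 = unprivileged group).
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 55, 40, 70, 58, 52, 41, 67],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    favorable_label=1.0, unfavorable_label=0.0,
    df=df, label_names=["approved"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias in the historical decisions.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# One of the toolkit's mitigation algorithms: reweigh the training
# examples so both groups carry equal effective weight.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_fair = rw.fit_transform(dataset)

metric_fair = BinaryLabelDatasetMetric(
    dataset_fair, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("after reweighing:", metric_fair.statistical_parity_difference())
```

A statistical parity difference near zero and a disparate impact ratio near one indicate that the favourable outcome is distributed evenly across the groups.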

IBM is not alone. Google launched its 'What-If' tool in September to help users see how AI models are learning, Microsoft has said it is working on a bias-detection toolkit for developers, and Facebook has announced that it is testing a tool to detect bias in its own algorithms.

Openness is a must

If people are to trust the increasing use of AI, transparency is essential. IBM's initiative, and others like it, may go a long way towards earning that trust.
