
Boosting trust in AI

Posted 12 July 2021

Dr Nira Chamberlain, Professional Head of Discipline for Data Science at Atkins, asks how we can be sure that artificial intelligence (AI) can be relied upon to drive automated military vehicles.



Above: Dr Nira Chamberlain. Courtesy Atkins.


The defence sector has been using automated vehicles for decades, and a new generation of automated vehicles, powered by AI, is on the rise as we place more trust in AI algorithms. Yet can we expect to have full confidence in these vehicles when we do not understand in detail how the AI behind them works?

AI-supported automated vehicles will soon be operating widely in the defence space. You need only look at R&D programmes such as AIMM, the US Army's flagship AI research programme for manoeuvrability, or the UK Government's assertion that deploying AI will provide an advantage over adversaries in military planning.

However, the fact remains that it is hard for most of us to understand the actual rationale behind predictive AI decision-making. How does it work behind the scenes? What happens if these machines make the wrong decision? And won't that have an impact from a legal, ethical and safety-critical point of view?

Surely we need to start interrogating their responses, and not settle for the computer saying 'yes' or 'no' without our really understanding why.

Many automated vehicles are still in the prototype phase and can be prone to making unusual, even erratic, decisions. To understand why, we must look into what is driving that decision-making.

While any good mathematician is expected to show their calculations, with AI, which is powered by systems generating predictions from billions of calculations every minute, it is not so straightforward. One response is to develop a reverse engineering algorithm which, when wrapped around the black box that drives the automated vehicle, can approximate the logic and rationale behind every prediction made.
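The article does not name a specific technique, but one common way to realise such a reverse engineering wrapper is a global surrogate model: a simple, interpretable model trained to mimic the black box's outputs. The sketch below, in Python with scikit-learn, is purely illustrative; the sensor features, manoeuvre labels and the random-forest stand-in for the black box are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical sensor inputs: [speed, obstacle_distance, heading_error]
X = rng.uniform(0.0, 1.0, size=(5000, 3))
# Stand-in manoeuvre labels: 0 = straight, 1 = turn left, 2 = turn right
y = (X[:, 1] < 0.3).astype(int) + 2 * ((X[:, 1] >= 0.3) & (X[:, 2] > 0.7))

# An opaque model standing in for the vehicle's black box
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit an interpretable surrogate to the black box's *predictions*,
# not to the ground truth: the surrogate approximates the model itself
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
```

The key design choice is that the surrogate is trained on the black box's predictions rather than on real outcomes, so what it learns is an approximation of the model's internal logic.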

In a defence context, this will prove to be a highly valuable asset. By reverse engineering the AI process, we can reveal with more granularity how decisions are arrived at: why, for example, an automated vehicle is instructed by the algorithm to turn left, turn right, stop, start or go straight ahead.
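To illustrate that granularity, a surrogate decision tree can be printed as a readable rule set, and its decision path queried for any single command. The snippet below is again a hypothetical sketch; the feature names and thresholds are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical inputs and a simple two-command task
X = rng.uniform(0.0, 1.0, size=(2000, 3))
y = (X[:, 1] < 0.3).astype(int)  # 1 = "turn left", 0 = "go straight"

surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full rule set the surrogate has recovered, in readable form
print(export_text(
    surrogate,
    feature_names=["speed", "obstacle_distance", "heading_error"]))

# The rationale behind one specific command
sample = np.array([[0.5, 0.1, 0.4]])
command = "turn left" if surrogate.predict(sample)[0] == 1 else "go straight"
print("Command:", command)
print("Decision-path nodes:", surrogate.decision_path(sample).indices)
```

Here the printed rules read roughly as 'if obstacle_distance is below a threshold, turn left', which is exactly the kind of rationale a human reviewer can challenge.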

Reverse engineering can help us understand how tasks are accomplished through deductive reasoning. Take a washing machine as an example: dirty laundry goes in at point A, and clean laundry comes out at point B. To understand better how that process works, you can work backwards, stripping away each stage, until you are right back at the blueprint. Once there, you have revealed in reverse the chain of events that, going forward, results in clean laundry.

Similarly, with AI algorithms, we can reverse engineer to build a picture of the logic behind each decision made in the process. We will never have a totally full picture: using a 1,000-piece jigsaw as an analogy, reverse engineering algorithms give us the means to see where around 750 of the pieces go. But that means we can pinpoint decisions with a higher degree of certainty, around 75%, than ever before.
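That jigsaw figure can be made precise as fidelity: the fraction of inputs on which the surrogate agrees with the black box. A minimal sketch, with hypothetical data and models as before:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Hypothetical training data and a black-box stand-in
X = rng.uniform(0.0, 1.0, size=(5000, 3))
y = (X[:, 0] + X[:, 1] ** 2 > 1.0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A deliberately shallow surrogate: simple enough to read, so it
# will not capture every piece of the jigsaw
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: agreement between surrogate and black box on fresh inputs
X_test = rng.uniform(0.0, 1.0, size=(2000, 3))
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.0%}")
```

A fidelity of, say, 75% would correspond to seeing where 750 of the 1,000 pieces go; making the surrogate deeper raises fidelity at the cost of readability.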

In effect, the black box driving the vehicle is a mathematical model: a simplified representation of a complex system. So what we have effectively done is produce a mathematical model of a mathematical model. This gives us not only more trust in the deployment of AI-driven automated vehicles but also more confidence in our investment decisions, as we can be assured that AI is safe and reliable in this context.

It is not enough for major manufacturers to claim that AI algorithms cannot be challenged. They can. And by gaining a better understanding of how those AI-driven black boxes reach the decisions they do, we can put protocols and processes in place to mitigate any perceived risk. This will be a gamechanger in harnessing the power of AI wherever critical decisions need to be made; within the defence context, that is a matter of course.


Dr Nira Chamberlain, PhD HonDSc, is the Professional Head of Discipline for Data Science at Atkins, a member of the SNC-Lavalin Group. He is the current President of the Institute of Mathematics and its Applications (IMA) and a Visiting Fellow of Loughborough University's Department of Mathematical Sciences. In 2019 the Inclusive Tech Alliance named Nira one of the Top 100 Most Influential Black, Asian and Minority Ethnic leaders in UK tech. Nira holds two mathematical doctorates and has been listed in the Powerlist of the 100 Most Influential People of African or African Caribbean heritage in the UK for four years running (2018-2021).


