
How far away are we from fully automated defence?

Defence has been effectively digitised in many ways. But there are still certain elements of combat that require good old-fashioned human reflexes and intuition – for now.

Last August, an AI went against a human F-16 pilot in a virtual dogfight, and won! In fact, it completely trounced the human combatant (a skilled District of Columbia Air National Guard pilot) in five out of five attempts. The AI in question was developed by Heron Systems, as part of a DARPA-led initiative to create AI for fighter planes.

The initiative took the form of a year-long round-robin tournament to find the most effective combat AI. Heron went up against several defence contractors, startups and university laboratories, and pulled out way ahead of the pack. The aim of the ‘AlphaDogfight’ initiative was to highlight the tactical advantages of AI-controlled weapon systems.
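As a purely illustrative aside, the logic of a round-robin evaluation is simple enough to sketch in a few lines of Python. The agent names, the toy ‘engagement’ rule and the scoring below are invented for the example; the real trials ran full flight simulations.

```python
from itertools import combinations

# Hypothetical agent names -- the real field included defence contractors,
# startups and university labs; these stand-ins are purely illustrative.
agents = ["agent_a", "agent_b", "agent_c", "agent_d"]

def simulate_engagement(first: str, second: str) -> str:
    """Placeholder for a simulated dogfight between two agents.

    Returns the winner's name. The real trials ran full flight simulations;
    this toy rule exists only to make the sketch runnable.
    """
    return min(first, second)

# In a round robin, every agent meets every other agent.
wins = {agent: 0 for agent in agents}
for a, b in combinations(agents, 2):
    wins[simulate_engagement(a, b)] += 1

# The agent with the most wins goes forward to face the human pilot.
print(max(wins, key=wins.get))
```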

The theory behind the win is that the AI was able to move and make decisions more quickly without being subject to the physical impact of g-force, which can lead to disorientation, discomfort and pain in humans, even those trained to withstand it. For an AI, those physical factors are simply irrelevant.

However, in real-world combat, an AI would need to deal with more grey areas than it did during this virtual mission. What if, for example, a hit isn’t guaranteed? The Heron Systems ‘Falcon’ AI also knew everything there was to know about its opponent, but how would an AI with substantially less information on the enemy know how to engage? With these questions still lingering, it seems likely that the AI algorithms of the near future will be more wingmen than pilots, but that’s not to say there isn’t still work being done.


How far away is autonomous defence, really?

This is a large and very complicated question, with various countries and forces at very different stages; when it comes to the notion of total AI warfare, not a single one comes even close. Nonetheless, here we’ll focus on a few examples from the US (currently the leader in the field) that offer greater insight into what autonomous defence might look like in the coming years.

 


Air

After the initial AI test proved such an astounding success back in August, DARPA immediately moved on to the next phase of its autonomous aerial dogfighting programme, Air Combat Evolution (ACE). The end goal is for the ACE programme to contribute to the idea of ‘Mosaic Warfare’, which would see a vast and varied fleet of autonomous weapons supervised by humans.

This (in theory) matches the best aspects of the AI with the best aspects of human pilots, with the AI ‘wingman’ allowed to focus on more logical jobs while the human ‘captain’ concentrates on less binary problems.
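To make that division of labour a little more concrete, here’s a minimal, hypothetical sketch of how such a human/AI task split might be expressed in software: well-defined, rule-based jobs go to the autonomous ‘wingman’, while anything ambiguous gets escalated to the human ‘captain’. The task names and the confidence threshold are made up for illustration and don’t reflect any real ACE design.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    well_defined: bool   # does the task have a clear, rule-based answer?
    confidence: float    # how confident is the AI in handling it (0-1)?

def route(task: Task) -> str:
    """Route a task to the AI 'wingman' or escalate it to the human 'captain'.

    The 0.9 threshold is an arbitrary illustration, not real doctrine.
    """
    if task.well_defined and task.confidence >= 0.9:
        return "AI wingman"
    return "human captain"

tasks = [
    Task("maintain formation", well_defined=True, confidence=0.97),
    Task("track incoming contact", well_defined=True, confidence=0.92),
    Task("decide whether to engage", well_defined=False, confidence=0.60),
]

for t in tasks:
    print(f"{t.name} -> {route(t)}")
```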

Land

The US Army has always been quite forward-thinking when it comes to AI and, as such, the technology is already being used in hundreds of day-to-day applications. According to John Fossaceca from the US Army Research Laboratory, predictive maintenance is one of the most important ways in which AI is currently being used. This means the AI is able to “predict when vehicle parts need to be replaced or serviced before the vehicle breaks down.” AI is also used to aid talent management by “identifying the competencies and attributes that lead to successful performance that can then be used to find potential candidates for positions in the Army.”
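To give a flavour of what predictive maintenance looks like under the hood, here’s a minimal sketch that trains an off-the-shelf classifier on synthetic sensor data and flags a vehicle for servicing before it breaks down. The feature names, the labelling rule and the risk threshold are all made up for illustration; they’re not drawn from any actual Army system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic, made-up sensor readings for illustration only:
# columns = [engine_hours, vibration_rms, oil_temperature_c]
rng = np.random.default_rng(0)
X = rng.normal(loc=[500, 2.0, 90], scale=[200, 0.5, 10], size=(1000, 3))
# Toy labelling rule: long-running, hot, vibrating engines tend to fail.
y = ((X[:, 0] > 600) & (X[:, 1] > 2.2) & (X[:, 2] > 95)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a new vehicle and flag it for servicing before it breaks down.
new_vehicle = np.array([[720, 2.6, 99]])
failure_risk = model.predict_proba(new_vehicle)[0, 1]
if failure_risk > 0.5:   # arbitrary threshold for the sketch
    print(f"Schedule maintenance (estimated failure risk: {failure_risk:.0%})")
```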

More profoundly, however, the army isn’t as interested in using AI to pilot tanks and robotic fighters (easy there James Cameron) as it is in using AI to help it make the most of the real battleground of the 21st century: big data. Of course, there is still active research into autonomous vehicles, electronic warfare and augmented reality but, right now, it’s all about that data!

Sea

Not content with merely dipping its toes into land and air, DARPA has also invested a significant amount in the AI capabilities of the US Navy. The 132-foot-long Sea Hunter has been in development since 2016 and is designed to travel the oceans for months at a time without a single human soul on board. The idea is that it will act as a vessel that searches for enemy submarines and reports them to human operators.

Not only will the ability to deploy fleets of unmanned vessels save the Navy countless millions, but it will also mean they can be sent into potentially dangerous situations without risking the loss of any human life. Utilised in tandem with human operatives, it’s the stuff of Tom Clancy’s dreams!
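That ‘search and report’ pattern is a classic example of keeping a human on the loop: the autonomous vessel detects and flags contacts, but the decision about what to do with them stays with a person. The toy sketch below illustrates the loop; the sonar function, confidence scores and reporting threshold are entirely made up.

```python
import random

def sonar_sweep():
    """Stand-in for a sonar-processing pipeline: returns detected contacts.

    Bearings and confidence scores are randomly generated for illustration.
    """
    return [
        {"bearing": random.uniform(0, 360), "confidence": random.random()}
        for _ in range(random.randint(0, 3))
    ]

def report_to_operator(contact):
    # A real system would send this over a secure datalink; we just print it.
    print(f"Contact at bearing {contact['bearing']:.0f} deg, "
          f"confidence {contact['confidence']:.2f} -- awaiting human decision")

# The vessel searches and reports; the decision to act stays with a person.
for _hour in range(3):  # a toy patrol loop standing in for months at sea
    for contact in sonar_sweep():
        if contact["confidence"] > 0.7:  # arbitrary reporting threshold
            report_to_operator(contact)
```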

The drawbacks

Of course, it’s never going to be completely plain sailing when it comes to artificial intelligence. First, there’s the aforementioned data issue to overcome: obtaining and sharing massive datasets is always going to be a challenge for an institution that, by design, needs to classify and restrict access to that same data.

From a regulatory perspective, there are also countries that have flat-out banned the use of such weapons. Back in 2018, the UN Secretary-General, António Guterres, also urged states to prohibit weapons systems that could, by themselves, target and attack human beings.

What does the future of AI warfare look like?

While it is a bit of an arms race, the next step isn’t a world of robot soldiers. AI is already being used in creative ways to partner with humans. In air combat, where the most obvious human/AI partnerships have already been formed, it has very much been a case of the AI acting as a ‘wingman’ rather than as a captain or pilot. This is not the machine controlling itself without oversight; this is the AI being used as a tool and, for the foreseeable future, that’s probably where it will begin and end.

As these technologies develop and evolve, however, they will eventually allow for more advanced unmanned options that boost the safety and efficiency of modern warfare. Whether regulations will evolve enough to make this possible remains to be seen, but these are still early days for the technology, and with time comes understanding.

How can human beings train and prepare for this autonomous future? That’s a very good question, and one that deserves a discussion of its own. For now, though, the key appears to be finding ways for humans and machines to work together. Half the battle is deciding exactly which tasks AI is best suited to. Algorithms can be trained to recognise patterns but, when it comes to visuals and multitasking, humans still have the upper hand, after all.

Ultimately, AI is imperfect, but then so are human beings. By putting the two together in the right way, perhaps they can mitigate each other’s imperfections and scratch each other’s backs.



Working on a project that might benefit from our AI and deep learning-enabled GPUs? Drop us a message, or call our training and simulation team on 02392 322 500.

 
