Giant Fake Ladybugs on Tanks? The Future of Warfare in the Age of Artificial Intelligence and the Need for Ethics

In September this year, Nicolas Chaillan, Chief Software Officer for the U.S. Air Force, unexpectedly resigned.

The reason for his resignation? To protest the slow pace of technological transformation in the U.S. military; he argued that the U.S. had already lost the race for AI dominance to China.

In today's competitive climate around AI development, many warn about the consequences of recklessness in developing and using these technologies in warfare. However, fully understanding what AI looks like in warfare is difficult, and this makes it even more challenging to think through the potential ethical and legal implications of these technologies.

U.S. soldier uses the tactical robotic controller to control an expeditionary modular autonomous vehicle. U.S. Army photo by Sgt. Marita Schwab.

Artificial Intelligence (AI) is itself a broad concept, referring to software that can enable machines to function without direct human intervention.

Because such a broad umbrella of technologies falls under AI, it is hard to grasp the full scope and degree of impact these technologies will have in a military setting. Much of the debate has focused on autonomous weapons systems (AWS) and whether these technologies should be banned. AI's role in military preparedness and operations is, however, considerably broader than AWS, as AI supports decision-making in a variety of ways.

Understanding how AI can contribute to more effective – and safer – military operations is of increasing importance as its use becomes more widespread. In this respect we must not sidestep careful consideration of the attendant ethical and legal challenges.

“Algorithmic Warfare Group (AWG 2.0)”

In this recent article for War on the Rocks, August Cole presents one of the many ways that AI may change warfare. Cole is an author who has written extensively on military technologies; among his other credentials, he coauthored a widely read novel about future warfighting, Ghost Fleet. Cole heads the Strategy Team for the Warring with Machines Project at PRIO.

In the article, Cole describes how the U.S. could implement an Algorithmic Warfare Group (AWG 2.0). This group would be modeled on the Asymmetric Warfare Group, formed in the early 2000s to help the Army learn from new and emerging threats on the battlefield. The AWG 2.0 would, in theory, pair PhDs specializing in AI with branches of the military as advisors. In this role, they would help the military think about how it can use algorithms and AI to better support its work, and also how it can counter and trick other militaries' AI-enabled systems in the field.

In one example that Cole outlines, “AWG advisors could help dispersed Army units spoof machine-vision software used by an adversary’s low-flying artillery-spotter drones. AI-powered systems on those drones will be common as commanders drown in more video feed and visual data than human analysts can keep up with. Tricking those machine-vision systems will help U.S. forces hide in plain sight, or at least buy them time to leave an area or prepare for contact. This might be done by literally crafting glue-on three-dimensional objects to confound machine vision or simpler visual tricks that result in pixel spoofing that can ’turn a car into a dog’”.
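The "pixel spoofing" Cole quotes refers to what machine-learning researchers call adversarial examples: small, structured changes to an image that flip a vision model's prediction while leaving the image looking unchanged to a human. The sketch below is a minimal illustration using the standard fast gradient sign method (Goodfellow et al., 2015), not any specific system from the article; it assumes PyTorch and torchvision are installed and uses a random placeholder image in place of a real photograph.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Load a standard pretrained classifier (weights download on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input: in practice this would be a real photo (e.g. of a vehicle).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Fast gradient sign method: nudge every pixel slightly in the direction
# that increases the classifier's loss on its own current prediction.
output = model(image)
label = output.argmax(dim=1)          # whatever the model currently "sees"
loss = F.cross_entropy(output, label)
loss.backward()

epsilon = 0.03  # per-pixel perturbation budget; small enough to be near-invisible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The perturbed image often receives a different label despite looking
# almost identical to a human observer.
with torch.no_grad():
    print("original prediction: ", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

Physical-world versions of the same idea, such as printed patches or the glue-on three-dimensional objects Cole imagines, exploit this same gradient-driven fragility of machine-vision systems.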

Ethics in a shifting combat landscape

However, as the article outlines, members of the AWG 2.0 would also work to determine what data is most useful and relevant in a combat context. This is key because, as many AI developers highlight, data is vital to the success of AI and the algorithms that guide these systems. This presents both a challenge and an opportunity to think about how such data can and should be used ethically.

We have seen many examples of how biased data produces AI systems that are themselves biased, with negative gendered and racial implications in their use. As I have written before for PRIO Blogs, these examples of bias raise a number of social and ethical concerns about the use of AI in warfare. As Cole points out, groups such as the AWG 2.0 should observe and learn from these mistakes to inform how data is integrated and used in future warfare. This also presents an opportunity, as Cole concludes, "for the U.S. and its allies to set a precedent and standard for the responsible use of data and AI, which private companies then might even strive to follow."
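To make the mechanism concrete, here is a minimal, synthetic illustration (all numbers and features are invented for the sketch, not drawn from any real dataset): a classifier trained on data in which one group is heavily underrepresented learns a decision boundary tuned to the majority group, and its accuracy drops for the minority group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a toy group whose features and label boundary depend on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is scarce and distributed
# differently, so the model mostly learns group A's decision boundary.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy is typically much
# lower for the underrepresented group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

Real deployments are far messier than this toy example, but the failure mode is the same: the model faithfully reproduces whatever skew its training data carries.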

Understanding the multidimensional impact that these new technologies will have in warfare and society more broadly is complex. Having a nuanced understanding of the many different ways these technologies could be implemented is key as nations and institutions around the world struggle to create the ethical and legal frameworks needed.
