Artificial Intelligence, Warfare, and Bias


When you think about Artificial Intelligence (AI) and war, you might find yourself thinking about killer robots, like those we have seen in movies such as The Terminator.

In reality, the use of AI in warfare looks quite different from these popularized images. Today, many countries around the world are exploring AI and implementing AI systems in their militaries and defense programs. With this increased interest has come a growing debate about the ethics and legality of using AI in warfare. Among the many concerning aspects of AI in warfare, one that is particularly troubling, yet has received comparatively little attention, is bias in AI systems.

Examples of biased AI

Certain lessons can be learned by looking at examples of biased AI in non-military settings. A number of investigations and studies have made it increasingly clear that the biases that exist within our society also become embedded in AI. Examples include facial recognition programs, such as the one developed by Amazon, that had little trouble recognizing white men’s faces but were considerably less accurate at identifying Black women and other groups of people. Another example is the risk-assessment program used in US courts that falsely flagged Black defendants as likely to reoffend at roughly twice the rate of white defendants. It is clear that biased AI can have serious and real consequences in society.

Different entry points of bias

A key challenge is that there are many entry points through which bias can be introduced into AI. It can stem from the often unintentional biases held by a program’s developers, a real issue in a workforce that lacks diversity. It can also occur when the data used to train a program under-represents or over-represents a group, as happened with Amazon’s facial recognition program. There are many other ways in which bias can become embedded in AI-based programs, further underscoring the ethical and social dilemmas connected to the use of AI in warfare.
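To make the data-imbalance entry point concrete, here is a minimal, purely illustrative sketch in Python: a classifier is trained on synthetic data in which one group is heavily under-represented, and its accuracy is then compared across groups. The group labels, numbers, and the scikit-learn model are assumptions made for illustration; nothing here is drawn from any real military or commercial system.

```python
# Hypothetical illustration of bias introduced by an under-representative
# training set. Synthetic data only; group labels and numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples for one group; `shift` moves its feature distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is heavily under-represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets: accuracy is noticeably lower for group B,
# because the learned decision boundary is fitted almost entirely to group A.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The same pattern shows up whenever one group supplies most of the training examples: the model optimizes for that group and quietly degrades for everyone else.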

Three considerations for biased AI and warfare

Drawing on these examples of biased AI in non-combat settings, three main concerns emerge about how biased AI could unfold in warfare and combat settings.

The first revolves around how AI might play a role in threat assessments. These assessments would be based on algorithms in which certain attributes are selected to help determine how much of a threat a target poses. One issue here is the role that biases often play in shaping who we see as a threat in the first place. Sarah Shoker, a researcher examining gender bias and drone warfare, argued that under the Bush and Obama administrations there was a lower threshold for killing men than women with drones, and that this stemmed from biases connected to gender, age, ethnicity, and religion. Documents from the Trump administration show that similar decisions were made, with men seen as more of a threat than women, and thus as more “killable”, regardless of whether they were actually combatants. This raises concerning questions about which attributes are included in a threat assessment and how they may falsely identify individuals as threats.
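The mechanism can be illustrated with a deliberately simplified, entirely hypothetical scoring sketch. The attributes, weights, and threshold below are invented, and do not reflect any actual targeting system; the point is only to show how a heavy weight on a demographic proxy such as “military-age male” can flag a person exhibiting no suspicious behaviour while missing an actual combatant.

```python
# Entirely hypothetical threat-scoring sketch. The attributes, weights, and
# threshold are invented to illustrate how a biased weight on a demographic
# proxy inflates false positives for one group. Not a real system.
from dataclasses import dataclass

@dataclass
class Observation:
    military_age_male: bool   # demographic proxy, not evidence of intent
    carrying_equipment: bool  # observed behaviour
    near_known_site: bool     # contextual signal

# A heavy weight on the demographic proxy dominates the score.
WEIGHTS = {"military_age_male": 0.6, "carrying_equipment": 0.2, "near_known_site": 0.2}
THRESHOLD = 0.5

def threat_score(obs: Observation) -> float:
    score = 0.0
    if obs.military_age_male:
        score += WEIGHTS["military_age_male"]
    if obs.carrying_equipment:
        score += WEIGHTS["carrying_equipment"]
    if obs.near_known_site:
        score += WEIGHTS["near_known_site"]
    return score

# A civilian man with no suspicious behaviour already crosses the threshold,
# while an armed woman near a known site does not.
civilian_man = Observation(True, False, False)
armed_woman = Observation(False, True, True)

for label, obs in [("civilian man", civilian_man), ("armed woman", armed_woman)]:
    flagged = threat_score(obs) >= THRESHOLD
    print(f"{label}: score={threat_score(obs):.1f}, flagged={flagged}")
```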

Another concern revolves around the trustworthiness of facial and object recognition programs when it comes to targeting systems for the battlefield. As militaries move further into developing autonomous vehicles and weapon systems, these platforms will rely on a variety of AI programs to maneuver and engage in the combat space. When it comes to targeting, and whom to target, facial and object recognition will be key to these systems. Such programs can be less than accurate at recognizing certain racial or ethnic groups, often due to biases built into the systems’ software.
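One way to surface such disparities before deployment is a per-group error audit. The sketch below is a hypothetical example, assuming a labelled evaluation set annotated with demographic group; it simply tallies false positive and false negative rates per group, the kind of gap that studies of commercial facial recognition systems have documented.

```python
# Hypothetical audit sketch: given a recognition model's predictions on a
# labelled evaluation set annotated by demographic group, report per-group
# false positive and false negative rates. The records here are made up.
from collections import defaultdict

# Each record: (group, ground_truth_is_target, model_flagged_as_target)
records = [
    ("group_a", False, False), ("group_a", True, True),  ("group_a", False, False),
    ("group_b", False, True),  ("group_b", True, False), ("group_b", False, True),
    # in a real audit these would come from a held-out evaluation set
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, is_target, flagged in records:
    c = counts[group]
    if is_target:
        c["pos"] += 1
        if not flagged:
            c["fn"] += 1
    else:
        c["neg"] += 1
        if flagged:
            c["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Large gaps between groups in either rate are exactly the failure mode that becomes intolerable when the downstream decision is the use of force.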

A third consideration revolves around cybersecurity defense and hacking. Many gender researchers have argued that the very idea of security is gendered, and this has implications for how defense is prioritized. Based on gendered norms, that which society views as masculine, and thus as more important, is also prioritized in cybersecurity and defense. One outcome is that organizations viewed as more masculine, such as the military and large corporations, are given the tools and resources to build up cybersecurity defenses, while sectors such as healthcare, education, and non-profits are left more vulnerable. It has already become clear that these institutions are both vulnerable and targeted, as in the 2017 WannaCry attack that crippled parts of the UK’s National Health Service. As cybersecurity operations become increasingly reliant on AI, we should also consider how they might inherit these forms of bias in what gets prioritized for protection and response.

The need for further investigation

Research on the military implications of bias in AI-based systems is still in its early stages. While there is a growing body of reports and literature outlining the ethical challenges of using AI in warfare, and mounting evidence of biased AI in civilian settings, there is little dialogue between these two strands. There are important lessons to be taken from examples of biased AI in civilian settings, and hopefully the growing body of insights in this domain will enable a better understanding of gender bias in AI military applications.
