Can Videogames Shape Public Understandings of Weaponized Artificial Intelligence?



Millions of consumers play videogames like Call of Duty and Tom Clancy’s Ghost Recon. The battlefields these series portray increasingly feature drones, tanks, and robots that select and engage targets on their own, thanks to artificial intelligence (AI) enhancements. Although prior research suggests that pop-culture portrayals like these can shape public understandings of AI-weaponization, our study on the production and consumption of military-themed videogames suggests that this influence has its limits.

Since the 2010s, military-themed videogames have increasingly featured autonomous weapons systems enhanced by AI. Robots reminiscent of Boston Dynamics' BigDog stalk arctic battlefields in Ghost Recon: Future Soldier, while drone swarms haunt a futuristic 2054 Seoul warzone in Call of Duty: Advanced Warfare. Videogame narratives portray weaponized-AIs like these as existential threats to humankind, intended by game developers as warnings of dystopian futures.

In our Security Dialogue article, we explore how such videogames can shape players’ understandings of weaponized-AI through an actor-network theory lens. To do so, over six months we conducted netnographic fieldwork in which we watched hours of YouTube footage in which consumers played military-themed videogames and commented on their gaming experiences in real-time. In addition, we immersed ourselves in the imagery and texts of fan Wiki pages, Reddit forums, press releases by videogame developers, and gaming media stories associated with the 19 military-themed videogames we studied.

Based on our data, we find that videogame portrayals are a common way in which many consumers first learn about AI's potential warfare applications. In these instances, players are confronted with the realization that future battlefields (and conflicts that may reach their doorsteps) can be made more deadly by AI-weapons. This part of our study echoes prior research showing how pop-culture (e.g., films, TV shows) can influence public understandings of new political developments, including how the meanings contained in such materials can be mobilized to support or oppose the regulation of AI-weapons.

However, we also find that what makes a particular form of pop-culture mass-marketable can limit the kinds of meanings it can offer to consumers. In our context, the mass-marketability of videogames depends on how fun they are to play, and enemy AI-weapons that easily kill consumers role-playing as human soldiers rarely make for fun gameplay. In our data, we observe that developers consequently tone down how challenging AI-weapons are to play against. As a result, players become proficient at destroying scores of AI-augmented (but less dangerous) drones, tanks, and robots, blunting the message about how lethal these weapons are supposed to be to humans in the same videogames' narratives.

Through our work, we invite readers to consider the limits of what makes a pop-culture form both widely influential on the public and mass-marketable. Analogous to videogames and fun gameplay, must late-night satire balance informativeness and humor? Must films abide by a three-act structure with a happy ending to be widely influential? We hope that our study opens doors for future scholarship at these intersections between pop-culture and politics.
