Contesting the AI-Cybersecurity Nexus: Lessons Learned from the United Kingdom

In an age where so-called artificial intelligence (AI) seems to revolutionise every corner of our lives, it’s no surprise that its intersection with cybersecurity has become a major focus for governments worldwide. Where cybersecurity and AI were previously seen as separate entities, they are increasingly merging in policy discourse. The UK, which promotes itself as a forerunner of technological innovation, offers an intriguing insight into how policy and practice are shifting in the evolving realm of AI and security.

The AI-cybersecurity nexus illuminates trends in the broader technology-security context. The integration of AI to enhance cybersecurity frameworks parallels developments in other domains where technology evolves dynamically, such as autonomous systems and real-time data analytics. The interplay becomes even more critical in mobile technologies such as drones: AI-driven intrusion detection systems are increasingly marketed as a way to prevent the unauthorised hijacking of military drones or to safeguard delivery drones from interference. Yet while increased precision and faster decision-making are often cited as reasons for integrating AI, the AI systems used in drone warfare in Ukraine have a failure rate of around 20 per cent. As AI is increasingly used in all areas of society, it is essential to ask what the goal of using it is. This question mirrors the broader concerns outlined in our current research and in this blog: Who shapes AI’s role in securing these technologies? What narratives dominate, and with what consequences?

The Rise of AI and Cybersecurity Convergence

AI’s integration into cybersecurity isn’t entirely new. For years, machine learning algorithms have been quiet workhorses in cybersecurity, automating data processing and strengthening network defence. However, the launch of OpenAI’s ChatGPT in late 2022 marked a transformative moment (what some UK officials called a “Sputnik moment”) that catapulted AI into public and political discourse. By providing an imaginary (an idea or image of what AI could and can be), ChatGPT created a space for policymakers to engage with AI. Since then, AI and cybersecurity have merged in policy in the UK and around the world. Policymakers, caught off guard by the public excitement and concern surrounding generative AI, have scrambled to align AI with existing (cyber)security frameworks. The result? The birth of a new policy direction: AI-cybersecurity as a linchpin of national security.

Visions at Work

The convergence of AI and cybersecurity didn’t occur in a vacuum. The UK’s ambitious AI strategies of the past decade, which emphasised AI’s transformative potential across sectors, laid the groundwork. However, the advent of ChatGPT reshaped these narratives, linking AI more directly with security concerns. Policymakers’ interactions with the private sector, particularly in Silicon Valley, further cemented this shift. The prominence of AI visions promoted by external organisations, such as OpenAI and other private companies, highlights the UK’s reliance on industry narratives to inform policy.

Technocratic visions of desirable futures of policy, technology, and governance all play into the governmental shift in attention towards AI. In the UK, the merger of AI and cybersecurity is oriented towards a future where advanced AI ensures a safe, resilient digital society. The UK has taken several steps towards fusing the two: a global summit on AI at Bletchley Park, an AI strategy and a myriad of other initiatives all point towards the UK taking clear strides towards embedding AI at all levels of government. Initiatives like the National Cyber Security Centre’s (NCSC) Guidelines for Secure AI System Development offer practical frameworks for integrating AI into cybersecurity practices. These guidelines reflect a pragmatic approach, focusing on immediate applications like securing AI systems against vulnerabilities. AI is becoming integrated into all levels of security policy discourse, from societal safety to secure systems, creating some confusion over what the inclusion of AI is supposed to mean.

The contestation and confusion over what AI means has led some to prioritise AI safety (guarding against misuse) while others emphasise AI security, focusing on defending systems against novel threats. This wide spectrum of AI implementation reflects a broader tension in how governments negotiate the balance between innovation and regulation. AI is not only a technology; it is deeply political. How AI is understood, and who gets to shape the narrative, carries large societal consequences. It is therefore essential that we question the narratives of AI as an all-changing technology: Where do the stories about AI transforming security practices come from? Who benefits from them? And how do they compare with what we see taking place in practice? These questions are essential to ask, not only in the realm of cybersecurity but across dual-use technology more widely. As private sector actors continue to dominate technological development, we must ask what the technology enables, whom it enables, and with what consequences for current security practices.

The Bigger Picture: Reflections on the Cybersecurity-AI Nexus for Broader Technological Developments

What does the merging of AI and cybersecurity mean for peace and security? On one hand, it represents a proactive effort to harness technology for societal resilience. On the other, it risks entrenching power dynamics that privilege private sector visions over democratic oversight. The reliance on technocratic expertise and industry partnerships can limit alternative ways of understanding and engaging with AI, narrowing the space for debate about AI’s broader implications and about where it is used, for example in drones and in targeting operations in war.

As states around the world continue to integrate AI into their security frameworks, the challenge will be to ensure that the technology used remains inclusive, democratic and adaptive. For global security, it’s essential that AI policies prioritise not only safety but also accountability. As technology continues to intersect with security on multiple fronts, the lessons from the UK’s evolving policies underline the need for transparency and inclusivity in AI governance, ensuring it supports equitable technological progress. It is crucial that we question why AI is being integrated, at what cost, and what consequences our societies are willing to accept as the price.

A deep examination of the factors that drive AI-cybersecurity policy is needed. This can improve our understanding of the forces that shape our technological future and the worlds they create. At the Peace Research Institute Oslo (PRIO), reflections on the factors that drive technological advances remind us that the narratives we construct around technology are as much about power and politics as they are about innovation.

***

This blog post was written as part of the Regulair and NFR CYKNOW projects.

***

Lilly Pijnenburg Muller is a Lecturer in War Studies at King’s College London and a visiting researcher at PRIO.
