Tech-Based States of Emergency: some key takeaways

The COVID-19 pandemic has accelerated pre-existing technological trends. As states introduce new rules and technological solutions to fight the pandemic, it can be tempting to view these applications as neutral scientific decisions. However, we must examine them critically, because times of crisis set standards that can last long after the state of emergency ends. The 9/11 attacks of 2001, for example, fundamentally changed many societies’ perception of “normal”, from surveillance practices to the smaller routines of air travel. Those routines remain to this day, two decades after the fact, and in many cases the surveillance practices have only expanded.

Technology is a fascinating field of study because from the outside it is almost disguised as something objective; technology is often perceived as “clean”. However, as Martin Heidegger argued, technology is not merely an instrument under human control but a way of revealing the world: it makes us see things a certain way, and it shapes us as much as we shape it. That is why, even – or especially – in a state of emergency, we must be intentional and critical about technological decisions and solutions. This critical approach to what may seem like objective, “data-driven decision making” helps in resisting techno-solutionism and the hype that often surrounds it. For example, Bruno Oliveira Martins, Chantal Lavallée and Andrea Silkoset outline in Global Policy how resorting to drones for a range of COVID-related tasks has generated excitement around the world while also affecting societies in many ways.

Evgeny Morozov describes techno-solutionism as “the idea that given the right code, algorithm and robots, technology can solve all of mankind’s problems”. However, as Raluca Csernatoni argues in her piece for the PRIO blog, these technologies are no “silver bullet” for the deeper sociopolitical problems the pandemic has made more visible: income inequality, healthcare gaps, and bias in our systems.

Techno-solutionism: Deepening pre-existing inequalities

As Bruno Oliveira Martins argued during PRIO’s webinar Tech-Based States of Emergency: Public Responses and Societal Implications, a digitalized world does not guarantee solutions to systemic inequalities. More often than not, it deepens pre-existing ones, and the adoption of digital solutions should therefore consider the social and economic contexts they will affect. One way this has happened during the pandemic is through the perception of who the “standard human” is. In “Solutionism, Surveillance, Borders and Infrastructures in the ‘Datafied Pandemic’”, Philip Di Salvo sheds light on Singapore’s technological success.

Singapore’s contact tracing program introduced an app that Harvard epidemiologists hailed as the “gold standard of near-perfect detection”. The app was developed with the “standard human” in mind, a figure that tends to reflect those in power. In “The Rise of the Data Poor: The COVID-19 Pandemic Seen From the Margins”, Stefania Milan explains that this “distorted idea of a ‘standard human’ [is] based on a partial and exclusive vision of society and its components, which tends to overlook alterity and inequality”.

In the context of the pandemic, this focus on the “standard human” became apparent when over 300,000 low-wage foreign workers from India and Bangladesh, housed in dormitories, were separated out of the main data report. Their infection rates were presented as the “Number of cases in dorms”, while the “standard human” was counted under the “Number of Cases in Community”, further deepening inequality. Mohan Dutta, professor of communication at Massey University, told the BBC: “the idea of reporting two different numbers in Singapore… [these] make the inequalities even more evident. One might even go so far as to say it’s [an example of] ‘othering’”.

Stefania Milan, professor at the University of Amsterdam, describes COVID-19 as the first pandemic of the “datafied society”: one in which our social and economic systems are built on the collection, exploitation and monetization of personal data. This datafied society thrives on information, both for capitalist gain and for the general functioning of “evidence-based” policy making. Milan raises concern over the universalization of the virus for two key reasons. The first is the reliance on an ideal type of human or community, as was the case in Singapore. The second is data poverty.

Milan explores this issue in “The Rise of the Data Poor”, illustrating that privacy rights are “often luxury problems” for vulnerable populations. For many such communities in the Global South, algorithmic decision-making may determine vital outcomes such as the distribution of welfare subsidies. Being “invisible” to the state can thus mean losing access to critical resources such as food or shelter.

Canadian COVID Alert: On the ground

On July 31, 2020, Canada launched the COVID Alert app, which it hoped would reduce the spread of infection while protecting users’ privacy. Sean Boots, Policy Advisor with the Canadian Digital Service (CDS), was involved in the pandemic response from the start. His presentation made clear that there was an intention from the outset to prioritize privacy over data collection. Despite regional pressure to collect data through the app to support traditional contact tracing efforts, the team made a deliberate choice not to collect data, for two main reasons. The first is structural: in the Canadian political system, personal health information is the responsibility of the provinces, which would make a federal app that collects health data problematic. The second was strategic: designing the app in a way that would prioritize adoption.

A key pillar of app adoption was transparency, achieved by building the app in the open on GitHub, where the public could view the developers’ work every day. Boots reflects on how this created a high-pressure environment for the team, as anyone with an account could comment on the code and make suggestions. He stated that it felt like “working under a microscope”, but that this scrutiny would eventually build public trust in the end product. Taking a proactive approach to what the team knew would be a hornet’s nest of privacy concerns ensured that the app was produced in clear view, aided similar international efforts, and allowed simple language to explain to users what the app does and does not do.

However, Boots explains that the approach to the app’s development was shaped more by the technology ecosystem than by government decisions. This speaks to one of the key trends the States of Emergency project has been observing: the accelerating blurring of lines between the private and public sectors. Boots explains that there was debate on the topic during the early stages, but once the team received a proof of concept from a volunteer group at Shopify, based on the Apple/Google Exposure Notification System, it quickly adopted it as the jumping-off point for what would become the COVID Alert app. This illustrates the power of the private sector in delivering a product that was hard to resist.
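The privacy claim behind this decentralized design can be illustrated with a minimal sketch. This is not the actual Exposure Notification cryptography (the real protocol derives identifiers with HKDF and AES); a hash function stands in here to show the core idea: a random daily key never leaves the phone, only short-lived rotating identifiers are broadcast over Bluetooth, and matching against a diagnosed user’s published keys happens entirely on the device.

```python
import hashlib
import os

def daily_key() -> bytes:
    # Random Temporary Exposure Key, regenerated each day; it stays on
    # the phone unless the user chooses to report a positive test.
    return os.urandom(16)

def rolling_id(tek: bytes, interval: int) -> bytes:
    # Simplified stand-in for the real HKDF/AES derivation: a short
    # identifier that changes every ~15-minute interval and cannot be
    # linked back to the key by passive observers.
    return hashlib.sha256(tek + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rolling IDs over Bluetooth; phone B records them.
tek_a = daily_key()
heard_by_b = {rolling_id(tek_a, i) for i in range(100, 110)}

# If A later tests positive, only tek_a is published. B re-derives the
# day's IDs locally and checks for overlap -- no central server ever
# learns who met whom.
rederived = {rolling_id(tek_a, i) for i in range(96, 192)}
exposed = bool(heard_by_b & rederived)
print(exposed)
```

Under this scheme the server only ever holds the keys of users who voluntarily report a diagnosis, which is why the team could credibly promise “no data collection” about encounters themselves.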

Morozov argues that this drive to eradicate imperfection and make everything “efficient” shuts down other avenues of progress and leads ultimately to an algorithm-driven world where Silicon Valley, rather than elected governments, determines the shape of the future.

Another temptation the government will face is the push to collect anonymized data on behalf of the provincial governments, shifting the app’s initial privacy stance from no data collection to some data collection.

Building Resilient Communities through Trust

In the fight against the pandemic, perhaps one of the most crucial tools at our disposal is information. Brenda Jimris-Rekve, a Community Manager at the Basic Internet Foundation, explains the importance of information for developing resilient communities, which she believes can sustain themselves by learning from one another. However, top-down information can be problematic in regions with low trust in governments and multinational organizations. Jimris-Rekve explains that even now there are those who believe the pandemic is a conspiracy. To fight misinformation, she argues, communities need “digital friends”: individuals joined through a digital network who operate in communities over the long term and can slowly build trust. This long-term investment will yield greater results than information campaigns that fail to move people because they do not trust the source.

Jimris-Rekve’s work with the Basic Internet Foundation further illustrates Milan’s warning of universalizing pandemic solutions. Different states will have differing contexts that they operate within as they face this pandemic. This is an important reminder for international organizations to invest in long-term trust-building in fighting the pandemic, and more broadly, in developing sustainable communities.

Long-Term Impact of Short-Term Techno-solutions

Times of emergency such as this are critical because they set new standards which are then normalized and maintained after the crisis. Decisions made during this ‘extended emergency’ that fail to critically examine the impact of their solutions on all people, rather than just the “standard person”, risk further marginalizing those already on the margins of our society. In Revisiting Emergency eLearning, Michael P.A. Murphy explores the consequences of the pandemic-driven acceleration of eLearning. This trend, and the policies behind it, operate with the “standard human” in mind. While students face different educational environments at home, the transition disproportionately impacts those who lack a private space to work, a calm household, and proper tools. The assumption that students, and workers, can shift to digital work from home further accelerates the blurring of lines between private and professional spaces. As such, the professional realm is heavily shaped by the pre-existing conditions of the private one.

Therefore, the technologies and solutions developed to help people through this pandemic may in some cases end up solidifying pre-existing power dynamics and deepening inequalities if decision makers do not fully grasp the political implications of techno-solutions. This can be difficult during a crisis such as this one: as Milan explains, it is challenging to put ourselves in the shoes of those around the world when we are focused on our own emergencies. Yet the narratives and solutions adopted in one country can ripple out to others. A human-centered approach must therefore be taken to technology development and deployment.

As political actors, technology developers, and citizens face this pandemic, it is critical that the data used include everyone, not just an idealized standard; that technology be seen as an inherently political enterprise requiring intentionality in its development and deployment; and that we resist the temptation to solve deeply systemic problems with temporary technological solutions.
