From Principle to Practice: Humanitarian Innovation and Experimentation

Without methods to gauge success and failure, and without appropriate ethical frameworks, humanitarian tech may do more harm than good.

A passenger arriving at Monrovia’s Roberts International Airport takes advantage of effective, if not innovative, humanitarian intervention. Photo: Sean Martin McDonald

Humanitarian organizations have an almost impossible task: They must balance the imperative to save lives with the commitment to do no harm. They perform this balancing act amidst chaos, with incredibly high stakes and far fewer resources than they need. It’s no wonder that new technologies that promise to do more with less are so appealing.

By now, we know that technology can introduce bias, insecurity, and failure into systems. We know it is not an unalloyed good. What we often don’t know is how to measure the potential for those harms in the especially fragile contexts where humanitarians work. Without the tools or frameworks to evaluate the credibility of new technologies, it’s hard for humanitarians to know whether they’re having the intended impact and to assess the potential for harm. Introducing untested technologies into unstable environments raises an essential question:

When is humanitarian innovation actually human subjects experimentation?

Humanitarians’ use of new technologies (including biometric identification to register refugees for relief, commercial drones to deliver cargo in difficult areas, and big data-fueled algorithms to predict the spread of disease) increasingly looks like the type of experimentation that drove the creation of human subjects research rules in the mid-20th century. In both cases, Western interests used untested approaches on African and Asian populations with limited consent and even less recourse. Today’s digital humanitarians may be innovators, but each new technology raises the specter of new harms, including biasing public resources toward predictions rather than needs assessments, introducing coordination and practical failures through unique indicators and incompatible databases, and creating significant legal risks for both humanitarians and their growing list of partners.

For example, one popular humanitarian innovation uses big data and algorithms to build predictive epidemiological models. In the immediate aftermath of the 2014 Ebola outbreak in West Africa, a range of humanitarian, academic, and technology organizations called for access to mobile network operators’ call detail records to track and model the disease. Several organizations got access to those databases — which, it turns out, was both illegal and ineffective. It violated the privacy of millions of people, in contravention of domestic regulation, regional conventions, and international law. And Ebola is a hemorrhagic fever, which requires the exchange of bodily fluids to transmit — a behavior that isn’t represented in call detail records. More importantly, the resources that should have gone into saving lives and building the facilities necessary to treat the disease instead went to technology.
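To make that mismatch concrete, here is a minimal sketch of what a call detail record typically contains and the strongest inference it plausibly supports. The field names and the naive_contact_proxy helper are illustrative assumptions, not any operator’s actual schema or any model used during the response:

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions, not any
# mobile operator's real schema. A call detail record (CDR) is billing
# metadata; it records which tower handled a call, nothing about
# physical contact between people.

@dataclass
class CallDetailRecord:
    subscriber_id: str   # pseudonymized caller
    peer_id: str         # pseudonymized callee
    timestamp: float     # epoch seconds
    cell_tower_id: str   # tower that routed the call; a tower's
                         # coverage area can span several kilometers

def naive_contact_proxy(a: CallDetailRecord, b: CallDetailRecord,
                        window_secs: float = 3600.0) -> bool:
    """Roughly the strongest inference CDRs support: two subscribers
    used the same tower within an hour of each other. That is
    co-location at tower granularity, not the fluid-exchange contact
    that actually transmits Ebola."""
    return (a.cell_tower_id == b.cell_tower_id
            and abs(a.timestamp - b.timestamp) <= window_secs)
```

Any epidemiological model built on a proxy like this treats tower-level proximity as exposure, which is one reason CDR-driven disease models claim more than the underlying data can show.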

Without functioning infrastructure, institutions, or systems to coordinate communication, technology fails just like anything else. And yet these are exactly the contexts in which humanitarian innovation organizations introduce technology, often without the tools to measure, monitor, or correct the failures that result. In many cases, these failures are endured by populations already under tremendous hardship, with few ways to hold humanitarians accountable.

Humanitarians need an ethical, evidence-driven framework for human experimentation with new technologies. They need a structure parallel to the guidelines created in medicine, which put in place practical, ethical, and legal requirements for developing new scientific advances and applying them to human populations.

The Medical Model

“Human subjects research,” the term of art for human experimentation, comes from medicine, though it is increasingly applied across disciplines. Medicine produced some of the first ethical codes in the late 18th and early 19th centuries, but the modern era of human subjects research protections began in the aftermath of World War II, evolving through the Declaration of Helsinki (1964), the Belmont Report (1979), and the Common Rule (1991). These rules established proportionality, informed consent, and ongoing due process as conditions of legal human subjects research. Proportionality refers to the idea that an experiment should balance potential harms against the potential benefits to participants. Informed consent requires that subjects understand the context and process of an experiment before agreeing to participate. And due process, here, refers to a bundle of principles: assessing subjects’ needs “equally,” preserving subjects’ ability to quit a study, and continuously evaluating whether an experiment’s methods remain justified by its potential outcomes.

These standards defined the practice of human subjects research for much of the rest of the world, and they are essential for protecting populations from mistreatment by experimenters who undervalue their well-being. But they come from the medical industry, which relies on established infrastructure that less-defined industries, such as technology and humanitarianism, lack, and that gap limits their applicability.

The medical community’s human subjects research rules clearly differentiate between research and practice based on the intention of the researcher or practitioner. If the goal is to learn, an intervention is research. If the goal is to help the subject, it’s practice. Because it comes from science, human subjects research law doesn’t contemplate that an activity would use a method without researching it first. The distinction between research and practice has always been controversial, but it gets especially blurry when applied to humanitarian innovation, where the intention is both to learn and to help affected populations.

The Belmont Report, a summary of ethical principles and guidelines for human subjects research, defines practice as “interventions that are designed solely to enhance the well-being of an individual patient or client and that have a reasonable expectation of success” (emphasis added). This differs from humanitarian practice in two major ways: First, there is no direct fiduciary relationship between humanitarians and those they serve, so humanitarians may prioritize groups or collective well-being over the interests of individuals. Second, humanitarians have no way to evaluate the reasonableness of their expectation of success. In other words, the assumptions embedded in human subjects research protections don’t map cleanly onto the relationships or activities involved in humanitarian response. As a result, these conventions offer humanitarian organizations neither clear guidance nor the types of protections that exist for well-regulated industrial experimentation.

In addition, human subjects research rules are set up so that interventions are judged on their potential for impact: the higher the potential impact on human lives, the more important it is to obtain informed consent, to secure ethical review, and to ensure that subjects can extricate themselves from the experiment. Unfortunately, in humanitarian response the stakes are always high, and it’s almost impossible to isolate the effects of a single technology or intervention. Even where establishing consent is possible, disasters don’t lend themselves to consent frameworks, because refusing to participate can mean refusing life-saving assistance. In law, consent given under that kind of duress is rarely considered valid; at best, such agreements resemble contracts of adhesion, which courts treat with skepticism. The result is that humanitarian innovation faces fundamental challenges both in knowing how to deploy ethical experimentation frameworks and in implementing the protections they require.

First Steps

The good news is that existing legal and ethical frameworks provide a strong foundation. As Jacob Metcalf and Kate Crawford lay out in a 2016 paper, there are significant enough similarities between biomedical and big data research to develop new human subjects research rules. This January, the United States expanded the purview of the Common Rule to govern human subjects research funded by 16 federal departments and agencies. Despite their gaps, human subjects research laws go a long way toward establishing legally significant requirements for consent, proportionality, and due process — even if they don’t yet directly address humanitarian organizations.

Human rights-based approaches, such as the Harvard Humanitarian Initiative’s Signal Code, go further, adapting human rights to digital humanitarian practice. But, like most rights frameworks, they rely on public infrastructure to be ratified, harmonized, and operationalized. There are proactive efforts to set industry-focused standards and guidelines, such as the World Humanitarian Summit’s Principles for Ethical Humanitarian Innovation and the Digital Impact Alliance’s Principles for Digital Development. And, of course, there are technology-centric efforts beginning to establish ethical use standards for specific technologies — like biometric identification, drones, and big data — that offer specific guidance but embed incentives that may not be relevant in the humanitarian context.

That said, principles aren’t enough — we’re now getting to the hard part: building systems that actualize and operationalize our values. We don’t need to settle the boundaries of innovation or humanitarianism as industries before developing standards of practice. We don’t need to ratify an international convention on technology use before improving procurement requirements, developing common indicators of success for technology use, or establishing research centers capable of testing the applicability of new approaches in difficult and unstable environments. A wide range of industries are beginning to invest in legal, organizational, and technological approaches to building trust — all of which offer additional, practical steps forward.

For humanitarians, as always, the stakes are high. The mandate to intervene comes with the responsibility to know how to do better. Humanitarians hold themselves and their work to a higher standard than almost any other field in the world. They must now apply the same rigor to the technologies and tools they use.
