For over a decade now, I have been rolling the concept of dual-use around in my research, much like a kitten plays with a fluff ball in the sunbeams of a room. What is it about the term? I’m mildly interested in it, though it might appear to others that it’s all I focus on. I like rolling it around, batting it about to see how it will react. I also notice how different it appears in different lights. When I’m engaged in research on security concerns in nuclear settings, the duality presents itself as that between energy production and weapons production. In computing/cyber, it is between defensive and offensive applications. In conventional export controls, it is between civil and military applications.
Many of these contexts for understanding what the ‘dual’ is in dual-use shifted after 2001 to incorporate a focus on terrorism as an ‘other’ category. Perhaps this has been taken up most strongly in biology, where an initial focus on the ‘dual-use dilemma’ of biological research was laid out in the 2004 Fink Report, Biotechnology Research in an Age of Terrorism, focusing on how “the same technologies can be used legitimately for human betterment or misused for bioterrorism” (p. 15).
Ten years ago, I would have said that all of these ways of understanding dual-use are curious, and that they all pivoted towards terrorism in the same way, despite their different starting points, was curiouser still.* In my research now, I am pivoting to thinking about the limitations of the concept of dual-use itself. Why focus on duality at all?
To work through this question, in the last week or so I have turned back to Foucault, particularly to his lectures on “Society must be defended”. I’ve been really taken with his analysis of the othering that is at the heart of the construction and normalisation of power, regardless of whether that power is centred around a sovereign or distributed throughout a society. “Dual-use” as a term in use today, especially in biology, has been developed, however unconsciously, to structure a group of potentially unruly people (scientists and bioengineers) around a set of practices that enlist them in the process of governing security concerns in the life sciences. That most people don’t know what the ‘dual’ is in ‘dual-use’ when first introduced to it makes the term a very sly tactic: it ‘reveals’ to that person a whole world of biosecurity threats sitting beneath the thin veneer of the intended beneficial uses of advances in biology. This world of threats is presented as real, as definitely out there and in need of constant vigilance to keep at bay.
It is a process of indoctrinating students and researchers into the current dominant narrative of biosecurity governance. The duality, in its general form, might then be considered not as a balancing of military and civil applications of science and technology, but as a balancing of ‘use’ and ‘abuse’. Normalising researchers into a biopolitics of biosecurity is about creating a system of relations between them and the rest of society through which they govern themselves. ‘Abuse’ here can then refer to non-socially-sanctioned uses of biology. Is it ok for DARPA to be developing biotechnologies? Is it ok for companies to be developing massive synthesising capacity when our capacity to understand things like pathogenicity is still so limited? Whether these are uses or abuses of a line of innovation can only be answered within particular epistemes.
Characterising the concept of dual-use this way, we can more clearly see a stumbling block that isn’t widely acknowledged in biosecurity governance right now: to define what constitutes an abuse of biotechnology is to agree on the terms of reference for the debate. Do we? There seems to be broad, though perhaps more tenuous than some would like, consensus against using biology as a weapon (the Biological Weapons Convention). But where novel biological security concerns will come from is not entirely clear. A system of governing based on bright lines around known objects of concern, like the American policies on Dual Use Research of Concern, relies on a central authority to define a threat, but on a distributed network of practitioners to internalise that threat and govern themselves. Many of those practitioners, however, do not perceive the threat in the way the state does. And what do you do about threats that are not yet known?
There are two different understandings of security at play in the dual-use debate these days: one with a clear authority searching for the objective list of objects of concern and clear examples of what will happen when rules about their use are disobeyed; and one with a network of varying levels and kinds of awareness of, and attention to, the security governance of science and technology, coupled with a situated and responsive responsibility for addressing concerns as they are identified. I don’t think we yet appreciate how radically different the forms of governing underlying these two understandings are.
* We are indeed going down a Lewis Carroll rabbit hole.