
by: Jannis Grimm, INTERACT Center for Interdisciplinary Peace and Conflict Research, Freie Universität Berlin

 

Critical junctures such as the Covid-19 pandemic can disrupt hitherto unquestioned modes of research and thereby offer researchers a chance to break new ground. In fact, driven by the need to adapt, within the span of two years some social scientists have grown accustomed to an entirely new (and largely digital) research ecology, full of software applications for hosting interviews and focus group discussions, browser-based presentation and brainstorming tools, and online platforms that facilitate collaborative writing and the sharing of data sets. But entering uncharted territory also entails giving up some control: while the pandemic has certainly enabled a rejuvenation of the social sciences and their methods, it has also popularized a range of new research tools whose implications for research ethics and safety are still largely neglected and sometimes little understood.

In this reflection I argue that we might want to pay more attention to the corollary of entire disciplines – in my case, political science – “going digital” virtually overnight, and shift from passive observation to active reflection about the trade-offs between convenience and security inherent in this process. These considerations build on the conversations we had within the frame of the SAFEResearch project, a collaborative effort at the University of Gothenburg’s Program on Governance and Local Development (GLD), where we tried to develop very practical guidance on the implications of ethical norms such as “do no harm” and informed consent for fieldwork practices. The project concluded in May 2020 with the publication of a handbook on safer field research in the social sciences. Notably, the idea of this handbook was not to provide one-size-fits-all instructions for how to do field research – digital or not – but rather to highlight ways in which both junior and senior scholars can make more informed decisions about how to adequately protect themselves, their partners, and their interlocutors.

Of course, the book was written before the pandemic, but even before Covid-19 it was visible that social scientists were becoming increasingly reliant on a digital infrastructure of which they often had only a rudimentary understanding. Since Covid-19, this dependency has become more obvious. Younger and more tech-savvy researchers in particular have compensated for the lack of field access by resorting to remote methodologies. But less technically versed researchers, those who did their fieldwork before the pandemic hit, and those who modified their projects in its wake and opted to focus on the “do-able,” as suggested by Tonya Dodez on this site, now also often explore digital methodologies to complement the way they collect their research data. In fact, the pandemic has created a significant academic market for new video-conferencing products, as well as other automated tools that promise to facilitate an increasingly datafied research process – and a demand for guidance on how to use these tools in safe and ethical ways. One might even say that the pandemic became a transformative event that finally moved practical and methodological debates about the implications, limits, and advantages of remote methodologies – debates that were ongoing, above all, in anthropology and psychology – into the academic mainstream.

These fruitful discussions, some of which have taken place on this website, include thoughtful reflections on what we miss out on when we interview virtually rather than travel to a place, on the drawbacks of virtual interviews and focus groups with regard to sampling bias and analytical depth, as well as invaluable attempts to collect, test, and categorize a plethora of tools, apps, and software solutions for interviewing, transcription, conference hosting, and the like. What is worrying is that the increasing dependence on these – sometimes little-known – technologies has not led to greater discussion of their risks. Even after two years of Zoom, WebEx, and the like, we still lack compelling answers to the question of how we should deal with opaque technologies and the risk of surveillance; we still treat digital security as a side issue rather than an integral part of our project planning.

To be sure, one of the problems with surveillance is that it is, by definition, hard to detect. That is the case for the surveillance of physical meetings, such as face-to-face interviews, as well as of virtual encounters. But when it comes to the latter, researchers are not even physically ‘there’ to ensure a safe space for interviewees and a safe encounter. As Danny Hirschel-Burns has noted on this blog: “For in-person fieldwork, handwriting notes, not recording interviews, and thinking carefully about where the interview is taking place all helps improve security for myself and my participants. Doing fieldwork digitally means I lose control over all of those elements.”

In traditional types of fieldwork there may already be disparities in the position, privileges, resource access, or safety nets of researchers and their informants – especially in research with vulnerable populations. But digital fieldwork adds a further imbalance with regard to the safety of both sides of an encounter. A remote interview may sometimes be the safest option for the researcher – the risk that I am harassed, arrested, kidnapped, or simply caught up in an uncomfortable situation that is difficult to control or physically exit is reduced to zero when I conduct an interview from my home office. But this is not necessarily the case for interviewees, for whom “the field” we investigate may start at their doorstep. In fact, establishing a secure connection, finding a place from which to do the interview, or even securing a stable data connection may place a substantial additional burden on them.

This situation is exacerbated by many researchers’ lack of sound technical knowledge. Sophisticated surveillance tools, such as the recently exposed Pegasus software provided by the Israeli NSO Group, are almost impossible to fend off – but the bulk of digital research-related risks do not stem from such sophisticated spy tools. Mostly, it is our understandable attraction to commercial software solutions, which help us optimize our workflows, that should raise privacy concerns. Think of automated transcription software: while it conveniently reduces a tedious task, uploading audio files to a commercial service provider also means handing that provider a recording that may be sensitive. It is our everyday software and media practices that carry the highest risk of exposing research participants. By using some of the most widespread and convenient communication tools for research purposes, we passively consent every day to types of data sharing that we often do not really understand (either because we have not read the respective agreements or because we do not grasp what they imply). Can we confidently say that these decisions do not jeopardize the confidentiality that we promise our interlocutors and the privacy that they often rely on?

In fact, without proper training it is hard to gauge the risk of surveillance that we face when engaging with each other through digital means. This, in turn, means that our ability to comprehensively inform our research partners about the potential consequences of their participation in a questionnaire, interview, or focus group is also restricted. The dilemma only deepens throughout the research process and affects the trust that can reasonably be established between researchers and their interlocutors – trust that is so essential for field research. How can we expect our research partners to trust us with their sensitive information or testimonies if we cannot trust our own ability to inform them about the potential risks of their participation in our research?

In our handbook for Safer Field Research in the Social Sciences, we have suggested honest self-reflection and awareness-building as a first step out of this bind. This can be done through rudimentary digital threat modeling built around a range of questions that researchers may want to discuss within their research teams or think through jointly with their supervisors. The threat model consists of three main dimensions – a data mapping, an actor mapping, and a context analysis – and is sketched out in the table below.

Table 1: Digital threat modeling

Data mapping
- What types of data do you produce or make use of during your research?
- What information do you collect, store, or communicate?
- What types of contextual data/metadata are created throughout the research process?
- What would be the consequences if any of these data types were compromised?

Actor mapping
- Who might be interested in your data, in the sources of your data, and in its further use?
- Who might obtain access to your communication via physical or digital means?
- How do these actors view your research project? By whom could your data be misused?
- What are their capabilities, and how likely is it that they will use them?
- Which of these capabilities are they more likely to use than others?
- Who would be affected by data breaches?

Context analysis
- How accessible and reliable is the communication infrastructure?
- What software and ICT services do you rely on during your research?
- Does the legal context prohibit or prescribe the use of certain ICT services?
- Does your academic institution prohibit or prescribe the use of certain ICT services?
- What level of digital proficiency do you and/or your research team possess?
- Who else might have access to your computer?

The main aim of such threat modeling is to help identify data-related threats and assess the likelihood that they will materialize and affect the research project. The threat model is a means to an end: researchers themselves need to decide how to address these potential risks, as they usually know the specific context of their empirical project best. But a threat assessment will a) draw attention to some ethical and security-relevant implications of certain digital methodologies that might otherwise be overlooked and b) help researchers make more informed decisions. Throughout this process, it is useful to seek advice beyond disciplinary boundaries. This can be done by consulting the countless resources, cheat sheets, and guidelines on safe and ethical technology use that associations and collectives like the Electronic Frontier Foundation, Citizen Lab, TacticalTech, Security in a Box, and many others have made publicly available on their websites, or through interpersonal exchange and training.
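For research teams that want to work through this checklist systematically, for instance as part of their project documentation, the guiding questions from Table 1 can be captured in a simple, reusable structure. The following sketch is purely illustrative and not part of the handbook; it assumes a team records free-text answers per question and wants to see, before data collection starts, which questions are still open. All names in the sketch are hypothetical.

```python
from dataclasses import dataclass, field

# Guiding questions from Table 1, grouped by dimension (abridged for brevity).
THREAT_MODEL_QUESTIONS = {
    "Data mapping": [
        "What types of data do you produce or make use of during your research?",
        "What information do you collect, store, or communicate?",
        "What would be the consequences if any of these data types were compromised?",
    ],
    "Actor mapping": [
        "Who might be interested in your data, in its sources, and in its further use?",
        "Who might obtain access to your communication via physical or digital means?",
        "Who would be affected by data breaches?",
    ],
    "Context analysis": [
        "How accessible and reliable is the communication infrastructure?",
        "What software and ICT services do you rely on during your research?",
        "Who else might have access to your computer?",
    ],
}


@dataclass
class ThreatAssessment:
    """A lightweight record of a team's answers to the threat-modeling questions."""
    answers: dict = field(default_factory=dict)  # maps question -> free-text answer

    def record(self, question: str, answer: str) -> None:
        self.answers[question] = answer.strip()

    def open_questions(self) -> list:
        """Return all questions that have not yet been answered."""
        return [
            question
            for questions in THREAT_MODEL_QUESTIONS.values()
            for question in questions
            if not self.answers.get(question)
        ]


if __name__ == "__main__":
    assessment = ThreatAssessment()
    assessment.record(
        "What information do you collect, store, or communicate?",
        "Audio recordings and transcripts of remote interviews; participant contact details.",
    )
    # Surface what the team still needs to discuss before data collection begins.
    for question in assessment.open_questions():
        print("Still to discuss:", question)
```

The point is not the code itself but the habit it encodes: unanswered questions are surfaced and discussed before data collection begins, rather than discovered once something has already gone wrong.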

Unfortunately, established pre-fieldwork trainings and review processes are not really designed for this. To date, social scientists still receive little training on data anonymization and encryption, anonymous browsing, secure messaging, data compartmentalization and offloading, and other crucial aspects of digital safety. They are thus usually not sufficiently equipped to provide a safe “digital encounter”. Moreover, unlike with other methodological questions that inevitably arise during any project, students cannot simply turn to their supervisors: senior scholars are often equally overwhelmed by questions of data safety. Sometimes they are not acquainted with the newer technologies used by their students, which, though convenient for private use, may not be suited to field research purposes. In this context, it is useful to ask external digital security experts for training and advice, if resources allow.

For instance, one way forward could be a more systematic integration of digital security courses into university curricula. Another would be to build digital safety training for staff into project planning right from the start: by budgeting workshops with external security trainers into funding proposals, by establishing workflows that treat digital threat assessments as an integral part of the research process rather than as optional or ancillary, or by setting up IT focal points that keep an eye on the digital infrastructure and practices of the entire research team.

But even researchers without access to substantial resources can do a lot to make their research practices, and their communication with remote research participants or assistants, safer: by informing themselves about the potential ethical and security implications of the software they use in their daily research practice and by seeking out better alternatives. At times, this may entail accepting that some of the most convenient ways to compensate for a lack of field access and to substitute for well-established but now infeasible practices are simply not adequate. In any case, I believe it is imperative that we start reflecting more on how to minimize potential harm as our profession and its toolkits become increasingly datafied.