ESRC Digital Good Network


How and why collect Equity, Diversity and Inclusion (EDI) data?

Our Associate Director, Ros Williams, discusses the challenges of gathering ‘good’ data that enables us to evaluate whether we are meeting our EDI aims.

Our aspiration to diversify our research community meant that, from the outset, we were keen to collect data about the people applying to our different schemes, from our PhD summer school to internships and the Digital Good Research Fund.

Doing so would help us to monitor whether we were achieving this goal. Not surprisingly, given our interest in good uses of digital technology (and related aspects like the uses and values of data), we wanted to think quite carefully about what data we collected and how.

We tried to devise a data collection mechanism that could capture information on applicants’ self-identified gender, sexuality, dis/ability, race/ethnicity and nationality. Whilst oftentimes a researcher’s identity isn’t relevant to an individual application, our broader contention is that researcher positionality is one of the most relevant aspects of research. Who we are inflects the research problems we want to explore, and perhaps even the solutions we generate to address them. Our view is that, if we can cultivate a more diverse research community, we stand a better chance of generating a body of research and scholarship capable of confronting our guiding problematic – what constitutes a good digital society and how do we get there? – in a way that will be legible and valuable to more people.

Which categories should we be using?

One of our commitments is to internationalisation. In practice, that means trying to find ways to engage scholars from beyond the UK, and especially beyond the places where there is often more resource for digital society research, like Europe, North America and wealthier countries in the southern hemisphere. Some of this activity includes encouraging international co-applicants to our Digital Good Research Fund; another example is our Fellowship scheme.

When these applicants apply, they fill in the same EDI data form as every other applicant. They have the same boxes to pick and tick as everybody else. But when we collect applicant ethnicity data, we use the standard UK ethnicity collection framework. And, whilst this might make sense to a UK audience, it might not to other audiences outside the UK. We noticed that some applicants from other countries used the ‘open text’ field to try to describe themselves using language not available to them through our data capture mechanism.

That’s because, as I’ve written about elsewhere, “[r]ace—which appears deceptively stable when put into a list that permits only one selection—is not reducible to one thing.” Imagining a mechanism for collecting data that could holistically account for the multiple (maybe endless!) ways we all self-identify, not just in terms of our ethnicity, race, heritage or ancestry (all different terms themselves), but also in terms of gender, sexuality or dis/ability, feels impossible.

We continue to think through this challenge as we look towards how we collect data in future rounds of our schemes. Ultimately, we need to find a balance between allowing people to articulate who they are in a way that is meaningful to them, and being able to aggregate information in a way that allows us to act upon it.

How are we doing?

So how do we act upon that data? First, we need to see what stories it can tell us, which can be challenging because we need a baseline against which to compare. International differences notwithstanding, ethnicity is a good example of where this is more straightforward (for social locations like disability or sexuality, it can be harder). If we know what the UK population’s ethnic constitution looks like (and we do, because of the national census), and we use the same categories (which we are currently doing at the Digital Good Network), then we can see whether nationally representative numbers of people from different ethnic groups are applying for our schemes. We can thus ask whether we are engaging proportionally fewer Black applicants than there are Black people in the UK (keeping in mind the challenge mentioned earlier: international applicants are included in these data too!).
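In practice, this kind of comparison is simple arithmetic: divide each group’s share of the applicant pool by its share of the population to see where representation falls short. The sketch below illustrates the idea; the category labels and applicant counts are hypothetical placeholders, not the Network’s actual data (only the rough white and Black percentages echo the figures discussed below).

```python
# Illustrative sketch: comparing applicant ethnicity proportions against a
# census baseline. All counts here are hypothetical, not real scheme data.

# Hypothetical applicant counts by self-described ethnic group
applicants = {"White": 154, "Black": 6, "Asian": 25, "Mixed": 10, "Other": 5}

# Hypothetical census baseline: each group's share of the national population
census_share = {"White": 0.80, "Black": 0.04, "Asian": 0.09,
                "Mixed": 0.03, "Other": 0.04}

total = sum(applicants.values())

for group, count in applicants.items():
    observed = count / total        # group's share of our applicant pool
    expected = census_share[group]  # group's share of the population
    ratio = observed / expected     # >1 over-represented, <1 under-represented
    print(f"{group}: {observed:.1%} of applicants vs "
          f"{expected:.1%} of population (ratio {ratio:.2f})")
```

A ratio well below 1 for a group would be one signal that outreach is not reaching that community proportionally, though, as noted above, international applicants complicate any direct comparison with UK census figures.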

There’s a difference between who’s applying and who’s applying successfully, though. Given that the number of awards is rather small (we funded nine projects in the first round of the DGRF, for example, and welcomed 24 students to our 2023 summer school), we have elected not to report any specific numbers, because doing so risks identifying individuals. At best, we can broadly characterise the data sets. Let’s look at the Digital Good Research Fund.

In our first round, we attracted a diverse range of ethnic groups that, to some extent, maps onto census data. For example, about 77% of applicants self-described as white (about 80% of the UK population do so). Just shy of 3% of our applicants self-described as Black (about 4% of the UK population do so). It’s harder to say what the success rate is, given that applications to the fund are team applications. That said, applicants of Black heritage were attached to funded projects at above-average rates: the success rate for the scheme overall was around 10%, whilst these applicants had a success rate of around 40%.

Using data to help reach our goals

What could be better? We had proportionally fewer applicants of Caribbean and Pakistani heritage than the census data would suggest. Could we, then, do more to draw in applicants from these backgrounds? Moreover, the success rates for people of Chinese, Bangladeshi, Pakistani and Indian backgrounds were very low. How does one support applicants to move from creating applications to getting funded?

One of the three societal challenges that we want to engage with in this network is equity. We chose the word equity, and not equality, deliberately. It enables us to think about how people with different backgrounds and histories have different starting points. In the UK, some of our major research funders have a history of disproportionately underfunding research led by scholars of colour, for example. Do you address that by trying to fund more of these scholars than is nationally representative? What about the other social locations (and, of course, an individual always has more than one!) that concern us? Sexuality, gender, disability?

Some funders have experimented with ‘ring-fencing’ funding (that is, protecting some or all of a resource for a particular purpose or group of people), or even with randomly allocating funding across applications. Our view is that, this early in our process and without having spent more time awarding funds, we don’t have solid enough evidence of the potential ‘problems’ emerging in our own scheme. Our plan, then, is to collect data again next year and see whether these patterns remain broadly stable. Could a case be made for doing more outreach specifically with applicants of Pakistani and Caribbean heritage, for example? These are the kinds of things that we will consider as the Network grows over the years ahead.
