Craft metrics to value co-production

To assess whether research is relevant to society, ask the stakeholders, say Catherine Durose, Liz Richardson and Beth Perry.

[Image: Democracy Monument in Bangkok at night, where city plans aim to improve the quality of life.]

Advocates of co-production encourage collaboration between professional researchers and those affected by that research, to ensure that the resulting science is relevant and useful. Opening up science beyond scientists is essential, particularly where problems are complex, solutions are uncertain and values are salient. For example, patients should have input into research on their conditions, and first-hand experience of local residents should shape research on environmental-health issues.

But what constitutes success on these terms? Without a better understanding of this, it is harder to incentivize co-production in research. A key way to support co-production is reconfiguring that much-derided feature of academic careers: metrics.

Current indicators of research output (such as paper counts or the h-index) conceptualize the value of research narrowly. They are already roundly criticized as poor measures of quality or usefulness. Less appreciated is the fact that these metrics also leave out the societal relevance of research and omit diverse approaches to creating knowledge about social problems.

Peer review also has trouble assessing the value of research that sits at disciplinary boundaries or that addresses complex social challenges. It denies broader social accountability by giving scientists a monopoly on determining what is legitimate knowledge. Relying on academic peer review as a means of valuing research can discourage broader engagement.

This privileges abstract and theoretical research over work that is localized and applied. For example, research on climate-change adaptation, conducted in the global south by researchers embedded in affected communities, can make real differences to people’s lives. Yet it is likely to be valued less highly by conventional evaluation than research that is generalized from afar and then published in a high-impact English-language journal.

Not good on paper

There are now many examples of work co-produced by local partnerships that address health inequalities or environmental and social injustice. Today’s ‘publish or perish’ system in academia vastly undervalues outputs from such projects, which often don’t take the form of a paper.

Examples challenging this include the feature film Pili (2018), a ground-breaking co-production project. The women of Miono in coastal Tanzania make up the ensemble cast of non-actors, 65% of whom are HIV-positive; their real stories provide the basis for the film. It came together as part of a research project on global health, led by political economist Sophie Harman at Queen Mary University of London, that aimed to give voice and visibility to unseen women on the periphery of world politics.

Another example of co-production that would be underrated by conventional measures is the Massachusetts Institute of Technology’s Fab Lab Network. This open community of scientists, engineers, educators, students and artists of all ages spans more than 1,000 laboratories in some 100 countries. The network is, in part, a distributed research lab that aims to democratize access to the tools, education and means for invention, creating opportunities to improve lives.

[Image: A group of police officers gathers in New York.]

Consider also the Morris Justice Project. Residents of the Bronx in New York City worked with the City University of New York’s Public Science Project to challenge the New York Police Department’s ‘stop and frisk’ policy, which had been rolled out to prevent gun violence. Running since 2011, the project combines research, community participation and action. It showed that people in the Bronx were stopped by police 4,882 times in the first year. More than half of the stops involved physical force, but less than one-tenth resulted in an arrest or summons — and only eight guns were found. The research contributed to a city-wide movement, Communities United for Police Reform, to ensure that debates challenging existing policies were grounded in robust, locally generated research. This co-produced work helped to reform legislation and supported several landmark class-action lawsuits.

Another example is the Resource Center for Raza Planning at the University of New Mexico in Albuquerque. Over its 20-year history, the centre has brought together planning researchers, professionals and traditional communities in New Mexico to influence policy decisions on issues such as economic development, land use, water rights and infrastructure. It exists to keep traditional communities sustainable by co-producing research and ensuring that this work is incorporated into policymaking.

The real-world effects of these examples depend on extending the research community. Although that still makes many academics uncomfortable, people increasingly acknowledge that local, experiential or applied knowledge can enrich the quality and impact of investigations. The work is more responsive, socially relevant and connected to affected communities.

What is missing are ways to measure success in those dimensions — meaningfully, consistently, rigorously, reproducibly and equitably.

Reporting standards

To encourage the practices that broaden research communities, we must make those practices visible. Then they can be evaluated and, crucially, rewarded.

Reporting standards could go a long way. The CONSORT guidelines for reporting the results of clinical trials were proposed in 1996 and have now been taken up almost universally, enforced by journals and government funders. Before their adoption, reports of clinical trials were hard to appraise. Similar efforts around co-production would be advantageous.

It is still early days. We cannot assume that we are all on the same page about what it means to co-produce research, especially across different scientific disciplines. Reporting standards around the research process could clarify what is involved when different groups talk about co-production (see go.nature.com/2nzn7xw). They would show how research was planned, conducted and applied.

An emerging strategy is to state clearly the intentions of co-produced work, and to evaluate it against those intentions. If the intention is instrumental — to characterize lay knowledge of local conditions, say — then the metric would be based on the inclusion of that lay knowledge. If the intention is to honour inclusion — encapsulated in the disability-rights call, ‘nothing about us without us’ — then more-appropriate metrics might centre on how participants perceive the quality of their involvement in the work.

Reporting standards should capture the stage of the research process at which co-production occurs. Were the initial research questions defined co-productively? Or did co-production happen later, such as during analysis, interpretation and dissemination of the findings? For example, The BMJ now requires that all its journal articles acknowledge whether and how patients or carers were involved in research — a demand that came about through consultation with those communities (see T. Richards, page 30).

Tools needed

The extended peer community should play a part in determining any evaluation system. The goal is not to be prescriptive, but rather to clarify the intentions and processes of scientists and other co-producers of research. An accepted suite of criteria helps to document these choices and leads to context-appropriate evaluation (see ‘Best practices’).

There are only a handful of examples to build on. Co-production tends to function at a small, experimental scale, and generally does not attempt to draw out working principles that other programmes might learn from.

One notable exception is Mistra Urban Futures (for which B.P. serves as the UK lead), an international centre that focuses on how cities and settlements can develop sustainably. Led by a consortium of local authorities and academics in Sweden, with partners in South Africa, Kenya and the United Kingdom, it has developed workshops that support peer learning for people working on co-produced research, a transdisciplinary research school and a handbook, alongside an evaluation methodology for co-production that considers both the quality of the processes and the outcomes achieved. Mistra’s criteria for high-quality co-production include relevance, credibility and legitimacy. Outcomes are categorized both as effects that can be directly attributed to a programme and as broader potential effects and influences.

The use of proxies to measure outcomes is crucial yet underdeveloped. Proxies for social values (such as commitment and a feeling of belonging) can include contributions in kind, time donated to projects and the depth and breadth of resulting personal, inter-relational and system-wide networks.

Another organization of note is Canada’s International Development Research Centre in Ottawa, which funds work aimed at tackling social and health problems in the global south. It has developed a tool for assessing the projects that it supports, which incorporates the views of stakeholders, users and non-scientific beneficiaries in communities.

Co-production doesn’t devalue science; it re-evaluates other ways of knowing. If we want to see more co-production, we need to revise the dominant metrics accordingly. In essence, metrics to assess co-production must themselves be co-produced.

Best practices

Survey your options. Different groups of extended peers will need to hammer out their own criteria for co-producing research, but examples of good practice and templates to describe intentions and processes will help. Recommendations should be aligned with guidelines on responsible metrics. There is precedent. In 2011, the UK Arts and Humanities Research Council’s Connected Communities programme commissioned a group of Durham University academics and community partners to examine and make recommendations on the ethical challenges raised in community-based participatory research (see go.nature.com/2qyh21j).

Support long-term partnerships. Institutions and funders must put resources into extended peer communities. For instance, the UK Economic and Social Research Council has invested in our Jam and Justice project. This explores how an extended peer community can govern research around positive urban transformations. Similarly, the University of Illinois at Chicago employs community-development workers in its Office of Community and Public Health Practice; they sustain relationships with local organizations to enable community-based research.

Catherine Durose is a reader in policy sciences at the University of Birmingham, UK. Liz Richardson is a reader in politics at the University of Manchester, UK. Beth Perry is a professor in urban studies at the University of Sheffield, UK.

References

Durose, C. & Richardson, L. Designing Public Policy for Co-Production: Theory, Practice and Change (Policy Press, 2016).

Funtowicz, S. O. & Ravetz, J. R. Science for the post-normal age. Futures 25, 739–755 (1993).

May, T. & Perry, B. Cities and the Knowledge Economy: Promise, Politics and Possibilities (Routledge, 2018).

Pain, R. et al. Mapping Alternative Impact (N8 Research Partnership, Durham Univ. & Economic and Social Research Council, 2015).

Molas-Gallart, J. Research evaluation and the assessment of public value. Arts Humanit. Higher Educ. 14, 111–126 (2014).

Perry, B., Patel, Z., Norén Bretzer, Y. & Polk, M. Organising for co-production: local interaction platforms for urban sustainability. Polit. Gov. 6, 189–198 (2018).

Polk, M. Integration and implementation in action at Mistra Urban Futures: a transdisciplinary centre for sustainable urban development. In Disciplining Interdisciplinarity (ed. Bammer, G.) Ch. 51 (ANU Press, 2013).

Richardson, L. Participatory evaluation. In Handbook of Social Policy Evaluation (ed. Greve, B.) 119–138 (Elgar, 2017).

Walker, D. Do academics know better or merely different? Public Money Mgmt 30, 204–206 (2010).

Wilsdon, J. et al. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management (HEFCE, 2015).