Demonstrating socioeconomic impact – a historical perspective on ancient wisdom and modern challenges

Dive into the historical evolution and contemporary challenges of assessing the socioeconomic impact of publicly funded research with insights from the Institute for Scientific Information.

In research and higher education circles, there is a widespread notion that the socioeconomic impact of research is a new concern for funding agencies. It can become an onerous distraction: difficult to measure and rarely the main impetus behind research effort.

In fact, the expectation that publicly funded research should pay off for the public investor has long been part of the package. For this reason, we at the Institute for Scientific Information (ISI)™ have been investigating just these questions for the past several decades.

The evolution of public investment in scientific advancement

Historical examples that demonstrate the tangible outcomes of publicly funded research abound.

In the third century B.C.E., Archimedes reportedly appealed to King Hiero for research support and convinced him of the merit of his ideas by presenting pilot data and a practical demonstration of a pulley and rope system capable of moving a fully loaded ship up a beach. As a result, he was given access to the ‘resources of the city’ – a city that went on to make further military use of his innovative research, although we weren’t there to check those metrics.

Government institutional funding of science in Britain formally started in the 17th century C.E., when King Charles II granted £100 a year to the first ‘King’s Astronomical Observator’, leading to the establishment of the Royal Observatory at Greenwich in 1675. This in turn boosted the navigational capability of a naval empire. The first two independent U.K. Research Councils were the Medical (1913) and the Agricultural (1931). The significance and utility of these state and public investments are clear.

Publicly funded research enjoyed a boom period after 1945, spurred in the U.S. by Vannevar Bush’s ‘Science, the Endless Frontier’ and in the U.K. by Harold Wilson’s 1963 vision of ‘the white heat of this (scientific) revolution’. Perhaps it was this golden age that spawned an internalized view of what is important, influential and worthy of esteem – and the perception that academic research and societal improvement are interlocked.

Navigating the challenges of economic shifts

But there is never enough money for all the science we could do. Budget constraints inevitably followed the global oil crisis of the 1970s and prompted the need for greater selectivity in research funding.

In 1986, the U.K. Advisory Board for the Research Councils (ABRC) published ‘A Strategy for the Science Base’ with new assessment criteria ‘which give due weight to considerations of applicability and other external [factors]’. In addition to the ‘internal’ criteria of excellence, timeliness and pervasiveness, these externalities were:

  • exploitability – potential for nationally profitable industrial or commercial use;
  • applicability – potential for social, environmental or policy use; and
  • significance for education and training.

As part of the ABRC Secretariat, I began to explore data sources for assessment during the late 1980s and early 1990s. It was challenging, and the data were extremely poor in most areas. One source stood out: the Science Citation Index™ of the then Philadelphia-based ISI emerged as the most promising source for quantitative assessments, aided at the economic level by the work of Ben Martin and others at SPRU – the Science Policy Research Unit at the University of Sussex.

Initial evaluations were very ‘helicopter’ level and did not suggest a strong new methodology, although analytics and indicators based on publications and citations later became a small industry. It also became evident that a singular perspective on research ‘value’ was insufficient and that a more nuanced approach was needed.

Global responses to the need for research impact assessment

Calls for the demonstration of the societal benefits of public research investments – notably by ISI founder Eugene Garfield – prompted strategic responses from both governments and institutions; for example, our strategic advice to the U.K. government provided the underpinning for the 1993 white paper.

At much the same time, the U.S. Congress passed the ‘Government Performance and Results Act’ (1993), requiring agencies to prepare annual performance plans and reports. These requirements clearly pointed towards multifaceted approaches for determining what research should be funded and whether projects had delivered an appropriate outcome.

The European Commission also implemented diverse and comprehensive evaluation for its research Framework Programmes. In 2004, when I chaired the EC Monitoring Committee for the Evaluation of FP6, we were provided with a spread of data: assessments prior to funding, monitoring data on progress, and traditional evaluations as projects approached their outcomes.

Sadly, national agencies with more constrained budgets often focus solely on internal research criteria when prioritizing ideas and reporting outcomes. Two key factors explain why:

  • First, publication is core to the academic process, citation is core to academic culture and there are peer norms for both. So, everybody does it, across disciplines and regions; they all do it in a similar way; and they put their name and address on it.
  • Second, Eugene Garfield and ISI provided a quality, curated, comprehensive source of publication records and metadata in what is now the Web of Science™ and then showed how the data could be turned into ‘impact’ indicators.

That just isn’t feasible for most of the other (external) impacts that research makes. Projects across medical, technological, social and cultural domains exhibit varying impacts (some of which may overlap), but they will generally differ in major respects and be difficult to quantify and compare due to their diverse nature and timeframes for realization – which may be 5, 10 or even 20 years.
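
To make concrete why publication data lends itself to indicator-building in a way external impacts do not, here is a minimal sketch of a field-normalized citation indicator in Python. It is illustrative only: the paper records and field baselines are invented, and real indicators built on sources like the Web of Science™ also control for publication year and document type.

```python
# A toy, field-normalized citation indicator. All records and baselines
# are invented for illustration; this is not any specific ISI methodology.

papers = [
    # (field, citations received)
    ("oncology", 42),
    ("oncology", 7),
    ("economics", 12),
    ("economics", 3),
]

# Hypothetical world-average citations per paper in each field.
field_baseline = {"oncology": 20.0, "economics": 6.0}

def normalized_impact(papers, baseline):
    """Mean ratio of each paper's citations to its field baseline.

    A value of 1.0 means citation performance at the world average;
    2.0 means twice the world average, and so on.
    """
    ratios = [cites / baseline[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

print(f"Normalized citation impact: {normalized_impact(papers, field_baseline):.2f}")
# -> Normalized citation impact: 1.24
```

The point is that every element here – fields, citation counts, baselines – is routinely recorded for publications; no equivalent, uniformly recorded data exist for most social, environmental or policy impacts.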

Towards a holistic framework for impact assessment

Here at the ISI, our efforts to address these challenges persist through projects for bodies like the U.S. National Science Foundation, the U.K. Funding and Research Councils, the Hong Kong University Grants Committee and the Australian Research Council. We collaborate with partners such as King’s College London, the RAND Corporation’s research branch, SPRU at Sussex and the Max Planck Society. We continue to draw on community expertise and collaboration to produce commentary, reports and new ideas about indicators every year.

We know that a balance between internal and external research impacts remains essential to fully capture and demonstrate the value of research to society’s prosperity, well-being and economic growth. Furthermore, we are committed to working with partners to develop multiple views on routes to the demonstration of broader contributions, all the while remaining vigilant against the pitfalls of reducing impact to solitary metrics.

To this end, we intend to disseminate our proposed framework for responsibly measuring the different dimensions of societal impact through a focused engagement and publication schedule.

Register for our upcoming webinar series on research impact.

Download our latest reports on research impact and evaluation, or read other blogs from the Institute for Scientific Information to discover more insights.