Analysing the impact of joint calls and funded projects
A final step in assessing the joint call is the analysis of the impact of the call and of the funded projects. This goes beyond the monitoring of the projects and the mechanics of call implementation. Impact assessment is normally carried out after the completion of the funded projects or, depending on the type of impacts being considered, several years later.
The basis for all impact assessments should be the specific objectives of the joint call and the global objectives of the network. The type of research funded by the network, whether through one of the Horizon Europe / Horizon 2020 instruments (Co-funded European Partnerships, ERA-NET COFUND, Article 185) or without EU co-funding (e.g. by a JPI or a self-sustained ERA-NET), will also have a bearing on the expected impact of the funded research. Basic research projects will have different outcomes and impacts from those focussed on industrial R&D for the benefit of SMEs. Many networks (including, but not limited to, the JPIs) are aimed at addressing societal challenges, and some are orientated towards policy research. The beneficiaries of joint calls can therefore cover the whole spectrum of stakeholders, including private sector companies, academic research groups, public services, societal organisations and even public administrations.
Project-level impact assessment is still the exception rather than the rule for Partnerships, and it is therefore a priority area for the ERA-LEARN project, both for supporting mutual learning and for developing common frameworks.
The most common approach to the assessment of projects funded from joint calls is a logic model that considers six main steps:
- Inputs (human and financial resources, knowledge, facilities)
- Activities (the research and/or innovation project)
- Outputs (deliverables of the project)
- Outcomes (what has been achieved by the beneficiaries based on the project outputs)
- Direct Impacts (short-term impacts for the beneficiary and/or those directly affected)
- Indirect Impacts (longer-term economic, social and/or environmental impacts)
It is the latter three (i.e. outcomes, direct impacts and indirect impacts) that are the focus of impact assessment for projects funded through joint calls. For example, a project might deliver a validated prototype (output), which a beneficiary then takes into use or to market (outcome), generating benefits for that beneficiary (direct impact) and, over time, wider economic, social or environmental benefits (indirect impact). This provides a clear distinction from the ‘monitoring’ of projects (i.e. inputs/activities/outputs).
Whilst the national/regional agencies acknowledge the importance of assessing the impact of research projects funded through joint calls, they often find it difficult to measure and quantify impacts even at the national level. Impact assessment of transnational research projects is even more difficult, as it needs to consider not only the absolute outcomes and impacts but also the relative added value compared with national projects.
The ERA-LEARN toolbox includes a Reference Library with examples of indicators, impact assessment frameworks and impact assessment reports produced by some of the more mature networks. ERA-LEARN also provides Impact Assessment Guidelines. These show that, whilst there has been some cross-fertilisation via agencies that are involved in multiple networks, the general approach has been to develop a specific set of assessment indicators for the Partnership domain, whether it be industrial, societal or scientific. They also highlight the relatively long lead time between the joint call and the point where meaningful impact assessment surveys and/or case studies can be carried out. Whilst it is possible to make an initial assessment of ‘expected’ outcomes and impacts shortly after the end of the project, best practice appears to be to survey beneficiaries around three years after the completion of the project to assess ‘realised’ impacts. Allowing for the time between the launch of the call and the start of the projects, and for a typical project duration of three to four years, this could be 7-8 years after the launch of the joint call, creating a high risk that the original beneficiaries may no longer be accessible or willing to respond to a questionnaire survey. At least two of the more mature Partnerships have addressed this problem through the development of case study reports. A further practical issue is that the Partnership itself may no longer be sufficiently active at that point, in which case the funding agencies may need to make their own assessment.
With the RIPE Toolkit, ERA-LEARN presents a complete monitoring and evaluation methodology with concrete steps, examples, templates and good practice tips, based on the work of ERA-LEARN over the years in supporting the Partnerships in their monitoring and evaluation activities.