Featured Article Archive
The Times Higher Education (THE) University Impact Rankings have been released. But what are they, and what do they have to say about global higher education?
The impact of the Impact Rankings
The THE University Impact Rankings measure the success of higher education institutions globally in delivering against the United Nations’ Sustainable Development Goals (SDGs). More than 560 universities from 80 countries across six continents have taken part.
At Vertigo Ventures, we’ve been working closely with THE to steer the direction of this research, in line with our mission to help leading research organisations identify, capture and report on the impact of their work, and to drive global recognition of how higher education benefits society. In this blog, we take a look at the methodology and metrics underpinning these rankings, and at why they matter – especially for institutions not usually recognised in HE league tables.
Methodology and metrics
The UN SDGs have been in force since January 2016. Each one is designed to address a different global challenge – ranging from poverty and inequality to peace and climate change.
The rankings required universities to submit data on the impact they’ve had against up to 11 of the 17 SDGs. Each institution had latitude to pick which goals to report on, enabling them to highlight their achievements in the areas most relevant to them and their surrounding communities.
Only one goal was compulsory: SDG 17, Partnerships for the Goals, which centres on collaboration. Collaboration is vital both for institutional success and for progress against the remaining SDGs. The rankings provide an excellent opportunity for higher education organisations to demonstrate exactly how they’re contributing to each goal, and we hope they inspire more institutions to get involved.
The ten optional SDGs that universities can report on are:
• SDG#3: Good Health and Well-Being
• SDG#4: Quality Education
• SDG#5: Gender Equality
• SDG#8: Decent Work and Economic Growth
• SDG#9: Industry, Innovation and Infrastructure
• SDG#10: Reduced Inequalities
• SDG#11: Sustainable Cities and Communities
• SDG#12: Responsible Consumption and Production
• SDG#13: Climate Action
• SDG#16: Peace, Justice and Strong Institutions
The THE University Impact Rankings are designed to be as inclusive as possible: they’re open to all universities and institutions that offer undergraduate programmes. The overall ranking combines each university’s score for SDG 17 with its best three other SDGs, and individual rankings highlight the top-performing universities for each SDG.
Data are sourced from university submissions and existing Elsevier datasets. Each goal is weighted equally, has its own metrics, and is designed to filter out biases related to wealth.
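To make the scoring rule concrete, here is a minimal sketch of how an overall score could be computed from per-SDG scores. The weights and function name are assumptions for illustration only – consult THE’s published methodology for the actual figures – but the structure (compulsory SDG 17 plus the best three other goals) follows the description above.

```python
def overall_score(sdg_scores, weight_sdg17=0.22, weight_other=0.26):
    """Illustrative overall score: SDG 17 plus the best three other SDGs.

    sdg_scores maps SDG number -> score out of 100. The default weights
    are assumptions for this sketch, not THE's published figures.
    """
    if 17 not in sdg_scores:
        raise ValueError("SDG 17 is compulsory in the ranking")
    # Take the three highest-scoring optional goals the university reported on.
    best_three = sorted(
        (score for goal, score in sdg_scores.items() if goal != 17),
        reverse=True,
    )[:3]
    return weight_sdg17 * sdg_scores[17] + weight_other * sum(best_three)

# Example: a university reporting on SDGs 3, 4, 5, 8 and 17.
result = overall_score({17: 50, 3: 80, 4: 60, 5: 90, 8: 70})
```

Because SDG 4 scores lowest of the optional goals here, it simply drops out of the overall score; the university is judged on its strongest areas.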
You can find out more about the methodology and metrics via our webinar.
Vertigo Ventures clients in the Impact Rankings
At Vertigo Ventures, we’re experts in international impact – and we’re delighted to see that all of our clients have performed very well:
• King’s College London ranked 5th overall
• The University of Wollongong came in 12th
• The University of Sydney came in 23rd
A further nine institutions that we work with were listed in the top 150 overall, and when it comes to individual SDG rankings, several more achieved excellent results:
• The University of Worcester ranked 16th in SDG#17 Partnerships for the Goals
• The University of Sydney ranked 4th in SDG#3 Good Health and Well-Being, 7th in SDG#8 Decent Work and Economic Growth and 13th in SDG#5 Gender Equality
• Hong Kong Baptist University ranked 15th in SDG#13 Climate Action and 16th in SDG#8 Decent Work and Economic Growth
• The Australian Catholic University ranked 25th in SDG#3 Good Health & Well-Being
Congratulations to all!
Why the THE University Impact Rankings matter
Levelling the institutional playing field
These rankings were designed to make it easy for all universities to participate – regardless of their resources, their endowments, and their sizes. The aim is to create a level playing field to enable institutions to show the impact of their work.
This has led to excellent showings from universities that wouldn’t necessarily chart in traditional ranking league tables, and wider participation from institutions of all sizes the world over. This is particularly important because universities from developing countries often find themselves excluded from league tables due to publication thresholds and other disqualifying criteria – even when these universities have a positive social impact. Their participation demonstrates how universities from developing nations can deliver against the SDGs, do outstanding work and create a positive impact on their local and regional communities.
These rankings are focussed on understanding how the higher education sector is making a positive contribution to society and the environment. They’re about supporting behaviour change among universities worldwide and identifying whether research policy is making a difference. This will allow institutions with highly specific remits to appear in the rankings and hopefully attract more students and more funding as a result.
Funding and impact
In 2019, every higher education institution should be acutely aware of its impact on society; indeed, impact is becoming increasingly intertwined with institutions’ financial futures. Since the last assessment round, for example, the weighting of impact in the REF has increased from 20% to 25%, meaning that measuring impact and demonstrating the difference universities are making (for example, progress towards the SDGs) will only become more important. This is a worldwide trend that universities need to take very seriously – especially now that, with the advent of the THE University Impact Rankings, impact is being officially measured.
Need help demonstrating the impact of your institution’s work, or preparing for a national research assessment exercise? Talk to our team of impact specialists today.
Despite growing interest in public engagement with research, and general acceptance of its importance, relatively little is understood within the sector about how to plan for and evaluate the impacts of the wide-ranging activities placed under the broad umbrella of ‘public engagement’. For VV’s next webinar – to be held on Monday 23rd April at 1pm – we have invited Paul Manners and Sophie Duncan, Directors of the National Co-ordinating Centre for Public Engagement (NCCPE), to share lessons learnt about assessing the impacts arising from public engagement in REF 2014, and the work currently underway to provide a more robust framework for evaluating such impacts in REF 2021.
You can sign up for the webinar through this link.
There is a clear desire from the funders – and the wider sector – to see high quality public engagement featuring as a pathway to impact in the REF in 2021. This was one of the recommendations in the 2016 Stern review:
“Guidance on the REF should make it clear that impact case studies should not be narrowly interpreted, need not solely focus on socioeconomic impacts but should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching.”
The REF consultation in 2017 revealed that:
- “There was widespread perception that institutions had been cautious in their choice of case studies, submitting those impacts that were easy to evidence or align with the criteria. In particular, this was felt to have affected impact through public engagement and case studies demonstrating cultural or social benefits.
- A significant number of respondents highlighted the need for clearer guidance on capturing impact arising from public engagement. Overall, respondents were supportive of broadening definitions to be more inclusive of public engagement activity, but there was a lack of clarity about how such impacts would be assessed. A particular concern was raised about providing clarity on the distinction between dissemination and impact. Similar points were made regarding cultural and societal impacts, which were perceived to be challenging to evidence and measure.”
The NCCPE is keen to contribute to the development of criteria and guidance that encourage high-quality public engagement with research to feature in the next REF. Their review of the 2014 case studies revealed that, even with the constraints noted above, nearly half of the case studies featured some aspect of public engagement as one of their routes to impact. That engagement led to a variety of significant impacts, particularly in stimulating discussion, dialogue and debate that deepened understanding, influenced policy and practice, improved products and services, developed individuals’ skills and efficacy, and brought new networks into being.
There is no question that assessing the impacts arising from public engagement is challenging, as it is for other forms of ‘stakeholder’ engagement, when the influence exerted by the engagement is often a subtle contribution to complex interpersonal processes like learning and cooperation. But that shouldn’t deter us from trying, or allow these kinds of impacts to be discounted because they are ‘too hard’ to judge.
Overview of webinar
The 2014 case studies revealed that it is possible to make compelling and robust accounts of impacts arising from public engagement. There is also a significant body of work, inside and outside higher education, dedicated to planning and evaluating complex interventions and processes of social change which we can draw on to inform the next REF.
The NCCPE is currently developing a framework to inform the guidance for assessing public engagement in REF 2021. This involves synthesizing the key lessons learned from the last REF, and from the extensive literature about evaluation and ‘what works’ in effective engagement and evaluation processes. A summary of the framework will be shared with webinar participants before the session.
The webinar will explore:
- Lessons learned about the assessment of impacts arising from public engagement in REF 2014
- The NCCPE’s draft framework describing the key pathways to impact realised through engaging the public with research
- How HEIs and researchers can apply this framework to inform their planning for REF 2021
The webinar will also include a Q&A session with our guest presenters.
Who should attend?
- REF Managers, Impact Officers, Research Support Officers and Public Engagement Professionals
- Academic Impact Leads and Impact Champions
- Directors of Research seeking to understand and develop impact strategies for their department(s)
The webinar will be led by Paul Manners and Sophie Duncan, Directors of NCCPE.
The National Co-ordinating Centre for Public Engagement (NCCPE) is internationally recognised for its work supporting and inspiring universities to engage with the public. It works to change perspectives, promote innovation, and nurture and celebrate excellence. It champions meaningful engagement that makes a real and valued difference to people’s lives.
The NCCPE is supported by the UK Higher Education Councils, Research Councils UK and Wellcome, and has been hosted by the University of Bristol and the University of the West of England since it was established in 2008.
To join us for the webinar please register your interest here: https://www.eventbrite.co.uk/e/webinar-public-engagement-as-a-pathway-to-impact-tickets-44874158817
Researchers today are under increasing pressure to systematically plan for, measure, evidence and report the impact – the economic, social, cultural and environmental benefits – of their work. Beyond the significant increase in the weighting of the impact criterion in the UK’s next national research assessment exercise, the Research Excellence Framework (REF) 2021, all Research Councils UK (RCUK) grant applications also require researchers to consider carefully at the outset the potential beneficiaries of their research and to draw up impact plans for targeted engagement activities – in the form of an impact summary and a pathways to impact document – to ensure the best design and uptake of research.
This is not just the case in the UK. Of the three criteria used to assess grant applications to Horizon 2020 (the EU’s €80 billion research and innovation funding programme), impact is the dominant measure. Outside Europe, Australia is introducing its pilot Engagement and Impact Assessment (EI) in 2018 to run parallel to its main national research assessment exercise, while Hong Kong is following in the UK’s REF footsteps by introducing impact as a key criterion in its Research Assessment Exercise (RAE) 2020.
The importance of a research impact strategy
Significant investment is made every year by governments into university research; in 2015, for example, the UK higher education sector spent £8 billion (€11 billion) on research. As most university research is supported by public funds, government policies in recent years have introduced frameworks for the systematic planning, reporting and evaluation of impact, both to demonstrate return on investment and to maintain and improve the high quality of research undertaken within universities. Universities are under more pressure than ever to ensure that the right impact skills and knowledge are cultivated among their researchers, and that appropriate support mechanisms and a suitable research environment are available to them.
Despite the increased importance of planning for and demonstrating impact, and the strong funder requirements around this, institutional support remains at a relatively early stage of embeddedness, and it remains an area many academics feel ill-equipped to tackle. As the saying goes, impact is not meant to be an ‘add-on’ (or an afterthought to a project). At an institutional level, to be truly effective, impact must be embedded in the institution’s research culture. A well-developed and well-implemented institutional research impact strategy can do just this.
So what is a research impact strategy and how can it help?
A research impact strategy should be aligned with the top-level institutional mission and strategy, and support the achievement of one or more of the institution’s key performance indicators. Its development and implementation should involve – or preferably be led by – senior management. Clear and regular messaging from research leaders that impact is important and core to the university’s mission is the starting point for embedding impact in a university’s research culture. The strategy should also be developed in close consultation with the university’s researchers and research support staff, to ensure proper buy-in and that they find it useful for their needs.
A research impact strategy and its corresponding implementation plan should provide clarity of purpose. It needs to define key objectives and the mechanisms that will be used to achieve them. Furthermore, it should set out roles and responsibilities, and an approach to the systematic allocation of time and resources for these. Ideally, such a strategy needs to be based on a proper analysis and understanding of the university’s areas of strength and weakness in terms of impact, the key blockers, the main areas on which to focus resources and time in the shorter term, and the areas in which to build further capacity going forward.
An institutional impact strategy, when properly implemented, should provide a clear long-term vision, help develop engagement and impact skills among both research and research support staff, and ensure that the necessary guidance, support systems (e.g. IT systems to track and evidence impact) and an appropriate research environment are available to them. Roles that ‘champion’ impact and guide staff should be embedded within departments and at leadership level, and the strategy should also provide mechanisms and platforms that both incentivise and celebrate impact.
Upcoming webinar – Developing and embedding a research impact strategy
To take a deeper look at developing and implementing institutional impact strategies, we are holding a webinar on Wednesday 7th of March at 1pm (GMT). The session will be led by independent consultant and guest speaker Jenny Ames, who has over 30 years of experience working with universities from all over the United Kingdom. Jenny was Associate Dean at the University of the West of England (Bristol), where she was also Research Impact Lead. She oversaw submissions to Units of Assessment 3, 6 and 22 for REF 2014. In 2017, Jenny was Impact Lead for University Alliance.
The webinar will run for approximately 45 minutes and is aimed at supporting those who want to know more about or are involved in developing and implementing impact strategies at institutions involved in research.
The session will address questions such as:
- Why should a university develop a research impact strategy?
- What are the key considerations when developing and implementing a research impact strategy?
- What are the key components of a research impact strategy?
- What should be the role of Impact Officers and the Research Office in developing and implementing one?
At the end of the webinar there will also be a brief Q&A session where participants will have the opportunity to pose questions to Jenny and address any concerns they may have.
To join us for the webinar please register your interest here: https://www.eventbrite.co.uk/e/webinar-developing-and-embedding-a-research-impact-strategy-registration-43163479128
Since at least 2009, when the research councils launched Pathways to Impact and HEFCE released proposals for impact assessment, impact has grown in both importance and sophistication as part of research organisations’ agendas. Whether institutions are equipped to support this development was questioned last week in a timely blog on the LSE Impact site, but activity in this area has grown rapidly over the last eight years or so.
Alongside the expectation that all researchers should consider the likely impact of their work, the government has been investing new research money in challenge-led research funds. The Global Challenges Research Fund and the Industrial Strategy Challenge Fund are part of the government’s plan to invest an extra £4.7bn in research and development by 2021. The challenge-led nature of these funds means that attention is naturally given to the impact of projects.
One tool for enhancing the effectiveness of research impact is a ‘theory of change’. This is a tool for planning and monitoring impact, which is increasingly referred to by research councils both directly and indirectly, as an approach that researchers should consider within their impact work.
In its guide to creating excellent pathways to impact, ESRC suggests using a theory of change as a tool to enable the impacts you are expecting. More explicitly RCUK requires development of a theory of change as part of the final round of applications to its collective calls.
So, what is a theory of change and where did it come from? Essentially, it is a framework for thinking through a process of achieving change and making the steps and assumptions within it visible. Theory of change emerged from evaluation techniques used in the 1950s, which led to log frames, or logical models of causal chains leading to outcomes. Criticisms of the log frame model include that the intention behind a log frame is not always clear, and that the assumptions underlying research (and programme) activities remain invisible. If it is unclear how the change process will unfold, it is more difficult to see how far a longer-term goal has been reached.
So, to create a theory of change, you work backwards from a clearly articulated change objective, building in a strategy and activities to achieve it through intermediate and early-stage changes (your pathway to impact). You also need to take into account the conditions that must be in place for this to happen. At the end, you will usually have a diagram and a narrative description.
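One way to picture the backwards mapping described above is as an ordered chain of steps, each carrying the assumptions that must hold for it to lead to the next. The sketch below is purely illustrative – the flood-policy example, class names and fields are hypothetical, not a standard theory-of-change schema.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One link in a pathway to impact."""
    description: str
    # Conditions that must hold for this step to lead to the next one.
    assumptions: list = field(default_factory=list)

def build_chain():
    # Defined goal-first (backwards mapping), then returned in pathway order.
    goal = Step("Local authorities adopt evidence-based flood policy")
    intermediate = Step(
        "Policy teams cite project findings in draft guidance",
        assumptions=["Findings reach policy teams in a usable form"],
    )
    early = Step(
        "Practitioners attend briefings and request the toolkit",
        assumptions=["Briefings target the right practitioners"],
    )
    activities = Step("Publish toolkit; run policy briefings")
    # Read left to right as the pathway to impact.
    return [activities, early, intermediate, goal]
```

Writing the assumptions down explicitly is the point of the exercise: each one is something to test and, if it fails, a signal to revise the pathway.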
However, theory of change is as much a process as a product. The activities required to agree a clearly defined objective, put in place appropriate steps towards it, and acknowledge and agree assumptions about how these will lead to change are time-consuming and challenging.
To carry this out, you need to be able to answer some key questions.
Theory of change is both a planning tool and a framework for monitoring and evaluating impact. In terms of planning, the approach is designed to encourage very clearly defined outcomes at every step. Details are required about the nature of the desired change – including specifics about the target population, community or organisations – what success would look like, and the timeframe over which change is expected to occur. This detail allows you, and your funder, to assess the feasibility of reaching explicitly stated goals.
Evaluating research impact can be particularly difficult because research and its impact are separated over time and distance and, for international research, by differences in culture, language and so on. The orientation of challenge-led research towards explicit problems lends itself to a theory of change approach and allows indicators to be identified for each step in the change process, beyond outputs and activities.
Using a theory of change process is useful for researchers and research managers. It helps shift thinking from what you are doing – your own activities – to what you want to achieve, which researchers often find hard. It enables, and requires, you to put in place steps to achieve this. And because each step is made visible, it gives you a framework for monitoring and evaluation, so you can measure progress against each one.
However, it is a very challenging process: you or your partners need to understand how change happens in the context you are working in; theory of change development needs to be participatory (involving collaborators and stakeholders); and it needs to be flexible enough to evolve as your project develops and you test your assumptions.
The ‘theory’ of theory of change is not that complex, but putting it into practice, especially in interdisciplinary and international groups, is very challenging. There is a lot written about theory of change to help you; some of the most comprehensive resources are at http://www.researchtoaction.org/2015/09/theory-change-reading-list and http://www.theoryofchange.org/
Vertigo Ventures will discuss theory of change – its challenges and opportunities, and what to bear in mind if you need to develop one or support its development – in a webinar on 8 December at midday (GMT). Laura from Vertigo Ventures will introduce theory of change, and we will then hear from Matthew Guest of GuildHE, who has spent several years supporting theory of change development and can take your questions.
In December 2015, as part of its National Innovation and Science Agenda (NISA), the Australian government announced the development of a national engagement and impact (EI) assessment. This will examine how universities are translating their research into economic, social and other benefits, in order to incentivise greater collaboration between universities, industry and other end-users of research. The first full, national assessment will run alongside the ongoing Excellence in Research for Australia assessment (ERA) in 2018.
In September 2017, the Higher Education Funding Council for England (HEFCE) released its first decisions regarding REF 2021, and before the end of the year the Australian Research Council (ARC) will release guidance for the roll-out of its first Engagement and Impact Assessment.
So, this is a timely moment to look at how the two approaches differ and what they might learn from each other. There’s also a surprising new dimension of relevance between the two countries, after the UK’s Universities and Science Minister announced on 12 October this year that he wanted to see a knowledge exchange framework (or KEF) to assess engagement, particularly between universities and businesses.
The UK’s REF assesses outputs, impact and environment in a single process approximately every six years, with impact included for the first time in the last round, in 2014. In Australia, research outputs and a number of other research-related indicators are assessed approximately every three years. For the first time, an assessment of impact and engagement will be introduced to the Australian process in 2018, with data going back over the previous six years.
The engagement part of the Australian process is expected primarily to involve quantitative information, supplemented by qualitative information. The impact element is expected primarily to involve qualitative information, in the form of case studies, that may be supplemented by quantitative information.
Making engagement a central part of the assessment process is a conceptual difference from the UK’s REF, which is neutral about how impact happens. According to the REF guidance, impact achieved through great effort and brilliant engagement wouldn’t be graded any more highly than impact that happened serendipitously or via passive means, such as through journal articles.
On the one hand, the Australian engagement assessment looks rather like the UK’s annual Higher Education – Business and Community Interaction survey (HE-BCI), which is expected to form at least part of any future KEF. In both Australia and the UK, these processes collect quantitative information on patent and patent citation data, research outputs analyses (e.g. co-authorship), research income analyses (funding from end-users), and co-supervision of research students. HE-BCI also requires information on other activities intended to have direct societal benefits, such as continuing professional development and continuing education courses, lectures, exhibitions, and other cultural activities. This information is optional in the Australian engagement assessment.
However, when looking at the requirements for impact case studies in the EI, the importance of engagement to the Australian process becomes even more obvious. For the Australian impact pilot, the guidance declared that ‘the assessment of impact will focus on the institution’s approach to impact, that is, the mechanisms used by institutions to promote or enable research impact’. Although the UK’s 2014 REF did collect evidence on this process, it accounted for less than 4% of the overall score, with 16% allocated to the ‘reach and significance’ of the impact. In 2021 ‘reach and significance’ will account for 25% of the score.
The implications of linking the ‘institutional mechanisms’ of impact to a specific case study are debatable. One of the findings of REF2014 was that narrative accounts of institutional activity around impact didn’t necessarily connect well with the best examples of impact. Often these happen independently of institutional structures to support engagement and impact. So, what might be the effect of bringing them together? Perhaps it might lead to less strong, but institutionally relevant case studies being submitted, or it might lead to tortuous explanations of how a specific impact was really connected to a ‘mechanism’.
It will be interesting to see how this plays out in the forthcoming report on the pilot from the Australian Research Council – will there be high marks for effort (engagement) as well as achievement (impact)?
To join us for a discussion of these issues, sign up on Eventbrite here: ‘ERA vs. REF – How UK and Australian impact assessment differs’.
Last week Open Forum hosted an event on Research Impact: Strengthening the Excellence Framework. The University of Kent and Vertigo Ventures collaborated to share insights from their experience of embedding impact. The University of Kent shared their experience so far from rolling out VV-Impact Tracker as their impact capture system.
If you have any queries or would like to learn more, please feel free to reach out to email@example.com
We are halfway through the current REF cycle, with the submission deadline for REF 2021 likely to be just over three years away. HEFCE are reviewing their consultation documents, and we expect the first indications of how they intend to implement the Stern review in the next month or so. Right now, HEIs are beginning to pay more attention to REF preparation: a new tranche of REF and impact managers are being appointed, and a new ARMA Special Interest Group (SIG) on the REF has just started up. As part of this flurry of activity, organisations are developing different ways to prepare, institutionally and as individuals, for assessment.
Most institutions now conduct internal assessment of outputs on a regular or ongoing basis and many are also looking at how to assess impact. Reviewing impacts and outputs present different challenges. Once an output is published, it is in the same form as it will be once the submission deadline comes around. Barring debates about possible UoA changes, most outputs can be internally reviewed, or with the help of critical friends (or ‘external advisors’), to give research leaders and department heads a view of what strong outputs look like. Impact case studies, by contrast, are rarely ‘finished’. Research may be completed, but once it’s out in the wild it has a life of its own and the effects often roll on, seen or unseen.
So, what are the options for reviewing progress towards a strong impact submission? All of those we’ve heard about so far start from identification of areas of likely strong impact for each UoA. Once these broad areas have been identified, approaches to interim assessment differ. We categorise these as Completeness only, Mock-REF, and Middle way.
Completeness only
Some institutions focus on the completeness of a possible case, looking to make sure that the (likely, as we don’t have the final rules yet) threshold requirements are met. This means ensuring there is rigorous underpinning research (noting a possible ‘body of work’ change). Many institutions are playing it safe, expecting that reassurance of research quality will still be sought, and so require some assurance that the underpinning research is of at least 2* quality. Enough research must have happened at the submitting institution (a ‘distinct and material’ contribution, in the words of REF 2014), as it is expected that portability of impact will still be prohibited, and this must have happened within the past 20 years. Some sort of impact must be taking place within the window, although it is not formally assessed under this approach.
Mock-REF
This is the full monty approach. Case studies are drafted and evidence is collected. All the components of a case study are put together for assessment by a REF-type panel, usually consisting of internal staff with an interest in or experience of the REF, sometimes combined with external critical friends. This requires a lot of work: evidence needs collecting, panels need assembling, and case studies have to be amended on an ongoing basis. In this situation researchers should be provided with support and/or training to write the case. If case studies are graded, panels might also need training, or at least very good guidance about what makes a good case study.
At this stage, there should be an element of formative assessment as well as any summative grading. Formative assessment can then lead to specific additional resources, such as training, mentoring, funds, or research leave where necessary and possible. It is important not to write off early-stage cases too early. If grades are given, it might be useful to temper that with some sort of risk measurement as well.
Although this approach is hard work and time-consuming, it does give the best all-round picture and shows researchers what it takes to build a case study. The risks of giving a star rating could be marginally mitigated by carrying out this assessment on a University-wide, or at least Panel-wide, basis rather than UoA by UoA, so there is a wider range of impact types to compare against.
The middle way
There are various middle way assessment exercises that Universities are carrying out. These include narrative reports from impact leads in departments and interviews carried out by impact staff to formulate impact case lists. These varied approaches can respond to different requirements in different UoAs, for example where there is a new submission or a fast-growing department. However, they can put a heavy onus on research impact staff, which can detract from other, broader tasks such as training. They also don’t spread learning widely across the institution, and they carry risks for the institution if the impact staff move on.
So, what should institutions look to assess at this stage? Where possible, an understanding of likely reach and significance can help with the allocation of resources, and this should be combined with an assessment of specific need. An assessment of risk to promising cases can also be valuable, such as identifying cases that rely on a single member of staff or on an assiduous post-doc on a precarious contract.
Other risks can involve neglected key relationships or very specific windows of opportunity that must be capitalised on. Any type of interim assessment should then lead to a plan of action, ideally for each case but also for departments or UoAs. Specific types of support should be developed to meet the needs of those working on cases, from peer learning networks to specialised training, support from specialist consultants, workload planning, or support for evidence collection, tracking and collation.
To look in more depth at these challenging issues, we’re holding a webinar on 8 September. This will be an online discussion with three impact experts from Universities that are taking different approaches to impact monitoring and assessment. It will then be opened out for comment and questions from the audience. This one-hour, intensive session is aimed at supporting those who are developing impact cases for the next REF and will address questions such as:
- What are the options to consider when structuring an impact review?
- What support do researchers need in the process?
- How do you account for risk in developing cases?
- Should you use a panel? If so, how?
This paper was co-written for the Triple Helix Conference 2015 by Averil Horton (Brunel University), Tim Jones (National Physical Laboratory) and Laura Tucker, née Fedorciow (Vertigo Ventures).
Despised yet necessary, loved and hated, difficult and easy; Impact is all of these things.
When we need to justify expenditure, Impact feels like an easy option, yet when we need to demonstrate the value of our efforts, it becomes difficult. When there is a need to demonstrate an investment case, future Impact appears obvious, but when we need to measure it, it slips through our fingers. When we need to secure support, funders require Impact, yet when we must report it, we shy from it.
The economic downturn has emphasised the need for sustainable economic growth built around innovation, and Government policy reflects this. Impact provides a way of demonstrating the social, financial and environmental return on the initial activities undertaken. So we must all embrace Impact, and do so now. But existing methods work poorly, because embracing Impact to our collective advantage requires a different way of thinking. We all lack a common framework to make use of Impact; we cannot easily talk about it, identify it, report it, or measure it.
But hope now arrives: here we synthesise four simple ideas (Impact Journey, Audience, Metricated Case Studies and Value Scorecards) to create just that essential common structure. Blending frameworks and output methodologies already developed and used by Brunel University and NPL with an existing toolkit from Vertigo Ventures, we give you a pragmatic, realistic, and simple method to identify, report, and measure Impact, together with two output examples of practical Impact reporting. Pragmatic Impact metrics help us all.
Since my Review, Encouraging a British Invention Revolution, was commissioned, I have had the chance to examine evidence and meet entrepreneurs, members of LEPs, Business Schools and Universities across the country.
Two conclusions dominate:
- The UK has an extraordinary wealth of ideas, technology and human energy – much of which is world-leading and capable of seeding not just new companies but whole industries with potential to build substantial export positions.
- Significant scope exists to better align funding streams, organisational focus and increase cross institution collaboration to avoid delays in ideas reaching maturity and the risk of British inventions building foreign industries.
At an early interview session, I was deeply struck by the statement: “Britain doesn’t breed entrepreneurs, it breeds endurance entrepreneurs”. The point being that the ‘thicket’ of complexity between central and local structures, and the diffusion of funding and advisory energies, creates unnecessary hurdles for those striving to translate ideas into job-creating businesses.
At the heart of my recommendations – three philosophies:
- Structure funding flows by technology/industry opportunity – not by postcode. We should embrace the country’s density of population and institutions and drive greater collaboration wherever the ‘idea flows’ – eliminating unnecessary regional barriers which create domestic competition instead of marshalling our resources to run a global race.
- Universities have an extraordinary potential to enhance economic growth. The full diversity of institutions has a role to play, from local SME support and supply chain creation to primary technology leadership and breakthrough invention. Incentives should be strengthened to encourage maximum engagement from Universities in the third mission alongside Research and Education.
- Government should help facilitate what I have called Arrow Projects to drive globally competitive technological ideas forward into real businesses. The Arrows should provide full support to the invention at the ‘Tip’ and should be uninhibited by institutional status, geography or source of funding. Government should put its weight behind creating global scale through encouragement of real collaboration in fields in which we can win.

A great debate has taken place on whether Britain can or should have an ambition to grow its manufacturing sector.
It seems obvious that at least two basic conditions need to be met to have any chance of a long term sustainable manufacturing base:
- An invention culture which successfully translates from ‘mind to factory’.
- A globally competitive sense of timing and scale.
My review has convinced me that while the UK can’t do everything, it has the capacity to do much, very well, if we do a better job of aligning our resources and, put simply, on occasion ‘get out of our own way’! The advances in knowledge in this era reveal a prize worth challenging our behaviours for, one which, if we were successful, could herald a British Invention Revolution to rival the transformation witnessed in the 19th Century. Surely a prize worth re-thinking how we work for?
Finally, while responsibility for this report is mine alone, I have benefitted greatly from insights of the distinguished experts on the Review’s Advisory Group – Professor Sir John Bell, Professor David Greenaway DL, Professor Graham Henderson CBE DL, Professor Dame Julia King, Professor Wendy Purcell, Professor Dame Nancy Rothwell, Colin Skellett OBE – and I am very grateful to them for the time and thought they have given.
Strategies for embedding impact: The management and adoption of impact capture processes within research information systems
Following the 2014 UK Research Excellence Framework (REF), attention across the Higher Education sector is turning to embedding impact measurement within the organisation. Impact is defined as the social, financial and environmental effects of research. Planning for and capturing impact, however, is a difficult and resource-intensive activity, demanding both strategic commitment and infrastructure support. A means to systematically capture and monitor impact across the organisation is crucial to continued research success. In addition, with impact data capture an emerging practice, there is both the opportunity and the necessity for a degree of standardisation in the approach to measuring impact across HEIs. Vertigo Ventures, an impact measurement consultancy, has been using and expanding its VV-Impact Metrics tool with UK universities to support assessments, identifying impact pathways and impact indicators and supporting evidence collection and analysis to improve the quality of the evidence and narrative. Vertigo Ventures has been working with Coventry University to embed the VV-Impact Metrics tool in Coventry’s self-service module (ERIC), creating a systematised data capture platform that the academic community can readily use to input data. This paper discusses the experience and learning from the process of embedding such a solution institutionally.