Developing a Framework for Innovation Intermediation

My exciting journey with the Innovation Caucus started one rainy morning in Spring 2017, when by chance I spotted an advertisement for internship applicants doing the rounds over email. This was followed by an email from my supervisor, asking all of his PhD students if we had seen the call and whether we were interested. Not being one to decline an opportunity, my reply was immediate – yes!

I had first come across the Innovation Caucus and its work some months previously, while putting together a notice for the departmental newsletter about our engagement with policy, so I was really excited by the opportunity to translate my research interests into useful knowledge for policy-making. Having applied and made it through to the interview, I was ecstatic! Speaking to Tim and his team was interesting and inspiring, and once I was offered the internship, it took even less time than before to say “yes” and accept it.

As I am really passionate about my PhD research topic (the social aspects of technology development and innovation) and my subject matter (the space industry – yes, the stuff “up there”), I took quite some convincing to take on new challenges within the Innovation Caucus brief. In part, this was because I really wanted to create a new space of shared knowledge and sense-making, i.e. to challenge theoretical concepts with empirical findings and policy realities – and I could only envisage doing so within topics about which I was already somewhat knowledgeable.

However, in discussion with Innovate UK and the Economic and Social Research Council (ESRC), I did eventually reshape my interest into developing a broader framework and typology of innovation intermediation in any geographically bound sectoral system of innovation. This was of value to Innovate UK, since it supported the ongoing development of their portfolio of Catapults and Knowledge Transfer Networks, as well as other projects and policies.

This experience was a great lesson for me, not only in working with and delivering for a policy-making system, but also in expanding my own research interests into domains I initially found uncomfortable. Presenting the headline findings of this work at one of the most prestigious innovation conferences in the world, DRUID (2018), helped me appreciate the power of broader generalisation of academic knowledge in order to achieve more substantial societal impact.

The lessons learned and experiences from this project also enabled me to engage better with new concepts, unfamiliar settings and unknown stakeholders in my subsequent work. For instance, these skills have proved critical in working on a consultancy project for the OECD and as a Research Assistant in academia.

I must express my heartfelt thanks to Tim and his team for their support and mentorship, and to all involved with the Innovation Caucus, particularly the Innovate UK and ESRC teams involved with my internship. It was your determination and generosity that turned this project from a three-month desk job into a transformational professional journey.


This post was published in October 2018 on the Innovation Caucus blog: Developing a framework for innovation intermediation.

Find out more about the Innovation Caucus.

Quantified Correlated Impacts (QCI) – ER4

To start building a (more) coherent picture of impact evaluation in science and technology programmes, we need a constellation of many different methods to provide meaningful insight into the need for, and success of, an intervention. Consequently, evaluation research should be an organisational modus operandi, rather than a set of separate top-level exercises.

I propose a new paradigm in the impact evaluation of investment and development in science, technology and innovation, namely Quantified Correlated Impacts (QCI). This approach is based on both quantitative and qualitative data collection: bibliometric and econometric figures are correlated with qualitative methods – interviews, focus groups and surveys – to determine the perceived causal contribution of the different factors, with particular focus on those arising from the intervention.

At its core, QCI is underpinned by a logic model, which connects the intervention with the evidence justifying the planned outputs and leads from the inputs through activities towards short-, mid- and long-term outcomes.

[Figure: Logic model for the proposed networking strategy in the UK/Scottish space sector]
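
To make the structure concrete, here is a minimal sketch of how such a logic model could be captured as structured data in Python; the class and all field values are purely illustrative, not the actual project plan.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """A minimal logic model: inputs -> activities -> outputs -> outcomes."""
    inputs: list        # resources committed to the intervention
    activities: list    # actions the intervention undertakes
    outputs: list       # directly measurable products of the activities
    outcomes: dict      # intended changes, keyed by time horizon

# Illustrative instance for the networking intervention described below.
networking_model = LogicModel(
    inputs=["public funding", "staff time", "sector contacts"],
    activities=["build online platform", "host networking events"],
    outputs=["enrolled organisations", "event attendance"],
    outcomes={
        "short-term": ["new inter-firm contacts"],
        "mid-term": ["collaborative projects"],
        "long-term": ["business growth", "job creation"],
    },
)

for horizon, changes in networking_model.outcomes.items():
    print(f"{horizon}: {', '.join(changes)}")
```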

As part of my research project, for example, I am involved in an intervention driving economic growth in the UK/Scotland by stimulating collaboration across the UK/Scottish space industry through increased sectoral networking. This is a particularly important part of my research into business incubation in the Scottish space sector and the related (sectoral) systemic properties, such as the institutional framework, networks of actors, and knowledge creation and dissemination (following Malerba’s Sectoral Systems of Innovation approach (Malerba, 2005)). There is also a wealth of evidence about the importance of networking for the success (and growth) of small businesses (Brüderl and Preisendörfer, 1998; Ostgaard and Birley, 1996).

The suggested action to generate these positive effects is to support the growth of small and medium-sized businesses by integrating them into a wider network across the sector and beyond. This will be facilitated by the creation of, and enrolment of actors into, an online database/forum/platform providing easy access to contacts. Once that is established, there is also provision to host networking events (thematic or generalist) to solidify ties and introduce more actors into the network, particularly from non-core businesses.

In terms of evaluation, key facilities need to be established prior to the evaluation of outputs (the database and its uptake, and the networking events). Database growth can be analysed quantitatively (e.g. number of enrolled individuals, organisations, etc.), while the networking events can be analysed qualitatively (e.g. interviews, feedback, ethnography).

The key next step is to tie the intervention to its outcomes/impacts through an advanced cost benefit analysis. In the example given, this can be done by comparing the investment made against the growth and revenue of the companies most interconnected within the newly established network, relative to the more peripheral ones, or those outside the network.
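
As a rough illustration of that comparison, the sketch below (in Python, with entirely invented figures and a deliberately naive monetisation step) contrasts average revenue growth for well-connected firms against peripheral ones and relates the gap to the cost of the intervention.

```python
# Hypothetical figures: annual revenue growth (as fractions) for firms grouped
# by their position in the newly established network.
core_growth = [0.12, 0.18, 0.09, 0.15]        # well-connected firms
peripheral_growth = [0.05, 0.07, 0.02, 0.06]  # firms at the edge of, or outside, the network

def mean(values):
    return sum(values) / len(values)

intervention_cost = 250_000        # illustrative programme cost (GBP)
core_baseline_revenue = 4_000_000  # combined baseline revenue of core firms (GBP)

# Extra growth associated with network centrality.
growth_gap = mean(core_growth) - mean(peripheral_growth)

# Naive monetised benefit: the growth gap applied to the core firms' baseline revenue.
monetised_benefit = growth_gap * core_baseline_revenue
benefit_cost_ratio = monetised_benefit / intervention_cost

print(f"Growth gap: {growth_gap:.1%}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

As the next step argues, such a gap cannot simply be attributed to the intervention without benchmarking it against background trends and qualitative evidence.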

The last part is the crucial correlation, which provides tangible benchmarking for the overall success of a programme (within the cost benefit analysis). This is done by comparing the observed trends in key parameters (in our case job creation, revenue growth, etc.) with the corresponding regional, sectoral, national or global trends. The key objective is to trace any significant difference, which can then be (in part! – see below) attributed to the intervention.
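
A minimal sketch of that benchmarking step might look as follows; the trend figures are invented, and in practice one would use proper statistical tests rather than a simple subtraction.

```python
# Hypothetical annual trends among programme participants versus the
# corresponding background (sectoral/national) statistics for the same period.
participant_trends = {"job_creation": 0.08, "revenue_growth": 0.14}
background_trends = {"job_creation": 0.03, "revenue_growth": 0.06}

for parameter, observed in participant_trends.items():
    baseline = background_trends[parameter]
    residual = observed - baseline
    # Only part of this residual can be attributed to the intervention;
    # the qualitative strand (interviews, focus groups) probes how large that part is.
    print(f"{parameter}: {observed:.1%} vs background {baseline:.1%} -> residual {residual:+.1%}")
```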

Crucial information, however, comes from the collected qualitative data, which maps the action to its value for the participants, i.e. what the contribution of a specific intervention was to the overall change. For instance, in the example above we investigate the effect/importance of networking on business success. This data can only be obtained by interviewing the participants in networking events and running surveys and focus groups with representatives of the companies/individuals on the database. The key questions to ask will be: What made the difference? How? And how significant was it? We can then comment on the part the intervention played in the difference found between the participants’ performance and the correlated trends.

Overall, this approach enables the evaluator to marry the desirable clarity of cost benefit analysis, where standards of success/failure can be contested, with a more balanced set of criteria and tangible links. The key features are quantified data about the intervention (engagement figures, costs, returns, growth, etc.), which are then examined qualitatively (interviews, focus groups, etc.) as a contribution towards the difference between participants’ performance and correlated background trends (sector growth, national job creation, GDP, etc.) – revealing the impact of the programme.

As noted, this new Quantified Correlated Impacts (QCI) framework is still under development, and I sincerely welcome comments and suggestions on its tenets. (And, please, do have a look at the other posts in the series, too: ER1, ER2, ER3.)

Many thanks in advance!

Cost Benefit Analysis: “What Have the Romans Ever Done for Us?” – ER3

Cost benefit analysis is an attractive evaluation method, as it can provide concrete, often quantified, data about interventions, usually in a form which is easily communicated to clients, policy makers, funders and the general (lay) public. At its core and at its best, cost benefit analysis is a very direct and straightforward evaluation process, whereby inputs and outcomes are weighed against each other and logical conclusions about the efficacy of a programme can be reached.
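
In its simplest quantified form, the calculation reduces to a ratio of monetised benefits to costs. Here is a toy illustration in Python, with all figures invented:

```python
# Toy cost benefit calculation with entirely invented figures.
costs = {"grant": 500_000, "staff_time": 120_000}
benefits = {"additional_revenue": 900_000, "monetised_job_creation": 300_000}

total_costs = sum(costs.values())
total_benefits = sum(benefits.values())
bcr = total_benefits / total_costs  # benefit-cost ratio; > 1 suggests net benefit

print(f"Total costs: £{total_costs:,}")
print(f"Total benefits: £{total_benefits:,}")
print(f"Benefit-cost ratio: {bcr:.2f}")
```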

However, all three of these elements – inputs or costs, outcomes or benefits, and efficacy or the relationship between the two – are highly contestable. To begin with, defining your parameter space and acknowledging constraints and assumptions is the key element of this approach to evaluation. These decisions, even if very well argued for, are ultimately just decisions; a global cost benefit analysis, if such a thing were ever possible, would need to encompass many of the factors and effects left on the other side of the dividing line for the evaluation to be a true representation of the net impact of the programme.

Secondly, even though the aim is to have data as quantified as possible – ideally with every input and impact turned into some sort of monetary measure – both costs and benefits are often indirect or intangible. In Cellini and Kee’s most stark example (2010, p. 500): what is “the value of wilderness or an increased sense of community”? Furthermore, even if a measure can be put to notions such as wellbeing, another – perhaps the most challenging of all – decision has to be made, namely what ratio between costs and benefits defines effectiveness or even efficiency?

However, in my limited experience, cost benefit analysis is effective if the intervention being evaluated is narrow and well defined in terms of the available resources, scope and intended outcomes – or, better still, when all of the above have an intrinsic monetary value attached. The intended outcomes I look for in my research are related to innovation and consequently to increased economic activity, contributions to GDP, business growth, job creation, etc.; quantification of these parameters is therefore not very difficult, as they often come as monetary values to begin with.

The most challenging part for me is benchmarking the efficacy of this cost-benefit ratio. To be honest, even though it would be possible to some degree to pass judgment on how significant the benefits must be to deem a programme a success, I prefer to correlate these ratios with background trends – global economic activity, global GDP growth, global business and job creation – and to add qualitative data where possible, as I believe the latter provides a broader judgment on how the intervention affects those in and close to it.

This advanced cost benefit analysis can then feature prominently in a new paradigm of impact evaluation – Quantified Correlated Impacts (QCI) – the topic of the next post.

“Means, Motive, Opportunity” – ER2

In order to frame this enquiry, let’s begin with a brief exploration of the motivations behind commissioning and performing evaluations in the first place. Though the examples here are from social research, they are easily compared with parallels in any intervention, including investment in the development of science, technology and business support facilities and services (for example STFC, 2014: 5-7).

Firstly, an important part of evaluation research is process evaluation (Rossi, 1972: 34), used to improve the delivery of the intervention, or – as beautifully put in an interview with the CEO of Waverley Care (an Edinburgh charity) – “what we need to stop doing, what we want to keep doing and what we are not doing that we should be doing”. When working along this strand of evaluation, it is crucial that the researcher provides recommendations that can be acted upon. The best way to carry out such an evaluation is often to focus on a specific, small area of the intervention, for example how an organisation collects feedback and implements changes reflecting the concerns raised by internal and external customers. Having said that, conclusions and recommendations can often be very general.

Process evaluation also provides a further check by identifying emerging needs and the (geographical, social, economic) individualisation of the delivery of outputs. This is particularly important for social projects (such as Waverley Care’s), where there is significant variation across the different locales in which they work. However, it also matters for the social and geographical inclusiveness of science and technology investment. Hence, evaluation research in this context can provide important checks on the “fairness” of the intervention whilst it is underway.

Then there is the often missed – but in my opinion very important – objective in evaluation: the inward-facing component, i.e. improving the morale of the people engaged in the programme/intervention/organisation by celebrating their success. It is very important for staff to appreciate the whole picture and “take a step back” to frame their work within a wider context. This is both a good motivation for future work and a huge morale boost, as people can see how they, personally and as a team, are making a significant difference to people’s lives.

Finally, the primary motivation for impact evaluation is (always?) to understand the impact/difference an intervention/organisation is making. Evaluation is often considered important for funding applications, i.e. both assessing the need for the intervention and monitoring the delivery of outcomes (to evaluate the VALUE generated).

My research is similarly linked to the need for accountability when spending public money (Nutley, Walter and Davies, 2007: 254), and in particular to the effectiveness of investment in natural sciences research, which is currently epitomised by cost benefit analysis – but that is already the topic of the next post…

“The Case for Space” – ER1

To start at the beginning: as you might know, my main research is into innovation from (basic) natural sciences and its commercialisation in the form of spin-outs and entrepreneurship. My specific field is space technologies here in the UK and in Scotland, so I look at emerging technologies ranging from satellite hardware to the use of expertise developed for large telescopes in designing medical devices, such as a retinal densitometer.

My research is tied in with the development of a new space-related business incubator in Edinburgh, the Higgs Centre for Innovation. The expected growth of this sector is part of a wider UK government initiative to grow the UK’s share of the global space sector to 10% by 2030.

[Figure: The design for the Higgs Centre for Innovation building (bottom right) at the Royal Observatory Edinburgh. © STFC]

As such, a major part of my work will be the evaluation of past and present incubation programmes, to learn about their effectiveness and suggest examples of good practice. This work is done in collaboration with my research partners, the Science and Technology Facilities Council (STFC), who are launching this new incubator and who run frequent impact evaluation exercises to justify the investment of public funds and bid for further funding allocation.

I have worked on impact evaluation previously; for example, I recently wrote a summative report about the impact of CERN (the European Organisation for Nuclear Research) on science in general and on the UK economy and society in particular, spanning the first 60 years of its operation (up to 2014). This is partially included in a chapter of the latest STFC Impact Evaluation Report (2014).

However, while impact evaluation frameworks in the context of science and technology policy are well developed (see the STFC example), I have come to realise that their methodology is less so, and that there is little available literature from which to easily form a new, coherent approach to this topic (Autio, 2014; Zuijdam et al., 2011; Markman, Siegel and Wright, 2008). Crucially, most methodological discussions included in the impact evaluation exercises I have been drawing upon focus solely on econometric parameters and their calculations, rather than discussing any holistic framework of evaluation or any qualitative or comparative methodology.

Hence, I am looking at programme evaluation elsewhere, to cross-reference the methods I encountered in my past research with the well-developed theories of policy evaluation in the social sciences, particularly those concerning social policies, and to come up with a rounded impact evaluation logic. Even though my past and present impact evaluation concerns research in the natural sciences and its impact on the socio-economic situation in the UK, many themes emerging from social policy evaluation directly match the ones I encounter(ed) in my research.

Impact Evaluation Series – ER0

Hello, finally a “proper” post after a while!

In fact, this post may not be so “proper” after all, as it only marks the start of a short series about impact evaluation, an important part of my research in science, technology and innovation.

The plan is to have four posts (ER1-4): (1) an introduction to my research in (impact) evaluation; (2) an exploration of key themes in evaluation research; (3) an analysis of the cost benefit analysis model, dominant in the policy sphere; and (4) an outline of a new methodology – Quantified Correlated Impacts (QCI).

This is very much a work in progress, so perhaps more posts will appear later on – and I would very much like to hear your comments on any of it!

Importantly, this effort is part of the 2015 incarnation of the Evaluation Research Methods course, a postgraduate course in the School of Social and Political Science at the University of Edinburgh.

Please do spare a minute or two to have a look at the host of other contributions on our collective blog, and follow our Twitter discussion marked with #evalres15.