Implementation as learning: 24 questions to ask

Implementation is everything. Or is it?

True, strategy without implementation doesn’t get done. 

But implementation without good strategy gets the wrong things done. 

No one could blame the soldiers on the Somme for not implementing their mission. They charged the machine guns and barbed wire with staggering courage and heart-wrenching loyalty to their country. But the strategy was totally flawed and doomed to failure, no matter how perfectly they’d executed it. Strategy is crucial. 

Even so, implementation plays a large part in why initiatives succeed or fail. The most carefully planned programmes can still fail if execution isn’t up to scratch.

What can we learn from research?

Research on implementation

For almost 100 years, McKinsey have been studying implementation performance, factors and practices. In 2015, they surveyed 2,200 leaders from 900 organisations. They found that the most important implementation factor by far was ownership of, and commitment to, change across all levels of the organisation. Most respondents said their organisations struggled to identify issues, root causes and solutions when implementing. Organisations tend to implement well when they have staff ownership, honest feedback and frontline trouble-shooting at the heart of what they do.

Further research could test whether this hypothesis holds for schools too, especially because McKinsey seem to have recurring blind spots around domain expertise. Based on experience in schools, a combination of domain knowledge and ownership seems a strong hypothesis for better implementation.

How can we create high levels of ownership, along with deepening expertise, when we implement our plans in schools? What do we need to know? 

Two disciplines that take us beyond implementation science and its ‘mechanisms’

Headteacher Matthew Evans signposted me to a mind-shifting research summary in the BMJ that combines three disciplines: implementation science, complexity science and social science. Each has a different logic of change: mechanistic, ecological and social respectively.

Implementation science developed from evidence-based medicine, and is systematic, sequential and structured, with talk of mechanisms as an ‘engine’. The EEF draw heavily and narrowly on this discipline. 

By contrast, complexity science is ecological. It sees organisations as complex, dynamic and evolving, full of relationships, interactions and uncertainty. It encourages understanding relationships, running experiments, asking questions, exchanging viewpoints, muddling through, developing adaptive capacity in staff by allowing judgments and tinkering, and fostering a participatory culture.

What is more, social science draws on people’s beliefs, feelings, values, motives, expectations, understanding, norms, customs and interpretations. It tries to understand why people act as they do. It recognises that under conditions of uncertainty, high stakes, group dynamics, and limited time, decisions aren’t always fully rational, but are social and emotional too. 

You can see how complexity science and social science perspectives, for example, complicate the causal web for things like obesity and its discontents.

I’d love to see further research on implementation and causal webs specific to schools, beyond the organisations and hospitals that the two papers above study.

Like strategy, which is more than one thing (knowledge-building, prioritising and adapting), and writing too (planning/studying, drafting, editing), implementation is actually several things: knowing our stuff, preparing, planning and reviewing.

We need to know what knowledge to equip ourselves with.

We need to know how to choose whether to run an initiative.

We need to know how to approach planning each initiative.

We need to know how to review our initiative.

Preparing, planning and reviewing well upstream can prevent massive problems and heartache downstream. All depend heavily on knowing the context and area we are implementing in: whether behaviour, curriculum or CPD.

What if we created and iterated a list of questions to ask ourselves to build our tacit and contextual knowledge at the crucial points before deciding on an initiative, before launching it, and for shortly after?

PREPARING: how to know enough to decide if this is the best thing to do

  1. Team: Who will lead the initiative as project leader, core team and wider champions? How deep is their domain expertise – how much credibility do they have?
  2. Capacity: How much capacity do they and the organisation have – how many other initiatives are going on? How much time do the frontline implementers have to bring to what will need to be done?
  3. Time: How much time can we allocate to the knowledge-building, planning and reviewing phases of the potential new initiative? Remember Hofstadter’s Law: everything takes far, far longer than we think, even allowing for this law.
  4. Comparisons: How confident are we that it’s the very best next initiative? What’s its feasibility and probability of lasting success, based on the research evidence and our domain experience? How high-impact is it for the time invested? What’s the opportunity cost? What are the alternatives? Should we definitely choose this over others?

In short, to prepare well when implementing, know your team, their capacity, the time and the options.

PLANNING: how to increase chances of success

  1. Problem: What’s the precise problem we want to address? What are the causal webs here, based on our experience?
  2. Initiative: What are the vital ingredients that multiple disciplines, research evidence and our experience suggest? What’s our theory of change – how exactly do we think our solution will address the problem and its multiple causes? 
  3. Scope: What scope are we choosing – what are the breadth-depth tradeoffs? 
  4. Quality: What standards will we set as essential, given our expertise – and how can we make them clear?
  5. Alignment: What are our team’s current beliefs, values, feelings, expectations and norms? Where does our theory of change clash with those? Where will it resonate and align with our people’s outlooks and worldviews?
  6. Pilot: How could we run a smaller-scale pilot to evaluate and learn from?
  7. Follow-through: What training activities and rationale will best boost follow-through and deepen domain knowledge?   
  8. Content: What needs to be produced to make the vital ingredients and training activities work?
  9. Risks/Premortem: What will the biggest challenges and risks be? What could best be done to mitigate them? If in a year it hasn’t worked out, what would the reason be?
  10. Tasks/Owners: What tasks must be done by when? Who will do what by when? How strong is their expertise in these areas?
  11. Milestones: What milestones and deadlines will we set? How realistic are these deadlines, given everything else we have on?
  12. Preemption: How can we preempt the ‘implementation dip’ by anticipating what is most likely to prove tricky? Could we get internal or external expert feedback on the content we create?
  13. Ownership: How can we create ownership among frontline staff, perhaps through involvement, decision-making and trouble-shooting on this?
  14. Launch: Who’s best placed to launch the initiative to frontline staff and how?
  15. Support: How will comms, reflection/discussion, coaching or other support work best?  
  16. Input: How will feedback, data collection and input-gathering work best to identify issues to resolve? Who will we get input from ahead of the launch, and after?

REVIEWING: how to choose whether to sustain, scale or scrap it 

  1. Adaptations: What side-effects and unintended consequences have we seen? How should we adapt to issues raised in review?
  2. Bright spots: What are the early wins and bright spots we can flag up and learn from?
  3. Sustaining: What is required to sustain this long-term in the culture? What knowledge will most deepen our expertise, and for whom?
  4. Scale: Should we stick with the current scope, scrap the initiative, or scale the pilot up? If we scale, let’s return to the questions of the preparing phase.

Do less, better

I didn’t ask these questions of the initiatives I ran in the past. To be honest, I didn’t carve out the time for them. As a result, we came down with recurrent bouts of initiativitis.

Actually, these 24 questions make me realise: it would be much better for me to run far fewer initiatives, far better implemented. To do fewer things in greater depth. One CEO of a successful organisation I know ran just 5 big initiatives in 25 years. Many schools run 25 in 5 years. Or more!

If I asked these questions of each initiative I thought about launching as a school leader, it could help a lot.

  • It could create greater ownership in our teams. 
  • It could create greater understanding among our leaders.
  • It could create greater clarity and cohesion.
  • It could reduce confusion, overload, forgetting and burnout. 
  • It could boost morale, productivity and perhaps even happiness and retention. 

And it’s just a few steps.

Better knowledge. 

Better preparation. 

Better planning. 

Better reviewing. 

*

With implementation – combining expertise, problem-solving and ownership – there’s lots to ask and loads to learn. 

Time to redouble my efforts!
