We spoke to Jason Pearman about his team’s experience of advocating for an experimental culture within government
ADAPT was a team incubated in Natural Resources Canada’s (NRCan) Innovation Hub. It had a two-year mandate to raise awareness of novel policy tools (e.g. prizes, open policy making, AI, experimentation) across the department, and to support their use. When central agencies introduced a directive requiring departments to set aside funding to support experimentation, ADAPT helped NRCan become one of the first departments to respond.
We caught up with Jason Pearman, former manager of ADAPT, to reflect on the team’s experience of advocating for an experimental culture within government.
In the past, we’d observed a number of efforts to onboard new policy tools, methods and approaches within bureaucracies that focused on investing in a few early adopters and then showcasing their success. While encouraging to those who cared about such things, this approach never really seemed to shift the organisation away from business as usual.
Given these observations of past efforts, and timely advice from folks like Indy Johar (Dark Matter Labs), Vinod Rajasekaran (Future of Good), Jenn Gustetic (NASA), Sarah Schulman (In With Forward), Lee Rose (CKX) and former and current deputy ministers, we opted to focus on motivating the middle - the early and late majority of the standard adoption curve.
Simply put, if you have a mandate to drive innovation and/or transformation in public sector organisations, you’re responsible for moving the middle. Anything less, and your impact will be clipped by organisational momentum.
An inter-departmental consultation on barriers to using novel policy tools
Here are some of our insights on supporting this early/late majority:
1. They’re insanely busy, and they’re constantly hearing the drumbeat of change (top down and bottom up).
i) Piggyback on existing workflows. Adding to their time scarcity [1] will bias them further towards well-understood options. For example, we worked with the department’s program evaluation team to build a measurement template (including a checklist) for program experiments. Instead of ADAPT sending this material independently of the evaluation team’s standard information pack, it all goes out as one package. If possible, integrate your interventions into existing workflows.
ii) You need to make it as easy to try the novel things as it is to do the usual things that are sub-optimal. In our case, novel policy and program approaches required additional corporate due diligence (e.g. legal, fiscal, communications, audit, senior executive sign-off). This could slow, add effort to, or outright block novel proposals, even when the affected program areas were low risk or currently having limited impact. After a bit of observation and many stakeholder interviews, it became clear that much of the friction was the result of poor, and often late, engagement between the program and corporate communities. Ultimately, we evaluated a few promising models for reducing this friction.
iii) You have a short window of opportunity to provide information, resources and advice before they need to move on to the next task. So we converted commonly shared advice and materials into standard products that we could pull off the shelf and repurpose whenever a team reached out. If you’re catering to the middle, expect a larger volume of clients than innovation units that focus on early adopters. Wherever possible, productise your services [2]: it will help you respond in a timely manner to surges in demand, and let you skip a meeting or two. At the end of the day, these teams will still want to talk to people, but standard products will save you time and increase the ROI when you do meet with clients.
2. Consistent, top-down pressure is a powerful extrinsic motivator.
i) Don’t expect enthusiasm and mandates for experimentation to trickle down. We observed that most public servants are acutely attuned to the expectations of the one or two executives directly above them in the hierarchy. In theory, this should result in a trickle-down of expectations; in reality, gaps in expectations emerge and fester. This creates a significant bottleneck for policy and program innovation.
To protect against gaps, we moved to create demand for policy innovation throughout the chain of command (senior executives, directors, and managers). For example, we worked with our human resources team to craft performance evaluation criteria that now apply to all executives. This move resulted in an immediate spike in demand for ADAPT’s services.
If you chat with enough people, and support motivated teams, you’ll get a decent sense of the bottlenecks in your organisation, e.g. where approvals for novel approaches stall, or where too few proposals are being generated. Find a way to reinforce demand at these critical junctures.
NB: We were lucky at NRCan, as the Innovation Hub had spent 1.5 years supporting the supply side, so we were able to focus our efforts on structurally introducing demand.
3. Most public servants will follow the rules and assume the most conservative interpretations.
i) Repurpose the rules and help interpret them to make them workable. In Canada, there are often department-generated rules and processes as well as central-agency-generated ones. You need to work both. For example, we invested heavily in supporting the central agency team responsible for crafting the directive to heads of departments on program experimentation, as well as NRCan’s team responsible for establishing the department’s Results and Delivery (i.e. Deliverology) regime.
By engaging with these two initiatives, we were able to address, in part, some pernicious barriers to policy innovation at the department: a systemic lack of resourcing (financial and human capital) to test new policy tools, uneven executive demand, a lack of legitimacy (moving policy innovation from work on the margins to a feature of priority files), and low visibility despite many promising use cases.
ii) Evidence of strong alignment to central agency directives made us a trusted source for information and advice, which increased our access to priority files. This virtuous cycle put us in front of a core audience that needs to be better engaged for this work to stick: middle management.
For example, over the course of two years, ADAPT went from modest presentations at departmental policy committees, to presenting at the kick-off meeting of an interdepartmental working group responsible for designing a $1 billion program, to feeding talking points into a speech by the department’s second-in-command at an annual forum for managers.
The inaugural Policy Community Conference, co-organised by ADAPT and focused on advancing policy and program innovation and experimentation
When we pivoted our mandate to align squarely with program experimentation, the initial reaction was definitely a sense of “we’re already investing in new things, nothing to change here.” So we moved to clearly establish the difference between funding innovation in sectors outside of government and deliberately testing ways to improve the efficiency and/or effectiveness of government programs - in fact, this distinction became slide #2 in all of our presentations:
Slide 2 of Program Experimentation presentations
We also found that we needed a distinct narrative for every level in the bureaucracy to describe what program experimentation would look like within their sphere of influence. Here are some narratives that seemed to work for us:
We also worked with the Treasury Board of Canada Secretariat Experimentation team [4] to build Experimentation 101 material for the policy and program communities (the material is hosted on the Policy Innovation Portal - Canada’s version of the Open Policy Making Toolkit, built on the internal Government of Canada wiki).
NRCan executives walking through the Framework for Policy & Program Experimentation
As our approach was focused on the middle, we decided to establish a minimum bar that all policy and program teams implementing experiments could achieve, regardless of whether they were using an experimental, quasi-experimental or non-experimental design. Our “bare minimum” was:
If the minimum bar wasn’t met, our guidance to NRCan’s senior management was that the initiative shouldn’t count towards NRCan’s program experimentation tally, i.e. it shouldn’t be rolled up into the departmental reporting on experimentation now required by central agencies:
“Performance information from experiments, regardless of whether they result in demonstrable improved results, will be shared openly by deputy heads and results will be reported publicly through departmental plans and departmental results reports.”
Excerpt from Directive to Deputies on experimentation
To help teams wrap their heads around all of this, we produced a guidance document on experimental design, in which we strongly encouraged teams to consult appropriate experts in order to create a right-sized experimental design. We also worked closely with our evaluation group to build a checklist of the steps needed to design a policy or program experiment. For senior executives, we created a dashboard so they could build and track a roster of experiments in their portfolio.
In general, we intentionally shied away from a narrative built on aspirational targets that the average team wouldn’t be able to get behind, e.g. RCTs. We’d often tell teams that we were asking the organisation to move just one rung up the evidence pyramid:
NB: A few teams were already doing RCTs, but again, our focus was on the early and late majority.
For us, the promise of rigorous, enterprise-wide program experimentation to better understand what works was interesting, but shifting government to be more agile, curious, evidence-based and iterative was the higher-value proposition. Our view was that the latter could be achieved by taking a learn-by-doing approach.
At the end of the day, we still received pushback within the department, despite the bare-minimum standard for program experimentation. One contributing factor was likely that a multi-year federal government innovation campaign, Blueprint 2020, had conditioned staff and executives to pursue novel initiatives - a focus on experimentation threatened to undercut that regime, as merely doing something novel would no longer be enough.
Bureaucracy is nothing more than a series of rules, processes, norms and ways of organising ourselves in order to achieve standard, reliable outcomes. Its essential function is to move the middle and to maintain a pre-set level of risk tolerance. Make peace with this reality, and use it to your advantage.
Finally, any gains in reimagining the rules, routines and behaviours within the organisation can be quickly lost if there isn’t sustained leadership and support for this work - organisational culture established over years takes time and discipline to shift.
[1] The work of Sendhil Mullainathan and Eldar Shafir is instructive here: https://www.economist.com/news/books-and-arts/21584303-those-too-little-...
[2] Putting Products Into Services, HBR: https://hbr.org/2016/09/putting-products-into-services
[3] The Prime Minister of Canada has shared publicly the expectations for ministers and their ministries: https://www.pm.gc.ca/eng/mandate-letters
[4] Some early insights from the Treasury Board Experimentation team: https://www.nesta.org.uk/blog/promoting-experimentation-government-learn...