
Commissioning 2.0: There’s a ghost in the machine analogy

VOICES Commissioning 2.0

Andy Meakin BA(Hons) MBA
VOICES Project Director


In the first article of this series (available via this link), I stated that the cyclical model of commissioning often fell short when applied in the context of health and social care services.  In this second article, I'll consider just two of the often-unstated assumptions of that model and their consequences, which have led me to the conclusion that commissioning must change.


Figure 1: The Machine Analogy


The cyclical model of commissioning, Commissioning 1.0, is based on a simple machine analogy applied to services.  Raw materials are input to a specific process that leads (with an assumed high degree of consistency) to a known output.  Occasional undesirable outputs are considered the result of either defective inputs or processes and are, therefore, issues of performance that are sensitive to corrective action.  In the context of commissioning physical products, these assumptions hold true.  It is very possible to specify the dimensions of office furniture with a high degree of accuracy.  The raw materials can be accurately replicated.  Similarly, the process of assembly and delivery is easily reproduced.  External variables that might impact on the quality or quantity of the product can be controlled effectively.  Therefore, a consistent output can be relied upon and, where the specification is not met, an effective remedy applied.

There’s a related story, albeit perhaps apocryphal, often told on management courses to illustrate this point.  A Japanese electronics manufacturer was commissioned to produce a very large quantity of transistors.  The contract specification set a defect rate of no more than 0.1%.  On delivery of the order, the manufacturer had enclosed a delivery note: “Please find enclosed your order of 1,000,000 fully functional transistors.  Separately, free of charge, also find enclosed a bag containing 1,000 defective units.  We’re not sure why you want them, we had to make them as a special order.”

That story also provides a useful step to the related second assumption of Commissioning 1.0 that I wanted to highlight: the assumption that quality – and, by extension, output and value – is primarily a function of conformance to the specification.  One sign of this assumption is that contract monitoring often focuses on key performance indicators that relate to either inputs (e.g. hours, staff numbers, utilisation, etc.) or related processes (e.g. referrals, assessments completed, etc.).  These input and process measures are not good analogues for either the citizen experience or consequential beneficial outputs.

If these assumptions even approached the reality of delivery in the context of health and social care, then perhaps the Commissioning 1.0 model would be fit for purpose.  But they don’t and it’s not.  I could say that there is a ghost in the machine analogy.   However, in my view it would be more accurate to say that there is a whole flight of ghosts sliming up the works of Commissioning 1.0.  Perhaps even the combined powers of Venkman and co.’s proton packs couldn’t coax them into the containment trap.


Commissioners Venkman and Spengler doing battle with the spectre of non-conformance to specification


What are the consequences of these assumptions and why must they change?

A principal detrimental consequence of the machine analogy relates to the controls applied, consciously or otherwise, to achieve ‘conformance to specification’.  People are the raw material and parts in the processes.  And, like parts taken from an assembly plant inventory for processing, people are passed from station to station for inspection by operatives.  Tolerances measured with engineers’ callipers are replaced by questions to test people’s circumstances against ‘eligibility criteria’.  This process also serves as an unofficial stress test which assesses whether people are ready to enter the process or otherwise meet with the approval of the process gatekeepers.  These all too human factors provide ample opportunity for gaming behaviours intended, consciously or otherwise, to keep out people who present a higher risk to the process – like parts that do not meet the quality tolerances being kept away from the assembly line for fear that they will break the machine.

One example of a gaming behaviour is ‘parking’.  This is where people are kept on a waiting list or become part of an unofficial, secret, or ghost caseload that is not acknowledged to commissioners until the service is more confident that there will be a positive outcome.  In this way people can come into and out of a service without adversely impacting on measured performance as observed by commissioners.

Another example is ‘shunting’, where conditions are placed on access to divert risky demand elsewhere in the system.  In practice, this means that homeless people may be prevented from registering with a GP due to a lack of identification.  Their condition worsens.  They may then present at A&E for treatment or fall out of the treatment system altogether and, in some cases, present a potential public health risk to the community.  People with mental ill-health may be told that they must successfully complete drug treatment or otherwise become abstinent before they can get help.  They approach drug treatment services and are told that they must address their mental ill-health first.  They feel stuck.  Homeless and destitute people may be expected to demonstrate readiness for housing by paying previous rent arrears to gain access to accommodation.  They lose hope.  People with certain key words in their records related to perceived risk, like ‘arson’ or ‘sex offence’ or ‘violence’, may be excluded from services without proper curiosity being applied to the nature or severity of risk and the likelihood of future harm.  They feel injustice.

Whether the nature of these matters falls towards the relatively minor or the very serious end of the risk spectrum, the risk doesn’t go away because the person has been refused access to, for example, housing or GP registration, or simply diverted to another service.  Indeed, any risk to others may increase due to shunting of the risk and demand elsewhere.  These are the consequences of the voids between services created by siloed commissioning and poorly structured performance metrics.


Remember that time you set fire to your room?  Lol!

A young person in supported housing might argue with their boyfriend or girlfriend and, in a moment of poor judgement, decide to burn mementos of their relationship in a waste bin in their bedroom.  The alarm is triggered, the building is evacuated, a small fire is put out, only minimal damage is caused.  Later their overworked support worker writes in a case note something like, “set a fire in the bedroom, building evacuated, no great harm done, warning issued, FARS attended, police notified, no further action taken”.  They then write “arson” on the risk assessment and evaluate the risk as ‘high’ because of the recency and potential for harm to others.  Perhaps months or years later the case note is lost in a sea of other narrative and a different service asks for information to inform their risk assessment.  Of course, the words ‘arson’ and ‘high’ leap out and now, divorced from the context set out in the case note, trigger their exclusion from the service.

Meanwhile, the same day that the original incident happened, dozens of young people across the country may have done something very similar.  Their parents or carers may counsel or punish them at the time, or both, and there are few, if any, long-term repercussions.  It may even become, with the healing of time, a story of youthful misadventure recalled with amusement at each family Christmas.  Or a PowerPoint slide, featuring a picture of the damage, designed to poke harmless fun at an embarrassed bride or groom on their wedding day. 

Why do we so often hold vulnerable people in supported settings to higher standards than their peers in other circumstances?

These kinds of dynamics in services are, at least in part, the result of rigid commissioning specifications and processes that leave insufficient time or space for professional judgement and sometimes encourage an exaggeration of the likelihood of risk and harm, to the detriment of the people we’re meant to be helping.


People can become traumatised or re-traumatised by these processes, leading to situations of escalating frustration and, sometimes, convenient opportunities to exclude those who most need help.  These kinds of gaming behaviours, whether deliberate or subconscious, also hide valuable information about the scale and nature of demand from the commissioning process, further undermining the ability of the system to respond effectively.

A model that measures success based on conformance to a specification and the achievement of targets is prone to both ‘parking’ and ‘shunting’ behaviours.  Parking masks activity likely to lead to failure demand until the perceived risk is lower.  Shunting diverts risk and demand into the spaces between services.  Both behaviours have their roots in the Commissioning 1.0 model and the assumptions that underpin it.  They have real world negative consequences for organisational and individual behaviour which do not serve the interests of the people needing health and social care services, housing, or the communities in which we live.

There are many more examples of silo working, parking, and the shunting of risk or demand.  I’d be keen to hear your examples too.

The machine analogy fails in its assumptions for people experiencing multiple disadvantages and complex needs because everyone presenting for help is different and services are too often bolted to rigid specifications that assume people are all essentially the same.  Along with ineffective methods of measuring outcomes, this leads to gaming behaviours – including hidden caseloads – to control the inputs to the official system, conform to specification, and in doing so meet often otherwise unrealistic outcome targets.

As an aside, the sleight of hand underpinning these dynamics may also serve the ‘shop window’ interest of commissioning organisations, leading to questions about how the performance of commissioning itself is measured and held to account.  Commissioners who are constantly under pressure to find savings, and who perhaps, therefore, link their success in that regard to future career progression, may not be too concerned about interfering with forces that lead to an understatement of demand.  But that is perhaps a topic for future discussion and consideration.

So, if we can’t reasonably change or aggressively select the people to fit the process without negative unintended consequences, then how do we enable the process to change to fit the people?  And, therefore, how and what do we measure to assure ourselves it’s working?

How we access our inner Venkman to exorcise the ghosts from the machine analogy and reach Commissioning 2.0 is the topic of the next article in this series.

