What I’m about to write is likely to make me very unpopular with some people. While I’d prefer to avoid that, the issues and the truth about them are too important not to write about. I have no axe to grind and no personal connection to the events or people (that I know of) – but I am passionate about the NHS and about the truth.
A couple of weeks ago, I wrote two articles about the Mid Staffs NHS (MSNHS) report by Robert Francis, and about David Cameron’s reaction to it. These articles focused on the political implications of the events and the report, and on setting the reported death figures in context, rather than on the veracity of the figures themselves and the story behind them. Although I touched on the fact that the wide range of the figures given – 400-1200 – showed how uncertain they were, like everyone else I assumed there must be some truth to them, because they were so widely reported and so seemingly uncontested.
Not only the more lurid newspapers like the Sun, but also the ‘respectable’ press and media have reported the 400-1200 figure as fact – and continue to do so, the relevance of which we’ll see toward the end of this article. Only a couple of weeks ago, a politics round-up programme on BBC Radio 4 included these figures as simple fact in its comments on David Cameron’s Commons speech on the Francis Report – and none of the guests there to talk about the issue raised even a murmur of contradiction.
The idea that 400-1200 ‘excess’ deaths took place during a period from 2005-2009 has been repeated so often, with such a complete absence of dispute (unless you knew where to look), that in the public consciousness it has become, to all intents and purposes, a fact.
But it is an idea without any basis in fact.
If you’re a regular reader of this blog, you will know that I believe in research – in drawing together facts and making conclusions based on them. I am no stranger to research and to the effort and time that have to go into an article to be able to make credible statements. But the preparation for this article has taken that investment of time and attention to another level.
My research for my earlier articles on Mid Staffs had led to some even more fundamental questions in my mind that I had to investigate. If you’ve noticed that this blog has been quieter than normal for the past couple of weeks, it has been because almost every spare moment over that period has been spent in researching this post – reading transcripts of witness statements to the Francis Inquiry, investigating the comments and opinions of others on the MSNHS issue specifically or the issues around the use of statistics in general.
What was starting to become apparent to me about the whole Mid Staffs issue was so deeply at odds with the prevailing perception that I had to read more widely and deeply than ever before in order to make sure that I was perceiving correctly.
Because the issues are so complex, and the evidence so abundant (I’ve read well over 1,000 pages over the past couple of weeks and will leave out of this article far more than I can put in), even my best efforts to distill everything into conciseness will still leave a post that takes patience and attention to work through. So I’m going to break from the normal ‘good form’ that would mean putting the reasoning and evidence first and saving the conclusion until last.
Instead I’m going to state the conclusion first, and then lay out the evidence and narrative around it, so that those who wish to, and who have the patience, can read through it and satisfy themselves that the conclusion is justified. So here is that conclusion, along with a very brief justification:
There were no ‘excess’ deaths at Mid Staffordshire NHS during the 2005-2009 period in which the news media and anti-MSNHS campaigners claim there were 400-1200 of them – or, in the words of the independent clinical expert who led the ‘Independent Case-Note Review’ (ICNR) into each individual, contentious death at the Trust, ‘perhaps one such death’.
This information has been in the public domain since at least 2010 – but I doubt if you could find a single reference to it in the mainstream media. “One person might have died!” does not sell newspapers, or gain viewers, in the same way that “400-1200 unnecessary deaths!!!” does, I guess.
You’re quite possibly thinking to yourself, “What?! How can that possibly be correct?” Here’s how.
In 2009, Dr Mike Laker was asked to conduct an independent review into the detailed case notes of every contentious death at MSNHS during the period in question. To identify which cases needed reviewing, the Trust offered all the patients it had treated, or their families, the opportunity to ask for a detailed case note review – and ‘detailed’ is the right word: each review would take 5-6 months to complete, so a large number of expert, independent clinicians were needed to complete the process within a reasonable timeframe.
60 such requests were received – which already puts a massive question mark against the figures of 400-1200 ‘excess deaths’. In the course of the review, Dr Laker eventually interviewed 120 families and examined the case notes of 40-50 cases. He was asked by Tom Kark, Counsel to the Francis Inquiry, how many ‘excess deaths’ had occurred among the cases he had reviewed. Mr Kark related Dr Laker’s answer in his ‘final submission‘ to the 2010 inquiry:
‘Perhaps one such death’ – so maybe not even one. People die in hospitals every day, of course – but in terms of unnecessary, avoidable deaths caused by negligence or malpractice, the detailed, intensive reviews of all the deaths where relatives were dissatisfied enough to ask for one uncovered ‘perhaps one’.
Dr Laker is no ‘stooge’. His comments, which you can read about in the ‘final submission’ link just above, also included strong criticisms of the organisations overseeing the ICNR. He successfully had the overseeing body changed from the Trust itself to the responsible Primary Care Trust (PCT), to ensure independence, and also stopped the Trust from accessing the case review findings before they went to the families. His findings were not those of a man trying to court favour from, or minimise embarrassment for, the establishment – yet he still could only find ‘maybe one such death’.
In terms of demonstrating that the media portrayal of the story and the underlying reality bore no relation to each other, I could ‘rest my case’ here. But in order to understand why and how the false story that has so permeated the public consciousness came to do so, we need to look in more detail at other aspects of the background, the witness transcripts and the advice/opinions of other experts.
What this examination will reveal is a story of:
- overstretched and struggling hospital staff unable to provide the ‘basic care’ that any health professional would wish to, but just about managing to hold things together, even though things weren’t pretty (and a ‘drinking from vases’ claim that appears to have been almost entirely fabricated)
- commercial conflicts of interest and over-stated claims
- statistics that could never say what they were made out to say, even if the data-input was perfect
- data input that was anything but perfect, creating an even more false picture
- bereaved relatives lashing out understandably but excessively
- most critically, collaborating political and media interests spinning a story in a wholly false way for their own ends
The (very truncated but still lengthy) details follow. You may prefer to skim the headings, choose the areas of immediate interest to read in detail, and then come back later for other sections as required. I leave that to your preference, but please make sure at least to read section 6, which examines how the misleading figures have been propagated and exploited – and by whom, and why.
1. Even in an ideal world, HSMR is no ‘Ronseal’
The public furore over Mid Staffs began as a result of a set of statistics called ‘Hospital Standardised Mortality Ratios’, or HSMRs, which – it appeared – showed a significantly higher ratio of deaths at MSNHS compared to the national average. At no point did the statistics, or any report on them, name a number of avoidable deaths, whether in the 400-1200 range or any other figure. Robert Francis stated this unequivocally on the first day of hearings for his second report.
The reason for this is simple: even working perfectly, the HSMR system is neither designed nor intended to identify ‘unnecessary’ or ‘excess’ deaths, nor is it a measure of quality and safety in a particular hospital (the owners of the system did claim the latter, but backtracked on the witness stand). Chapter 5 of the 2013 Francis Report states the following (which, again, you will struggle to find in any media reports referring to Mid Staffs):
to this day, there is no generally accepted means of producing comparative figures, and unjustifiable conclusions continue to be drawn from the numbers of deaths at hospitals and about the number of avoidable deaths.
In the context of the careful, neutral wording used in official reports, as well as the commercial sensitivities around the HSMR method and the vociferous and aggressive tendencies of the anti-Stafford campaigners, Francis might as well have put it up in neon lights: “HSMRs do not say what you’ve been told they say!“
Or take this exchange between Mr Kark and Roger Taylor, the Director of Research and Public Affairs for DFI, the company that supplies the HSMR data:
K: Can I just ask you this, we’ve heard a lot in this inquiry about how HSMRs might be used as no more than an indication of risk or a need for further attention in a particular area. Did the 2007 publication put the significance of HSMRs too high, calling it an effective way to measure and compare clinical performance safety and quality?
T: No, I — I don’t believe it did. I think it is an effective way to do exactly that. However, I will add to that comment the point that it’s really important to remember that in measuring clinical outcomes and clinical performance there are no perfect measures..
K: Does that mean to say that when the HSMR is above a certain level, and that is to say, if I can get my terminology correct, above certain control limits, it’s not just a tool to identify risk, but it is an effective measure of safety?
T: I’m saying an effective measure of safety is one that helps you identify the risk of something being wrong.
Kark asks Taylor about how the HSMRs can legitimately be used and Taylor fudges initially – but when he is asked directly whether HSMRs can provide an effective measure of safety, he backtracks and says it can only identify where there is a risk that something might be wrong.
Professor Brian Jarman, the creator of the HSMR system, made a statement in his evidence that demonstrates that quality of care and HSMRs are by no means automatically linked:
Now, you’re not going to measure the quality of care of pacemaker insertion by measuring the mortality because, you know, that’s – they are very low.
Similarly, the 2010 inquiry put out a ‘Joint Statement’ on the usefulness of HSMRs which included the following statement:
Along with other indicators, they can usefully help us to understand comparative information about in-hospital deaths. But they have limitations, and should not be used as a sole indicator of patient safety. To do so could potentially give a misleading interpretation of a hospital’s safety record. They should be used with other relevant indicators as a tool to support the improvement in the quality of care.
And the clincher comes (again) from Roger Taylor, as he is asked by Counsel about the link between HSMRs and the media claims about the numbers of ‘excess’ deaths:
Q. Where does Dr Foster stand on the portrayal of the figures about Mid Staffordshire as indicating or showing that there were 400 to 1,200 unnecessary deaths?
A. ..that is a misuse of these data.
Some 300 different indicators are used to assess hospital safety and quality. Even in perfect circumstances, with everything functioning as it should, HSMRs can only perform a small role in this assessment – effectively a signal to say ‘take a look, just in case something is wrong’. Using them to state anything beyond this is ‘misuse’.
Another important indicator lies in the guidance provided by the company that owns the HSMR system to Trusts that find themselves with a high mortality ratio. This guidance takes the form of a list of recommended actions:
- Check to see whether incorrect data has been submitted, or whether an approach to coding which differs from other organisations’ approach has been adopted
- Consider whether something extraordinary has occurred which explains the result
- Consider whether their healthcare partners work in ways which are different from those in other areas
- Consider whether there are any potential issues with regard to the quality of care
The 2nd Francis Report criticised MSNHS for focusing first on whether the high HSMRs were caused by coding issues – but DFI’s own guidance to Trusts on what to do in the case of high HSMRs puts ‘check coding’ at number 1 in the list of actions. By contrast, checking whether there are actually any issues with care standards is down at number 4.
If even the owners of the system consider that there are 3 factors more likely to affect high HSMRs than actual poor care, can anyone seriously consider that the system is accurate, robust and reliable enough to provide an actual number of ‘excess deaths’ – even in perfect circumstances?
And yet the media continue to report the figures as fact. Since they can’t be unaware of all the above statements and factors (and many more that I’ve had to choose not to include for the sake of some semblance of readability), one has to ask ‘Why?’ – what is the real agenda?
A moving target
One of the key weaknesses of the HSMR system is that it is based around a ‘standard’ score of 100 – which is ‘rebased‘ every year. In simple terms, the statistics take an average score for all the hospitals in England and call that ‘100’. Hospitals scoring worse than average get a score above 100, while hospitals scoring better get below 100.
But what ‘100’ means moves every year. In the words of Professor Jarman:
we do for the simple — simple-minded English, if you like, adjust it so that the English value was 100 every year.
(That Prof Jarman considers the English simple-minded and unable to handle a figure that isn’t simplified every year is interesting, given Roger Taylor’s testimony that DFI considers the public savvy enough to realise what you can’t do with its figures, even if the media are all screaming ‘Excess Deaths!’)
This ‘rebasing’ means that a hospital can have exactly the same performance in a given year that it achieved in the previous one, and still show a worse HSMR because the overall average moved down. Similarly, if some hospitals are ‘gaming’ the system to improve their scores (a possibility that Prof. Jarman acknowledged in his testimony to the 2nd inquiry), they will bring down the average so that ‘honest’ hospitals appear to be doing badly.
But even if nobody cheats, a hospital can be doing well, as well as it’s ever done, and still appear to be sliding down the performance table.
[EDIT: if you're struggling to see why this rebasing to 100 is so misleading, please look here for some additional information that might help]
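To make the rebasing effect concrete, here is a minimal sketch using invented numbers (the real calculation is far more elaborate, but the mechanism is the same): every hospital’s raw mortality ratio is divided by the national average so that the average always comes out at 100.

```python
# Toy illustration of annual 'rebasing' - all numbers invented.
# Each hospital's raw mortality ratio is scaled so the national
# average is always exactly 100.

def rebase(raw_ratios):
    """raw_ratios: dict of hospital name -> raw mortality ratio."""
    avg = sum(raw_ratios.values()) / len(raw_ratios)
    return {name: 100 * ratio / avg for name, ratio in raw_ratios.items()}

# Year 1: all three hospitals perform identically.
year1 = {"A": 1.0, "B": 1.0, "Stafford": 1.0}
# Year 2: A and B improve (or game their coding); Stafford is unchanged.
year2 = {"A": 0.8, "B": 0.8, "Stafford": 1.0}

print(rebase(year1)["Stafford"])  # 100.0
print(rebase(year2)["Stafford"])  # ~115.4 despite identical performance
```

Stafford’s own performance has not changed at all between the two years, yet its rebased score jumps by roughly 15 points – the ‘parade ground’ effect in miniature.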
2. Rubbish in, rubbish out
We’ve just seen that, even if everything around the HSMR system is functioning perfectly, HSMR cannot be used to identify a number of ‘excess’ or avoidable deaths. But as a reading of the inquiry transcripts will quickly show, things were about as far from perfect as they could possibly be in terms of the data that was entered into the system – both nationally and, especially, in the case of Mid Staffs NHS.
One fundamental thing needed for any correct understanding of the issues surrounding MSNHS’ HSMR scores is the knowledge that, for most of the ‘problem’ period at the Trust, it had no coding manager.
The data on which HSMR scores are calculated are based on codes that have to be entered for each patient treated. These codes relate to the condition from which the patient is suffering, and an ‘expected’ death rate is allocated to each condition measured for HSMR purposes. If a hospital shows a higher rate of deaths for a particular condition than the expected rate, this pushes up the overall HSMR score for that hospital. If it shows a lower rate, that helps bring down the HSMR score.
Let’s take a simple example. ‘Fractured neck of femur’ (FNOF) is a fairly common result of falls in elderly people – and a serious one. Out of every 10 people, nationally, who go into hospital with this condition (which in layman’s terms might be called a ‘broken hip’), on average one will die as a result of complications arising from the initial condition. If a hospital loses more than 10 patients with FNOF for every 100 it treats, it will have a relatively high HSMR for that condition. Each condition has its own rate of expected deaths.
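For readers who find a worked calculation easier to follow, here is a very simplified sketch of the arithmetic behind an HSMR-style ratio, using invented figures. The real system risk-adjusts for age, sex, co-morbidities and more, so this is only the skeleton of the idea: observed deaths divided by expected deaths, times 100.

```python
# Simplified sketch of an HSMR-style calculation - figures invented.
# The real system applies extensive risk adjustment; this shows only
# the basic observed-vs-expected arithmetic.

def hsmr(cases):
    """cases: list of (admissions, observed_deaths, expected_death_rate)."""
    observed = sum(deaths for _, deaths, _ in cases)
    expected = sum(admissions * rate for admissions, _, rate in cases)
    return 100 * observed / expected

# Hypothetical hospital: 100 FNOF admissions at the 10% expected death
# rate, plus 500 low-risk admissions at a 1% expected death rate.
baseline = [(100, 10, 0.10), (500, 5, 0.01)]
print(hsmr(baseline))  # 100.0 - deaths exactly match expectation

# Same hospital with three extra FNOF deaths:
worse = [(100, 13, 0.10), (500, 5, 0.01)]
print(hsmr(worse))     # 120.0
```

Note how sensitive the ratio is to the expected rates attached to each code – which is exactly why miscoding, as we’ll see, can swing the score so dramatically.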
But there are serious problems with both the basic principles of the coding and with how it was done at MSNHS – and remember, Mid Staffs’ coding manager was on long-term sick leave for most of the period in question.
First or primary diagnosis
The rules of HSMR coding state that the first ‘non-vague’ diagnosis – sometimes referred to as the ‘primary diagnosis’ – for any patient when they enter hospital for an ‘episode of care’ must be used to determine the coding. But this is full of dangers in terms of measuring mortality rates.
If a patient enters hospital with, for example, a broken tibia (shin-bone), you would expect this to have a low death-rate – dying from a broken leg is pretty rare. The ‘first non-vague diagnosis’ is obviously going to be ‘broken tibia’. But if it is subsequently discovered that the bone broke because it was eaten through by an aggressive, spreading cancer, the expectation of death would clearly be completely different.
But, following the rules of HSMR coding, the code that is entered is the one for a fractured tibia – and the death will seem very unexpected and so will worsen the HSMR score.
Junior doctors work long hours in an intense environment. They are often the first medics to assess and diagnose a patient, and they are unlikely – unless the importance is hammered home to them very hard – to consider it too important to put the right code down for a patient they are treating. Being junior, there is also a higher likelihood of them misdiagnosing or missing a condition when a patient is first examined.
MSNHS’ investigation of its coding, once it had a new coding manager in place, showed that there was a major problem with the coding entered by junior doctors.
In his testimony to the 2nd inquiry, Prof. Jarman confirmed that his system did not ‘adjust for’ secondary diagnoses unless they were ‘present on admission’, or POA. In other words, if a condition – no matter how serious – isn’t either spotted by the doctor or otherwise known about when a patient is first treated, it is ignored for the purposes of calculating HSMRs. But Prof. Jarman made a key admission:
70 per cent of PMA (sic) — present on admission diagnoses are the same as the primary diagnosis.
In other words, in 30% of cases there is a discrepancy – 30% room for the figures to be skewed by a primary diagnosis of one thing when a serious condition might be present that would push the expected death rate much higher. So even if everything goes as planned, there is a known potential for variation in the system of as much as 30%.
Co-morbidities

‘Co-morbidities’ is the medical term for ‘other stuff that’s wrong with you’. So if you’re in for treatment on an ingrown toenail, for example, but you also suffer from congestive heart failure and lung disease, there’s a much higher chance you’ll die while you’re in hospital – and it wouldn’t mean the hospital did anything wrong. But the ‘episode of care’ is for treatment of an ingrown toenail – which would have a very low expected death rate.
The HSMR system does allow co-morbidities to be entered (based on the ‘Charlson Index‘) so that they are taken into account – but if these are wrongly entered, or not entered at all, the figures will look as though you died from an ingrown toenail.
The investigations into coding at MSNHS showed that there were substantial problems with the coding of co-morbidities, probably because of the absence of the coding manager combined with problems of under-reporting of co-morbidities by consultants.
Z51.5 and the ‘parade ground’ effect
One the major problems with Mid Staffs’ HSMR scores that I found in my reading of the transcripts was in a change that was made to the coding system to include code Z51.5 – a code to indicate ‘palliative care’. A patient receiving palliative care is suffering from an incurable, terminal condition and is being treated to relieve pain, make him/her comfortable etc. At some point he or she is going to die from the condition – so the expected rate of death during any given ‘episode of care’ is going to be relatively high.
For the sake of brevity, I won’t go into every detail, but when the change to include Z51.5 was made, Mid Staffs’ coding did not change to include it. Since other Trusts were now using a code with a high expected death rate that would lower their HSMR scores, and because this would affect the ‘rebasing’ and move the ‘100’ benchmark, this had the same effect as a rank of soldiers all stepping back at the same time except for one – he would appear to be standing out in front without having moved at all.
‘Zero stays’ and 30 days..
Another thing that came out during Prof. Jarman’s evidence was the effect of two particular peculiarities in the way that Mid Staffs was coding its patients. The first of these is the ‘zero day stay’ category (which actually includes stays of up to one day).
MSNHS was not including in its coding any patient who came for treatment and either didn’t stay in hospital at all or only stayed one day. Since the vast majority of patients who come into hospital and leave again in a day or less will be there for treatment of mild conditions (or mild manifestations of potentially serious conditions), the rate of deaths among such patients would be very low. This would have the effect of ‘concentrating’ the death rate at Mid Staffs (by reducing the total number of codes and taking out almost exclusively patients with good outcomes). Since all or almost all other Trusts were including these patients, their death rates would be ‘diluted’ by the ‘zero stay’ patients – again causing, or accentuating, the ‘parade ground’ effect and making MSNHS look worse without necessarily being worse.
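A quick toy calculation (invented numbers again) shows how leaving out a large block of low-risk, short-stay patients ‘concentrates’ a hospital’s crude death rate even though not a single extra patient has died:

```python
# Toy illustration of the 'zero stay' exclusion effect - all
# figures invented. Excluding low-risk short-stay patients shrinks
# the denominator while the number of deaths stays the same.

def death_rate(deaths, admissions):
    """Crude in-hospital death rate as a percentage."""
    return 100 * deaths / admissions

deaths, admissions = 30, 1000
zero_stay = 400  # short-stay, low-risk patients with (say) no deaths

# Coded the way other Trusts coded - zero stays included:
print(death_rate(deaths, admissions))              # 3.0
# Coded the Mid Staffs way - zero stays excluded:
print(death_rate(deaths, admissions - zero_stay))  # 5.0
```

Same hospital, same 30 deaths – but the rate looks two-thirds higher purely because of what was, and wasn’t, counted in the denominator.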
Conversely, Mid Staffs was also negatively affected by the lack of ’30 day coding’ in HSMRs – codes allocated according to the outcome 30 days after discharge from hospital.
If a hospital discharges a patient early, who then dies outside the hospital, this is not reflected in the HSMR. But if a hospital keeps a patient longer to make sure he/she is fit for discharge, or is unable to discharge an elderly or infirm patient because of the lack of non-hospital care facilities, and the patient then dies, the hospital effectively suffers in its HSMR because it did the right thing.
The 30-day effect might not only occur because of irresponsible discharge of patients. If a hospital has a hospice nearby and can discharge terminally ill patients for palliative hospice care, the patients will die in the hospice and this will improve the hospital’s HSMR even though the patients still die.
Professor Jarman repeatedly claimed that the effect of correcting codings for co-morbidities and palliative care would be very small, but this claim appears highly questionable.
Firstly, the ‘parade ground’ rebasing effect when the Z51.5 palliative coding was launched in other hospitals caused Mid Staffs’ HSMR to rise by 13 points, from 114 to 127 – a serious change.
The group ‘Straight Statistics’, a “pressure group whose aim is to detect and expose the distortion and misuse of statistical information, and identify those responsible”, wrote an article examining the reliability of HSMRs and particularly the effects of errors/corrections in coding. The article included an examination of the relationship between ‘depth of coding’ (how many co-morbidities were recorded alongside the main diagnosis), which varies widely across Trusts, and HSMR.
Quoting a response from Prof. Jarman’s organisation, the Doctor Foster Unit, the article says:
a hospital using only 2.5 codes per patient would show an HSMR about 15-20 points higher than one using 5.5 to 6 codes per patient
15-20 points is not ‘very small’. The number of codes per patient at Mid Staffs is not stated – but with no coding manager in place and proven issues with uncoded co-morbidities, it is certain that it was at the low end during the period of high HSMRs.
When the new coding manager joined MSNHS, she carried out a re-coding exercise (apparently 2, in fact, since the first one over-corrected). According to evidence given by acting Chair of the Trust David Stone in 2009 to the Health Scrutiny Committee, once the correct re-coding was done, Mid-Staffs’ HSMR score was:
Just in case there is any lingering doubt on the fact that coding can have a massive effect, we’ll leave the last word to Professor Jarman. Just 8 days ago, he sent the following message on Twitter:
PwC is PricewaterhouseCoopers, a huge firm that carries out detailed audits and analyses – and it found a 25-30% difference in Mid Staffs’ high HSMR score due to incorrect coding.
Rubbish in, rubbish out..
3. Conflicts of interest and exaggerated claims
The HSMR system is run by Professor Jarman’s Doctor Foster Unit (DFU), which is part of the faculty of medicine at Imperial College. DFU receives the majority of its funding, confusingly, from Doctor Foster Intelligence, or DFI. DFI is a commercial, profit-making company (although it is 47%-owned by the Dept of Health). DFU calculates the HSMR scores for hospitals free of charge.
There is no suggestion, that I can make out from the transcripts, that DFI or DFU deliberately skewed any figures in the HSMR index for commercial gain. However, DFI does publish an annual ‘Good Hospital Guide‘ that includes a ‘league table’ of HSMR rankings. Based on these rankings, DFI attempts to sell to Trusts its ‘Real Time Monitoring’ (RTM) service for the sum of £35,000 per year. This service provides ‘alerts’ to customer Trusts about areas where HSMR is poor or starting to slip, so that the Trusts can take corrective action – and can optimise their position in the Good Hospital Guide. From Roger Taylor’s evidence to the 2nd inquiry:
Walsall hospital was named as the hospital with the highest death rate in the first hospital guide in 2001, which they were not very pleased about..Walsall subsequently became very enthusiastic and started using the RTM tool.
An email from a senior DFI director in 2011 stated:
we ran a consultation on the indicators used before they went into the hospital guide in 2010..We alerted hospital trusts to this by writing to them and letting them know and through the Health Service Journal. We will do the same this year. Providing access to them in the tools we sell is the obvious next step.
The fact that DFI stood to gain financially from the creation and publication of league tables based on HSMR must cast serious doubt on the use of HSMR as a tool for assessing quality of care – especially since the information is made public – even if DFI were not deliberately exploiting the opportunity. Despite the fact that Roger Taylor stated in his evidence that he did not think this represented a conflict of interests, an impartial observer must recognise that there was indeed potentially such a conflict.
The fact that Mid Staffs knew that their HSMR position was going to be made public in this way must also have contributed to their focus on coding, which was criticised by the Francis Report – especially when DFI’s own guidance on how to respond to poor HSMRs put ‘check coding’ as number 1 on the list of actions.
As was revealed during the testimony given by Roger Taylor, DFI’s 2007 publication had massively overstated the usefulness and significance of its HSMR data, calling it:
an effective way to measure and compare clinical performance safety and quality. Deaths in hospital are important and unequivocal outcomes.
As we’ve already seen, HSMRs are nothing of the sort, and the information that they give on deaths is anything but ‘unequivocal’. Mr Taylor initially denied that this was claiming too much – but under further questioning he eventually said, when speaking about focus groups made up of members of the public, that they show a
general scepticism of the ability to accurately measure quality of care. In which regard they are being, I think, pretty smart, actually.
If the public are being ‘pretty smart, actually’ to be ‘generally sceptical’ of the system’s ‘ability to measure quality of care’, then I think that calling the HSMR measure ‘unequivocal’ as a measure of ‘clinical performance safety and quality’ is without question an exaggeration – and a pretty big one. Especially when Mr Taylor also acknowledged that the output of the system is only as good as the data that’s put into it – and when, as Prof. Jarman put it in his testimony,
it depends how the coder codes it.
Such an exaggerated claim can only have fanned the self-fuelling flames of misleading publicity about the ’400-1200 unnecessary deaths’.
4. The top 3 factors in poor care at Mid Staffs: understaffing, understaffing and understaffing
There is no doubt that there was poor care in some parts of MSNHS. Various inspections that followed the initial public furore found that care in some departments was ‘appalling’. However, Robert Francis’ recommendation that individuals should not be pursued for events at Mid Staffs strongly suggests that the failings at the Trust were systemic rather than resulting from malice or neglect on the part of any one person or group of people, particularly front-line nurses and doctors.
This is supported by the statistics provided in Annex 1 (part of Volume 3) of the 2013 report which show that, over the 5 years covered by the report, the number of ‘serious untoward incidents‘ which were recorded at the Trust and ascribed to lack of staff was a massive 1,756 – an average of 351 ‘serious’ incidents per year attributable to short-staffing.
However, these ‘untoward incidents’ mostly represented failures of ‘basic care’ – cleaning, comfort and so on – rather than life-threatening incidents. Remember, the review of the 60 incidents (and interviews of 120 families) that were serious enough during this period for the families to accept the offer of a full case-note review resulted in ‘perhaps one’ avoidable death.
Patients were left in their own waste and so on – a horrendous indignity that no one should have to suffer, but one that is very, very rarely life-threatening. If staff numbers were too low, as the stats suggest, then nurses inevitably faced times when they were simply unable to do everything and had to prioritise.
I know from my many conversations with nurses from various hospitals that there can often be times when a patient’s ‘basic care’ needs have to wait – because all the available nurses were trying to help another patient breathe, or to keep him/her alive through a heart attack, or deal with sudden and serious haematemesis (vomiting blood) and so on.
At this point it’s worth addressing one of the most persistent myths of the ‘Mid Staffs phenomenon’: that ‘neglected’ patients were so thirsty, and so ignored, that they had to drink the water from flower vases.
Appalling if true – but flower vases were banned from the two MSNHS hospitals from the late 1990s, presumably for hygiene reasons. I’ve heard anecdotally that there may have been one incident in which a (probably confused) patient was allowed a vase as an exception, and did drink from it – but the idea that this was more than a one-off appears to be entirely unfounded. Instead, it appears that the media spun a one-off into a regular occurrence for the sake of lurid headlines.
Nurses feel terrible about those who have to put up with indignity or discomfort – and relatives of those patients frequently fail to understand that their loved ones are only suffering ‘neglect’ because nurses had to choose between that and allowing someone to die or suffer horrible fear and pain.
It’s awful and it should never happen – but it will, as long as wards are not fully staffed according to not only the number of patients but also the severity of their conditions and the level of their dependency. And under this government, it will happen more and more.
Which leads me on to my final sections – which I’ll try to keep brief because this post is already more than long enough.
5. The viciousness of grief, the cynicism of politicians and the collaboration of the media
Just last weekend, the Guardian’s online edition carried a call from a relative of someone who died at MSNHS for ‘heads to roll’. This same lady – to whom my heart goes out for her loss – was also heard, at a public meeting of anti-MSNHS campaigners, to call:
Let’s shut the hospital, let’s sack all the staff!
Losing a close family member is a horrible experience – I lost my mother after a long and gruelling battle against ovarian cancer 9 years ago. But surely, someone who would rather have no hospital and see thousands of doctors, nurses and other health staff, most of whom she can never have met, made unemployed because of her grief and rage is not thinking straight.
One can understand and sympathise, certainly – and I do. But it must be a foolhardy decision indeed to allow someone in such a state of mind to influence policy, and to invite him or her repeatedly to shape public opinion via media interviews and articles. When deciding the fate of health services that about a quarter of a million people rely on, ‘cool heads’ surely have to prevail, and decisions must rest on logic and fact, not emotion and grief.
And a person or entity that would exploit the grief of such a vulnerable person would be reprehensible indeed.
Which leads me to my final section:
6. Politics, media and exploitation
In my opinion, it’s extremely telling that the ‘media mentor’ of the anti-MSNHS group was the Conservative MP for Stone in Staffordshire, Bill Cash. Mr Cash’s testimony to the inquiry makes perfectly plain that he understood absolutely none of the detail of what was happening at Mid Staffs and why. However, he evidently understood a political opportunity when he saw one, and he set up meetings for the group to promote its calls for a public inquiry.
Mr Cash was also associated with the first ‘leak’ of the supposed ‘unnecessary death’ toll of 400-1200 people to the Daily Mail. Mr Cash, it must be said, has denied being responsible for the leak, and there is nothing to prove that he was. The fact that the figures appeared alongside quotes from Mr Cash must at least raise the question – but the article also included quotes from the leader of the relatives’ group, so the provenance of the figures is uncertain.
It’s all political
At various points throughout his testimony, Prof Jarman refers to negative attitudes from the (Labour) government toward HSMRs – but then (from p.171 of the record) he reports a sudden change:
There has been an improvement, it seems, in [the Dept of Health’s] attitude to the value of HSMRs.
In his view, this might be linked to the publication of the first Francis Report in February 2010. However, he is very specific about the point when the real change occurred:
But the statements in the White Paper of 12 July 2010 were very positive.
That was the white paper in which the government published the outline plans that eventually led to the Health and Social Care Act 2012 – the Act under which it is decimating the NHS, and at this very moment trying to force through undebated, unvoted measures to accelerate privatisation.
A Tory government takes power. Two months later it launches its ‘here’s one we made earlier’ blueprint for the destruction of the NHS – and ‘coincidentally’ it starts to take a ‘very positive’ attitude toward a tool that can make hospitals look as if they’re killing people even when they’re not.
A positive attitude in spite of the fact that Mr Francis’ first report contained the ‘Joint Statement’ that we’ve already seen about the weaknesses and limitations of HSMRs.
It doesn’t take a great deal of imagination to ‘put two and two together’ in a far clearer and more reliable way than the HSMR method.
What the papers say…
It’s also very significant that one of the most enthusiastic users of the spurious figures has been the Daily Telegraph – a ‘newspaper’ with a proven track record of NHS attacks for political purposes. The paper is on record as having co-ordinated articles on behalf of private health interests to help the passage of the invidious 2012 NHS Act and has even instructed sub-editors to leave anti-NHS material in an article to which it was irrelevant.
The desire for eye-catching headlines, improved circulation and journalistic laziness have all contributed to the spread of the myths about ‘excess deaths’ at Mid Staffs and the distortion of the public perception of what really went on there. But, without question, at its core lies yet another unholy alliance between the Tories and the right-wing media for the advancement of their multi-fronted, ideologically-driven assault on the NHS of which most of us are rightly proud.
In this context, it’s perfectly plain why David Cameron found it expedient to ‘eat humble pie’ and apologise on behalf of the country for the “horrific pain and even death” suffered by “many” (again propagating the myth). The recommendations of Robert Francis’ report include the closure of hospitals found to have similar problems to MSNHS; by accepting the report with crocodile tears and in sackcloth and ashes, Cameron has positioned himself to be able to exploit those recommendations as another excuse to close hospitals – alongside ‘rationalisation’, creating ‘centres of excellence’ and the financial problems of neighbouring Trusts (as the people relying on the successful Lewisham Trust have already found to their cost).
And, of course, to tarnish the image of the NHS in the eyes of a public that still considers the NHS the crowning achievement of our country.
The moral is clear:
Don’t believe everything you read in the papers – especially when it involves Tories and the NHS.