“An official survey shows that 76% of people in the [PIP] system responded to say that they were satisfied. That itself is not a happy position, but it shows that her representation of people’s average experience as wholly negative on the basis of a Twitter appeal does not reflect the results of a scientific survey.” Stephen Kerr, (Conservative and Unionist MP for Stirling), Personal Independence Payments debate, Hansard, Volume 635, Column 342WH, 31 January 2018
“The latest official research shows that 76% of PIP claimants and 83% of ESA claimants are satisfied with their overall experience.” Spokesperson for the Department for Work and Pensions.
The Department for Work and Pensions Claimant Service and Experience Survey (CSES) is described as “an ongoing cross-sectional study with quarterly bursts of interviewing. The survey is designed to monitor customers’ satisfaction with the service offered by DWP and enable customer views to be fed into operational and policy development.”
The survey measures levels of satisfaction in a defined group of ‘customers’ who have had contact with the Department for Work and Pensions within a three-month period prior to the survey.
One problem with the aim of the survey is that satisfaction is an elusive concept – a subjective experience that is not easily definable, accessible or open to precise quantitative measurement.
Furthermore, the statistics concerning the problems and difficulties that some claimants experienced with the Department for Work and Pensions are not fully or adequately discussed in the survey report – they are tucked away in the Excel data tables referenced at the end of the report – and they are certainly not cited by Government ministers.
It’s worrying that 51 per cent of all respondents across all types of benefits who experienced difficulties or problems in their dealings with the Department for Work and Pensions did not see them resolved. A further 4 per cent saw only a partial resolution, and 3 per cent didn’t know if there had been any resolution.
In the Jobseeker’s Allowance (JSA) category, some 53 per cent had unresolved problems with the Department and only 39 per cent had seen their problems resolved. In the Employment and Support Allowance (ESA) group, 50 per cent had unresolved problems with the Department, and in the Personal Independence Payment (PIP) group, 57 per cent of claimants had ongoing problems with the Department, while only 33 per cent had seen their problems resolved.
A brief philosophical analysis
The survey powerfully reminded me of Jeremy Bentham’s Hedonistic Calculus, which was an algorithm designed to measure pleasure and pain, as Bentham believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced.
Bentham discussed at length some of the ways that moral investigations are a ‘science’. There is an inherent contradiction in Bentham’s work between his positivism, which is founded on the principle of verification – this says that a sentence is strictly meaningful only if it expresses something that can be confirmed or disconfirmed by empirical observation (establishing facts, which are descriptive) – and his utilitarianism, which concerns normative ethics (values, which are prescriptive). Bentham conflates the fact-value distinction when it suits his purpose, as do the current Government.
The recent rise in ‘happiness’, ‘wellbeing’ and ‘satisfaction’ surveys is linked with Bentham’s utilitarian ideas and a Conservative endorsement of entrenched social practices as a consequence of this broadly functionalist approach. It’s not only a reflection of the government’s simplistic, reductionist view of citizens, it’s also a reflection of the reduced functioning and increasing rational incoherence of a neoliberal state.
As we have witnessed over recent years, utilitarian ideologues in power tend to impose their vision of the ‘greatest happiness for the greatest number’, which may entail negative consequences for minorities and socially marginalised groups. For example, the design of a disciplinarian, coercive and punitive welfare system to make ‘the taxpayer’ or ‘hard-working families’ happy (both groups being perceived as the majority). The happiness of those people who don’t currently conform to a politically defined norm doesn’t seem to matter to the Government. Of course people claiming welfare support pay tax, and more often than not, paid tax before needing support.
Nonetheless, those in circumstances of poverty are regarded as acceptable collateral damage in the war for the totalising neoliberal terms and conditions of the ‘greater good’ of society, sacrificed for the greatest happiness of others. As a consequence, we live in a country where tax avoidance is considered more acceptable behaviour than being late for a job centre appointment. Tax avoidance and offshore banking is considered more ‘sustainable’ than welfare support for disabled people.
This utilitarian problem, arising because of a belief that a state’s imposed paradigm of competitive socioeconomic organisation is the way to bring about the greatest happiness of the greatest number, also causes the greatest misery for some social groups. This is a problem that raises issues with profound implications for democracy, socioeconomic inclusion, citizenship and human rights.
My point is that the very nature and subject choice of the research is a reflection of a distinctive political ideology, which is problematic, especially when the survey is passed off as ‘objective’ and ‘value-neutral’.
There are certain underpinning and recognisable assumptions drawn from the doctrine of utilitarianism, which became a positivist pseudoscience in the late nineteenth century. The idea that human behaviour should be mathematised in order to turn the study of humans into a science proper strips humans down to the simplest, most basic motivational structures, in an attempt to reduce human behaviour to a formula. To be predictable in this way, behaviour must also be determined.
Yet we have a raft of behavioural economists complaining of everyone else’s ‘cognitive bias’, who have decided to go about helping the population to make decisions in their own and society’s best interests. These best interests are defined by behavioural economists. The theory that people make faulty decisions somehow exempts the theorists from their own theory, of course. However, if decisions and behaviours are determined, so are the theories about decisions and behaviours. Behavioural science itself isn’t value-neutral, being founded on a collection of ideas called libertarian paternalism, which is itself a political doctrine.
The Government have embraced these ideas, which are based on controversial assumptions.
The current government formulates many policies with ‘behavioural science’ theory and experimental methodology behind them, which speaks in a distinct language of individual and social group ‘incentives’, ‘optimising decision-making’ and all for the greater ‘good of society’ (where poor citizens tend to get the cheap policy package of thrifty incentives, which entail austerity measures and having their income reduced, whereas wealthy citizens get the deluxe package, with generous financial rewards and free gifts.)
There are problems with trying to objectively measure a subjectively experienced phenomenon. There are major contradictions in the ideas that underpin the motive to do so. There is also a problem with using satisfaction surveys as a measure of the success or efficacy of government policies and practices.
A little about the company commissioned to undertake the survey
The research was commissioned by the Department for Work and Pensions and conducted by Kantar Public UK – who undertake marketing research, social surveys, and also specialise in consultancy, public opinion data, policy and also economy polling, with, it seems, multi-tasking fingers in several other lucrative pies.
Kantar Public “Works with clients in government, the public sector, global institutions, NGOs and commercial businesses to advise in the delivery of public policy, public services and public communications.”
“Kantar Public will deliver global best practice through local, expert teams; will synthesise innovations in marketing science, data analytics with the best of classic social research approaches; and will build on a long history of methodological innovation to deliver public value. It includes consulting, research and analytical capabilities.” (A touch of PR and technocracy).
Eric Salama, Kantar CEO, commented on the launch of this branch of Kantar Public in 2016: “We are proud of the work that we do in this sector, which is growing fast. Its increasing importance in stimulating behavioural change in many aspects of societies requires the kind of expert resource and investment that Kantar Public will provide.”
The world seems to be filling up with self-appointed, utilitarian choice architects. Who needs to live in a democracy when we have so many people who say they’re not only looking out for our ‘best interests’ but defining them, and helping us all to make “optimum choices” (whatever those may be)? All of these flourishing technocratic businesses are of course operating without a shred of cognitive bias or self-consciousness of their own. Apparently, the whopping profit motive isn’t a bias at all. It’s only everyone else that is cognitively flawed.
Based on those assumptions, what could possibly go wrong?
Ok, so having set the table, I’m going to nibble at the served dish. Kantar’s survey was commissioned by the Government, and cited in the opening quotes by the Government. The quotes have also been cited in the media and in a Commons debate, and even presented as evidence in a Commons Committee inquiry into disability support (Personal Independence Payments and Employment and Support Allowance).
It seems that no-one has examined the validity and reliability of the survey cited; it has simply been taken at face value. It’s assumed that the methodology, interpretation and underlying motives are neutral, value-free and ‘objective’. In fact, the survey has been described as ‘scientific’ by at least one Conservative MP.
There are a couple of problems, however, with that. My first point is a general one about quantitative surveys, especially those using closed questions. This survey was conducted mostly by telephone, and most of the questions in the questionnaire were closed.
Some basic problems with using closed questions in a survey:
- They impose a limited framework of responses on respondents
- The survey may not offer the exact answer a respondent wants to give
- The questions lead respondents and limit the scope of their responses
- Respondents may select the answer that is merely closest to their “true” response – the one they want to give but can’t, because it isn’t among the options
- The options presented may confuse respondents
- Respondents with no opinion may answer anyway
- They tell us nothing about whether respondents actually understood the question being asked, or whether the response options accurately capture and reflect respondents’ views
Another problem which is not restricted to the use of surveys in research is the Hawthorne effect. The respondents in this survey had active, open benefit claims or had registered a claim. This may have had some effect on their responses, since they may have felt scrutinised by the Department for Work and Pensions. Social relationships between the observer and the observed ought to be assessed when performing any type of social analysis and especially when there may be a perceived imbalanced power relationship between an organisation and the respondents in any research that they conduct or commission.
Given the punitive nature of welfare policies, it is very difficult to determine the extent to which fear of reprisal may have influenced people’s responses, regardless of how many reassurances participants were given regarding anonymity in advance.
The respondents in a survey may not be aware that their responses are to some extent influenced because of their relationship with the researcher (or those commissioning the research); they may subconsciously change their behaviour to fit the expected results of the survey, partly because of the context in which the research is being conducted.
Government ministers like to hear facts, figures and statistics all the time. What we need to bring to the equation is a real, live human perspective. We need to let ministers know how the policies they are implementing directly impact on their own constituents and social groups more widely.
Another advantage of qualitative methods is that they are prefigurative and bypass problems regarding potential power imbalances between the researcher and the subjects of research, by permitting participation (as opposed to respondents being acted upon) and creating space for genuine dialogue and reasoned discussions to take place. Research regarding political issues and policy impacts must surely engage citizens on a democratic, equal basis and permit participation in decision-making, to ensure an appropriate balance of power between citizens and the state.
Quantitative research draws on surveys and experimental research designs which limit the interaction between the investigator and those being investigated. Systematic sampling techniques are used, in order to control the risk of bias. However not everyone agrees that this method is an adequate safeguard against bias.
Kantar say in their published survey report: “As the Personal Independence Payment has become more established and its customer base increased, there has been an increase in overall satisfaction from 68 per cent in 2014/15 to 76 per cent in 2015/16. This increase is driven by an increase in the proportion of customers reporting that they were ‘very satisfied’ which rose from 25 per cent in 2014/15 to 35 per cent in 2015/16.”
The report states clearly: “The proportion of Personal Independence Payment customers who were ‘very dissatisfied’ fell from 19 per cent to 12 per cent over the same period.”
Then comes the killer: “This is likely to be partly explained by the inclusion in the 2014/15 sample of PIP customers who had a new claim disallowed who have not been sampled for the study since 2015/16. This brings PIP sampling into line with sampling practises for other benefits in the survey.”
In other words, those people with the greatest reason to be very dissatisfied with their contact with the Department for Work and Pensions – those who haven’t been awarded PIP, for example – are not included in the survey.
This introduces a problem in the survey called sampling bias. Sampling bias undermines the external validity of a survey (the capacity for its results to be accurately generalised to the entire population, in this case, of those claiming PIP). Given that people who are not awarded PIP make up a significant proportion of the PIP customer population who have registered for a claim, this will skew the survey result, slanting it towards positive responses.
Award rates for PIP (under normal rules, excluding withdrawn claims) for new claims are 46 per cent. However, they are at 73 per cent for Disability Living Allowance (DLA) reassessment claims. This covers PIP awards made between April 2013 and October 2016. Nearly all special rules (for those people who are terminally ill) claimants are found eligible for PIP.
If an entire segment of the PIP claimant population are excluded from the sample, then there are no adjustments that can produce estimates that are representative of the entire population of PIP claimants.
The same is true of the other groups of claimants. If those who have had a new claim disallowed are excluded (and again, bearing in mind that only 46 per cent of new claims for PIP resulted in an award), then a considerable proportion of claimants registering across all types of benefits – those likeliest to have registered a lower level of satisfaction with the Department because their claim was disallowed – are missing. This means the survey cannot be used to accurately track the overall performance of the Department, or to monitor whether it is fulfilling its customer charter commitments.
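The mechanics of this skew can be sketched in a few lines of code. Everything below is hypothetical apart from the 46 per cent new-claim award rate cited above; the satisfaction rates assumed for each group are purely illustrative, chosen only to show how excluding the disallowed group moves the headline figure.

```python
# Illustrative sketch of how excluding disallowed claims can inflate a
# headline satisfaction figure. Only the 46% award rate comes from the
# published PIP statistics cited above; the per-group satisfaction rates
# are assumptions for illustration.

def satisfaction(groups):
    """Weighted overall satisfaction across (share, satisfied_rate) groups."""
    total_share = sum(share for share, _ in groups)
    return sum(share * rate for share, rate in groups) / total_share

awarded = (0.46, 0.80)      # 46% of new claims awarded; assume 80% satisfied
disallowed = (0.54, 0.20)   # 54% disallowed; assume only 20% satisfied

full_sample = satisfaction([awarded, disallowed])
awarded_only = satisfaction([awarded])

print(f"Full sample:  {full_sample:.0%}")   # ~48%
print(f"Awarded only: {awarded_only:.0%}")  # 80%
```

Whatever the exact group-level rates turn out to be, removing the larger, more dissatisfied group from the sample can only pull the published figure upwards.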
The report clearly states: “There was a revision to sample eligibility criteria in 2014/15. Prior to this date the survey included customers who had contacted DWP within the past 6 months. From 2014/15 onwards this was shortened to a 3 month window. This may also have impacted on trend data.”
We have no way of knowing why those people’s claims were disallowed. We have no way of knowing if this is due to error or poor administrative procedures within the Department. If the purpose of a survey like this is to produce a valid account of levels of ‘customer satisfaction’ with the Department, then it must include a representative sample of all of those ‘customers’, including those whose experiences have been negative.
Otherwise the survey is reduced to little more than a PR exercise for the Department.
The sampling procedure therefore permits only an unrepresentative sample of people to participate in the survey – those likeliest to give the most positive responses, because their experience within the survey time frame has largely ended in a positive outcome. If those who have been sanctioned are also excluded across the sample, this will further hide the experiences and comments of those most adversely affected by the Department’s policies and administrative procedures – again, the claimants likeliest to register their dissatisfaction in the survey.
Measurement error occurs when a survey respondent’s answer to a survey question is inaccurate, imprecise, or cannot be compared in any useful way to other respondents’ answers. This type of error results from poor question wording and questionnaire construction. Closed and directed questions may also contribute to measurement error, along with faulty assumptions and imperfect scales. The kind of questions asked may also have limited the scope of the research.
For example, there’s a fundamental difference in asking questions like “Was the advisor polite on the telephone?” and “Did the decision-maker make the correct decision about your claim?”. The former generates responses that are relatively simplistic and superficial, the latter is rather more informative and tells us much more about how well the DWP fulfils one of its key functions, rather than demonstrating only how politely staff go about discussing claim details with claimants.
This survey is not going to produce a valid range of accounts or permit a reliable generalisation regarding the wider population’s experiences with the Department for Work and Pensions. Nor can it provide a template for a genuine learning opportunity and commitment to improvement for the Department.
With regard to the department’s Customer Charter, this survey does not include valid feedback and information regarding this section in particular:
Getting it right
• Provide you with the correct decision, information or payment
• Explain things clearly if the outcome is not what you’d hoped for
• Say sorry and put it right if we make a mistake
• Use your feedback to improve how we do things
One other issue with the sampling is that the Employment and Support Allowance (ESA) and Job Seeker’s Allowance (JSA) groups were overrepresented in the cohort.
Kantar do say: “When reading the report, bear in mind the fact that customers’ satisfaction levels are likely to be impacted by the nature of the benefit they are claiming. As such, it is more informative to look at trends over time for each benefit rather than making in-year comparisons between benefits.”
The sample was intentionally designed to overrepresent these groups in order to allow “robust quarterly analysis of these benefits”, according to the report. However, because a proportion of the cohort – those who had a claim disallowed – were excluded from the latest survey but not the previous one, cross-comparison and establishing trends over time is problematic.
To reiterate: the report itself cautions that satisfaction levels are likely to be affected by the nature of the benefit being claimed, and advises looking at trends over time for each benefit rather than making in-year comparisons between benefits.
With regard to my previous point: “Please also note that there was a methodological change to the way that Attendance Allowance, Disability Living Allowance and Personal Independence Payment customers were sampled in 2015/16 which means that for these benefits results for 2015/16 are not directly comparable with previous years.”
And: “As well as collecting satisfaction at an overall level, the survey also collects data on customers’ satisfaction with specific transactions such as ‘making a claim’, ‘reporting a change in circumstances’ and ‘appealing a decision’ (along with a number of other transactions) covering the remaining aspects of the DWP Customer Charter. These are not covered in this report, but the data are presented in the accompanying data tabulations.”
The survey also covered only those who had been in touch with DWP over a three month period shortly prior to the start of fieldwork. As such it is a survey of contacting customers rather than all benefits customers.
Again it is problematic to make inferences and generalisations about the levels of satisfaction among the wider population of claimants, based on a sample selected by using such a narrow range of characteristics.
The report also says: “Parts of the interview focus on a specific transaction which respondents had engaged in (for example making a claim or reporting a change in circumstances). In cases where a respondent had been involved in more than one transaction, the questionnaire prioritised less common or more complex transactions. As such, transaction-specific measures are not representative of ALL transactions conducted by DWP”.
And regarding subgroups: “When looking at data for specific benefits, the base sizes for benefits such as Employment and Support Allowance and Jobseeker’s Allowance (circa 5,500) are much larger than those for benefits such as Carer’s Allowance and Attendance Allowance (circa 450). As such, the margins of error for Employment and Support Allowance and Jobseeker’s Allowance are smaller than those of other benefits and it is therefore possible to identify relatively small changes as being statistically significant.”
Results from surveys are estimates and there is a margin of error associated with each figure quoted in this report. The smaller the sample size, the greater the uncertainty.
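The report’s point about base size and uncertainty can be made concrete with the standard margin-of-error formula for an estimated proportion. This sketch assumes simple random sampling; the survey’s weighting and design effects would widen these margins somewhat. The base sizes are the approximate figures quoted in the report, and the 76 per cent proportion is the headline PIP satisfaction figure, used here purely for scale.

```python
# Approximate 95% margin of error for a survey proportion, illustrating
# why the report says smaller base sizes mean greater uncertainty.
# Assumes simple random sampling; real design effects widen these margins.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Base sizes from the report: circa 5,500 for ESA/JSA, circa 450 for
# Carer's Allowance and Attendance Allowance.
for n in (5500, 450):
    print(f"n = {n}: ±{margin_of_error(0.76, n):.1%}")
```

With roughly 5,500 responses the margin is about ±1 percentage point; with roughly 450 it is about ±4 points, which is why only the larger benefit groups can show small changes as statistically significant.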
In fairness, the report does state: “In the interest of avoiding misinterpretation, data with a base size of less than 100 are omitted from the charts in this report.”
On non-sampling error, the report says: “Surveys depend on the responses given by participants. Some participants may answer questions inaccurately and some groups of respondents may be more likely to refuse to take part altogether. This can introduce biases and errors. Nonsampling error is minimised by the application of rigorous questionnaire design, the use of skilled and experienced interviewers who work under close supervision and rigorous quality assurance of the data.
Differing response rates amongst key sub-groups are addressed through weighting. Nevertheless, it is not possible to eliminate non-sampling error altogether and its impact cannot be reliably quantified.”
As I have pointed out, sampling error in a statistical analysis may also arise from the unrepresentativeness of the sample taken.
The survey response rates were not discussed either. In the methodological report, it says: “In 2015/16 DWP set targets each quarter for the required number of interviews for each benefit group to either produce a representative proportion of the benefit group in the eventual survey or a higher number of interviews for sub-group analysis where required. It is therefore not strictly appropriate to report response rates as fieldwork for a benefit group ceased if a target was reached.”
The Government says: “This research monitors claimants’ satisfaction with DWP services and ensures their views are considered in operational and policy planning.”
Again, it doesn’t include those claimants whose benefit support has been disallowed. There is considerable controversy around disability benefit award decisions (and sanctioning) in particular, yet the survey does not address this important issue, since those experiencing negative outcomes are excluded from the survey sample. We know that there is a problem with the PIP and ESA benefits award decision-making processes, since a significant proportion of those people who go on to appeal DWP decisions are subsequently awarded their benefit.
The DWP, however, don’t seem to have any interest in genuine feedback from this group that may contribute to an improvement in both performance and decision-making processes, leading to improved outcomes for disabled people.
Last year, between April and June, judges ruled that 14,077 people should be given PIP against the Government’s decision not to award it – 65 per cent of all PIP cases heard. The figure is higher still for ESA (68 per cent). PIP and ESA claimants accounted for some 85 per cent of all benefit appeals.
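As a back-of-envelope check of those figures: if 14,077 successful appeals represent 65 per cent of the PIP cases heard, the implied caseload for that quarter is roughly 21,700 appeals.

```python
# Back-of-envelope check of the tribunal figures cited above.
pip_upheld = 14_077        # PIP appeals decided in the claimant's favour
pip_success_rate = 0.65    # 65 per cent of PIP cases heard

implied_pip_appeals = pip_upheld / pip_success_rate
print(f"Implied PIP appeals heard: {implied_pip_appeals:,.0f}")  # → 21,657
```

That is a substantial volume of disputed decisions for a single quarter, none of which is visible in a survey that samples only those with an active or registered claim.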
The system, also criticised by the United Nations because it “systematically violates the rights of disabled persons”, seems to have been deliberately set up in a way that tends towards disallowing support awards. The survey excluded the voices of those people affected by this government’s absolute callousness or simple bureaucratic incompetence. The net effect, consequent distress and hardship caused to sick and disabled people is the same regardless of which it is.
Given that only 18 per cent of PIP decisions to disallow a claim are reversed at mandatory reconsideration, I’m inclined to think that this isn’t just a case of bureaucratic incompetence: even when the DWP has the opportunity to rectify its mistakes, it fails to reach a correct decision in the majority of cases for those refused an award.
Without an urgent overhaul of the assessment process by the Government, the benefit system will continue to work against disabled people, instead of for them.
The Government claim: “The objectives of this research are to:
- capture the views and experiences of DWP’s service from claimants, or their representatives, who used their services recently
- identify differences in the views and experiences of people claiming different benefits
- use claimants’ views of the service to measure the department’s performance against its customer charter”
The commissioned survey does not genuinely meet those objectives.
“There is an alternative reality being presented by the other side. The use of figures diminishes disabled peoples’ experiences.”
I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.