A critique of the government’s claimant satisfaction survey

“An official survey shows that 76% of people in the [PIP] system responded to say that they were satisfied. That itself is not a happy position, but it shows that her representation of people’s average experience as wholly negative on the basis of a Twitter appeal does not reflect the results of a scientific survey.” Stephen Kerr (Conservative and Unionist MP for Stirling), Personal Independence Payments debate, Hansard, Volume 635, Column 342WH, 31 January 2018

“The latest official research shows that 76% of PIP claimants and 83% of ESA claimants are satisfied with their overall experience.” Spokesperson for the Department for Work and Pensions.

The Department for Work and Pensions Claimant Service and Experience Survey (CSES) is described as “an ongoing cross-sectional study with quarterly bursts of interviewing. The survey is designed to monitor customers’ satisfaction with the service offered by DWP and enable customer views to be fed into operational and policy development.”

The survey measures levels of satisfaction in a defined group of ‘customers’ who have had contact with the Department for Work and Pensions within a three-month period prior to the survey.

One problem with the aim of the survey is that satisfaction is an elusive concept – a subjective experience that is not easily definable, accessible or open to precise quantitative measurement. 

Furthermore, some statistics were not fully or adequately discussed in the survey report – they were tucked away in the Excel data tables referenced at the end of the report – and were certainly not cited by Government ministers: namely, those concerning the problems and difficulties with the Department for Work and Pensions that arose for some claimants.

It’s worrying that 51 per cent of all respondents across all types of benefits who experienced difficulties or problems in their dealings with the Department for Work and Pensions did not see them resolved. A further 4 per cent saw only a partial resolution, and 3 per cent didn’t know if there had been any resolution.

In the job seeker’s allowance (JSA) category, some 53 per cent had unresolved problems with the Department and only 39 per cent had seen their problems resolved. In the Employment and Support Allowance (ESA) group, 50 per cent had unresolved problems with the Department, and in the Personal Independence Payment (PIP) group, 57 per cent of claimants had ongoing problems with the Department, while only 33 per cent had seen their problems resolved.

[Chart omitted: proportions of claimants dissatisfied with the Department, by benefit type. A dash (–) indicates a sample size of less than 40.]

A brief philosophical analysis

The survey powerfully reminded me of Jeremy Bentham’s Hedonistic Calculus, which was an algorithm designed to measure pleasure and pain, as Bentham believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced.
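
For illustration: Bentham scored a prospective action along seven dimensions – intensity, duration, certainty, propinquity, fecundity, purity and extent – and summed the pleasures against the pains it was expected to produce. A minimal sketch of that arithmetic, with an entirely invented scoring scale and example values, might look like this:

```python
# A minimal, hypothetical sketch of Bentham's hedonistic calculus.
# The seven dimensions and the idea of summing pleasures against pains
# are Bentham's; the 0-10 scale and example values are invented here.

DIMENSIONS = [
    "intensity",    # how strong the pleasure or pain is
    "duration",     # how long it lasts
    "certainty",    # how likely it is to occur
    "propinquity",  # how soon it will occur
    "fecundity",    # how likely it is to be followed by more of the same
    "purity",       # how unlikely it is to be followed by its opposite
    "extent",       # how many people it affects
]

def hedonic_score(pleasures, pains):
    """Sum hypothetical 0-10 scores for the pleasures and pains an action
    produces; a positive total counts the action as morally 'right'."""
    total = sum(pleasures.get(d, 0) for d in DIMENSIONS)
    total -= sum(pains.get(d, 0) for d in DIMENSIONS)
    return total

# Hypothetical example: mild, brief pleasure for many people,
# versus intense, lasting pain for a few.
print(hedonic_score(
    pleasures={"intensity": 2, "duration": 1, "extent": 8},
    pains={"intensity": 9, "duration": 7, "extent": 2},
))  # -7: on these invented numbers, the calculus calls the action 'wrong'
```

The arbitrariness of the scale is, of course, precisely the problem: attaching numbers lends an air of precision to what remains a subjective judgement.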

Bentham discussed at length some of the ways that moral investigations are a ‘science’. There is an inherent contradiction in Bentham’s work between his positivism, which is founded on the principle of verification – this says that a sentence is strictly meaningful only if it expresses something that can be confirmed or disconfirmed by empirical observation (establishing facts, which are descriptive) – and his utilitarianism, which concerns normative ethics (values, which are prescriptive). Bentham conflates the fact-value distinction when it suits his purpose, as do the current Government.

The recent rise in ‘happiness’, ‘wellbeing’ and ‘satisfaction’ surveys is linked to Bentham’s utilitarian ideas, and to a Conservative endorsement of entrenched social practices that follows from this broadly functionalist approach. It’s not only a reflection of the government’s simplistic, reductionist view of citizens; it’s also a reflection of the reduced functioning and increasing rational incoherence of a neoliberal state.

As we have witnessed over recent years, utilitarian ideologues in power tend to impose their vision of the ‘greatest happiness for the greatest number’, which may entail some negative consequences for minorities and socially marginalised groups. For example, the design of a disciplinarian, coercive and punitive welfare system to make ‘the taxpayer’ or ‘hard-working families’ happy (both groups being perceived as the majority). The happiness of those people who don’t currently conform to a politically defined norm doesn’t seem to matter to the Government. Of course, people claiming welfare support pay tax, and more often than not, paid tax before needing support.

Nonetheless, those in circumstances of poverty are regarded as acceptable collateral damage in the war for the totalising neoliberal terms and conditions of the ‘greater good’ of society, sacrificed for the greatest happiness of others. As a consequence, we live in a country where tax avoidance is considered more acceptable behaviour than being late for a job centre appointment. Tax avoidance and offshore banking are considered more ‘sustainable’ than welfare support for disabled people.

This utilitarian approach – the belief that a state-imposed paradigm of competitive socioeconomic organisation is the way to bring about the greatest happiness of the greatest number – also causes the greatest misery for some social groups. This problem raises issues with profound implications for democracy, socioeconomic inclusion, citizenship and human rights.

My point is that the very nature and subject choice of the research is a reflection of a distinctive political ideology, which is problematic, especially when the survey is passed off as ‘objective’ and ‘value-neutral’.

There are certain underpinning and recognisable assumptions drawn from the doctrine of utilitarianism, which became a positivist pseudoscience in the late nineteenth century. The idea that human behaviour should be mathematised in order to turn the study of humans into a science proper strips humans down to the simplest, most basic motivational structures, in an attempt to reduce human behaviours to a formula.

To be predictable in this way, behaviour must also be assumed to be determined.

Yet we have a raft of behavioural economists complaining of everyone else’s ‘cognitive bias’, who have decided to go about helping the population to make decisions in their own and society’s best interests – best interests that are defined by the behavioural economists themselves. The theory that people make faulty decisions somehow exempts the theorists from their own theory, of course. However, if decisions and behaviours are determined, so are the theories about decisions and behaviours. Behavioural science itself isn’t value-neutral, being founded on a collection of ideas called libertarian paternalism, which is itself a political doctrine.

The Government have embraced these ideas, which are based on controversial assumptions. Even the best philosophers and neuroscience specialists have never resolved the debate around determinism and free will. There is no consensus on the matter. 

The current government formulates many policies with ‘behavioural science’ theory and experimental methodology behind them, which speaks a distinct language of individual and social group ‘incentives’ and ‘optimising decision-making’, all for the greater ‘good of society’ (where poor citizens tend to get the cheap policy package of thrifty incentives, entailing austerity measures and reduced incomes, whereas wealthy citizens get the deluxe package, with generous financial rewards and free gifts).

There are problems with trying to objectively measure a subjectively experienced phenomenon. There are major contradictions in the ideas that underpin the motive to do so. There is also a problem with using satisfaction surveys as a measure of the success or efficacy of government policies and practices.

A little about the company commissioned to undertake the survey

The research was commissioned by the Department for Work and Pensions and conducted by Kantar Public UK – who undertake marketing research and social surveys, and specialise in consultancy, public opinion data, and policy and economy polling, with, it seems, multi-tasking fingers in several other lucrative pies.

Kantar Public “Works with clients in government, the public sector, global institutions, NGOs and commercial businesses to advise in the delivery of public policy, public services and public communications.” 

Kantar Public “will deliver global best practice through local, expert teams; will synthesise innovations in marketing science, data analytics with the best of classic social research approaches; and will build on a long history of methodological innovation to deliver public value. It includes consulting, research and analytical capabilities.” (A touch of PR and technocracy.)

Eric Salama, Kantar CEO, commented on the launch of this branch of Kantar Public in 2016: “We are proud of the work that we do in this sector, which is growing fast. Its increasing importance in stimulating behavioural change in many aspects of societies requires the kind of expert resource and investment that Kantar Public will provide.”

The world seems to be filling up with self-appointed, utilitarian choice architects. Who needs to live in a democracy when we have so many people who say they’re not only looking out for our ‘best interests’, but defining them, and also helping us all to make “optimum choices” (whatever they may be)? All of these flourishing technocratic businesses are of course operating without a shred of cognitive bias or self-consciousness of their own. Apparently, the whopping profit motive isn’t a bias at all. It’s only everyone else that is cognitively flawed.

Based on those assumptions, what could possibly go wrong, right?

I digress. 

The nitty-gritty

Ok, so having set the table, I’m going to nibble at the served dish. Kantar’s survey – commissioned by the Government – is the one cited in the opening quotes, by the Government. The quotes have been cited in the media, in a Commons debate, and even presented as evidence in a Commons Committee inquiry into disability support (Personal Independence Payments and Employment and Support Allowance).

It seems that no-one has examined the validity and reliability of the survey cited; it has simply been taken at face value. It’s assumed that the methodology, interpretation and underlying motives are neutral, value-free and ‘objective’. In fact, the survey has been described as ‘scientific’ by at least one Conservative MP.

There are a couple of problems, however, with that. My first point is a general one about quantitative surveys, especially those using closed questions. This survey was conducted mostly by telephone, and most questions in the questionnaire were closed.

Some basic problems with using closed questions in a survey:

  • Closed questions impose a limited framework of responses on respondents
  • The survey may not offer the exact answer a respondent wants to give
  • The questions lead and limit the scope of responses
  • Respondents may select the answer that is simply closest to their “true” response – the one they want to give but can’t, because it isn’t among the response options
  • The options presented may confuse respondents
  • Respondents with no opinion may answer anyway
  • Closed questions tell us nothing about whether respondents actually understood the question being asked, or whether the response options accurately capture and reflect respondents’ views

Another problem which is not restricted to the use of surveys in research is the Hawthorne effect. The respondents in this survey had active, open benefit claims or had registered a claim. This may have had some effect on their responses, since they may have felt scrutinised by the Department for Work and Pensions. Social relationships between the observer and the observed ought to be assessed when performing any type of social analysis and especially when there may be a perceived imbalanced power relationship between an organisation and the respondents in any research that they conduct or commission.

Given the punitive nature of welfare policies, it is very difficult to determine the extent to which fear of reprisal may have influenced people’s responses, regardless of how many reassurances participants were given regarding anonymity in advance.

The respondents in a survey may not be aware that their responses are to some extent influenced because of their relationship with the researcher (or those commissioning the research); they may subconsciously change their behaviour to fit the expected results of the survey, partly because of the context in which the research is being conducted.

The Hawthorne Effect is a well-documented phenomenon that affects many areas of research and experiment in social sciences. It is the process where human subjects taking part in research change or modify their behaviour, simply because they are being studied. This is one of the hardest inbuilt biases to eliminate or factor into research design. This was a survey conducted over the telephone, which again introduces the risk of an element of ‘observer bias.’

Methodological issues

On a personal level, I don’t believe declared objectivity in research means that positivism and quantitative research methodology have an exclusive stranglehold on ‘truth’. I don’t believe there is a universally objective, external vantage point that we can reach from within the confines of our own human subjectivity, nor can we escape an intersubjectively experienced social, cultural, political and economic context.

There is debate around verificationism, not least because the verification principle itself is unverifiable. The positivist approach more generally treats human subjects as objects of interest and research – much like phenomena studied in the natural sciences. As such, it has an inbuilt tendency to dehumanise the people being researched. Much human meaning and experience gets lost in the translation of responses into quantified data, since the chief goal of statistical analysis is to identify trends.

An example of the employment of ‘objective’ and ‘value-neutral’ methods resulting in dehumanisation is some of the inappropriate questions asked during assessment for disability benefits. The Work and Pensions Select Committee received nearly 4,000 submissions – the most received by a select committee inquiry – after calling for evidence on the assessments for personal independence payment (PIP) and Employment and Support Allowance (ESA). 

The recent committee report highlighted people with Down’s syndrome being asked when they ‘caught’ it. Assessors have asked insulting and irrelevant questions, such as when someone with a progressive condition will recover, and what level of education they have.

This said, my own degree and Master’s, undertaken in the 1990s, and my profession up until 2010, when I became too ill to work, were actually used in 2017 as an indication that I have “no cognitive problems” – after some seven years of being unable to work because of the symptoms of a progressive illness that is known to cause cognitive problems. The driving licence I held in 2003 was also used as evidence of my cognitive functioning.

Yet I explained that I have been unable to drive since 2004 because of my sensitivity to flickering (lamp posts, trees and telegraph poles have a strobe-light effect on me as the car moves), which triggers vertigo, nausea, severe coordination difficulties, scintillating scotoma and subsequent loss of vision, slurred and incoherent speech, severe drowsiness, muscle rigidity and uncontrollable jerking in my legs. I usually get an incapacitating headache, too. I’m sensitive to flashing or flickering lights and certain patterns, such as ripples on a pond and some black and white stripes; even walking past railings on an overcast day completely incapacitates me.

The PIP assessment framework is claimed to be ‘independent, unbiased and objective’. Central to the process is the use of ‘descriptors’, which are a limited set of criteria used to ‘measure’ the impact of the day-to-day level of disability that a person experiences. Assessors use objective methods such as “examination techniques, collecting robust evidence, selecting the correct descriptor as to the claimant’s level of ability in each of the 10 activities of daily living and two mobility activities, and report writing.” They speak the language of positivism with fluency.

However, it has long been recognised by social researchers and sociologists that positivism does not accommodate human complexity, vulnerability and context very well. In an assessment situation, the assessor is a stranger to the person undergoing the assessment. How appropriate is it that a stranger assessing ‘functional capacity’ asks disabled people why they have not killed themselves? Alice Kirby is one of many people this happened to.

She says: “In this setting it’s not safe to ask questions like these because assessors have neither the time or skills to support us, and there’s no consideration of the impact it could have on our mental health.

The questions were also completely unnecessary, they were barely mentioned in my report and had no impact on my award.”

So, not only an extremely insensitive and potentially risk-laden question but an apparently pointless one. 

It may be argued that some universal ‘truths’ such as the importance of ‘impartiality’, or ‘objectivity’ are little more than misleading myths which allow practitioners and researchers alike to claim, and convince themselves, that they behave in a manner that is morally robust and ethically defensible.

A brief discussion of the methodological debate  


Social phenomena cannot always be studied in the same way as natural phenomena, because human beings are subjective, intentional and have a degree of free will. One problem with quantitative research is that it tends to impose theoretical frameworks on those being studied, and it limits responses from those participating in the study. Quantitative surveys tend not to capture or generate understanding about the lived, meaningful experiences of real people in context.

There are also distinctions to be made between facts, values and meanings. Qualitative researchers are concerned with generating explanations and extending understanding rather than simply describing and measuring social phenomena and attempting to establish basic cause and effect relationships.

Qualitative research tends to be exploratory, potentially illuminating underlying intentions, responses, beliefs, reasons, opinions, and motivations to human behaviours. This type of analysis often provides insights into social problems, helps to develop ideas and establish explanations, and may also be used to formulate hypotheses for further quantitative research.

The dichotomy between quantitative and qualitative methodological approaches, theoretical structuralism (macro-level perspectives) and interpretivism (micro-level perspectives) in sociology, for example, is not nearly so clear as it once was, however, with many social researchers recognising the value of both means of data and evidence collection and employing methodological triangulation, reflecting a commitment to methodological and epistemological pluralism.

Qualitative methods of research tend to be much more inclusive, detailed and expansive than quantitative analysis, lending participants a dialogic, democratic and first hand voice regarding their own experiences.

The current government has tended to dismiss qualitative evidence from first-hand witnesses of the negative impacts of their policies – presented case studies, individual accounts and ethnographies – as ‘anecdotal.’ This presents a problem in that it stifles legitimate feedback. An emphasis on positivism reflects a very authoritarian approach to social administration, and it needs to be challenged.

A qualitative approach to research is open and democratic. It potentially provides insight, depth and richly detailed accounts. The evidence collected is much more coherent and comprehensive, because it explores beneath surface appearances and reaches beyond simple causal relationships, delving much deeper than the analysis of ranks, categories and counts. It provides a reliable and rather more authentic record of experiences, attitudes, feelings and behaviours; it prompts openness and is expansive, whereas quantitative methods tend to limit responses and are somewhat reductive.

Qualitative research methods encourage people to expand on their responses and may then open up new issues and topic areas not initially considered by researchers.

Government ministers like to hear facts, figures and statistics all the time. What we need to bring to the equation is a real, live human perspective. We need to let ministers know how the policies they are implementing directly impact on their own constituents and social groups more widely.

Another advantage of qualitative methods is that they are prefigurative and bypass problems regarding potential power imbalances between the researcher and the subjects of research, by permitting participation (as opposed to respondents being acted upon) and creating space for genuine dialogue and reasoned discussions to take place. Research regarding political issues and policy impacts must surely engage citizens on a democratic, equal basis and permit participation in decision-making, to ensure an appropriate balance of power between citizens and the state.

Quantitative research draws on surveys and experimental research designs which limit the interaction between the investigator and those being investigated. Systematic sampling techniques are used in order to control the risk of bias. However, not everyone agrees that this method is an adequate safeguard against bias.

Kantar say in their published survey report: “As the Personal Independence Payment has become more established and its customer base increased, there has been an increase in overall satisfaction from 68 per cent in 2014/15 to 76 per cent in 2015/16. This increase is driven by an increase in the proportion of customers reporting that they were ‘very satisfied’ which rose from 25 per cent in 2014/15 to 35 per cent in 2015/16.”

Sampling practices

The report states clearly: “The proportion of Personal Independence Payment customers who were ‘very dissatisfied’ fell from 19 per cent to 12 per cent over the same period.”

Then comes the killer: “This is likely to be partly explained by the inclusion in the 2014/15 sample of PIP customers who had a new claim disallowed who have not been sampled for the study since 2015/16. This brings PIP sampling into line with sampling practises for other benefits in the survey.”

In other words, those people with the greatest reason to be very dissatisfied with their contact with the Department for Work and Pensions – those who haven’t been awarded PIP, for example – are not included in the survey.

This introduces a problem in the survey called sampling bias. Sampling bias undermines the external validity of a survey (the capacity for its results to be accurately generalised to the entire population, in this case, of those claiming PIP). Given that people who are not awarded PIP make up a significant proportion of the PIP customer population who have registered for a claim, this will skew the survey result, slanting it towards positive responses.

Award rates for PIP (under normal rules, excluding withdrawn claims) for new claims are 46 per cent. However, they are at 73 per cent for Disability Living Allowance (DLA) reassessment claims. This covers PIP awards made between April 2013 and October 2016. Nearly all special rules (for those people who are terminally ill) claimants are found eligible for PIP. 
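
To see how much this sampling decision can matter, here is a minimal sketch using the 46 per cent award rate above. The satisfaction rates assumed for awarded and disallowed claimants are entirely hypothetical – the whole point is that the survey collects no data at all for the excluded group:

```python
# A minimal sketch of how excluding disallowed claims inflates measured
# satisfaction. The 46% award rate for new PIP claims is from the DWP
# statistics cited above; the group-level satisfaction rates are
# hypothetical assumptions, purely for illustration.

award_rate = 0.46         # new PIP claims resulting in an award
sat_if_awarded = 0.80     # hypothetical satisfaction among awarded claimants
sat_if_disallowed = 0.15  # hypothetical satisfaction among disallowed claimants

# True satisfaction across ALL new claimants (awarded + disallowed):
true_satisfaction = (award_rate * sat_if_awarded
                     + (1 - award_rate) * sat_if_disallowed)

# What the survey measures once disallowed claimants are dropped:
sampled_satisfaction = sat_if_awarded

print(f"True satisfaction:    {true_satisfaction:.0%}")    # ~45%
print(f"Sampled satisfaction: {sampled_satisfaction:.0%}")  # 80%
```

On these assumptions the reported figure is nearly double the true one. Different assumed rates change the size of the gap, but so long as disallowed claimants are less satisfied than awarded ones, dropping them can only push the headline figure upwards.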

If an entire segment of the PIP claimant population are excluded from the sample, then there are no adjustments that can produce estimates that are representative of the entire population of PIP claimants.

The same is true of the other groups of claimants. If those who have had a new claim disallowed are excluded – and again, bear in mind that only 46 per cent of new claims for PIP resulted in an award – then a considerable proportion of claimants across all types of benefits who were likely to have registered a lower level of satisfaction, because their claim was disallowed, is left out. This means the survey cannot be used to accurately track the overall performance of the Department, or to monitor whether it is fulfilling its customer charter commitments.

The report clearly states: “There was a revision to sample eligibility criteria in 2014/15. Prior to this date the survey included customers who had contacted DWP within the past 6 months. From 2014/15 onwards this was shortened to a 3 month window. This may also have impacted on trend data.” 

We have no way of knowing why those people’s claims were disallowed. We have no way of knowing whether this was due to error or poor administrative procedures within the Department. If the purpose of a survey like this is to produce a valid account of levels of ‘customer satisfaction’ with the Department, then it must include a representative sample of all of those ‘customers’, including those whose experiences have been negative.

Otherwise the survey is reduced to little more than a PR exercise for the Department. 

The sampling procedure therefore permits only an unrepresentative sample of people to participate in the survey – those likeliest to produce the most positive responses, because their experiences within the survey time frame have largely had a positive outcome. If those who have been sanctioned are also excluded across the sample, this will further hide the experiences and comments of those most adversely affected by the Department’s policies and administrative procedures – again, the claimants likeliest to register their dissatisfaction in the survey.

Measurement error occurs when a survey respondent’s answer to a survey question is inaccurate, imprecise, or cannot be compared in any useful way to other respondents’ answers. This type of error results from poor question wording and questionnaire construction. Closed and directed questions may also contribute to measurement error, along with faulty assumptions and imperfect scales. The kind of questions asked may also have limited the scope of the research.

For example, there’s a fundamental difference between asking questions like “Was the advisor polite on the telephone?” and “Did the decision-maker make the correct decision about your claim?”. The former generates responses that are relatively simplistic and superficial; the latter is rather more informative, and tells us much more about how well the DWP fulfils one of its key functions, rather than demonstrating only how politely staff go about discussing claim details with claimants.

This survey is not going to produce a valid range of accounts or permit a reliable generalisation regarding the wider population’s experiences with the Department for Work and Pensions. Nor can it provide a template for a genuine learning opportunity and commitment to improvement for the Department.

With regard to the department’s Customer Charter, this survey does not include valid feedback and information regarding this section in particular:

Getting it right

We will:
• Provide you with the correct decision, information or payment
• Explain things clearly if the outcome is not what you’d hoped for
• Say sorry and put it right if we make a mistake 
• Use your feedback to improve how we do things

One other issue with the sampling is that the Employment and Support Allowance (ESA) and Job Seeker’s Allowance (JSA) groups were overrepresented in the cohort. 

Kantar do say: “When reading the report, bear in mind the fact that customers’ satisfaction levels are likely to be impacted by the nature of the benefit they are claiming. As such, it is more informative to look at trends over time for each benefit rather than making in-year comparisons between benefits.” 

The sample was intentionally designed to overrepresent these groups in order to allow “robust quarterly analysis of these benefits”, according to the report. However, because a proportion of the cohort – those who had their benefit claim disallowed – were excluded from the latest survey but not the previous one, cross-comparison and establishing trends over time are problematic.

To reiterate: the report itself advises that it is more informative to look at trends over time for each benefit than to make in-year comparisons between benefits – the very trends the sampling change undermines.

With regard to my previous point: “Please also note that there was a methodological change to the way that Attendance Allowance, Disability Living Allowance and Personal Independence Payment customers were sampled in 2015/16 which means that for these benefits results for 2015/16 are not directly comparable with previous years.” 

And: “As well as collecting satisfaction at an overall level, the survey also collects data on customers’ satisfaction with specific transactions such as ‘making a claim’, ‘reporting a change in circumstances’ and ‘appealing a decision’ (along with a number of other transactions) covering the remaining aspects of the DWP Customer Charter. These are not covered in this report, but the data are presented in the accompanying data tabulations.”

The survey also covered only those who had been in touch with DWP over a three-month period shortly prior to the start of fieldwork. As such, it is a survey of contacting customers rather than all benefits customers.

Again it is problematic to make inferences and generalisations about the levels of satisfaction among the wider population of claimants, based on a sample selected by using such a narrow range of characteristics.

The report also says: “Parts of the interview focus on a specific transaction which respondents had engaged in (for example making a claim or reporting a change in circumstances). In cases where a respondent had been involved in more than one transaction, the questionnaire prioritised less common or more complex transactions. As such, transaction-specific measures are not representative of ALL transactions conducted by DWP”.

And regarding subgroups: “When looking at data for specific benefits, the base sizes for benefits such as Employment and Support Allowance and Jobseeker’s Allowance (circa 5,500) are much larger than those for benefits such as Carer’s Allowance and Attendance Allowance (circa 450). As such, the margins of error for Employment and Support Allowance and Jobseeker’s Allowance are smaller than those of other benefits and it is therefore possible to identify relatively small changes as being statistically significant.”

Results from surveys are estimates and there is a margin of error associated with each figure quoted in this report. The smaller the sample size, the greater the uncertainty.
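
For a sense of scale, the conventional 95 per cent margin of error for a proportion can be sketched as follows, using the approximate base sizes quoted in the report. The 76 per cent figure is used purely as an example proportion, and the formula assumes simple random sampling, ignoring any design effects from weighting:

```python
# A minimal sketch of the 95% margin of error for a survey proportion,
# using the approximate base sizes quoted in the report. Assumes simple
# random sampling; weighting and design effects would widen these further.
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a sample of size n."""
    return z * sqrt(p * (1 - p) / n)

for benefit, n in [("ESA/JSA", 5500), ("Carer's/Attendance Allowance", 450)]:
    moe = margin_of_error(p=0.76, n=n)  # e.g. the headline 76% figure
    print(f"{benefit} (n={n}): ±{moe:.1%}")

# ESA/JSA (n=5500): ±1.1%
# Carer's/Attendance Allowance (n=450): ±3.9%
```

So a headline percentage for the smaller benefit groups carries three to four times the uncertainty of one based on the ESA or JSA samples.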

In fairness, the report does state: “In the interest of avoiding misinterpretation, data with a base size of less than 100 are omitted from the charts in this report.” 

On non-sampling error, the report says: “Surveys depend on the responses given by participants. Some participants may answer questions inaccurately and some groups of respondents may be more likely to refuse to take part altogether. This can introduce biases and errors. Nonsampling error is minimised by the application of rigorous questionnaire design, the use of skilled and experienced interviewers who work under close supervision and rigorous quality assurance of the data.

Differing response rates amongst key sub-groups are addressed through weighting. Nevertheless, it is not possible to eliminate non-sampling error altogether and its impact cannot be reliably quantified.”
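
Weighting of this kind is straightforward in principle: each respondent counts in proportion to their group’s share of the claimant population rather than its share of the achieved sample. A minimal sketch, with invented shares and satisfaction figures:

```python
# A minimal sketch of weighting for differing response rates across
# sub-groups, as the report describes. All shares and satisfaction
# figures below are invented, purely for illustration.

groups = {
    #       (population share, sample share, observed satisfaction)
    "ESA": (0.40, 0.55, 0.70),
    "JSA": (0.35, 0.35, 0.75),
    "PIP": (0.25, 0.10, 0.60),
}

# Unweighted: each respondent counts equally, so over-sampled groups dominate.
unweighted = sum(samp * sat for pop, samp, sat in groups.values())

# Weighted: each group counts by its population share (weight = pop / samp).
weighted = sum(pop * sat for pop, samp, sat in groups.values())

print(f"Unweighted satisfaction: {unweighted:.1%}")  # 70.8%
print(f"Weighted satisfaction:   {weighted:.1%}")    # 69.3%
```

The crucial caveat is that weighting can only rebalance groups that were sampled in the first place; it cannot recover the views of the disallowed claimants who were excluded altogether.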

As I have pointed out, sampling error in a statistical analysis may also arise from the unrepresentativeness of the sample taken. 

The survey response rates were not discussed either. In the methodological report, it says: “In 2015/16 DWP set targets each quarter for the required number of interviews for each benefit group to either produce a representative proportion of the benefit group in the eventual survey or a higher number of interviews for sub-group analysis where required. It is therefore not strictly appropriate to report response rates as fieldwork for a benefit group ceased if a target was reached.”

The Government says: “This research monitors claimants’ satisfaction with DWP services and ensures their views are considered in operational and policy planning.” 

Again, it doesn’t include those claimants whose benefit support has been disallowed. There is considerable controversy around disability benefit award decisions (and sanctioning) in particular, yet the survey does not address this important issue, since those experiencing negative outcomes are excluded from the survey sample. We know that there is a problem with the PIP and ESA benefits award decision-making processes, since a significant proportion of those people who go on to appeal DWP decisions are subsequently awarded their benefit.

The DWP, however, don’t seem to have any interest in genuine feedback from this group that may contribute to an improvement in both performance and decision-making processes, leading to improved outcomes for disabled people.

Last year, between April and June, judges ruled that 14,077 people should be given PIP, against the government’s decision not to award it – 65 per cent of all cases heard. The figure is higher still for ESA (68 per cent). PIP and ESA claimants accounted for some 85 per cent of all benefit appeals.

The system – also criticised by the United Nations because it “systematically violates the rights of disabled persons” – seems to have been deliberately set up in a way that tends towards disallowing support awards. The survey excluded the voices of those people affected by this government’s absolute callousness or simple bureaucratic incompetence. The net effect – the consequent distress and hardship caused to sick and disabled people – is the same regardless of which it is.

Given that only 18 per cent of PIP decisions to disallow a claim are reversed at mandatory reconsideration, I’m inclined to think that this isn’t just a case of bureaucratic incompetence, since the opportunity for the DWP to rectify its mistakes does not, in the majority of cases, result in a subsequent correct decision for those refused an award.

Without an urgent overhaul of the assessment process by the Government, the benefit system will continue to work against disabled people, instead of for them.

The Government claim: “The objectives of this research are to:

  • capture the views and experiences of DWP’s service from claimants, or their representatives, who used their services recently
  • identify differences in the views and experiences of people claiming different benefits
  • use claimants’ views of the service to measure the department’s performance against its customer charter”

The commissioned survey does not genuinely meet those objectives.

Related

DWP splash out more than £100m trying to deny disabled people vital benefits

Inquiry into disability benefits ‘deluged’ by tales of despair

The importance of citizens’ qualitative accounts in democratic inclusion and political participation

Thousands of disability assessments deemed ‘unacceptable’ under the government’s own quality control scheme

Government guidelines for PIP assessment: a political redefinition of the word ‘objective’

PIP and ESA Assessments Inquiry – Work and Pensions Committee



I don’t make any money from my work. I am disabled because of illness and have a very limited income. But you can help by making a donation to help me continue to research and write informative, insightful and independent articles, and to provide support to others. The smallest amount is much appreciated – thank you.


23 thoughts on “A critique of the government’s claimant satisfaction survey”

  1. they got to be joking nah they believe their own lies. most who go through this will reply they weren’t happy core blimey cuckoos hay jeff3


  2. As there are countless examples showing that this government is thoroughly unscrupulous it seems unnecessary to try to prove that this ‘research’ result cannot be trusted. Rather the onus should be on the government to prove that in this one exceptional case, the result can be trusted. Good luck to them in doing so.


    1. The problem is that it has been taken at face value, and people are likely to take the claimed ‘scientific’ label more seriously, too. It always helps to challenge government accounts when you know they’re wrong. Other people may be able to use parts of this in debates and so on


  3. I cannot believe the results of this survey: I have a learner with fibromyalgia, osteoarthritis and congenitally turned feet who has been refused PIP and has had, as a result, to give up her adapted car. She’ll appeal, of course, and may get it, but by then her household economy, which rests on a knife edge, will have been ruined. The idea that anyone could interview this admirable woman and think her unworthy of support is incomprehensible.

    Thank you for this analysis. It will help.


    1. Those whose claim was disallowed were not included in this survey, because we are the people who would most likely register the highest level of dissatisfaction. I am so sorry to hear what has happened to your learner. Good luck to her with her appeal x


  4. I’d love them to do one on Universal Credit. They would find much dissatisfaction with that. Judging by the pages of complaints I’ve written to them.


  5. From the perspective of the claimant filling in the survey, this is nothing more than emotional blackmail, based on a paranoid fear of having everything taken away if you didn’t please the “top of the food chain”.
    Primal fear-based control is a great way to get “people” to conform and to manipulate the desired outcome.
    The science of everything and nothing – it’s the government’s way of reading the future. I’m sure there is some Tarot Card reader in the back room giving “desired outcomes” in conjunction with some of the world’s best psychotherapists. (I think Sigmund Freud still gets wheeled out.)

    However, at least they have stopped drilling holes in our heads to release the “mental health demons” – that, apparently, was “Scientific”.


  6. Love your e-mail articles … they’re excellent! Paul Snowdon


