Paul Willging Says…

Decide what you want from satisfaction surveys

Last month I talked about the importance of customer satisfaction in managing a successful long-term care community. So, just how do we measure satisfaction? As it happens, numerous questionnaires are available purporting to be the definitive answer to that very question. They are produced by academicians, consultants, providers, and provider associations. The challenge lies in distinguishing between those whose primary purpose is marketing and those whose critical focus is management. Or, put differently, do you want the good news or the bad news?

We’ve all seen survey results that show astronomical levels of resident contentment. They may help in filling buildings (although, personally, I doubt it), but they are of little use to a management team looking for areas in which to focus its quality improvement efforts. For that purpose, we need to find out what the customer isn’t happy about. To negatively paraphrase marketing guru Chuck Chakrapani in his seminal text on the subject, How to Measure Service Quality & Customer Satisfaction: The Informal Field Guide for Tools and Techniques, don’t measure for the wrong reason, don’t measure the wrong things, don’t measure the wrong audience, and don’t measure the wrong way.

So, what should you do? Well, start by generating useful data. Complaints are a valuable source of information and can be a starting point for effective customer satisfaction measurement, but they cannot substitute for it. Indeed, voluntary complaints are, at best, harbingers of customer dissatisfaction, since they represent only the tip of the iceberg. Such complaints, however, might give you a sense of areas needing to be analyzed in further detail.

Other sources of useful information can include focus groups. Remember, when organizing these, that your customers are residents, their families, and staff; don’t ignore any of them. Brainstorming sessions can also be helpful.

Our goal here is twofold: first, to gather all the preliminary information we need; and second, to bring focus to the measurement system itself by defining how the information will be used. If you don’t know what you’ll do with the results of any query, it’s not worth pursuing.

Know what is important to the customer, not what you think is important. But focus also on those areas that are important to your particular mission as a seniors housing and care community. If you’re not offering affordability to begin with, satisfaction with your prices might not be that important to you. (Value, of course, is something else again.) If you don’t admit dementia residents, you probably won’t be as interested in customer/staff reactions to difficult behavioral issues.

Measuring attributes that don’t contribute to satisfaction (because they are not part of your mission) is not just ill-advised; it can be harmful. It can provide an illusion of community focus that can itself lead to dissatisfaction if it’s not fulfilled. Take affordability as an example: even though it’s not part of your mission, asking about how well you’re achieving it may confuse the customer into thinking that it is. The result: a less-than-satisfied customer.

Another basic error in satisfaction research is to place more emphasis on the satisfied majority than on the discriminating minority. While most of your residents may claim to be happy with your transportation service, what if only 5% actually use it? Their level of satisfaction (or dissatisfaction) outweighs the views of the 95% who don’t, even if the latter claim high levels of satisfaction with it. Remember, for purposes of quality management, we’re interested in the bad news, not the good. Remember, too, that one very dissatisfied customer who has personal involvement with a service can bring more harm to the facility than the goodwill generated by ten satisfied customers who are only tangentially affected by it.
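To put rough numbers on that point, here is a quick sketch (purely illustrative; the 1-to-5 scale, the usage split, and every figure in it are hypothetical) of how an overall average can hide exactly the bad news we are looking for, while a usage-filtered average surfaces it:

```python
# Hypothetical illustration: the views of the few who actually use a service
# can be drowned out by the many who do not. All figures are invented.

responses = (
    [(5, False)] * 95   # 95 residents who never use the transportation service
    + [(2, True)] * 5   # 5 residents who ride regularly and are unhappy
)

overall = sum(score for score, _ in responses) / len(responses)
user_scores = [score for score, uses_service in responses if uses_service]
among_users = sum(user_scores) / len(user_scores)

print(f"Overall average satisfaction: {overall:.2f}")      # about 4.85 -- looks fine
print(f"Average among actual users:   {among_users:.2f}")  # 2.00 -- the real story
```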

Quality management experts from Deming to Crosby have pointed out that the most effective way to improve customer satisfaction is to improve the processes through which services are delivered. Focusing your efforts, therefore, on the users of your services will keep the system relevant to its essential purpose.

Perceived importance also provides a useful filter for developing your survey questions. “How important is it to you that you had to wait ten minutes for a seat at lunch?” A resident who actually looks forward to that wait as an opportunity to socialize gives you a very different perspective than one who is fuming at the “waste of time.” In short, avoid the temptation to lump all your customers together.

Even within a specific category of facility (assisted living, for example), satisfaction survey results will vary by the type of community in which services are provided. People who move directly to freestanding assisted living from their own homes are different from people who live in the assisted living sector of a CCRC. Most of those in the freestanding assisted living community once believed they would stay in their own homes and never have to leave; most are residing there because someone else (usually their children) made that decision for them. Residents of CCRCs, on the other hand, while perhaps not happy at eventually having to move into assisted living from the community’s independent living wing, were themselves the decision makers when it came to choosing the CCRC in the first place. That choice marks them as planners: while initially selecting the CCRC for its independent living, they anticipated at least the possibility of needing assisted living someday, and their survey responses will likely reflect that fact.

Satisfaction survey results are also known to vary significantly according to who completes the survey (resident, family, or employee), as well as the age and gender of the respondent, the length of time the individual has lived or worked in the community, and the marital status of the resident. These differences are important to understand when comparing results from year to year within the same community, or when comparing two communities. Older communities will likely have older residents with longer stays than newer communities. CCRCs tend to house more couples, and people still enjoying life with their spouses have a different outlook than those who have lost a spouse.
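In practice, that argues for comparing cohort against cohort rather than aggregate against aggregate. A minimal sketch of the idea (the records, cohort definitions, and cut-offs below are hypothetical, not taken from any actual survey):

```python
# Hypothetical illustration: group responses by respondent type and tenure
# before comparing years (or communities), so you compare like with like.
from collections import defaultdict

surveys = [
    # (year, respondent_type, years_in_community, satisfaction_score)
    (2003, "resident", 1, 4), (2003, "resident", 6, 5), (2003, "family", 2, 3),
    (2004, "resident", 2, 4), (2004, "resident", 7, 4), (2004, "family", 3, 3),
]

cohort_scores = defaultdict(list)
for year, respondent, tenure, score in surveys:
    tenure_band = "short stay" if tenure < 3 else "long stay"
    cohort_scores[(respondent, tenure_band, year)].append(score)

for key in sorted(cohort_scores):
    scores = cohort_scores[key]
    print(key, f"average = {sum(scores) / len(scores):.2f}")
```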

The use of rating scales (e.g., a scale of 1 to 5) can also confound satisfaction research, in that:

  • the customer has an inherent desire not to offend;
  • more customers are likely to claim satisfaction than dissatisfaction, skewing the results toward the positive end of the scale;
  • customer expectations are not all that high in the first place. If the customer doesn’t expect the food to be good, for example, she may not think it worth voicing deeply felt concerns about it; asking her how it could be improved might draw a more helpful answer.

Indeed, numeric scales are recognized as being inherently flawed. On 10-point scales, for example, most customers will religiously avoid the lower points. When there is no midpoint on a scale, most neutral customers tend to move up rather than down in their ranking. The reason is that people do not want to seem mean by giving employees or the community a low rating; they have a tendency, therefore, to say they are “satisfied” or “very satisfied.” Consequently, numeric scales are inherently biased toward “satisfaction,” which is not very helpful if your goal is quality improvement (although perhaps more helpful if your primary interest is marketing your facility).

Many researchers believe that “expectations scales” or “improvement scales” are better tools to assess satisfaction, because people have a tendency to be more critical when using these scales. While numeric satisfaction scales usually produce results that are bunched at the higher or “better” end of the scale, the expectations scales encourage respondents to be more critical and to use the full scale. With expectations scales, residents do not have the sense that they are rating the employee. Rather, they are simply stating how a particular service relates to their expectations. Consequently, respondents tend to use the midpoint in the scale more often than the higher ratings.
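One way such a scale might be scored (the wording, the -1/0/+1 mapping, and the responses below are hypothetical, offered only to make the mechanics concrete): map “fell short,” “met,” and “exceeded expectations” onto -1, 0, and +1, and flag any service whose average drifts below zero.

```python
# Hypothetical illustration: scoring an "expectations" scale and flagging
# services that fall short. Item names and responses are invented.

SCALE = {
    "fell short of expectations": -1,
    "met expectations": 0,
    "exceeded expectations": 1,
}

responses = {
    "dining": ["met expectations"] * 6 + ["fell short of expectations"] * 4,
    "housekeeping": ["met expectations"] * 8 + ["exceeded expectations"] * 2,
}

for service, answers in responses.items():
    average = sum(SCALE[a] for a in answers) / len(answers)
    note = "  <-- investigate" if average < 0 else ""
    print(f"{service:12s} {average:+.2f}{note}")
```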

The value of your system, therefore, will be a function of any number of variables. Certainly, how well you have segmented your audience and chosen your questions will be key factors. The quality of your analysis and how well you interpret the results are, of course, equally important. It is not enough to learn which areas of dissatisfaction are most pronounced in your facility. It is equally important to get at the cause of the problem so that it may be fixed.

That is why it is important to encourage employee involvement. Sometimes, the best way to ascertain root causes is to provide staff with the data and listen to their interpretations of causality and means of correction.

Finally, satisfaction measurement is a year-round undertaking, not an annual event. Surveys built into the routine of the community will have greater value and impact than those conducted as a singular annual exercise. Just as quality management is a process, not a project, so too should be the data production that drives your quality management system.


To comment on Dr. Willging’s views, as expressed here, please send e-mail to willging0904@nursinghomesmagazine.com.
