Measuring resident satisfaction more accurately: Two approaches

A few years ago, the handful of researchers immersed in the topic were lamenting the lack of interest by policy makers in examining nursing home residents’ satisfaction with their care and quality of life. Times have changed. Thanks largely to the federal Nursing Home Quality Initiative, launched in 2002, the Centers for Medicare & Medicaid Services (CMS) is now pursuing three initiatives that in one form or another involve measuring resident satisfaction (see “CMS Initiatives,” p. 61). This measurement task is widely viewed as one step toward a larger goal of promoting culture change within nursing homes as a means of enhancing resident care and quality of life.

Having answered the question of whether to measure resident satisfaction with a resounding “Yes,” the question becomes: How do we measure it? With the science of this still in its infancy (but fast approaching the verge of adolescence), the answers are only now emerging, and it is becoming increasingly apparent that two distinct approaches are taking shape. While these approaches are not necessarily in conflict with each other, they serve different purposes, and without further clarification they could seriously confound this new measurement task. In this article, we aim to prevent such confusion and instead promote a deeper understanding of the task at hand through discussion of the whys and hows of resident satisfaction measurement.

Why Measure Resident Satisfaction?

Inherent in this question is an acknowledgment, now widely accepted by long-term care experts, that residents themselves—not their family members or facility staff or any other proxies—are in the best position to report on their own satisfaction and quality of life. They are, after all, the ones who are living it.

With that said, the issue is how these reports will be put to use. These days, the most common response is “for quality improvement [QI] purposes.” Resident satisfaction is assessed so that facilities can pinpoint areas in need of improvement and then design and evaluate appropriate interventions. Alternatively, satisfaction data are sometimes used for public accountability purposes. For example, a few states conduct resident satisfaction surveys and report the results online to help consumers choose among facilities.

How Is Resident Satisfaction Measured?

Purpose is important because it drives the selection of the best methods for measuring resident satisfaction. As Barbara Manard, PhD, points out in a report on nursing home quality indicators (of which resident satisfaction could be one), “the information best suited for internal quality management and improvement is not necessarily the same as that most useful for public accountability….”1

The table on page 62, a shorthand version of Manard’s work, shows how measurement strategy varies depending on purpose. If a facility’s intent is to improve care and quality of life for residents, then it should collect very specific information at short intervals so that it can determine whether residents are satisfied with new care practices. In contrast, organizations intent on public accountability—often government entities, not individual facilities—will want to collect, at infrequent intervals, global measures from large, reliable samples in order to develop fair and accurate indicators of performance.

Confusion and problems, most notably failed objectives, may result if you mismatch methods and purposes. For example, most state-approved resident satisfaction surveys are designed as public accountability tools. In general, they make poor QI surveys. Increasingly, however, with resident satisfaction measurement now a federal priority, these and other widely available global assessment tools are being recommended for use as QI surveys.

As a case in point, consider the resident satisfaction survey used to generate quality indicators for the Ohio Long-Term Care Consumer Guide (at https://www.ltcohio.org/consumer/index.asp). Although it is designed as a public accountability tool—an independent contractor administers the survey annually to as many as 32,000 nursing home residents across the state—CMS has accepted it as a tool that facilities might use for purposes that include quality improvement.2 A close examination of the survey shows, however, that it is poorly suited to QI tasks.

For example, of the survey’s 48 items, 10, or 21%, are direct satisfaction questions that share a common format: Are you satisfied with (fill in the blank)? Such questions may work well for benchmarking facilities (in any case, they appear regularly in state-approved surveys), but from a QI standpoint, they are arguably worse than useless because of their potential to lead to erroneous conclusions.

In a series of studies,3-5 the UCLA Borun Center for Gerontological Research has shown that direct satisfaction questions suffer from an “acquiescent response” bias; that is, nursing home residents tend to respond favorably to these questions, despite known problems with the quality of care they are receiving.

Additionally, responses to these questions shed little light on how to correct problems. Does the resident want to get out of bed earlier or later? Does she want to eat in the dining room or her own room? With QI, as with many things in life, the devil is in the details—but the details are largely absent in direct satisfaction questions.

A final problem is that these questions are relatively insensitive to objective improvements in quality of care. In theory, if facilities improve services to better meet residents’ needs and preferences, then satisfaction with care should also increase. Another study found, however, that resident responses to direct satisfaction questions did not change even when the services in question were significantly enhanced and consistent with residents’ reported preferences.5

Considered together, these are serious drawbacks. Based on responses to direct satisfaction questions, facilities might falsely conclude that their services are satisfactory or that new interventions are not working. Either conclusion could scuttle desirable improvement efforts.

Another reason to avoid using public accountability surveys for quality improvement is that the former are typically broad and blunt instruments. Ohio’s resident satisfaction survey, for example, taps into nine separate domains (environment, laundry, meals, etc.). This is fine if the goal is to identify which domain to work on first (after all, what nursing home could tackle them all at once?). And, indeed, Manard’s chart (see table) identifies such needs assessments as an acceptable use for public accountability surveys. But when this assessment is completed, a finer, more focused satisfaction assessment is required to shape and evaluate appropriate improvement interventions.

Table. Comparison of quality measurement for quality improvement versus public accountability

| Issue | Measurement for Quality Improvement | Measurement for Public Accountability |
| --- | --- | --- |
| Purpose | Identify process to be improved, or test results of efforts | Inform purchase decisions, evaluate programs or policies, or conduct needs assessments |
| Requester/Audience | Internal (e.g., staff, providers, or management) | External (e.g., purchasers, accrediting bodies, regulators, policy makers) |
| What to Measure | Biggest gap between practice and science | Measures with wide public acceptance or importance |
| Frequency of Measurement | Very frequent (e.g., weekly or monthly) | Less frequent (e.g., annually) |
| Comparison | Longitudinal, or within facility or unit | Cross-sectional and longitudinal |
| Sample Size | Often relatively small | Large samples |
| Unit of Analysis | Smallest relevant unit (e.g., the number of lunches actually consumed in a particular facility when a different menu or approach to food choices is introduced) | Aggregate |
| Detection of Bias | No audit; measurement done internally | External audit and/or external measurement required |
| Level of Sophistication | Simple, not likely to be challenged | Appears simple, but rigorous and defensible |
| Level of Detail | Very specific, minuscule | Summarized, global |
| Expected Response | Behavior change | Decision making |

What Questions Are Most Useful for QI?

Although it is too early to talk about consensus standards in such a young field of study, there is nevertheless considerable research support for the use of “discrepancy questions” in resident satisfaction measurement.3-5 These questions, which compare preferred care with perceived care, avoid the drawbacks of direct satisfaction questions: They are relatively resistant to the acquiescent response bias, they are sensitive to objective improvements in care quality, and they generate information useful for directing improvement efforts.

Discrepancy questions come in pairs. The first question in a pair might ask residents, “How many times during the day would you like staff to help you to the bathroom?” The comparison question then asks, “How many times during the day do the staff help you to the bathroom?” You score discrepancy questions by subtracting the preferred amount of care (the first answer) from the amount of care actually received (the second answer). For example, if the resident says she receives toileting assistance once a day but prefers to receive it three times a day, then the discrepancy score is −2 (i.e., 1 − 3 = −2). The negative difference signals unmet needs. The ideal score for any discrepancy question set is zero, which means that the resident receives exactly as much care as he or she wants.
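The scoring rule above is simple enough to express in a few lines of code. The following Python sketch is illustrative only (the function and variable names are ours, not from any published instrument), but it captures the arithmetic: received care minus preferred care, with negative scores flagging unmet needs.

```python
def discrepancy_score(preferred: int, received: int) -> int:
    """Score a discrepancy-question pair: care received minus care preferred.

    A negative score signals an unmet need (less care than wanted);
    zero is ideal, meaning the resident gets exactly the care she prefers.
    """
    return received - preferred


# Example from the text: resident prefers toileting help 3 times a day
# but currently receives it once a day.
score = discrepancy_score(preferred=3, received=1)
print(score)  # -2, an unmet need
```

A facility tracking this score before and after an intervention would look for it to move toward zero.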

Discrepancy questions clearly lend themselves to evaluating care-frequency preferences, but they can also be used to evaluate other aspects of care, such as dining location (“Where do you have breakfast?” versus “Where do you like to have breakfast?”) or timeliness of care (e.g., “What time do staff help you out of bed in the morning?” versus “What time would you like for staff to help you out of bed in the morning?”).

Additional Guidelines

The Borun Center offers these additional guidelines to consider in developing resident satisfaction surveys for QI purposes (for more guidelines and discussion, visit the center’s Web site at https://borun.medsch.ucla.edu):

  • Start small, focusing first on a subset of residents, such as new admissions, or on a single care process or other activity that needs improvement, such as weight loss prevention. If the facility plans to implement a new intervention, assess resident satisfaction with the targeted care process both before and after the intervention is implemented. The results provide a measure of the intervention’s effectiveness.

  • Always conduct face-to-face resident interviews and, as a general rule, interview residents who score 2 or more on the four-item Minimum Data Set (MDS) Recall subscale. Research shows that these residents consistently provide reliable information useful for QI efforts.6,7 If the questions ask about services or care processes that occur daily, as opposed to less frequently, then also interview residents who score 1 on the MDS Recall subscale. Most of these residents can reliably self-report pain and depression, express meaningful preferences for daily care, and accurately describe care they receive on a daily basis.

  • Ask questions that are straightforward, short, and concrete. Avoid direct satisfaction questions or those that use abstract constructs. For example, a better way to assess “dignity and respect” within care delivery is to ask about concrete staff behaviors, such as: “Do the people who work here knock on your door before entering the room? …pull your curtain closed before helping you to get dressed? …address you by name when they see you?”

  • Focus questions on daily occurrences, because these are the most recent and tangible events in the resident’s memory. Ideally, residents should be interviewed shortly after the occurrence of the care activity in question.

  • Ask residents for a simple yes-or-no response. Although this strategy sacrifices the opportunity for more nuanced responses, it allows more residents to participate in satisfaction interviews; many residents are simply unable to answer questions that use multiple-point rating scales.

  • Include some structured open-ended questions (e.g., “If you could change something about the toileting schedule or the way staff help you to use the toilet, what would it be?”). Residents’ answers to these questions may surprise you—and offer direction for improvement.

  • Consider the goal when interpreting survey findings. If, for example, a facility wants to offer social activities that most residents will enjoy, then examine resident responses as a group to identify the majority opinion. More frequently, however, improvement efforts are intended to enhance daily life for an individual. In such cases, improvement efforts must be driven by the individual responses of residents interviewed.
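Two of the guidelines above—screening interview candidates by MDS Recall score and interpreting yes-or-no responses at the group level—lend themselves to a short sketch. The code below is a hypothetical illustration under our own naming; the thresholds mirror the guidelines, but the data structures are not part of any standard tool.

```python
def eligible_for_interview(recall_score: int, daily_care_topic: bool) -> bool:
    """Screen residents for satisfaction interviews.

    As a general rule, residents scoring 2 or more on the four-item
    MDS Recall subscale are interviewed; when questions concern daily
    care, residents scoring 1 are included as well.
    """
    threshold = 1 if daily_care_topic else 2
    return recall_score >= threshold


def majority_says_yes(responses: list[bool]) -> bool:
    """Group-level interpretation: did a majority answer 'yes'?"""
    return sum(responses) > len(responses) / 2


# Usage: screen a unit's residents, then interpret responses as a group.
recall_scores = [0, 1, 2, 3]
eligible = [s for s in recall_scores if eligible_for_interview(s, daily_care_topic=True)]
print(eligible)                                 # [1, 2, 3]
print(majority_says_yes([True, True, False]))   # True
```

For improvement efforts aimed at an individual resident, the group tally would of course be set aside in favor of that resident’s own answers, as the last guideline notes.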

A Final Note

Nursing homes that assess resident satisfaction with the intent of improving life in the facility face an uncomfortable dilemma: In order to improve satisfaction with care, their assessments must first discover pockets of dissatisfaction. Many facilities are understandably reluctant to assume this task; they fear that surveyors will use the assessment results to sanction the facility. Rest assured: Specific federal regulations guard against this. The regulations require nursing homes to establish internal quality assessment and assurance (QA) committees that meet to identify and respond to quality deficiencies within the facility. But according to the U.S. Office of Inspector General, “[s]urveyors do not have access to QA committee minutes due to the confidentiality of these documents mandated [by law].”8 Officials add that the new CMS initiatives will not harm facilities that identify problems via their resident satisfaction surveys.

Anna Rahman, MSW, is Principal Editor at the UCLA Borun Center and Sandra Simmons, PhD, is an Assistant Professor at the UCLA School of Medicine, Division of Geriatrics, and the Borun Center.

For further information, visit https://www.borun.medsch.ucla.edu. To send your comments to the authors and editors, e-mail rahman0507@nursinghomesmagazine.com.

References

  1. Manard B. Nursing Home Quality Indicators: Their Uses and Limitations. Washington, D.C.: AARP Public Policy Institute; 2002.
  2. Quality Partners of Rhode Island. Memo: “Satisfaction Tools Approved by CMS for Use by QIOs in the 8th SoW,” 2005.
  3. Levy-Storms L, Schnelle JF, Simmons SF. A comparison of methods to assess nursing home residents’ unmet needs. The Gerontologist 2002; 42:454-61.
  4. Simmons SF, Schnelle JF. Strategies to measure nursing home residents’ satisfaction and preferences related to incontinence and mobility care: Implications for evaluating intervention effects. The Gerontologist 1999; 39:345-55.
  5. Simmons SF, Ouslander JG. Resident and family satisfaction with incontinence and mobility care: Sensitivity to intervention effects? The Gerontologist 2005; 45:318-26.
  6. Simmons SF, Schnelle JF. The identification of residents capable of accurately describing daily care: Implications for evaluating nursing home care quality. The Gerontologist 2001; 41:605-11.
  7. Simmons SF, Schnelle JF, Uman GC, et al. Selecting nursing home residents for satisfaction surveys. The Gerontologist 1997; 37:543-50.
  8. Office of Inspector General, U.S. Department of Health and Human Services. Quality Assurance Committees in Nursing Homes. Washington, D.C.; January 2003. Publication No. OEI-01-00090.
