
The Gerontologist, Vol. 47, No. 1, 116-122
PRACTICE CONCEPTS
Copyright 2007 by The Gerontological Society of America

Transforming Data Into Practical Information: Using Consumer Input to Improve Home-Care Services

Robert Applebaum, PhD,1 Suzanne Kunkel, PhD,1 and Ken Wilson, MGS2

1 Scripps Gerontology Center, Miami University, Oxford, OH. 2 Council on Aging of Southwestern Ohio, Cincinnati.
Address correspondence to Robert Applebaum, Scripps Gerontology Center, Miami University, Oxford, OH 45056. E-mail: applebra@muohio.edu

Purpose: As funds have increased for the provision of in-home care, so too have concerns about the quality of services. In response, care management agencies and home-care providers have developed an array of monitoring activities designed to ensure the quality of services. In this article, we show how an area agency on aging both collected and used data to improve the quality of a network of in-home services. Design and Methods: Data came from more than 4,200 consumers enrolled in a community-based long-term-care program operated by the agency. In addition, other indicators of quality, such as elapsed time to service, were also collected. The area agency combined these data into a provider quality report. Results: The provider quality report showed considerable variation across the more than 80 providers delivering services. The report also included examples of how data could be used to improve the quality of in-home services. Implications: Many home-care agencies now collect data from and about consumers participating in their programs. Often, however, these organizations do not have a good plan for actually using the data. This study demonstrates how to use consumer information to improve the quality of services delivered.

Key Words: Quality, Care management, Consumer satisfaction

Background

With the expansion of in-home services, there is a growing recognition that it is critical to collect data from and about consumers. Despite a consistent history in long-term care of largely ignoring the consumer, states and local programs now widely use a number of home-care consumer satisfaction instruments (the Home Care Satisfaction Measure [HCSM], the Participant Experience Survey, the Service Adequacy Satisfaction Instrument) to hear consumers' views (Geron et al., 2000; Medstat, 2003; Murdoch, Kunkel, Applebaum, & Straker, 2004).

Assessing consumer satisfaction is a necessary but not sufficient component of program quality. To achieve high-quality home-care programs, consumer satisfaction must be coupled with two additional activities. First, an expanded array of quality measures, such as time to service, needs to supplement the consumer-generated satisfaction measures; second, agencies must have a plan for incorporating these data into a quality management system. This article focuses on how care management agencies and direct-service providers can use data in their quest to ensure and improve the quality of in-home services.

A major motivation for this work involves the conceptual shift from program monitoring and quality assurance to quality improvement. The quality-assurance model has largely relied on an inspection paradigm, with an emphasis on identifying and correcting mistakes after they happen (Applebaum, Straker, & Geron, 2000). The inspection model has a long history, having dominated nursing home review since the inception of Medicaid (Hawes, 1997/1998).
Critics of the inspection approach argue that it simply has not worked, but perhaps the biggest flaw with this model in the home-care context is that some proportion of clients receive inferior care even when the inspection model is working well. The shift to quality improvement requires providers to develop a strategy in which they use data on a continual basis to constantly modify and improve the services being delivered (Crosby, 1979; Deming, 1982). Under the quality-improvement model, the search for quality is less adversarial and more cooperative, involving the major stakeholders in the development and review of the service.

In the case of home care, this means that both consumers and the in-home service provider must be active participants in the quality-improvement model. Another important feature of the quality-improvement model is the premise that unless all providers are performing at a high level, there will always be some consumers receiving subpar service. Rather than having some high-performing providers and some low-performing ones, the goal of quality improvement is to bring all providers up to a common high standard. Such a model requires the elimination of poorly functioning providers, followed by a partnership for quality with those that remain. This article describes a model that involved providers in using consumer satisfaction results and additional program indicators to improve service quality.

Quality Context

This article reports on a partnership between an area agency on aging and a university-based research center. As one of the largest area agencies in the United States, the Council on Aging of Southwestern Ohio has an annual budget of more than $60 million. It has an extensive care-managed in-home services program funded through two major sources: the Home and Community-Based Medicaid Waiver program (called PASSPORT in Ohio) and local property tax initiatives, which generate $35 million annually in revenue for in-home services (Council on Aging of Southwestern Ohio, 2004). The two programs provide care-managed in-home services to almost 13,000 older people in the region. Although using local tax dollars to fund home care is an innovation, it also means that both the press and political officials pay close attention to issues of quality. To be responsive to all stakeholders, the Council allocates significant agency resources to assessing and improving quality.

Like many care-management agencies, the Council holds contracts with a multitude of providers. In its largest county, the Council contracts with more than 40 providers for home-care services alone. The large number of providers makes individual agency monitoring time consuming and expensive. Prior to this initiative, the Council used the traditional quality-assurance approach to monitoring the array of providers under contract: the Council made annual visits to review provider agency paperwork and employee records, care managers identified problem cases for review, investigations were launched in response to consumer or family complaints, and data were examined for a small sample of consumers participating in a statewide mailed satisfaction survey of PASSPORT clients. Although this approach allowed the Council to identify and address problem providers in select cases, data on which to assess provider quality were quite limited.

The Council was particularly concerned with the limited amount of data it was able to collect directly from consumers. Some individual providers used surveys to collect satisfaction data, but there were significant methodological problems with the data collection and a lack of good benchmarks against which to compare the quality of services. The existing state survey had two problems that made its data unhelpful to the Council. First, the mailed state survey had varying consumer response rates across regions of the state and within the Council's geographic area.
Second, the Council was convinced that there was considerable variation in quality across the range of service providers; thus, obtaining consumer satisfaction data at the provider level, rather than at a regional level, was critical. The large number of providers, however, meant that such an approach required larger sample sizes than the Council's budget would allow.

To address these problems, the Council, as part of a state-funded project, worked with its research partner to test an alternative data collection approach: using care managers to collect consumer satisfaction data about the homemaker, personal care, and home-delivered meal services received (excluding care management). The strength of this strategy was that care managers visited clients' homes every 6 months, so the Council could incorporate a battery of consumer satisfaction measures at very low cost. This would allow the Council to collect data on all providers in its system. The limitation of this approach was the question of whether such data could be collected in a valid and reliable manner. Care managers have complex, often long-standing, relationships with their clients. We needed to determine whether service satisfaction data collected by care managers would be comparable to those collected by outsiders who had no other relationship with the client and no vested interest in the consumers' responses.

The test involved having research-trained care managers collect satisfaction data as part of their annual reassessment home visit, and then having independent research staff reinterview the same clients within a few weeks. We sought to validate the data collected by the care managers. The study found that care managers, when appropriately trained, did collect scientifically sound data in a cost-effective way. The data collected by care managers did not differ significantly from those collected by the independent researchers who interviewed the same clients: test-retest reliability scores between the care managers and research interviewers were comparable, quality ratings were comparable, and there were no statistically significant differences on the satisfaction scales for homemaker and home health aide services. (For a complete discussion of this study, see Murdoch et al., 2004.)
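Murdoch et al. (2004) report the full validation analysis. Purely as an illustration of the kind of comparison being made, the sketch below runs a paired test and a correlation on hypothetical reinterview data; the file name, column names, and choice of tests are assumptions for illustration, not the published analysis.

```python
# Hypothetical reinterview file: one row per client, with the satisfaction
# score recorded by the care manager and by the independent researcher.
import pandas as pd
from scipy import stats

scores = pd.read_csv("reinterview_scores.csv")  # client_id, cm_score, res_score

t_stat, p_value = stats.ttest_rel(scores["cm_score"], scores["res_score"])
r, _ = stats.pearsonr(scores["cm_score"], scores["res_score"])

print(f"care-manager mean: {scores['cm_score'].mean():.1f}, "
      f"researcher mean: {scores['res_score'].mean():.1f}")
print(f"paired t = {t_stat:.2f} (p = {p_value:.3f}), test-retest r = {r:.2f}")
# Comparable means, a non-significant difference, and a high correlation
# would support using care managers as interviewers.
```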

Based on the findings from this study, the Council was almost ready to implement a wide-scale consumer survey that would generate provider-specific data about consumer assessments of quality. The Council made one additional change prior to full-scale implementation. The initial testing had been done using the HCSM, which has been tested extensively across the United States (Geron et al., 2000). The HCSM had been used exclusively with research-trained interviewers, and our pretest and in-depth interviews with care managers found that certain items did not work as well when care managers administered the tool (Murdoch et al., 2004). For example, one of the HCSM questions asks about the worker being the consumer's friend; a positive response would not be consistent with the Council's policy, and care managers felt they could not successfully ask this question. The Council also refined the wording of several other questions to make them less confusing to older adults: some of the negatively worded questions were harder for consumers to understand, and the response categories were changed to make it easier for consumers to communicate their experiences. Although the revised instrument, the Service Adequacy Satisfaction Instrument, relied heavily on the pioneering work of the HCSM, the Council revised and tested questions and response categories in preparation for the implementation phase.

Implementation

It is important to note that prior to full-scale implementation, the Council had been thinking about the concept and had been involved in preliminary testing for several years. The Council undertook three major activities that it deemed essential to implementation success. First, the idea required organizational support. Using work groups composed of care managers, quality-assurance staff, and administrative staff, the Council worked hard to communicate both the importance of getting feedback from consumers and the value of providing information back to providers. Work-group members also became heavily involved in the first pilot tests of consumer satisfaction data collection approaches. Second, the Council needed buy-in from the provider community. The Council communicated the importance of the results for quality and made clear to providers that, during the initial phases of the effort, it would not use the information to punish individual providers. An advisory work group that included select providers tested both the data collection activities and the optimum format for providing information back to providers. Finally, the Council had to develop expertise in collecting, processing, analyzing, and using data. This involved everything from purchasing a data-entry scanner to developing a template to generate reports for both care managers and providers. Once the Council had become proficient in these activities, it was ready to move into the next phase of development.

The implementation phase of the quality-improvement model included three major components: (a) a large-scale consumer data collection effort that would allow for confidence in individual provider-level results, (b) the production of a provider quality report that would allow home-care agencies to compare their results to average ratings for other providers, and (c) technical assistance to help providers use the data in the quality reports to improve their services.
Large-Scale Consumer Data Collection

The Council faced two major challenges in collecting consumer satisfaction information from the range of home-care providers in its system: the sheer number of providers and the wide variation in provider size. Some providers served hundreds of clients, whereas others served 15 or 20. With such variation, the Council had to address the question of how many consumers needed to be surveyed: the minimum sample size for a provider, given the number of clients served. The Council did not want to report data for a provider if the sample size was not sufficient for the data to be valid, reliable, and generalizable.

Because of the range of providers, and because care managers visited each client every 6 months, all currently enrolled clients were surveyed for this pilot project. The sample excluded clients who had severe cognitive impairment, clients who refused to participate, and clients who terminated services prior to their reassessment. The average survey added 10 min to the care manager's visit. We decided not to use proxies in the data collection effort; 6% of the sample was unable to respond because of physical or cognitive limitations. The use of proxies is feasible and might be necessary in Medicaid-waiver programs, which typically serve a more impaired population.

Even though care managers surveyed all of the remaining clients, in some instances sample size was still an issue in generating the quality report. For example, we wondered: If a provider who served 10 clients had only 3 responses, would this be adequate for reporting purposes? We developed a sample size calculator to determine whether a provider had enough observations to be included in the report (Noble, Bailer, Kunkel, & Straker, in press). We based the sample size calculation on a statistical strategy designed to address two specific problems. First, the routine application of a normal approximation to the binomial is ill advised when the units range significantly in size (e.g., from 20 clients to 600 clients). Second, the unknown proportion of interest (i.e., percent satisfied) may vary from provider to provider, making the standard sample size calculation problematic.
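The calculator itself is described in Noble, Bailer, Kunkel, and Straker (in press), which uses a hierarchical model precisely because the routine normal approximation breaks down across units of very different sizes. As a simpler point of reference only, the sketch below shows the textbook minimum sample size for estimating a provider-level proportion, with a finite population correction for small caseloads; the function name, 10-point margin of error, and 95% confidence level are illustrative assumptions, not the published method.

```python
import math

def min_sample_size(caseload: int, margin: float = 0.10,
                    p: float = 0.5, z: float = 1.96) -> int:
    """Minimum completed surveys needed to estimate a provider-level
    proportion (e.g., percent satisfied) to within +/- `margin` at the
    confidence level implied by `z`, for a provider serving `caseload`
    clients. p = 0.5 is the most conservative assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size (~96)
    # Finite population correction: small providers need a larger share of
    # their caseload surveyed, but never more responses than they have clients.
    return math.ceil(n0 / (1 + (n0 - 1) / caseload))

# Providers in this system ranged from roughly 20 to 600 clients:
for caseload in (20, 60, 200, 600):
    print(f"{caseload:>4} clients -> at least {min_sample_size(caseload)} surveys")
```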

Because of this problem, the initial quality report excluded the 10 smallest of the 60 providers. The Council hopes to reduce this number in subsequent rounds of data collection.

The Provider Quality Report

The next step was to generate a provider quality report for all providers with an adequate sample size. Although the anchor of the report was the consumer satisfaction results, it also included a series of additional quality indicators, such as the rate of acceptance of referrals sent to the provider, elapsed time from referral receipt to delivery of service, proportion of units of service delivered compared to units ordered, market share, and the individual provider's reimbursement rate compared to the countywide average. The Council issued the first provider quality report in January 2004 to 50 providers; throughout calendar year 2003, care managers had collected satisfaction data on 4,200 consumers receiving homemaking, personal care, and home-delivered meals. A second report, issued in January 2005 to 58 providers, covered the 2004 calendar year (based on interviews with 4,500 consumers).

Table 1 shows a page from the provider quality report with results from the home-care component of the consumer satisfaction survey. The questions belong to one of four major categories: worker dependability, worker competency, interpersonal interactions, and agency quality. Providers see the proportion of clients scoring positively on each item, the change from the previous year, and how they compare to other home-care providers; they do not see the scores of other agencies. Under the expectation that rankings will become less important as all agencies improve, the Council moved to a benchmarking group-classification system in which it attempts to identify the score that would classify a provider as a top-performing organization.

Table 1. Sample Provider Quality Report: Consumer Satisfaction Results (Personal Care)

Quality Indicator | Your Agency Average Score (2004)* | Your Change From 2003 | Overall Average, All Agencies | Your Rank | Benchmark Grouping
Worker dependability
  Workers work all their hours | 91.0 | 4.4 | 93.5 | 31 out of 37 | 2
  Workers keep their scheduled times | 90.1 | 1.5 | 92.4 | 27 out of 37 | 2
  Clients can depend on their workers | 93.9 | 3.0 | 94.9 | 24 out of 37 | 2
  Worker Dependability Subscale Score | 91.7 | 3.0 | 93.6 | 29 out of 37 | 2
Worker competency
  Workers know how to do their job | 94.4 | 0.5 | 96.2 | 31 out of 37 | 2
  Workers do a good job | 90.8 | 1.0 | 93.5 | 31 out of 37 | 2
  Workers know what to do | 92.8 | 0.3 | 94.9 | 30 out of 37 | 2
  Workers follow clients' instructions | 93.9 | 0.7 | 95.6 | 27 out of 37 | 2
  Workers do things the way clients want | 90.9 | 0.6 | 92.6 | 8 out of 37 | 2
  Worker Competency Subscale Score | 92.5 | 0.2 | 94.6 | 31 out of 37 | 2
Worker interpersonal
  Workers care about clients as people | 96.2 | 0.0 | 97.9 | 30 out of 37 | 1
  Clients trust their workers | 96.4 | 0.2 | 97.9 | 31 out of 37 | 1
  Workers treat clients with respect | 97.5 | 0.3 | 98.8 | 34 out of 37 | 1
  Worker Interpersonal Subscale Score | 96.7 | 0.2 | 98.2 | 31 out of 37 | 1
Agency quality
  Clients are told of changes in worker's schedule | 89.2 | 5.2 | 92.1 | 29 out of 37 | 2
  Clients who have never called about a problem (%) | 81.6 | 3.4 | 85.3 | 25 out of 37 | 2
  Clients who called with a problem and had a quick response (%) | 80.8 | 5.3 | 86.9 | 10 out of 13 | 2
Overall Service Adequacy and Satisfaction Instrument score | 93.1 | 2.3 | 95.0 | 32 out of 37 | 2

*Satisfaction scores have been converted to a 100-point scale to facilitate interpretation of results.
Agency staff responsible for quality management developed the benchmarks. The report placed providers into one of three levels of quality: 18 providers fell into Group 1, representing the top-performing providers; 20 were classified into Group 2, providers who performed well but had room for improvement; and 12 fell into Group 3, the lowest performing providers.

Agencies also received a more detailed look at the survey results, broken down by each of the five response categories (see Table 2). Providers asked for this more detailed breakdown as they attempted to understand consumer feedback. For example, for negative items, agencies found it useful to distinguish between a "never" response and a "sometimes" response.

Table 2. Sample Provider Quality Report: A Closer Look

Variable | Always | Usually | Sometimes | Hardly Ever | Never
Workers work all their hours | 78 | 13 | 5 | 3 | 1
Workers keep their scheduled times | 68 | 27 | 3 | 1 | 1
Clients can depend on their workers | 79 | 19 | 1 | 0 | 1
Workers know how to do their job | 82 | 14 | 3 | 0 | 1
Workers do a good job | 71 | 22 | 5 | 1 | 1
Workers know what to do | 79 | 16 | 4 | 0 | 1
Workers follow clients' instructions | 82 | 13 | 3 | 1 | 1
Workers do things the way clients want | 73 | 21 | 5 | 0 | 1
Workers care about clients as people | 89 | 7 | 3 | 0 | 1
Clients trust their workers | 89 | 8 | 2 | 0 | 1
Workers treat clients with respect | 92 | 6 | 1 | 0 | 1
Clients are told of changes in worker's schedule | 74 | 16 | 5 | 3 | 2

Note: Data are percentages.
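Table 1's footnote says item scores were converted to a 100-point scale, but the article does not spell out the conversion or the benchmark cutoffs. The sketch below shows one plausible reading, assuming evenly spaced category weights (Always = 100 down to Never = 0) and illustrative group cutoffs; applied to the first row of Table 2 it happens to reproduce that item's 91.0 in Table 1, but both the weights and the cutoffs remain assumptions rather than the Council's published rules.

```python
# Assumed, evenly spaced weights for the five response categories.
WEIGHTS = {"always": 100, "usually": 75, "sometimes": 50,
           "hardly ever": 25, "never": 0}

def item_score(pct_by_category: dict) -> float:
    """Convert response-category percentages (summing to ~100) into
    a 0-100 item score using the assumed weights."""
    return sum(WEIGHTS[cat] * pct for cat, pct in pct_by_category.items()) / 100

def benchmark_group(score: float, cut_group1: float = 95.0,
                    cut_group2: float = 90.0) -> int:
    """Map a score to benchmark Group 1 (top), 2, or 3. The cutoffs are
    illustrative; the Council's actual benchmarks were set by its
    quality-management staff."""
    if score >= cut_group1:
        return 1
    return 2 if score >= cut_group2 else 3

# First row of Table 2: "Workers work all their hours".
row = {"always": 78, "usually": 13, "sometimes": 5, "hardly ever": 3, "never": 1}
s = item_score(row)
print(f"Item score {s:.1f} -> benchmark Group {benchmark_group(s)}")  # 91.0 -> 2
```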

As noted, the provider quality report also includes performance indicators that supplement the consumer satisfaction data (see Table 3). For example, one of the important indicators was how quickly the provider was able to deliver the service after it was ordered. Providers received their scores, which were also grouped into a benchmark category. Providers varied in start-up time from 4.5 days to more than 21 days, indicating substantial room for improvement across the network.

The report also included data on referral acceptance. To streamline the referral process, Council care managers place orders via e-mail to all providers serving a particular geographic area. The provider quality report shows each provider's rates of acknowledged and accepted referrals and the proportion of clients it serves. Again, wide variation existed across the provider network, with unacknowledged referrals ranging from 0% to 84%. The report also allows providers to track their market share by area. The Council issues the provider quality report annually, prior to the providers' bidding and contracting process with the Council; the information is useful to providers in determining their bid price and contract proposal for the next year.

Table 3. Sample Provider Quality Report: Performance Indicators (Personal Care)

Performance Indicator | Agency Average | Overall Average | Benchmark Grouping
Acknowledged referrals (%) | 17.0 | 51.0 | 3
Accepted referrals (%) | 3.4 | 17.0 | 3
Awarded referrals (%) | 2.7 | 5.0 | 2
Response time for referrals (days) | 4.1 | 10.6 | 1
Market share: percentage | 3.0 | |
Market share: rank | 12 out of 24 | |
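The indicators in Table 3 are straightforward aggregates of referral records. As an illustration only, the sketch below computes acknowledgment, acceptance, and award rates plus mean response time from a hypothetical referral log; the file and column names are assumptions, not the Council's actual data system.

```python
# Hypothetical referral log: one row per e-mailed referral, with columns
# provider_id, acknowledged/accepted/awarded (booleans), referral_date,
# and service_start_date (blank when service never started).
import pandas as pd

referrals = pd.read_csv("referrals.csv",
                        parse_dates=["referral_date", "service_start_date"])

indicators = referrals.groupby("provider_id").agg(
    acknowledged_pct=("acknowledged", lambda s: 100 * s.mean()),
    accepted_pct=("accepted", lambda s: 100 * s.mean()),
    awarded_pct=("awarded", lambda s: 100 * s.mean()),
)

# Response time in days, counting only referrals that led to service.
served = referrals.dropna(subset=["service_start_date"])
days = (served["service_start_date"] - served["referral_date"]).dt.days
indicators["response_days"] = days.groupby(served["provider_id"]).mean()

print(indicators.round(1))
```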
Technical Assistance for Quality Improvement

The goal of the provider quality report is to improve the services received by Council consumers, and the vision of the technical assistance effort is to have every provider reach the Group 1 quality level. To achieve this goal, the Council has developed a series of strategies: training for agency staff and boards on reading and using the report, technical assistance that identifies the best practices of high-quality agencies, and integration of quality data into the Council's monitoring activities to better assist providers as they are being reviewed. Agency and board training involves ongoing meetings with providers, including an annual meeting highlighting overall findings from the provider quality report and onsite visits to provider agencies by Council quality staff; either the Council or the provider can initiate such visits.

Shifting from a quality-assurance paradigm to one of improvement requires both considerable training and increased trust between the Council and the array of providers under contract to deliver services. Most quality-assurance interactions in the past focused on complaints or failures in service provision, and although quality staff still respond to such problems, the shift to more of a technical assistance role has been a major change for the organization.

One of the technical assistance functions involves helping providers figure out how to actually use the data. A great amount of effort and training was required for providers to understand what their scores meant in relation to the benchmarks and then to apply that understanding to their operations.

The data frequently required additional information gathering and brainstorming to determine what caused low scores on certain questions. In one example, a home-delivered meals provider scored poorly on items examining meal delivery: responses to questions such as whether the meals arrived at the same time each day, or at a good time of day, were well below average, even though respondents rated meal quality very favorably. The provider, which had a long-standing reputation in the community for quality, was unsure how to react to these findings. The Council's technical assistance helped the provider analyze the results in the context of its practice. It turned out that the provider, in part because of its fine reputation, attracted a large number of volunteers to deliver meals. Delivery routes varied each day depending on the volunteers' schedules, so clients received their meals at different times each day of the week. The schedule was convenient for the volunteers but less so for the consumers. The meal provider subsequently revised its scheduling practice to respond to the concerns identified by consumers.

A second example involves a provider that operated two distinct satellite agencies and received separate satisfaction scores for each unit. Findings showed that one unit ranked as one of the top-performing providers, whereas the other was underperforming. The provider had allowed different organizational structures and procedures to develop in the two units: scheduling procedures, hiring practices and staffing, attitudes about quality, and team building all varied between them. The provider made modifications in the underperforming unit to address these areas, and the unit's consumer satisfaction scores increased significantly in the second-year survey.

The Council's quality staff are also working to develop best-practice models from the top-performing providers. The quality staff identify the top- and lowest-performing agencies for each quality indicator and then document differences in process, philosophy, and approach among the top performers. This helps to identify agency best practices, with the goal of spreading those practices to other providers. Using the data in this way can also generate practice innovations as providers continue to modify how they operate in an effort to improve their scores.

The Council has also incorporated the provider reports into its quality-assurance activities; the provider quality report is now a significant part of the review during each provider monitoring visit. The Council has found that, in many cases, the conference with the provider at the end of the monitoring visit is dominated by discussion of the provider quality report results rather than the findings and recommendations from the structural review. The questions that providers ask after reviewing the provider quality report are healthy and frequently lead to the questioning of long-held assumptions and practices. The collection and review of these data appear to have had a favorable impact on the provider network overall: a review of benchmark performances for providers with 2 years of data revealed that 17 providers improved their benchmark quality classification, 22 had no change, and 3 declined.
Challenges and Future Steps

Despite recent interest in collecting satisfaction data from consumers, using data to improve service quality is still a relatively rare event in the community-based care delivery system. The Council's experience demonstrates that a practicing agency can collect provider-specific data on a large scale and use the information to improve quality. Although this project has demonstrated the value of information in efforts to improve home-care quality, it raises a number of issues and challenges for wide-scale implementation.

Methodological Issues

Implementation of this effort presents several challenges. As noted, spreading the data collection across a large number of care managers allowed the Council to collect data on a large number of consumers at very low cost. Although the reinterview study concluded that care managers can collect these data reliably, training was a necessary component of the approach: the Council trained more than 100 care managers to collect survey data as part of this effort. Agencies undertaking an effort of this nature should not underestimate the associated training requirements.

A second challenge involves making sure there are enough respondents for each provider to include that agency in the provider quality report. Even though the data collection strategy attempted to survey every consumer (and more than 90% of consumers completed the interviews), for a range of reasons not all consumers participated. Because some providers serve a small number of consumers, it is possible for select providers to fall below a minimum sample threshold. Agencies developing a provider-specific report need to recognize this problem.

A third major challenge is having the resources and skills to process the data and publish the reports. It took the Council several years to develop the capacity to process thousands of surveys, tabulate the results, and publish the indicators in a report that was easy to read and useful to providers. The Council bought, and learned how to use, survey-scanning, analytical, and report-writing software to develop an efficient and reliable process that transformed data into useful information.

Program Issues

Two important programmatic issues also arise in implementing this approach. First, the approach requires the agency to commit time and resources to the effort. Care manager involvement is essential; although the Council made efforts to minimize their time, between training and data collection the effort required about 20 min of additional time per week. Staff resources for data scanning, processing, analysis, and production of reports were considerable, and the Council allocated one half-time position to the effort. Agency staff also spent time with providers to help them interpret quality findings; in addition to individual meetings with providers, the Council holds an annual quality conference with them. The Council has always allocated a considerable amount of staff time to quality-assurance activities and believes these efforts have substantially improved its quality work and have allowed it to adjust how it spends its quality-monitoring resources.

Second, the Council has had to address questions about how best to use the results. Should they be available to consumers and their families as they make their provider choices? Should they be available to care managers as they help consumers choose providers? Should they be made available to the public, like the Mobil restaurant guide? Should results be available to other providers? Should results be incorporated into the Council's internal quality-assurance and quality-improvement activities? Should the results be used to determine how the agency contracts with, pays, and refers clients to providers?

The Council entered into this activity with the goal of improving the quality of its services, and the provider community was involved in helping to test the provider quality report and give feedback on the overall approach. Part of the concern with the data is the recognition that there is no perfect measure: the measures are indicators of quality, but no single measure can tell the entire story. To this point, providers have received summary results for all providers and their own agency-specific results; the Council has not made results available to consumers or care managers. On the one hand, one of the Council's long-standing quality-improvement principles has been that improvement data should not be used for punishment. The fear is that the data will be more likely to be manipulated and misused if they are used to monitor or limit the resources available to a particular provider. On the other hand, some argue that consumers have the right to information about service providers and that it is the Council's responsibility to share such data with consumers, care managers, and the public at large. Critics argue that consumers are now asked to choose providers and should have access to the information necessary to make the best decision possible; indeed, some have argued that requiring providers to make the information available would create an even stronger incentive to improve. In our view, a reasonable goal for the Council would be to eventually make quality data available to consumers, but not until it has accumulated adequate experience with the instrument, data collection, processing and analysis, and reporting.
The quality-management literature describes the evolutionary nature of changing an organization's culture and practices around quality (Crosby, 1999; Liker, 2004; Spector, 2001). The shift from a quality-assurance paradigm to an improvement perspective will not happen overnight in any organization. The types of questions the Council is now raising are the normal issues facing organizations as they implement a quality-improvement approach. Input from the range of stakeholders, including consumers, providers, and care managers, has allowed the Council to evolve to its current level of development. Our expectation is that the answers to these difficult questions will be found by working with these same stakeholders.

References

Applebaum, R. A., Straker, J. K., & Geron, S. M. (2000). Assessing satisfaction in health and long-term care. New York: Springer.

Council on Aging of Southwestern Ohio. (2004, April). Annual report. Cincinnati, OH: Author.

Crosby, P. B. (1979). Quality is free: The art of making quality certain. New York: McGraw-Hill.

Crosby, P. B. (1999). Quality and me: Lessons from an evolving life. San Francisco, CA: Jossey-Bass.

Deming, W. E. (1982). Out of the crisis. Cambridge, MA: MIT Center for Advanced Engineering.

Geron, S. M., Smith, K., Tennstedt, S., Jette, A., Chassler, D., & Kasten, L. (2000). The home care satisfaction measure: A client-centered approach to assessing the satisfaction of frail older adults with home care services. Journal of Gerontology: Social Sciences, 55B, S259-S270.

Hawes, C. (1997/1998). Regulation and the politics of long-term care. Generations, 21(4), 5-9.

Liker, J. K. (2004). The Toyota way. New York: McGraw-Hill.

Medstat. (2003). Participant Experience Survey user's guide. Developed for the Centers for Medicare and Medicaid Services, Department of Health and Human Services.

Murdoch, L. D., Kunkel, S. R., Applebaum, R. A., & Straker, J. K. (2004). Care managers as research interviewers: A test of a strategy for gathering consumer satisfaction. Journal of Applied Gerontology, 23, 234-246.

Noble, R. B., Bailer, A. J., Kunkel, S. R., & Straker, J. K. (in press). Estimating the sample size required to achieve a specified level of precision when estimating a finite population proportion using a hierarchical model. Health Services and Outcomes Research Methodology.

Spector, R. (2001). Lessons from the Nordstrom way. New York: Wiley.

Received April 20, 2006
Accepted September 18, 2006
Decision Editor: Nancy Morrow-Howell, PhD