Determining Like Hospitals for Benchmarking Paper #2778


Diane Storer Brown, RN, PhD, FNAHQ, FAAN (Kaiser Permanente Northern California, Oakland, CA)
Nancy E. Donaldson, RN, DNSc, FAAN (Department of Physiological Nursing, University of California, San Francisco)
Linda Burnes Bolton, DrPH, RN, FAAN (Cedars-Sinai Medical Center, Los Angeles, CA)
Carolyn Aydin, PhD (Cedars-Sinai Medical Center, Los Angeles, CA)

Learner Objectives:
1. Understand the value of selecting like hospitals for benchmarking of nursing-sensitive indicators.
2. Describe the limitations of using hospital size as a characteristic to define like hospitals.

Hospital Environment
- Challenged to balance efficiency goals, which assure patients receive exactly the care they need in systems without waste, with highly reliable care that is consistently safe and clinically effective (high quality)
- Greatly impacted by the economic downturn
- Facing escalating health care costs and changing reimbursement models
- Growing lists of payers who will no longer reimburse hospitals for preventable hospital-acquired conditions
- Growing scrutiny over issues that erode public trust, which are highlighted in the media
- Increased public demands for transparency in both cost and quality data

Benchmarking Importance
- Leaders are challenged to identify appropriate benchmarks for comparative data.
- Benchmarking is an indispensable tool to gauge progress with strategic priorities.
- Benchmarking with other similar hospitals in a confidential context is an important component of improving performance on public report cards.

Purpose
To challenge the conventional use of arbitrary administrative comparison groups to define like hospitals by using a gross Average Daily Census (ADC) measure.
- ADC includes all hospital services: Maternal Child, Rehabilitation, Psychiatric, etc.
- CALNOC has the ability to examine statistically appropriate values to determine comparison groups for benchmarking hospital performance using nursing-sensitive outcome indicators.
- Staffing held constant across hospitals after California ratio implementation provided a natural laboratory with reliable unit-based concurrent data.

10 Years of CALNOC Data: Small Hospitals Are Statistically Different
- Unit-based data demonstrate significant differences in falls and hospital-acquired pressure ulcer (HAPU) performance.
- Small hospitals have been administratively defined as an ADC of 100 or less.
- The historical trends that follow demonstrate data from 49 hospitals (318 nursing units) in 2001, growing to 156 hospitals (951 units) in 2007.

[Figure: CalNOC trends in total facility medians by hospital Average Daily Census. Falls per 1,000 patient days, all hospitals, 2001-2007, plotted by ADC group (Under 100, 100-199, 200-299, 300+).]

Small Hospitals: Higher Fall Rates
- Identified prior to the implementation of mandated nurse-patient ratios in 2004.
- Hierarchical models were used to identify factors associated with falls (data from 1998-2003):
  - Most falls were in the small hospitals.
  - Correlations were found with patient age and higher percentages of medical diagnoses (two variables that described the small hospitals in this sample).

[Figure: CalNOC trends in total facility medians by hospital Average Daily Census. Percent of patients with hospital-acquired pressure ulcers stage II+, most recent study in year for each hospital, all hospitals, 2001-2007, plotted by ADC group (Under 100, 100-199, 200-299, 300+).]

Small Hospitals: Fewer HAPU Stage 2+
- Fewer HAPU stage 2+ than larger hospitals has continued after the implementation of staffing ratios in 2004 (staffing held constant after ratios).
- These differences held when looking at benchmark data for the best- and worst-performing hospitals.
- Analysis of 151 hospitals in 2006:
  - Hospitals were placed into benchmarking quartiles as best performers and worst performers.
  - Small hospitals consistently performed the best when looking at rates for any pressure ulcers (any stage, hospital- or community-acquired) and for various stages of hospital-acquired ulcers.

[Figure: CalNOC pressure ulcer prevalence by hospital size, benchmark lower quartiles (25th percentile), 2006, N=151 hospitals. The 2006 best performers (lower quartiles) for HAPU look different in hospitals with an ADC under 100. Series: lower-quartile any ulcer, any HAPU, HAPU 2+, and HAPU 3+, by ADC group. N=151 total hospitals; Under 100 = 41; 100-199 = 65; 200-299 = 27; 300+ = 17.]

Analytical Questions
1. What are the statistically appropriate ADC cut-points that define comparison groups for small and large hospitals to benchmark performance?
2. Are there statistical differences in outcomes between small and large hospitals when using empirically defined categories?

Methods
- Data from 6 quarters (18 months): CALNOC participating hospitals reporting during 2007 and the first two quarters of 2008.
- 196 hospitals: 196 with medical/surgical nursing units (MS), 195 with critical care (CC), and 120 with stepdown (SD).
- Analyses were completed at the unit-type level (CC, MS, SD) from 1,264 nursing units.

Unit Type   Units Contributing Data   % of Total Data
CC          308                       24
MS          743                       59
SD          224                       17

Table 1: CALNOC Hospital Demographics for 2007-2008 Analyses, by Average Daily Census (ADC)

                              Under 100   100-199   200-299   300+    Total   Percent
Total hospitals                   62         81        33       20      196    100.0%
Percent by census category      31.6%      41.3%     16.8%    10.2%

Ownership category
  Not-for-profit                  46         66        27       16      155     79.1%
  For-profit                       9          8         3        0       20     10.2%
  Federal government               2          3         0        1        6      3.1%
  Non-federal government           5          4         3        3       15      7.6%

Urban/rural
  Rural                           18          2         0        0       20     10.2%
  Urban                           44         79        33       20      176     89.8%

Multi-hospital system
  No                               4          7         3        6       20     10.2%
  Yes                             58         74        30       14      176     89.8%

Variables
- Three outcomes: Falls (per 1,000 patient days), Falls with Injury (excludes no injury or minor injury without loss of function), and HAPU stage 2 or greater.
- Structure indicators that are controllable by the hospital: nurse staffing (direct care hours, skill mix, patient days, nurse-patient ratios, and contracted staffing utilization), workload intensity (admissions, discharges, transfers), staff voluntary turnover, and use of sitters.
- Patient population descriptors: patient diagnosis (% medical), age, and gender.

Design
Hospital ADC size crossed with outcome above or below the median for Falls, Falls with Injury, and HAPU 2+ (a 2x2 table).
- Question 1: What is the statistically appropriate cut-point to define small hospitals based on these outcomes? (Hospital level; statistics: optimal size calculation)
- Question 2: Are there statistical differences in these outcomes between these small and large hospitals? (Unit-type level; statistics: t-tests to compare small- and large-hospital means)

Analysis
- Defined optimal dichotomous classifications of hospitals into small and large hospital size so that the resulting groups were the best predictors of outcome.
- Varied the hospital size cutoff from 30 to 310 in increments of 10.
- Calculated the overall sample median rate of each outcome (for each specific outcome and unit type) and classified facility rates as low (below the median) or high (above the median); median cut-points are robust to extreme outcome values.
- Injury-fall rates greater than 0.001 per 1,000 patient days were considered high, to keep all facilities with no injury falls in the low-rate category (the median was zero).
- This process created a two-by-two table of hospital size by outcome level (contiguous size groups and hospital groups homogeneous relative to the outcome).

Statistical Procedures
- Accuracy of prediction was measured by the logistic regression c-statistic, which approximately measures the proportion of accurate classification of units into high and low outcome rates using small or large hospital size as a predictor.
- The optimal size cut-point was the one that resulted in the highest accuracy of outcome prediction, based on the largest c-statistic.
- The c-statistic equals the area under the sensitivity by (1 - specificity) curve, so an alternative interpretation of maximizing the c-statistic is that we seek the hospital size cutoff that yields the highest sensitivity and specificity for predicting the outcome level.
- Outcomes, descriptive patient characteristics, and hospital structural variables were compared across small and large hospitals using t-tests for differences in means.

Facility-Level Analysis: What Is a Small Hospital?
Cut-points were not consistent by outcome:
- For HAPU 2+, small hospitals were identified as ADC < 120.
- For Falls, small hospitals were identified as ADC < 150.
- For Injury Falls, small hospitals were identified as ADC < 230.

Facility-Level Analysis: Are Small Hospitals Different?
- HAPU was statistically different between hospital sizes; however, Falls and Falls with Injury rates were not.
- For all analyses, the age of patients was significantly higher in small hospitals.
- For Falls and Falls with Injury, patient turnover was higher in small hospitals.
- Hours of care and staffing variables were not significantly different between small and large hospitals for these outcomes.

Table 3: Overall Analysis at the Facility Level (MS, SD, CC Combined)

HAPU 2+ (< vs. >= 120 ADC):
- HAPU 2+: p = 0.007; SL
- Age: p = 0.0001; SH

Falls (< vs. >= 150 ADC): outcome not significant
- Falls: p = 0.08*
- Workload intensity: p = 0.0002; SH
- Age: p = 0.0005; SH

Falls with Injury (< vs. >= 230 ADC): outcome not significant
- Falls with injury: p = 0.57; SL
- Workload intensity: p = 0.003; SH
- Age: p = 0.0006; SH
- % medical diagnosis: p = 0.02; SH

Reference for comparison of medians between hospital groups: SH = small hospitals higher; SL = small hospitals lower. *Smaller hospitals had a lower median but a higher mean due to outliers.

Unit-Type-Level Analysis: Are Small Hospitals Different?
- For CC, cut-points were not stable (multi-modal) for the Falls and Falls with Injury outcomes.
- Consistent with the facility-level data, the only outcome that was statistically different was HAPU 2+, and only in critical care, with smaller hospitals below the median.
- One descriptive variable was significantly different for each unit type and for all outcomes: patients in smaller hospitals were older.
- Small hospitals had fewer patient sitter hours and more patient turnover in Med/Surg and SD units, but the Falls and Falls with Injury outcomes were not different.
- Hours of care and skill mix were not significantly different between small and large hospitals for these outcomes, with the exception of licensed hours for the Falls outcome in Med/Surg.

Table 4: Analysis by Unit Type

Med/Surg
- HAPU 2+ (< vs. >= 120 ADC): outcome not significant. HAPU 2+: p = 0.37; SL. Age: p = 0.0001; SH.
- Falls (< vs. >= 150 ADC): outcome not significant. Falls: p = 0.24*. Licensed hours: p = 0.04; SH. Sitter hours: p = 0.02; SL. Workload intensity: p = 0.002; SH. Age: p = 0.0001; SH.
- Falls with Injury (< vs. >= 230 ADC): outcome not significant. Falls with injury: p = 0.62; SE. Workload intensity: p = 0.0008; SH. Age: p = 0.0002; SH.

Stepdown
- HAPU 2+: outcome not significant. HAPU 2+: p = 0.15; SL. % medical: p = 0.02; SH. Age: p = 0.003; SH.
- Falls: outcome not significant. Falls: p = 0.17; SL. Sitter hours: p = 0.05; SL. Workload intensity: p = 0.02; SH. Age: p = 0.04; SH.
- Falls with Injury: outcome not significant. Falls with injury: p = 0.42; SL. RN turnover: p = 0.05*. % male: p = 0.03; SH. Age: p = 0.04; SH.

CCU
- HAPU 2+: p = 0.006; SL. % other: p = 0.01; SL. % medical: p = 0.001; SH. Age: p = 0.02; SH.

Reference for comparison of medians between hospital groups: SH = small hospitals higher; SL = small hospitals lower; SE = small hospitals equal. *Smaller hospitals had a lower median but a higher mean due to outliers.

Table 5: Outcomes Data by Small-Hospital Cut-Points and All Average Daily Census (ADC)

All Unit Types Combined
                                    ADC        Mean   SD    Median   P
HAPU 2+                             <120       3.31   2.0   3.10    0.007
                                    120 or >   4.18   2.3   3.97
                                    All ADC    3.84   2.2   3.62
Falls per 1,000 patient days        <150       3.05   1.0   2.81    0.08
                                    150 or >   2.81   0.7   2.89
                                    All ADC    2.94   0.9   2.87
Injury falls per 1,000 pt days      <230       0.10   0.2   0.07    0.57
                                    230 or >   0.09   0.1   0.08
                                    All ADC    0.10   0.2   0.07

Table 5 (continued): Outcomes Data by Small-Hospital Cut-Points and All Average Daily Census (ADC)

Medical/Surgical Units
                                    ADC        Mean   SD     Median   P
HAPU 2+                             <120       2.97   2.3    2.53    0.37
                                    120 or >   3.27   2.1    3.17
                                    All ADC    3.16   2.2    2.87
Falls per 1,000 patient days        <150       3.37   1.2    3.10    0.24
                                    150 or >   3.18   0.9    3.32
                                    All ADC    3.28   1.06   3.20
Injury falls per 1,000 pt days      <230       0.12   0.2    0.08    0.62
                                    230 or >   0.11   0.1    0.08
                                    All ADC    0.12   0.18   0.08

Step Down Units
HAPU 2+                             <120       3.52   3.2    3.28    0.15
                                    120 or >   4.55   3.5    4.02
                                    All ADC    4.30   3.4    3.95
Falls per 1,000 patient days        <150       2.78   1.4    2.58    0.17
                                    150 or >   3.11   1.1    2.95
                                    All ADC    2.98   1.22   2.80
Injury falls per 1,000 pt days      <230       0.10   0.2    0       0.42
                                    230 or >   0.13   0.2    0.07
                                    All ADC    0.11   0.20   0.02

Critical Care Units
HAPU 2+                             <120       6.11   5.8    4.92    0.006
                                    120 or >   8.83   7.0    8.32
                                    All ADC    7.79   6.7    7.11

Discussion
There appears to be a direct relationship between the magnitude of the cut-point and the frequency of the outcome:
- The most frequent outcome was HAPU stage 2+, with a rate of about 3-8% and a cut-point of 120.
- Falls followed, with a rate of about 3 per 1,000 patient days and a cut-point of 150.
- The rarest outcome was falls with injury, with an approximate rate of 1 per 10,000 patient days and a cut-point of 230 ADC.
From a statistical point of view, this makes sense: as outcome rates become smaller, a larger hospital-size threshold is required to observe stable rates that support comparison; small hospitals under that threshold would experience no events most of the time.

Discussion: Median Versus Mean
For the Falls outcome, at both the facility level and the unit-type level for medical/surgical units:
- Small hospitals were higher than large hospitals on the mean.
- Small hospitals were lower than large hospitals on the median.
Implication for using means for benchmarking: averages can be skewed by outliers.

Implications
- For benchmarking performance, comparison of like-sized hospitals had limited value.
- Comparison against all hospitals may provide better data for front-line staff, managers and leaders, and hospital boards of directors to understand their own performance.
- Leaders may be best advised to seek comparison groups that are more descriptive of like hospitals by criteria other than hospital size:
  - rural or critical-access designations
  - population-driven descriptors, such as Veterans Affairs hospitals or specialty hospitals
  - types of facilities, such as county hospitals or university hospitals.
- Further research is needed to continue to explore data-based ADC size comparisons.

Implications (continued)
- The science of evidence-based comparison groups and risk adjustment for hospital performance indicators must continue as a priority for large datasets.
- This is an important step in refining hospital benchmarks as the quest for transparency and public reporting continues to take shape.
- These findings suggest that those using comparative benchmark data to manage, monitor, accredit, acknowledge, or reimburse hospitals need to become increasingly discriminating in viewing and interpreting size-based comparisons.