A critical evaluation of healthcare quality improvement and how organizational context drives performance


University of Iowa
Iowa Research Online
Theses and Dissertations
Spring 2013

A critical evaluation of healthcare quality improvement and how organizational context drives performance
Justin Mathew Glasgow, University of Iowa
Copyright 2013 Justin Mathew Glasgow
This dissertation is available at Iowa Research Online.

Recommended Citation
Glasgow, Justin Mathew. "A critical evaluation of healthcare quality improvement and how organizational context drives performance." PhD (Doctor of Philosophy) thesis, University of Iowa, 2013.

Part of the Clinical Epidemiology Commons

A CRITICAL EVALUATION OF HEALTHCARE QUALITY IMPROVEMENT AND HOW ORGANIZATIONAL CONTEXT DRIVES PERFORMANCE

by
Justin Mathew Glasgow

An Abstract

Of a thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree in Epidemiology in the Graduate College of The University of Iowa

May 2013

Thesis Supervisor: Associate Professor Peter J. Kaboli

ABSTRACT

This thesis explored healthcare quality improvement, considering the general question of why the last decade's worth of quality improvement (QI) had not significantly improved quality and safety. The broad objective of the thesis was to explore how hospitals perform when completing QI projects and whether any organizational characteristics were associated with that performance. First, the project evaluated a specific QI collaborative undertaken in the Veterans Affairs (VA) healthcare system. The goal of the collaborative was to improve patient flow throughout the entire care process, leading to shorter hospital length of stay (LOS) and an increased percentage of patients discharged before noon. These two goals became the primary outcomes of the analysis, which were balanced by three secondary quality-check outcomes: 30-day readmission, in-hospital mortality, and 30-day mortality. The analytic model consisted of a five-year interrupted time series examining baseline performance (two years prior to the intervention), the year during the QI collaborative, and then two years after the intervention to determine how well improvements were maintained post-intervention. The results of these models were then used to create a novel four-level classification model. Overall, the analysis indicated a significant amount of variation in performance; however, subgroup analyses could not identify any patterns among hospitals falling into specific performance categories. Given this potentially meaningful variation, the second half of the thesis worked to understand whether specific organizational characteristics provided

support or acted as key barriers to QI efforts. The first step in this process involved developing an analytic model to describe how various categories of organizational characteristics interacted to create an environment that modified a QI collaborative to produce measurable outcomes. This framework was then tested using a collection of variables extracted from two surveys, the categorized hospital performance from part one, and data mining decision trees. Although the results did not identify any strong associations between QI performance and organizational characteristics, the analysis generated a number of interesting hypotheses and some mild support for the developed conceptual model. Overall, this thesis generated more questions than it answered. Despite this, it made three key contributions to the field of healthcare QI. First, this thesis represents the most thorough comparative analysis of hospital performance on QI and was able to identify four unique hospital performance categories. Second, the developed conceptual model represents a comprehensive approach for considering how organizational characteristics modify a standardized QI initiative. Third, data mining was introduced to the field as a useful tool for analyzing large datasets and developing important hypotheses for future studies.

Abstract Approved: Thesis Supervisor
Title and Department: Associate Professor, Department of Internal Medicine
Date: October 4, 2011

A CRITICAL EVALUATION OF HEALTHCARE QUALITY IMPROVEMENT AND HOW ORGANIZATIONAL CONTEXT DRIVES PERFORMANCE

by
Justin Mathew Glasgow

A thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree in Epidemiology in the Graduate College of The University of Iowa

May 2013

Thesis Supervisor: Associate Professor Peter J. Kaboli

Graduate College
The University of Iowa
Iowa City, Iowa

CERTIFICATE OF APPROVAL

PH.D. THESIS

This is to certify that the Ph.D. thesis of Justin Mathew Glasgow has been approved by the Examining Committee for the thesis requirement for the Doctor of Philosophy degree in Epidemiology at the May 2013 graduation.

Thesis Committee:
Peter Kaboli, Thesis Supervisor
James Torner
Elizabeth Chrischilles
Ryan Carnahan
Jason Hockenberry
Jill Scott-Cawiezell

TABLE OF CONTENTS

LIST OF TABLES  iv
LIST OF FIGURES  vi

CHAPTER 1  INTRODUCTION  1
    Study Overview  7
    Summary  8

CHAPTER 2  QUALITY IMPROVEMENT COLLABORATIVES  10
    The Collaborative Approach to Quality  10
    Flow Improvement Inpatient Initiative (FIX)  17
    FIX Analysis Overview  21
    Conclusions  25

CHAPTER 3  TIME-SERIES METHODS  27
    Data Sources  27
    Data Elements  29
    Patient Cohort  29
    Risk Adjustment  31
    Time-Series Model  38
    Improvement and Sustainability  41
    Sub-group Analyses  46
    Conclusions  47

CHAPTER 4  TIME-SERIES RESULTS AND DISCUSSION  48
    System-Wide Analysis  48
    Facility Analysis  52
    Evaluation of the Specific Aims  57
    Discussion  59
    Limitations  65
    Conclusions  67

CHAPTER 5  SUPPORTING QUALITY IMPROVEMENT  69
    Relationships with Healthcare Quality  69
    Relationships with Quality Improvement Efforts  76
    Analytic Framework  79
    Conclusions  84

CHAPTER 6  ANALYTIC VARIABLES AND DATA MINING  86
    Organizational Characteristics in VA  86
    VA Hospital Organizational Context  89
    Data Mining Overview  100
    Decision Tree Development  105
    Decision Tree Interpretation  110
    Conclusions  111

CHAPTER 7  DECISION TREE RESULTS AND DISCUSSION  113
    Decision Tree Performance Metrics  113
    Individual Decision Trees  116
    Discussion  132
    Interpreting the Analytic Framework  138
    Limitations  140
    Conclusions  142

CHAPTER 8  SUMMARY AND FUTURE WORK  145
    Project Summary  145
    Human Factors and Change Management  151
    Recommendations for Improving QI  153
    Future Studies  156
    Conclusions  159

APPENDIX A  RISK ADJUSTMENT MODEL SAS CODE  160
APPENDIX B  SAS OUTPUT FOR RISK ADJUSTMENT  168
APPENDIX C  FACILITY PERFORMANCE BY SIZE AND REGION  178
APPENDIX D  FULL VARIABLE LISTS  181

REFERENCES  185

LIST OF TABLES

Table 2-1: Reported calculation of cost savings from FIX
Table 3-1: List of Outcome Measures
Table 3-2: Comparison of risk adjustment cohort to all other FY07 discharges  32
Table 3-3: List of potential risk adjustment variables, the number of discrete categories, and a description of how categories were defined
Table 3-4: Modeling of age risk adjustment categories
Table 3-5: Modeling of race risk adjustment categories
Table 3-6: Modeling of service connected risk adjustment categories
Table 3-7: Modeling of admission source risk adjustment categories
Table 3-8: Modeling of place of discharge risk adjustment categories
Table 3-9: Highly correlated risk adjustment variables
Table 3-10: Description of full classification categories
Table 4-1: Hospital classification across the 5 outcome measures (N = 130)
Table 4-2: LOS Improvers classification (N = 45)
Table 4-3: Discharge before noon Improvers classification (N = 60)
Table 4-4: P-Values from Chi-square tests examining facility performance in subgroups by size and regional location
Table 6-1: Categories for different response scales in the CPOS survey
Table 6-2: Variables measuring facility structure
Table 6-3: Variables measuring QI structure
Table 6-4: Calculated and Composite measures of QI Structure
Table 6-5: Variables measuring QI process
Table 6-6: Calculated and Composite measures of QI Process
Table 6-7: Point ranges for composite model classification
Table 7-1: Data mining sample performance classifications (N = 100)

Table 7-2: Decision tree performance metrics
Table 7-3: Count of factors in each of the decision trees
Table 7-4: List of individual and composite variables in the decision trees

LIST OF FIGURES

Figure 2-1: Model of the IHI BTS Collaborative timeline for FY07 FIX
Figure 3-1: Decision tree used to classify hospital performance
Figure 4-1: Aggregate results for LOS (FY05 - FY09)
Figure 4-2: Aggregate results for in-hospital mortality (FY05 - FY09)
Figure 4-3: Aggregate results for 30-day mortality (FY05 - FY09)
Figure 4-4: Aggregate results for discharges before noon (FY05 - FY09)
Figure 4-5: Aggregate results for 30-day readmissions (FY05 - FY09)
Figure 5-1: Analytic framework for how organizational context impacts QI
Figure 7-1: Full decision tree for LOS performance
Figure 7-2: Full decision tree for discharges before noon performance
Figure 7-3: Full decision tree for LOS/Noon composite performance
Figure 7-4: Full decision tree for overall composite performance

CHAPTER 1
INTRODUCTION

In the years since the Institute of Medicine (IOM) reported that as many as 98,000 people die each year as a result of medical errors, 1 the healthcare community has been focused on efforts to improve quality, efficiency, and safety. While considerable effort has gone into improving healthcare quality, broad measures of quality do not show the expected improvements. One common monitor of quality is the National Healthcare Quality Report (NHQR), which tracks annual performance on several quality measures. In 2008, the report found only a 1.4% average annual increase across all measures of quality, with a concomitant 0.9% average annual decrease in scores on patient safety measures. 2 The 2009 report continued the theme, noting that while it was possible to identify small pockets of success, the overall variability across the healthcare industry was too great to claim any success in improving quality and safety. 3 Further confirming the lack of improvement in quality and safety was a recent review of patient medical records by the Centers for Medicare and Medicaid Services (CMS). The review evaluated records of 780 Medicare beneficiaries recently discharged from a hospital and found that 13.5% experienced an adverse event during their hospital stay. 4 Further, an expert panel review of these adverse events determined that 44% of the events were clearly or likely preventable. 4 Taken together, the NHQR reports and the CMS chart reviews suggest a disconnect between what quality improvement (QI) efforts report in the literature and their actual success. The broad driving force behind

the research reported in this thesis is to understand potential causes for this disconnect and to explore possible modifications to the healthcare environment that will support and increase the probability of successful QI in the future. Two theories have been particularly instructive in approaching and understanding why individual reports of successful QI projects may not translate into widespread improvements in quality. First, human factors theory advocates that when designing a device or a process, careful attention must be paid to how innate limitations of human physical and mental capabilities will impact how people interact with the device or process. 5 This concept means that even the greatest of technological solutions can be unsuccessful if people cannot successfully interact with the system. Building from this idea, a potential hypothesis for why there is little overall improvement in quality is that many QI projects propose and implement solutions that impose too much additional cognitive burden on those tasked with providing high-quality care. In this situation, there may be initial success as excitement and energy related to the project are sufficient to overcome the additional cognitive burden. However, as time passes and the improvements become less of a focus, there is a reduction in task-specific energy. This eventually leads to a point where the additional cognitive burden becomes overwhelming and performance begins to decline. This sort of process would produce a QI project that initially appears successful but over time cannot sustain performance, resulting in a slow decline in quality, likely back to baseline, as providers abandon the new solution for their original process.

The other instructive theory for understanding the quality disconnect was change management theory. This theory acknowledges that going through and accepting change is a difficult and emotional process that people often resist. 6 It suggests that even if a QI effort is technically correct from a human factors perspective, resistance to change from healthcare providers could still result in an unsuccessful QI project. This sort of change resistance could help explain why a QI project that succeeds at one hospital is not successful when translated to other settings. Without the correct institutional QI culture or change management process, QI projects will not sustain their improvements and will likely have difficulty achieving even initial improvements. As a concrete example of how QI solutions may not consider the cognitive or emotional hurdles involved in improving and sustaining quality, consider that many QI projects rely predominantly on provider education as the main component of their solution. In the standard approach, providers are gathered together in a meeting room or lecture hall where someone presents them with a problem, for example a growing backlog of patients waiting to be admitted from the emergency department each afternoon. Having established the problem, the speaker asks the group to improve quality by increasing the number of inpatient discharges that occur before noon. After discussions and pushback from the audience, the presenter wraps up the presentation hoping the group is energized and ready to go fix the problem. This approach has a number of short- and long-term problems which impact the likelihood of lasting improvements in quality. Perhaps the biggest

barrier to success in this situation is the feasibility of achieving what the speaker proposes. Mornings are generally a busy time for physicians and nurses as they go through their rounds, provide care, and make plans for the rest of the day. This period may already be so busy that adding the extra cognitive task of planning and carrying out a patient discharge may not be feasible. Combine this cognitive difficulty with various emotional reactions, such as change avoidance, denial of the problem (or blaming others), or simple avoidance, and this intervention would be lucky to make an initial improvement; it certainly will not lead to sustained improvements. While this is an interesting theoretical example, the real question to ponder is whether actual QI projects generate and sustain improvements in quality across multiple healthcare settings. Unfortunately, the current QI literature predominantly consists of case reports that describe projects in a single setting and do not provide the in-depth project evaluation necessary to fully understand QI in healthcare. Even systematic reviews of QI have a hard time reaching definitive conclusions, as they generally find that project evaluations are not methodologically sound and cannot establish whether improvements in quality occurred or, if improvements were present, whether those improvements were causally related to the QI effort. 7-9 With so little focus on establishing whether interventions create initial results, it is not a surprise that few reports broach the subject of sustained quality or present any data covering the period after project completion.

Since those initial reviews, two approaches to quality improvement, Lean and Six Sigma, have become increasingly popular in healthcare. These two approaches are important because both include a specific step that emphasizes sustaining improvements after initial project completion. However, a recent systematic review of these two approaches found that few articles discussed whether project interventions led to sustained improvements. 10 Of the few cases that discussed sustained improvements, two were particularly informative about the challenges healthcare faces as it works towards sustaining QI. The first case involved an intervention targeted at reducing nosocomial urinary tract infections (UTI) using the more general approach of nursing staff education and training. 11 The initial effort resulted in a steady decrease in the number of UTIs recorded, which lasted for about a year after the intervention. However, after that year the rates slowly began to rise, erasing the initial improvements and eventually producing the highest quarterly UTI rate observed in a 4-year period. Since the unit was monitoring its UTI rates, it did respond to the increase with another round of staff education, which led to at least a temporary reduction in rates. This QI initiative mimics the prior theoretical example and highlights that relying solely on provider education is unlikely to produce sustained improvements in quality. While the root cause behind the loss of quality was not discussed in the article, there certainly could have been emotional or cognitive challenges that contributed to the nurses' inability to maintain low UTI rates.

In contrast, the second case focused on reducing catheter-related bloodstream infections (CRBSI) by identifying process changes that would not only improve quality but also reduce provider cognitive burden. The solutions in this project involved developing a system to monitor catheter dwell time, as well as the creation of a catheter insertion kit that ensured all materials were immediately available in one place. 12 This change reduced provider burden in two ways. First, by creating a method for monitoring and alerting providers about catheter dwell time, providers did not have to remember when a catheter was inserted and whether it was time to be changed or removed. Instead, they received a reminder when action was appropriate. Second, with a procedure kit there was no longer the burden of searching for necessary components in a time-pressed environment. Any time a catheter needed to be placed, only one item, the kit, needed to be located, and then everything necessary for high-quality care was available. Even though these were effective changes, long-term monitoring of CRBSI rates found a substantial spike the first winter after implementation. Review of that increase led to the identification of a specific subset of patients with characteristics different from those evaluated in the original project. This led to an additional change to the process mandating the use of antibiotic-coated catheters for select subsets of patients. While this example paints a more promising picture about the future of quality in healthcare (i.e., that well-designed process improvements can improve quality), it also reveals that fixing quality problems may require more than a single intervention.

Study Overview

As established in the introduction, this study is driven by the apparent disconnect between reports of successful QI efforts and the lack of measured improvements in healthcare quality. There are likely many root causes of this disconnect, but this study focuses first on two potential causes. First, current evaluation approaches may overestimate how well hospitals perform on QI efforts, and stronger methodologies may reveal that fewer hospitals than expected successfully improve quality. Second, those projects that do successfully improve quality initially may not be able to sustain results long term. In order to explore these two areas, the first objective of this study was to conduct an in-depth examination of whether a collection of Veterans Affairs (VA) hospitals were able to improve and sustain quality after participating in the same quality improvement collaborative, the Flow Improvement Inpatient Initiative (FIX). This analysis will address the following two specific aims:

Aim 1: Determine the impact of the FIX collaborative upon quality and efficiency as measured by LOS, percent of patients discharged before noon, in-hospital and 30-day mortality rates, and 30-day readmission rates.

Hypothesis 1: The FIX collaborative will shorten patient LOS and increase the percentage of patients discharged before noon. There will be no changes in mortality or readmission rates attributable to FIX.

Aim 2: Determine whether improvements attributable to FIX are sustained post-implementation.

Hypothesis 2a: Improvements in the outcome measures will continue on a downward slope after completion of FIX.

Hypothesis 2b: The rate of further improvements in the outcome measures after completion of FIX will be at or below the rate of pre-FIX improvements.

With this initial description of how well hospitals are able to improve and sustain quality after a QI effort, the next question becomes what can be done to increase the ability of QI to lead to sustained improvements. The goal of this analysis is to understand whether there are any structural issues that may be root-cause barriers to improvement. Therefore, the second half of this project will focus on an effort to understand what organizational characteristics may be associated with successful and unsuccessful QI projects. This will be accomplished using data mining decision trees to determine which organizational characteristics, as reported on responses to the 2007 Survey of ICUs & Acute Inpatient Medical & Surgical care in VHA (HAIG) 13 and the VA Clinical Practice Organizational Survey (CPOS), 14 are associated with different performance classifications. This analysis meets the third specific aim of this project:

Aim 3: Describe how selected organizational structures are associated with sustaining improvements.

Summary

The following chapters will introduce the reader to relevant portions of the QI literature, cover the study methods, present study results, and discuss what this means for QI efforts in healthcare. Chapter 2 begins the task of addressing

the two specific aims by discussing the collaborative approach to QI, examining the current understanding of the approach in the literature, and exploring a specific collaborative that served as the case study for analysis. Chapter 3 discusses the analytic methods, and the reasons for selecting them, for analyzing hospital performance during the QI collaborative. Chapter 4 presents and discusses the results of that analysis. The second half of the thesis then addresses the third specific aim of the study. Chapter 5 begins by summarizing the current literature examining how organizational characteristics are related to quality measures and QI efforts. The result of this discussion is the development of a new analytic framework that guides the subsequent analysis. Chapter 6 reviews data from two surveys that serve as the measures of organizational characteristics and then discusses how data mining decision trees are ideal tools for modeling the relationship between organizational characteristics and hospital performance on QI. Chapter 7 presents and discusses the results of the data mining decision trees. Finally, Chapter 8 summarizes the findings from this thesis, outlines some recommendations for hospitals to consider when trying to improve their success with QI, and concludes with a discussion of future studies that will build on this work and improve the overall understanding of how to successfully improve and sustain quality in healthcare.
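The decision-tree analysis behind Aim 3 is developed in detail in Chapters 6 and 7; as a rough preview of the idea, a classification tree relates hospital-level organizational measures to a categorical performance label. The sketch below is illustrative only, written in Python for convenience rather than the tooling used in this work, and the file and variable names are hypothetical placeholders rather than actual HAIG or CPOS items.

# Hedged sketch of the Aim 3 idea: relate organizational characteristics to
# QI performance categories with a classification tree. File and column names
# are hypothetical placeholders, not the study's actual variables.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

hospitals = pd.read_csv("hospital_context.csv")      # one row per hospital (hypothetical file)
features = ["qi_staff_fte", "leadership_support", "bed_count", "prior_collaboratives"]
X = hospitals[features]
y = hospitals["performance_category"]                # e.g., an Improver vs. non-Improver label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A shallow tree: with on the order of 130 hospitals, depth must stay small to avoid overfitting
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=features))     # readable if-then splits
print("held-out accuracy:", tree.score(X_test, y_test))

The appeal of a tree for this purpose, as discussed later, is that its readable if-then splits lend themselves to generating hypotheses about which organizational characteristics matter, rather than producing a single global coefficient per variable.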

CHAPTER 2
QUALITY IMPROVEMENT COLLABORATIVES

The goal of this chapter is to introduce the collaborative approach to quality improvement (QI), discuss the current evaluation of the approach in the literature, and examine a specific QI collaborative. The initial introduction to collaborative QI considers its origins and development by the Institute for Healthcare Improvement (IHI). The IHI collaborative model prescribes a specific approach that has been employed to tackle a broad range of QI issues. The review of the literature evaluates the success of these efforts, the current understanding of the strengths and weaknesses of the approach, and also considers the strengths and weaknesses of the literature itself. The next section of the chapter examines the Flow Improvement Inpatient Initiative (FIX), a specific QI collaborative undertaken in the Veterans Affairs (VA) healthcare system. This QI collaborative serves as the case study for all the analyses reported in this study. The review of FIX considers how it fits the IHI collaborative model and its utility as a case study to meet the goals of this thesis as well as to contribute knowledge to the broader literature. Lastly, the chapter concludes with an overview of the first two specific aims of this project.

The Collaborative Approach to Quality

First conceived by Paul Batalden, MD, and refined by others at the IHI, the QI collaborative was viewed as an effective means for overcoming a key factor limiting improvement in healthcare quality: diffusion of knowledge. 15 Batalden and the IHI felt that for many topics there was good underlying science on what needed to happen to improve quality, but because hospitals were either unaware of the

science, unable to disseminate the science among employees, or lacking the resources or experience necessary to make effective improvements, they could not implement the science in a meaningful manner to improve quality. They envisioned the QI collaborative as a process that could overcome these barriers and lead to breakthrough improvements in healthcare quality, while also helping to reduce costs. 15 This thinking led to the establishment of the IHI Breakthrough Series (BTS) collaborative, which has become the common framework for QI collaboratives in healthcare. The general concept is to have a group of hospitals that are interested in specific and similar quality goals work together to identify solutions. A benefit of the collaborative format, over traditional in-house QI efforts, is that it allows hospitals to collectively invest in relevant subject matter experts who participate by initially training and then guiding participants through the processes necessary to achieve change and improve quality. The collaborative also establishes a structure through which the participants at different hospitals communicate regularly, allowing participants to be resources to other groups so that everyone learns effective solutions for overcoming the inevitable obstacles that arise during a QI effort. In the BTS model there are three learning sessions with alternating action periods (Figure 2-1, pg. 18), most frequently distributed over a year but ranging from 6 to 15 months. 15 Each learning session is attended by at least three team members from each participating institution as well as the subject matter expert. The first learning session typically focuses on learning about the topic through

relevant training, refining the team aim, and making plans for change. Some common focuses are learning how to use the Plan-Do-Study-Act (PDSA) change cycle, how to develop specific and measurable aims, and defining the ideal state of care. The second and third learning sessions bring the teams back together to report experiences, discuss challenges, learn from other teams, and work with QI experts to apply additional skills. There is often also a final conclusion session where teams review their successes and discuss any goals moving forward. The alternating action periods are times when the teams focus on implementing improvement projects at their facility. During the action periods the various participating hospitals interact with each other through conference calls, providing regular opportunities to brainstorm solutions for any new problems. The literature reporting on collaborative QI projects suggests the approach can be successful in improving quality and disseminating QI across a variety of settings. Some example collaboratives include efforts to improve chronic heart failure (CHF) patient care, 16 reduce door-to-balloon time for heart attack care, 17 reduce fall-related injuries, 18 and improve medication reconciliation. 19 There are also reports showing collaboratives have worked in other healthcare systems, both in developed (Holland, Norway, and Australia) and developing countries. 23 While these reports describe each collaborative as a success, it is important to note that there is variation in performance across hospitals within individual collaboratives. There are also some potential systematic barriers that may either prevent participation in or greatly reduce the chance of hospital success with a

collaborative. As an example, consider the effort to improve medication reconciliation that aimed to involve all hospitals in the state of Massachusetts. The collaborative was able to recruit 88% of hospitals in the state, but the non-participating facilities were clearly distinguished by their small size and often isolated locations. 19 Of the participating hospitals, only 50% succeeded in achieving at least partial implementation of the initiatives related to improving the medication reconciliation process. For those hospitals that did not achieve partial implementation, some frequently cited barriers to success were an inability to get people to change the way they work, an inability to get clinician buy-in, and overall project complexity. 19 These barriers, particularly an inability to get buy-in or to get people to change the way they work, are directly related to the change management and human factors issues discussed earlier as challenges for QI in healthcare. Another critical consideration about the literature on collaboratives is that many articles, much like the broader QI literature, used methodologies that were limited in their ability to establish the cause-effect relationships needed to prove the effectiveness of collaboratives. The reports often focused on a team's ability to implement planned changes, as in the Massachusetts article, but this does not speak to whether the implementation was effective or led to any improvements in quality. Another common assessment approach is to have the team self-report whether they felt their efforts led to improved quality. Although the collaborative format encourages

rigorous data collection, publications rarely include any data that would increase the reader's confidence that teams were truly successful. In short, the assessments of collaboratives make it difficult to quantify what measurable improvements in quality a collaborative achieved and, further, which actions are most directly associated with any improvements. Showing this causal association is particularly important in healthcare because these collaboratives typically target highly publicized quality problems. As such, any observed improvement may be more attributable to outside events, such as continuing education sessions and conferences, which increase awareness about the topic and may result in small modifications to provider behavior. This particular problem was addressed in a study analyzing whether the CHF BTS collaborative led to improvements in care above and beyond what would have naturally occurred. 16 The study design involved sampling 4 hospitals from the collaborative and then identifying 4 control hospitals that did not participate in the collaborative and had similar hospital structures, i.e., matched controls. Using a panel of 21 common metrics for CHF care quality, the analysis identified that the collaborative sites exhibited greater improvements on 11 of them, with the strongest improvements associated with patient counseling and education metrics. For some of the metrics where there was no difference between participants and controls, there were still sizable improvements in performance. As an example, collaborative hospitals increased by 16% the percentage of patients who had their left ventricular ejection fraction (LVEF) measured, but the controls also increased LVEF testing by 13%, leading

to a non-significant comparison (p = 0.49). 16 This article highlights that observed improvements cannot always be directly attributed to the collaborative, and careful consideration should go into developing program evaluations that can best establish a causal relationship between measured improvements and collaborative efforts. VA was an early adopter of the BTS model and has used it to target adverse drug events, safety in high-risk areas, home-based primary care, fall risk, and many other patient safety areas. An example from primary care was the effort to improve, across a system of nearly 1,300 sites of care, the average number of days until the next available primary care appointment. 24 Over a four-year period the Advanced Clinic Access collaborative was able to drop the average days until the first available appointment from 42.9 to 15.7 days. On the inpatient side, a review of 134 QI teams participating in 5 different VA collaboratives found that somewhere between 51% and 68% of teams were successful in their efforts. 25 Success in this case was defined as a self-reported reduction in at least one outcome by 20% from baseline, sustained at that level for 2 months before the end of the collaborative. Some example outcomes for the collaboratives were to reduce adverse drug events, reduce infection rates, reduce caregiver stress for home-based dementia care, reduce delays in the compensation process, and reduce patient falls. A unique feature of this article is that it evaluated whether any organizational, systemic, and interpersonal characteristics of hospitals and teams were associated with performance in the collaborative. When comparing ratings at the end of the

collaborative to those at the beginning, some key findings were that low-performing teams showed reductions in their ratings of resource availability, physician participation, and team leadership. 25 In contrast, high-performing teams were more likely to report that they had worked as a team before, were part of their organization's strategic goals, and had stronger team leadership. A main takeaway from the analysis of collaboratives in VA, as well as the study of the medication reconciliation collaborative in Massachusetts, was that hospitals may face challenges that are not directly addressed in the current QI collaborative structure. Two common barriers were a lack of resources and difficulty getting support and buy-in from physicians. One consideration with these barriers, particularly the availability of resources, is whether the presence of such a barrier could be identified prior to a collaborative and, if identified, whether those hospitals should participate in a collaborative at all. It may be that a hospital needs to develop a certain baseline of behaviors before success in a QI collaborative is likely, and if those behaviors aren't present, that may be where the hospital needs to focus first. This question will be addressed as part of the third aim of this study; however, before it can be analyzed it is necessary to measure and understand which hospitals succeed in a QI collaborative. Doing so requires getting past the current style of reporting, which relies too heavily on pre-post analyses (assuming actual quantitative data are reported) that cannot establish which measured improvements are due to collaborative participation. The next sections of this chapter provide an in-depth introduction to the QI collaborative studied

throughout this research and an overview of the initial analyses undertaken to establish which hospitals improved and sustained quality as part of their participation in the collaborative.

Flow Improvement Inpatient Initiative (FIX)

The collaborative of interest for this study was the Flow Improvement Inpatient Initiative (FIX). This was a system redesign initiative undertaken in VA during fiscal year 2007 (FY07) that closely followed the IHI BTS collaborative model. The aim of the collaborative was to improve and optimize inpatient hospital flow through the continuum of inpatient care. 28, 29 The efforts focused on addressing potential barriers to smooth flow in the emergency department, operating suites, and inpatient wards. The objective was simply to identify and eliminate bottlenecks, delays, waste, and errors that may hinder a patient's smooth progression through the hospital. Some outcome measures associated with the collaborative were shorter hospital length of stay (LOS) and an increased percentage of patients discharged before noon. 30 The goal of these outcome measures was to ensure that sufficient patient beds were available (particularly in the early afternoon) for patients needing to be admitted from the emergency department (ED) or after surgical procedures. By improving bed availability, not only are patient care and safety improved, but VA also hopes to reduce the need for fee-based care, in which veterans are cared for at VA expense in private hospitals. This collaborative followed the general BTS model with 3 learning sessions and then a final wrap-up session; 31 the approximate timing of these events is outlined in Figure 2-1. In total, 130 VA hospitals participated, with

approximately 500 participants attending at least one learning session. 31 Given the need for active participation and interaction during learning sessions, the collaborative was split and implemented in five separate regions (Northeast, Southeast, Central, Midwest, and West). During the action periods, teams met at least weekly to work on their QI projects. Commonly reported projects focused on efforts to reduce LOS, reduce bed turnover time, increase the percentage of patients given a discharge appointment, increase the percentage of patients discharged before noon, decrease the time to admission from the ED, and decrease ED diversion time. 30

Figure 2-1: Model of the IHI BTS Collaborative timeline for FY07 FIX

Despite VA's prior experience with collaboratives, only a limited evaluation plan was established for FIX. Teams likely measured their performance as they worked to improve patient flow, but these data were never systematically collected. An external consulting group was tasked with evaluating success after the completion of FIX. This evaluation focused predominantly on determining whether participants were satisfied with the collaborative and felt that they gained knowledge or skills during the process. 32 However, the evaluation also considered whether there was a positive business impact or return on investment based on changes in the Observed Minus Expected LOS (OMELOS) during the collaborative. This pre-post analysis compared the FY06 OMELOS with the FY08 OMELOS for patient time in an ICU or a general acute care floor at 10 hospitals. The process also involved querying the FIX team leader at each of those hospitals to estimate what percentage of the improvement they would attribute to FIX. The average of these values was then extrapolated to the entire VA population and used to determine an estimated cost savings. An overview of these results is presented in Table 2-1. After adjusting for the estimated benefits attributable to FIX, the final conclusion was that implementation of FIX saved $141 million. In order to determine a return on investment, the analysis considered the costs at the 10 facilities related to oversight, planning, implementation, and evaluation. The extrapolated costs came to $5.8 million for VA, equating to an overall return on investment of 2,327%.
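The arithmetic behind such figures is simple. The Python sketch below is a hedged reconstruction of the style of calculation described above, not the consulting group's actual worksheet: the savings() formula is inferred from the description, and only the $141 million savings and $5.8 million cost figures are taken from the report.

# Hedged reconstruction of the extrapolation described above; the savings()
# form is inferred from the text, not taken from the report itself.
def savings(omelos_reduction_days, cost_per_day, annual_admissions, pct_attributed_to_fix):
    """Days saved per admission x cost per day x annual admissions x share credited to FIX."""
    return omelos_reduction_days * cost_per_day * annual_admissions * pct_attributed_to_fix

# Return on investment using the report's headline figures
reported_savings = 141e6   # adjusted savings attributed to FIX
program_cost = 5.8e6       # extrapolated oversight, planning, implementation, and evaluation costs
roi = (reported_savings - program_cost) / program_cost
print(f"ROI = {roi:.0%}")  # about 2,331%, in line with the reported 2,327% after rounding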

Table 2-1: Reported calculation of cost savings from FIX

        FY08 - FY06 OMELOS   Cost/day   # of Annual Admissions   Amount Saved   % Attributed to FIX
Acute   0.51 days            $                     ,000          $185 million   40.37%
ICU     0.31 days            $                     ,000          $110 million   52.18%

While impressive, these results have limitations and are insufficient for truly understanding the impact of FIX. One major concern is that the analysis uses a pre-post study design based on an unspecified single time point; i.e., the analysis does not report how many days or patients are averaged together. For any number of reasons these single time points may not accurately reflect a hospital's performance as measured by OMELOS. Particularly noteworthy is that OMELOS fluctuates, sometimes significantly at smaller hospitals, and yet the report provided no indication of how much variation was associated with the measure. Further, LOS has a documented pre-existing temporal trend, which was not considered and could account for a considerable proportion of the observed improvements. 33, 34 Although the analysis adjusted for self-perceived impact, that measure only considers whether the QI team felt they had targeted activities that would impact OMELOS, not whether they felt those activities were responsible for a specific reduction in OMELOS. Beyond the potentially misleading conclusions about reductions in OMELOS, the cost savings calculations also have two important limitations. First, the calculations assume that inpatient costs are distributed evenly throughout the inpatient stay, which is unlikely to be true. Second, with much of the involved costs representing fixed expenses, a reduction in LOS only represents a

savings to VA if it allows fee-based care to be avoided. Unfortunately, diversion rates and fee-based care costs are not systematically collected or available for analysis. One final consideration about the analysis: the final report did not come out until May 2010, yet there was no attempt to consider how well hospitals maintained improvements after the completion of FIX. The sustainability of interventions is a major component of achieving high-quality care, yet there is no assessment of it in any collaborative reports. The next section shows how, even retrospectively, it is possible to conduct an in-depth study of FIX that provides insight into whether hospitals were able to improve outcomes and then sustain quality after participating in FIX.

FIX Analysis Overview

There are a number of challenges in developing a study for analyzing FIX, yet the FIX collaborative has some important characteristics that make it an ideal collaborative to study. First, the goals of FIX make it amenable to a retrospective analysis that uses available administrative data sets. The two primary outcomes of FIX, LOS and discharge before noon, are easily ascertained in administrative records of patient stays. Second, since FIX occurred in FY07, there are now two years of data available to analyze whether initial improvements in outcomes were sustained after FIX. Third, FIX occurred at the same time as two major surveys that assessed organizational characteristics in VA hospitals. These two surveys will play a major role in the second half of this study as it attempts to identify characteristics that distinguish sites on their ability to succeed during FIX.

As a final strength, FIX was in effect 5 simultaneous collaboratives, providing a large sample (130 hospitals) and offering the possibility of some sub-group analyses. Given these strengths, FIX was selected to serve as a case study that could help identify whether a QI collaborative leads to quantifiable improvements in quality, whether hospitals sustain those improvements, whether there is significant variation in performance, and whether organizational characteristics might help explain success or failure in the collaborative. As discussed in the literature reviews, the ideal study would involve an analysis that would either establish or provide strong arguments for a cause-effect relationship between specific improvements and changes in the outcomes. Unfortunately, there were no data defining the specific improvements implemented by teams. Without this, or other qualitative assessments from the teams, it was impossible to suggest a causal relationship between FIX and the observations of this study. Instead, the study strives to use a methodologically strong quasi-experimental approach that provides some support for suggesting that any identified improvements were attributable to FIX. One such approach could be a case-control study such as that used to analyze improvements in the CHF collaborative. However, this is not a possibility, since FIX involved all VA acute care hospitals, eliminating any natural controls. Additionally, selecting private sector hospitals as controls would be unrealistic, as the unique structural characteristics (i.e., federal funding, a comprehensive electronic medical record, extensive catchment areas) of VA hospitals make

direct comparisons difficult. Instead, this study employed an interrupted time-series analysis. The exclusion of a case-control study, combined with the use of administrative data, leaves the options for analyzing FIX as structural equation modeling, latent growth curve modeling, hierarchical linear modeling, and time-series analysis. 35 Of these four choices, hierarchical linear models and time-series analysis are best suited for analyzing and understanding the changes over time in outcomes such as LOS and discharges before noon. Since a separate outcome model is planned for each facility, all measures are at the individual level, and the utility of hierarchical linear models would be for analyzing the data as a repeated measures model. In comparing a repeated measures approach with a time-series model, the trade-off is between a greater ability to model the correlation between individuals (hierarchical model) and the correlation between events over time (time-series). This analysis chooses to focus on the correlation between events over time (i.e., uses a time-series analysis) for three reasons. First, the ability to risk-adjust for different patient characteristics provides some protection against correlation between individuals at a facility that may impact their outcome. Furthermore, since most admissions represent a unique case (rather than a related readmission), risk adjustment better adjusts for correlation between individuals than repeated measures hierarchical models would. Second, the use of time-series models allows more flexibility for evaluating and adjusting for auto-regressive relationships in the data. There is a notable relationship between outcomes on

separate days, the strength of which dissipates over time. Further, there is the potential for periodicity effects (e.g., weekly, seasonal). While these are not commonly found in healthcare outcomes, an analysis of this type should evaluate for their presence. Third, time-series models are considered most appropriate when the question of interest focuses on the impact of an intervention at the system level rather than the individual level. 36 While some individuals may have benefited more from the FIX initiative than others, the general hypothesis was that FIX resulted in systemic changes and that benefits were essentially uniform across individuals. A risk-adjusted time-series model provides the best balance of adjustment for individual characteristics and correlation between data points over time while focusing on the key underlying question of what impact FIX had on the ability of each facility to provide high-quality care. Based on these considerations, it was determined that an interrupted time-series evaluation was the strongest study design for taking into account the pre-existing temporal trends in the data that might help explain observed improvements, as well as for indicating whether facilities were able to sustain improvements after FIX. The primary outcomes of the analysis will be LOS and the percent of patients discharged before noon, in order to directly reflect the goals of FIX. Additionally, three secondary outcomes (in-hospital mortality, 30-day mortality, and 30-day all-cause readmission) will be evaluated. The purpose of these secondary outcomes was to ensure that improvements in the primary outcomes were not associated with reductions in quality on other quality measures. The analyses of FIX address the following two specific aims:

Aim 1: Determine the impact of the FIX collaborative upon quality and efficiency as measured by LOS, percent of patients discharged before noon, in-hospital and 30-day mortality rates, and 30-day readmission rates.

Hypothesis 1: The FIX collaborative will shorten patient LOS and increase the percentage of patients discharged before noon. There will be no changes in mortality or readmission rates attributable to FIX.

Aim 2: Determine whether improvements attributable to FIX are sustained post-implementation.

Hypothesis 2a: Improvements in the outcome measures will continue on a downward slope after completion of FIX.

Hypothesis 2b: The rate of further improvements in the outcome measures after completion of FIX will be at or below the rate of pre-FIX improvements.

Conclusions

This chapter established the background for this analysis of FIX as a case study representing quality improvement in healthcare. The first half of the chapter discussed the IHI's development of the collaborative model and its utility for supporting broad improvements in healthcare quality. This introduction was followed by a review of the collaborative literature, which suggested that while collaboratives do generate improvements, individual hospitals vary in their success. Additionally, the findings were weakened because they frequently relied on team self-report of success in implementing project components or improving outcomes. The second half of the chapter moved from the broad

literature to discuss the FIX collaborative and how an analysis of that collaborative could improve the understanding of collaboratives as well as begin to address the questions of this thesis. Lastly, the chapter reviewed several potential analytic approaches and identified the reasons for selecting an interrupted time-series model for analyzing FIX. The upcoming chapter provides further detail on the methods used to risk-adjust the five outcomes of interest and then develop the final time-series models for evaluating FIX.
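Before turning to those methods, it may help to see the general shape of a segmented (interrupted) time-series regression. The Python sketch below is an illustration under stated assumptions, not the model actually fit in Chapter 3: the data file, the column names, and the AR(1) error structure are hypothetical, and only the FY07 intervention window is taken from the FIX timeline described above.

# Minimal sketch of a segmented (interrupted) time-series regression for one
# facility's daily risk-adjusted LOS. Illustrative only: file and column names
# and the AR(1) error structure are assumptions, not the dissertation's model.
import pandas as pd
import statsmodels.api as sm

ts = pd.read_csv("facility_daily_los.csv", parse_dates=["day"])   # hypothetical file
ts["t"] = range(len(ts))                                           # days since start of FY05
ts["during_fix"] = ts["day"].between("2006-10-01", "2007-09-30").astype(int)   # FY07
ts["after_fix"] = (ts["day"] > "2007-09-30").astype(int)
# Slope-change terms: time elapsed within each post-baseline segment
ts["t_during"] = (ts["t"] - ts.loc[ts["during_fix"] == 1, "t"].min()).clip(lower=0) * ts["during_fix"]
ts["t_after"] = (ts["t"] - ts.loc[ts["after_fix"] == 1, "t"].min()).clip(lower=0) * ts["after_fix"]

exog = sm.add_constant(ts[["t", "during_fix", "t_during", "after_fix", "t_after"]])
# Regression with AR(1) errors to account for day-to-day autocorrelation
model = sm.tsa.SARIMAX(ts["risk_adj_los"], exog=exog, order=(1, 0, 0))
print(model.fit(disp=False).summary())

In a specification of this form, the coefficients on during_fix and after_fix capture level shifts at the start and end of the collaborative year, while t_during and t_after capture changes in slope relative to the baseline trend, which is the sustainability question posed in Aim 2.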

CHAPTER 3
TIME-SERIES METHODS

This chapter presents the methods used to address the first two specific aims of this research, which focus on understanding the impact of the Flow Improvement Inpatient Initiative (FIX) on five outcome measures. The initial sections of the chapter describe the data sources used in this analysis and define the patient cohort. Subsequently, there is a discussion of the process used to develop the risk-adjustment models for each outcome. The risk-adjusted patient values are then input into a time-series model, with the final parameters calculated in this model serving to determine hospital performance on each of the outcomes. Finally, the chapter discusses a classification scheme, developed based on the potential outcomes of the time-series model, that was used to group hospitals into performance categories to facilitate subsequent analyses.

Data Sources

Data for this study came from VA administrative discharge records. While administrative databases were not originally intended for research, they have played a valuable role in health services research in the Veterans Affairs (VA) healthcare system. 37, 38 Based on the 1972 Uniform Hospital Discharge Data Set (UHDDS), 39 healthcare administrative databases have a standard form which includes patient demographics as well as the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes that serve as a proxy for clinical status. The accuracy of some ICD-9-CM codes has been challenged, but a VA study on the level of agreement between administrative and medical records data reported kappa statistics of 0.92 for demographics, 0.75 for

principal diagnosis, and 0.53 for bed section. 40 Variables to determine patient outcomes and adjust for severity at admission come from several existing administrative databases compiled at the Austin Automation Center for all VA hospitals. These files include: 1) the Patient Treatment File (PTF); 2) the Enrollment File; and 3) the Vital Status File. All files were linked using unique patient identifiers, which also allow a patient to be followed over time to detect a sequence of hospital visits. PTF data are updated on a quarterly basis as SAS datasets and provided the majority of descriptive variables for the patient outcomes and risk adjustment models. Available data fields were derived from 45,000 data fields contained within the Veterans Health Information Systems and Technology Architecture (VISTA). Quality control protocols ensure data fields contain appropriate numbers and types of characters. VISTA modules cover a variety of important hospital services and functions including admission, discharge, transfer, scheduling, pharmacy, laboratory, and radiology. The Enrollment File contains details on basic demographic variables as well as VA-specific measures such as a listing of medical conditions that are considered directly connected to military service. The Vital Status File combines data from four sources: the VA Beneficiary Identification and Record Locator System (BIRLS), the VA Patient Treatment File (PTF), the Social Security Administration (SSA) death master file, and the Medicare vital status file. It provides date of death for VA users with a sensitivity of 98.3% and specificity of 99.8% compared to the National Death Index. 41
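As a hedged sketch of how these three sources fit together, the Python snippet below links them on a patient identifier and orders each patient's stays. The file and column names are hypothetical placeholders; the actual extracts are quarterly SAS datasets with VA-specific field names.

# Hedged sketch of linking the three administrative sources on a patient
# identifier. File and column names are hypothetical placeholders.
import pandas as pd

ptf = pd.read_csv("ptf_discharges.csv", parse_dates=["admit_dt", "discharge_dt"])
enrollment = pd.read_csv("enrollment.csv")            # demographics, service connection
vital = pd.read_csv("vital_status.csv", parse_dates=["death_dt"])

cohort = (ptf
          .merge(enrollment, on="patient_id", how="left")
          .merge(vital, on="patient_id", how="left"))

# Sorting by patient and admission date lets later steps walk each patient's
# sequence of stays (e.g., to flag 30-day readmissions).
cohort = cohort.sort_values(["patient_id", "admit_dt"]).reset_index(drop=True)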

Data Elements

This study analyzes five outcomes (Table 3-1), two of which are primary outcomes while the other three are secondary outcomes. The primary outcomes, length of stay (LOS) and percent of discharges before noon, were chosen to reflect the stated goals of the FIX collaborative. As stated in Hypothesis 1, FIX is expected to result in improved performance on these outcomes. The secondary outcomes, 30-day all-cause readmission, 30-day mortality, and in-hospital mortality, serve as quality checks focused on identifying whether the efforts to improve patient flow led to any unintended consequences. The hypothesis was that there would be no changes attributable to FIX in any of the secondary outcomes. For the purpose of defining readmissions, an index admission is any new admission that begins a 30-day period, with any subsequent admission within 30 days classified as a readmission. A readmission cannot itself count as an index admission for a later admission, although the initial index admission could potentially have multiple associated readmissions. Visits to an emergency department and admissions to non-VA hospitals are not captured in these data.

Patient Cohort

The study population was all patients admitted to acute medical care in each of 130 VA hospitals between FY05 and FY09. This includes patients directly admitted as medical patients (as opposed to surgical patients) to an ICU, as well as those admitted and discharged under observation status. While observation patients are billed as outpatients, they are important to include in this analysis for

a couple of reasons. First, an ability to discharge a patient within 24 hours (the standard set in VA to maintain observation status) may be a sign of good patient flow, so removing these patients from the analyses could inadvertently penalize facilities for some of their improvements. Second, there is inconsistent use of observation status (reflecting policy issues as well as patient flow) across VA. A quick analysis identified 9 facilities that had never used observation status and one facility that classified 50% of admissions as observation patients. With no direct understanding of how high or low use of observation status impacts patient outcomes, exclusion of observation patients could have severe unknown consequences for the evaluation. Lastly, observation patients are treated on the same wards as traditional acute admissions, meaning their presence impacts the overall flow and provider workload on medical wards, making it inappropriate to exclude them from these analyses.

Table 3-1: List of Outcome Measures

Variable                Type         Description
Length of Stay          Continuous   Calculated: Time of Discharge - Time of Admission
Noon Discharge          Rate         Percentage of patients discharged before noon
30-Day Readmission      Rate         Any readmission to any VA hospital
30-Day Mortality        Rate         Death recorded during the hospital stay or within 30 days of discharge
In-Hospital Mortality   Rate         Death recorded during the hospital stay
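The index-versus-readmission rule defined in the Data Elements section can be made concrete with a short sketch. The Python below is illustrative only and assumes, as one reading of the rule above, that the 30-day window is counted from the index stay's discharge; the column names and example dates are hypothetical.

# Hedged sketch of flagging index admissions and 30-day readmissions for one
# patient's stays, assuming the window runs from the index stay's discharge.
import pandas as pd

def flag_admissions(patient_stays: pd.DataFrame) -> pd.DataFrame:
    """Label each stay for one patient as 'index' or 'readmission'.

    A stay is a readmission if it begins within 30 days of the most recent
    index stay's discharge; readmissions never become index stays themselves.
    """
    stays = patient_stays.sort_values("admit_dt").copy()
    labels, index_discharge = [], None
    for admit, discharge in zip(stays["admit_dt"], stays["discharge_dt"]):
        if index_discharge is not None and (admit - index_discharge).days <= 30:
            labels.append("readmission")          # counts against the current index stay
        else:
            labels.append("index")                # starts a new 30-day window
            index_discharge = discharge
    stays["admission_type"] = labels
    return stays

# Example with hypothetical dates: the second stay is a readmission, while the
# third (more than 30 days after the index discharge) starts a new index stay.
demo = pd.DataFrame({
    "admit_dt": pd.to_datetime(["2007-01-02", "2007-01-20", "2007-04-01"]),
    "discharge_dt": pd.to_datetime(["2007-01-05", "2007-01-24", "2007-04-06"]),
})
print(flag_admissions(demo)["admission_type"].tolist())   # ['index', 'readmission', 'index']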

Risk Adjustment

Separate risk-adjustment models were developed for each of the outcome measures before modeling outcomes in the time-series equations. Risk adjustment evaluation was done in a cohort of patients discharged in FY07. Following standard VA procedure, a cohort was identified that represented a stratified sample of 10 VA hospitals covering each of the five geographic regions (Northeast, Southeast, South, Midwest, West).42 One large (>200 medical/surgical beds) and one medium (100-199 medical/surgical beds) VA hospital were randomly sampled to represent each region. Small facilities were not included because their small volumes can lead to dramatic variation, which can have adverse effects on the final risk adjustment coefficients. The final risk adjustment cohort represented 42,725 discharges in FY07. Table 3-2 provides a comparison of some basic descriptive statistics between the risk adjustment cohort and all other FY07 discharges. While the vast majority of these comparisons were statistically different, these differences were attributable to the large sample sizes and do not represent meaningful clinical differences. The only concerning difference in the table is the difference between the two groups in the proportion of missing race information. This example shows why data from small facilities can be problematic and why they are not included in risk adjustment model evaluation for VA data. A broad collection of variables, listed in Table 3-3, measuring patient socio-demographics, primary diagnosis, diagnosed comorbidities, and admission and discharge characteristics were evaluated to determine their impact on each outcome measure.

Modeling for LOS was done on the log scale due to the skewed nature of LOS data.43 All other outcomes were treated as rates and modeled with binomial distributions.

Table 3-2: Comparison of risk adjustment cohort to all other FY07 discharges
                                  Risk Adjustment     All Other FY07
                                  (N=42,725)          (N=291,484)        p-value
Age (SD)                          (12.85)             (13.11)            <0.001
Male (%)                          41,032 (96.0%)      279,735 (96.0%)    0.50
Income (SD)                       23,275 (47,775)     22,162 (42,390)    <0.001
Race
  White (%)                       26,511 (62.1%)      146,748 (50.4%)    <0.001
  Black (%)                       7,614 (17.8%)       43,571 (15.0%)     <0.001
  Hispanic (%)                    668 (1.6%)          3,171 (1.1%)       <0.001
  Asian / Pacific Islander (%)    407 (1.0%)          2,111 (0.7%)       <0.001
  Native American (%)             161 (0.4%)          1,375 (0.5%)
  Missing (%)                     7,964 (18.6%)       97,069 (33.3%)     <0.001
ICU Direct Admit (%)              7,471 (17.5%)       53,685 (18.4%)     <0.001
Un-adjusted LOS (SD)              5.43 (8.90)         5.22 (8.08)        <0.001
Died In-Hospital (%)              1,104 (2.6%)        8,311 (2.85%)
Discharge Before Noon (%)         7,082 (16.6%)       54,075 (18.6%)     <0.001
All Cause Readmit (%)             6,332 (15.3%)       42,995 (15.3%)     0.85

Table 3-3: List of potential risk adjustment variables, the number of discrete categories, and a description of how categories were defined

Socio-demographics
  Age (10): Everyone under 45*, 5-year increments from 45 to 84, everyone 85 and older
  Sex (2): Male*, Female
  Marital Status (6): Married*, Divorced, Never Married, Separated, Unknown, Widowed
  Income: Continuous variable
  Race (4): White*, Asian / Pacific Islander, Missing, Other (includes Black, Hispanic, Native Am.)
  Service Connected (3): Percentage that the admission condition is connected to military service; 0%*, 10-90%, 100%

Admission
  Primary Diagnosis (25): Major Diagnostic Code categories; Circulatory System*
  Comorbidities (41): Quan adjustment to the Elixhauser algorithm44,45
  Source (9): Direct*, VA Nursing Home, Community Nursing Home, Outpatient, Observation, Community Hospital, VA Hospital, Federal Hospital
  Direct to ICU (2): Yes / No

Discharge
  Place of Discharge (13): Community*, Irregular, Death, VA hospital, Federal hospital, Community hospital, VA nursing home, Community nursing home, State home nursing, Boarding house, Paid home care, Home-based primary care, Hospice
  Type of Discharge (7): Regular*, Discharge of a committed patient for a 30-day trial, Discharge of a nursing home patient due to 6-month limitation, Irregular, Transfer, Death with autopsy, Death without autopsy
  Died In-Hospital (2): Yes / No
  Transferred out of Hospital (2): Yes / No

* Reference category

The first decision in the risk adjustment process was to identify the appropriate number of categories for some of the variables. This was done by running univariate categorical models determining the predictive association between each category and LOS. The goal in this process was to maximize model fit (as measured by the Akaike information criterion (AIC)) while working towards a parsimonious list of categories. Specifically, the aim was to identify a collection of categories for each variable in which the individual point estimates for each category were statistically significant. As an example, the field for place of discharge took on 26 different values in the administrative files, with 14 of these values having non-significant point estimates in the initial full model. While a number of these categories are meaningful for administrative purposes, they have no clinical significance. Therefore, categories such as military hospital, other federal hospital, and other government hospital were grouped together and the models re-evaluated. For some small groups there were no ideal clinical comparisons, in which case categories were grouped by the similarity of their initial point estimates. This process was iterated, trialing different groupings as necessary, until the best model (lowest AIC, all categories significant) was identified. Full details on the modeling process for Age (Table 3-4), Race (Table 3-5), Percent Service Connected (Table 3-6), Admission Source (Table 3-7), and Place of Discharge (Table 3-8) are available in their respective tables. No changes were necessary for Marital Status or Type of Discharge. A full description of the individual categories has been published elsewhere.46
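The iterative grouping process can be sketched as a loop that refits the univariate model for each candidate grouping and compares AIC values. The thesis implemented this in SAS (Appendix A); the fragment below is an illustrative Python equivalent, and the column names (discharge_place, los_days) and candidate mappings are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical candidate groupings for a discharge-destination field; each
    # dictionary maps raw administrative codes onto a coarser category.
    candidate_groupings = {
        "all_codes": None,  # keep every raw code as its own category
        "federal_combined": {"military_hosp": "federal",
                             "other_federal": "federal",
                             "other_gov": "federal"},
    }

    def grouping_aic(discharges: pd.DataFrame, mapping) -> float:
        """Fit log(LOS) on one categorical predictor and return the model AIC."""
        work = discharges.copy()
        work["category"] = (work["discharge_place"] if mapping is None
                            else work["discharge_place"].replace(mapping))
        # Zero-day stays would need special handling before taking the log.
        work["log_los"] = np.log(work["los_days"])
        return smf.ols("log_los ~ C(category)", data=work).fit().aic

    # The grouping with the lowest AIC (and all levels significant, checked via
    # the fitted model's p-values) would be carried forward:
    # best = min(candidate_groupings,
    #            key=lambda k: grouping_aic(discharges, candidate_groupings[k]))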

Table 3-4: Modeling of age risk adjustment categories
Candidate models (compared by AIC) treated age as a continuous variable or grouped it as: under 60 and 60 and older; under 40, [40, 60), [60, 80), and 80 and older; 5-year increments starting below 20; 10-year increments starting below 20; 5-year increments starting below 25; and the final grouping of under 45, 5-year increments, and 85 and older.

Table 3-5: Modeling of race risk adjustment categories
With all six coded categories*, Native American and Hispanic were non-significant; alternative groupings compared were White, Asian/Pacific Islander, Missing, All others and Black, Asian/Pacific Islander, Missing, All others.
* Coded categories are: White, Black, Hispanic, Asian/Pacific Islander, Native American, Missing

Table 3-6: Modeling of service connected risk adjustment categories
With all 11 recorded values*, the 40, 50, 70, and 90 percent levels were insignificant; alternative groupings compared were 0, [10, 90], 100; grouping into larger increments; and 0, [10, 50], [60, 90], 100.
* Service connected is recorded in increments of 10 from 0-100

Table 3-7: Modeling of admission source risk adjustment categories
Starting from the 19 coded values*, categories 1E, 1H, 1J, 1L, 1R, 1S, 2A, 2B, 2C, 3B, and 3E were non-significant in the initial model; subsequent models paired 1E, 1J, 1L, 1R, 1S with 1P, paired 1G with 1H, grouped 2A, 2B, 2C as 2A, and paired 3B, 3E with 3C; 2A remained marginal (p=.0573) and was then trialed paired with 1M and with 1P.
* See VA data documentation for a complete listing of fields.46

Table 3-8: Modeling of place of discharge risk adjustment categories
Starting from the 26 coded values*, categories 1, 2, 3, 12, 13, 15, 16, 19, 20, 21, 27, 29, 34, and 35 were non-significant in the initial model; subsequent models paired 1, 2, 3 as 3, paired 12, 13, 15, 20 with 11, paired 16, 19 with 17, paired 27 with 5, and grouped 21, 29, 35 as 21 before the final categorization was selected.
* See VA data documentation for a complete listing of fields.46 Categories 9, 10, & 14 were not recorded for any discharges in the study.

Once the final categorizations were set, the next step in the risk adjustment model development was to evaluate the univariate relationships between each outcome and the potential risk adjustment variables. All variables having a p<0.1 association in univariate analyses were included in the initial full model for that outcome. Reduced models were then generated by removing variables that did not meet a determined threshold; the full sequence of steps taken to develop each model is detailed in the SAS code available in Appendix A.
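For a binary outcome such as 30-day readmission, the screening and reduction steps could look like the sketch below. This is not the SAS code from Appendix A, only an illustration: the p < 0.1 univariate screen and the backward removal of the least significant variable mirror the procedure described above, with the AIC of each fit available for the final comparison.

    import pandas as pd
    import statsmodels.formula.api as smf

    def univariate_screen(df: pd.DataFrame, outcome: str, candidates: list) -> list:
        """Keep candidate variables with an overall p < 0.1 association."""
        kept = []
        for var in candidates:
            fit = smf.logit(f"{outcome} ~ C({var})", data=df).fit(disp=False)
            if fit.llr_pvalue < 0.1:  # likelihood-ratio test for the whole variable
                kept.append(var)
        return kept

    def backward_eliminate(df: pd.DataFrame, outcome: str, predictors: list,
                           threshold: float = 0.05) -> list:
        """Drop the least significant predictor until all remaining ones meet the
        threshold (a simplification: removal is triggered by the worst single
        coefficient of a categorical variable)."""
        current = list(predictors)
        while current:
            formula = f"{outcome} ~ " + " + ".join(f"C({v})" for v in current)
            fit = smf.logit(formula, data=df).fit(disp=False)
            worst = fit.pvalues.drop("Intercept").idxmax()
            if fit.pvalues[worst] <= threshold:
                return current
            current = [v for v in current if not worst.startswith(f"C({v})")]
        return current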

The goal of model selection was to identify the simplest model with the best AIC. In instances where AIC values were too similar (within 2 points), the model with the greater number of variables was selected, even if some variables were only marginally significant. The model evaluation process also checked for potential correlations between variables. Correlation was tested between single-level variables (e.g., comorbidities, direct admission to ICU). Correlation between multi-level variables was not assessed, but potential correlations, such as between place of discharge and type of discharge, were never relevant in identifying the best model. The key correlations that were identified and evaluated as necessary during model development are listed in Table 3-9.

Table 3-9: Highly correlated risk adjustment variables
Variable 1             Variable 2                Correlation (ρ)
Rheumatic Arthritis    Arthritis                 0.88
Paralysis              Hemiparesis               0.82
Renal Disease          Complicated Hypertension  0.81
Renal Failure          Complicated Hypertension  0.81
Mild Liver             Liver                     0.98
Nonmetastatic Cancer   Malignancy                0.92
Ulcer No Bleed         Peptic Ulcer              0.82
Renal Disease          Renal Failure             1.00

Once final risk adjustment models were developed, a second cohort of 60,000 patients was randomly sampled from all FY07 discharges; this random sample included patients from small facilities and discharges from the original risk adjustment cohort. The final models were run in this cohort to verify model performance and generate point estimates for use in risk adjustment. A listing of these final point estimates is available in Appendix B. These risk-adjustment point estimates were used to calculate the expected outcome for each patient, which was used to determine the indirectly adjusted outcome.
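A minimal sketch of the indirect adjustment, assuming the standard observed-to-expected form (the text does not spell out the exact formula): each discharge carries an expected value computed from the fixed FY07 coefficients, and facility performance in each period is the O/E ratio scaled by a system-wide reference rate. The column names (station, period_14d, observed, expected) are hypothetical.

    import pandas as pd

    def indirectly_adjusted(discharges: pd.DataFrame, reference_rate: float) -> pd.Series:
        """Indirectly standardized outcome for each facility and 14-day period.

        'observed' is the outcome recorded for each discharge (0/1 or LOS) and
        'expected' is the model-predicted value from the risk-adjustment
        coefficients.
        """
        grouped = discharges.groupby(["station", "period_14d"])
        oe_ratio = grouped["observed"].sum() / grouped["expected"].sum()
        return oe_ratio * reference_rate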

Time-Series Model

There are several issues, many discussed in Chapter 2, to consider in determining how best to model and evaluate the impact of FIX. This study employed an interrupted time-series model given the design's ability to account for pre-existing temporal trends, to allow for evaluation of the outcomes after the intervention, and to protect against some threats to internal validity in comparison to other quasi-experimental designs. All outcomes were individually modeled using a time-series analysis covering five years, from the start of FY05 (October 1, 2004) through the end of FY09 (September 30, 2009). This provided two years of data prior to FIX, which established baseline performance, a year of data identifying whether hospitals made improvements during FIX, and two years of data identifying whether those hospitals that improved were able to sustain those improvements. After determining the risk-adjusted outcomes, the next step was to determine the best level of outcome aggregation. At the individual patient level, LOS and the rates of the other outcomes were highly variable, so modeling at that level would make it difficult to detect meaningful changes in any outcome due to excessive variability, or noise, in the signal. Conversely, modeling at a highly aggregated level, such as a 6-month mean, would potentially ignore key fluctuations in the outcome measures.

This study settled on having each data point represent a fourteen-day average, which results in 26 data points per year, or 130 data points over the 5 study years. This level of outcome aggregation was based on power calculations determining the appropriate tradeoff between variability and the overall number of time points. Assuming moderate autocorrelation (φ = 0.3), these models have a power of 0.88 to detect a change in the outcome in response to the intervention equivalent to one standard deviation (power = 0.87 for detecting sustainability).47,48 While a simple 14-day average works well for the LOS and discharges-before-noon models, it presents a challenge for the other models, most notably in smaller VA hospitals, where it is reasonable to expect 14-day periods without any observed outcomes, particularly for in-hospital mortality. To avoid the unnecessary variance introduced by this possibility, the outcome models for readmission and mortality rates were plotted every 14 days, but each point represents a moving average of the previous 70 days (5 data points). This did result in these time series being shortened by 4 data points at the beginning. The final concern in developing this model, which supports the selection of a time-series model, is that these data are unlikely to meet the assumptions of standard linear regression. Most importantly, while each discharge was essentially an independent event, it was not appropriate to assume independent error terms. Therefore, all models were evaluated and adjusted for correlation between error terms. The potential for autocorrelation was evaluated up to 26 time points, allowing for capture of seasonal correlation up to a year.
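The 14-day aggregation and the 70-day (5-point) moving average described above can be sketched as follows. The thesis performed this step in SAS, so this is only an illustration, the column names are hypothetical, and the moving average is assumed to cover the current point plus the four prior points.

    import pandas as pd

    def aggregate_series(discharges: pd.DataFrame) -> pd.DataFrame:
        """Collapse adjusted, discharge-level outcomes into 14-day data points.

        Applied within one facility (e.g., inside a groupby('station') loop).
        LOS and noon discharge use plain 14-day means; the readmission and
        mortality rates additionally use a 5-point (70-day) moving average,
        which leaves the first four points undefined.
        """
        cols = ["los_adj", "noon_adj", "readmit_30d", "mort_30d", "mort_inhosp"]
        by_period = (discharges.set_index("discharge_date")[cols]
                     .resample("14D").mean())
        for rate in ["readmit_30d", "mort_30d", "mort_inhosp"]:
            by_period[rate] = by_period[rate].rolling(window=5).mean()
        return by_period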

The second concern was that the measures may not have homoscedastic variance. There were two potential sources of heteroscedasticity in this analysis. First, a different number of discharges may be averaged into each 14-day measure. Second, as the outcomes improve they may approach a floor at which no further improvements are possible, and the variance around that point is likely to tighten. All models were evaluated for autocorrelation and heteroscedasticity and, when either was identified, corrected for it.49

With the above considerations, the final form of the basic outcome model was:

y_t = β0 + β1 t05 + β2 t06 + β3 t07 + β4 t08 + β5 t09 + β6 t05² + v_t

In this model, β1 through β5 represent the slopes associated with the modeled outcome during FY05 through FY09, respectively. The time component is parameterized to create a continuous linear regression, so t05 counts from 0 to 129, while t06 is 0 for the first 27 time points and then begins counting. This parameterization continues, with each subsequent year beginning 26 points later; thus t07 first equals 1 at time point 53, t08 at 79, and t09 at 105. The β6 term represents a quadratic component of the overall trend and was only included in models where it was significant (p<0.05). The final component of this model, v_t, represents the autocorrelated error term:

v_t = φ1 v_(t-1) + φ2 v_(t-2) + ... + φ26 v_(t-26) + e_t

In this equation, φ_j represents the degree of correlation between the error term at the current time point and the error term j points earlier.

For these models, only those correlations that were statistically significant (p<0.05) were included in the final model of v_t. The final component of the model is the remaining error term, e_t, which carries the typical assumption in linear regression that error terms are normally distributed with mean 0 and variance σ². However, as discussed, these data may not fit this assumption, so when heteroscedasticity was detected a variance term h_t was estimated and used to correct for the changing error variance.
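One way to express this model in code is sketched below: the design matrix reproduces the piecewise time parameterization just described, and a feasible generalized least squares fit with autoregressive errors stands in for the SAS procedure actually used. The sketch omits the heteroscedasticity correction and the pruning of non-significant autocorrelation lags and quadratic term.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def fiscal_year_design(n_points: int = 130, per_year: int = 26) -> pd.DataFrame:
        """Piecewise time terms: t05 counts over the whole series, while each
        later year's term stays at 0 until that fiscal year begins."""
        t = np.arange(n_points)
        X = pd.DataFrame({"t05": t})
        for i, name in enumerate(["t06", "t07", "t08", "t09"], start=1):
            X[name] = np.clip(t - i * per_year, 0, None)
        X["t05_sq"] = t ** 2  # quadratic term, kept only when significant
        return sm.add_constant(X)

    def fit_outcome_series(y: pd.Series, ar_order: int = 26):
        """Fit the segmented trend with autoregressive errors (up to 26 lags
        were evaluated in the thesis; only significant lags were retained)."""
        X = fiscal_year_design(len(y))
        model = sm.GLSAR(y.to_numpy(), X.to_numpy(), rho=ar_order)
        return model.iterative_fit(maxiter=10)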

Improvement and Sustainability

With the time-series equation developed, the final step was to develop a classification approach that would identify whether hospitals improved on any outcome measure and then which hospitals went on to sustain those improvements. The final classification system, listed in Table 3-10, defined 11 sub-categories that collapse into 4 major categories. This approach to classifying performance predominately focuses on the results of parameters β3 through β5. β1 and β2 serve to establish a baseline of performance and control for improvements that would be expected, based on historical trends, had FIX not occurred. The first major category is those hospitals classified as having No Change, meaning no statistical (p<0.05) changes were observed for β1 through β4 (FY05 through FY08). The purpose of this category is to separate out those facilities whose outcome performance was characterized by high variance, meaning any signal was buried among a significant amount of noise. It is potentially important to note this type of performance in quality improvement, as high variation suggests the lack of a consistently performing process, which is a different quality improvement challenge than a hospital being unsuccessful in its efforts to improve a process. For this reason the No Change category was kept separate from the No Benefit category. One last consideration about the No Change category: some of these hospitals did exhibit a detectable change in the outcome in FY09 but were still classified here for two reasons. First, given the high variability displayed by many of these facilities, any detected change in FY09 was unlikely to be a true change and more likely represented a chance occurrence. Second, any improvement observed in FY09 was too distant from the occurrence of FIX to suggest any association.

Table 3-10: Description of full classification categories
No Change
  A.1  No changes observed from FY05 to FY09
  A.2  No changes observed from FY05 to FY08, improvement in FY09
  A.3  No changes observed from FY05 to FY08, decline in FY09
Improve, Not Sustain
  B.1  Immediate Loss: Improve in FY07, return to baseline in FY08
  B.2  Delayed Loss: Improve in FY07, return to baseline in FY09
  B.3  Delayed Impact: No change in FY07, improve in FY08
Improve and Sustain
  C.1  High Sustain: Additional improvements observed in FY08/09
  C.2  Moderate Sustain: No additional improvements in FY08/FY09
  C.3  Weak Sustain: Diminishing improvements, but still better than FY05/06
No Benefit
  D.1  No change in FY07, but statistical changes observed elsewhere
  D.2  Decline observed in FY07
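A simplified sketch of how the major categories in Table 3-10 could be assigned from the fitted FY07-FY09 coefficients is shown below; the full 11-level rules, including the FY05-FY08 No Change criterion and the return-to-baseline tests, are described in the text that follows and in Figure 3-1. It assumes an outcome where a negative coefficient is an improvement, such as LOS.

    def classify_major_category(b3, b4, b5, p3, p4, p5, alpha=0.05):
        """Collapse a hospital's FY07-FY09 slope estimates into one of the four
        major categories of Table 3-10 (simplified; improvement means a
        significant coefficient in the favorable, here negative, direction)."""
        improved = lambda b, p: p < alpha and b < 0
        declined = lambda b, p: p < alpha and b > 0

        if not any(p < alpha for p in (p3, p4, p5)):
            return "No Change"
        if improved(b3, p3):
            # Loss of the FY07 gain in FY08 or FY09 maps toward B.1/B.2.
            if declined(b4, p4) or declined(b5, p5):
                return "Improve, Not Sustain"
            return "Improve and Sustain"
        if declined(b3, p3):
            return "No Benefit"
        # Flat FY07 with a delayed FY08 improvement corresponds to B.3.
        return "Improve, Not Sustain" if improved(b4, p4) else "No Benefit"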

The other three categories deal with hospitals that had observable statistical changes during the first four years of the study. For these hospitals the first step in the classification was to examine performance in FY07 (β3). Figure 3-1 is the flow chart depicting the decision process used to classify each hospital's performance. Starting with Part B of the figure, any facility that showed a decline in performance during FIX was classified as D.2. While such facilities might show improvements in FY08 or FY09, without an observed improvement during FIX it was impossible to determine whether those gains represented a delayed effect of FIX, the effect of a different QI project, or simple regression to the mean. With this consideration, it was determined there was no need to further sub-classify hospitals based on their outcomes in FY08/09 if there was a decline in performance relative to baseline in FY07. Next, Part C of the flow chart represents those hospitals whose performance during FY07 was flat (i.e., performance continued on the baseline trend established in FY05 and FY06). The outcomes for these hospitals fell into one of two categories. First, the hospital could record an improvement in FY08, leading to classification as B.3. This was recorded as a possible improvement attributable to FIX, with the reasoning that FIX was a yearlong effort that aimed to improve outcomes across an entire hospital. It seemed reasonable that not all hospitals would have an immediately measurable impact in FY07 but would instead record the biggest gains in the latter half of FY07 and into the first half of FY08. This is certainly the weakest category for asserting that improvements were associated with FIX and should be interpreted appropriately.

The other possibility for hospitals that had flat performance in FY07 was that they would continue on the pre-established baseline or exhibit some decline in FY08. These hospitals were classified as D.1 and deemed to have had no benefit attributable to FIX. The No Benefit category, representing hospitals with a D.1 or D.2 classification, marks those hospitals that initially performed with low variability, allowing detection of a clear baseline trend, which suggests they had processes in place that performed with some consistency. The key feature of these hospitals was that, as measured by the individual outcome, they were unable to make improvements to that process as part of their participation in FIX. The last set of hospitals is those that had an initial improvement during FY07, which is charted in Part A. All of these hospitals are classified as improving; it just becomes a question of whether they sustained those improvements. Hospitals that made an improvement in FY08 or FY09 with no declines in either time period were classified as C.1, or high sustainers, since they not only sustained initial improvements but went on to make further improvements. A facility that neither declined nor improved (i.e., just continued the new baseline established in FY07) was classified as C.2, or moderate sustainer. The last category of sustainer (C.3) was those hospitals that exhibited a decrease in the rate of improvement in FY08 or FY09. However, their overall performance did not decline to the point that the outcome returned to pre-FIX levels. This category acknowledges that rates of improvement may level off after a QI collaborative completes, but hospitals may still maintain a high level of performance.

Figure 3-1: Decision tree used to classify hospital performance
A. Hospitals showing an initial improvement during FY07
B. Hospitals with decreased performance in FY07
C. Hospitals with no statistically significant (p>0.05) change in performance in FY07

The final category was those hospitals that were unable to sustain the improvements. If the hospitals returned to baseline performance in FY08 they were classified as B.1, immediate loss. If, however, they had a slower return to baseline that did not occur until FY09, they were classified as B.2, delayed loss.

Sub-group Analyses

Although the later chapters of this study will provide an in-depth evaluation of the relationship between organizational characteristics and hospital performance, this initial evaluation did consider three sub-group comparisons. The first comparison evaluated hospitals by size to determine if the collaborative was effective across all size categories. Hospitals were classified as either large (>=200 beds), medium (100-199 beds), or small (<100 beds) based on the number of approved medical/surgical beds. The second comparison evaluated whether performance varied based on which learning session the team attended. Since 130 hospitals participated in FIX, the learning sessions were broken into five separate regions (Northeast, Southeast, Central, Midwest, and West) to allow all participants to actively engage.31 Lastly, the third comparison examined whether facilities that improved (whether they sustained or not) on the primary outcomes had a different distribution of performance on the other outcomes (particularly the secondary outcomes) compared to the full group. This comparison ensures that these hospitals did not have higher than expected rates of classification into No Benefit on the secondary outcomes. All of these comparisons were done using Pearson chi-square tests comparing the distribution of the relevant sub-group to that of the overall group.
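Each sub-group comparison reduces to a goodness-of-fit test of the sub-group's category counts against expected counts derived from the overall distribution, as in the sketch below. The counts in the commented example are purely illustrative, not study results.

    import numpy as np
    from scipy.stats import chisquare

    def compare_subgroup(subgroup_counts: np.ndarray, overall_counts: np.ndarray):
        """Pearson chi-square test of whether a sub-group's distribution across
        the performance categories differs from the overall distribution."""
        expected = overall_counts / overall_counts.sum() * subgroup_counts.sum()
        return chisquare(f_obs=subgroup_counts, f_exp=expected)

    # e.g., 16 large hospitals across four major categories vs. all 130 hospitals:
    # compare_subgroup(np.array([5, 4, 3, 4]), np.array([36, 26, 27, 41]))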

Conclusions

This chapter has discussed the methods used to evaluate performance at each hospital for each of the five outcomes of interest. Overall, the chapter covered the data sources, defined the patient cohort, and provided a detailed description of the risk adjustment and time-series modeling processes. Although the time-series methods used in this analysis were not novel, they had not previously been applied in this manner to evaluate healthcare QI. Additionally, the classification algorithm generated to aggregate facilities based on performance was a new approach. This classification approach focused on understanding how facilities may be grouped to facilitate later analyses examining how organizational characteristics impact QI efforts. The next chapter will present and discuss the results of this time-series evaluation and classification algorithm.

CHAPTER 4
TIME-SERIES RESULTS AND DISCUSSION

This chapter concludes the first half of this study by presenting the results from the analysis of the Flow Improvement Inpatient Initiative (FIX). This analysis first considers results at the aggregate Veterans Affairs (VA) level by grouping patient discharges across hospitals. This provides some understanding of the overall impact of FIX. However, the real purpose of this analysis is to examine the performance of each individual hospital using the time-series approach outlined in Chapter 3. After presenting these results, the chapter continues with an in-depth discussion. First, the discussion focuses on addressing the first two specific aims of the project. Second, it considers the greater implications of the findings for quality improvement in healthcare and whether there is support for using large collaboratives, such as FIX, to improve quality.

System-Wide Analysis

Although the main interest of this analysis was to understand performance at each individual hospital, it was useful to first understand the aggregate impact of FIX for VA as a system. Viewing the data at the aggregate level provides some understanding of average performance, providing a basis for comparing high- and low-performing hospitals. The five years of data in this study covered 1,690,191 discharges from 130 VA hospitals. Three of the outcome measures, LOS, in-hospital mortality, and 30-day mortality, exhibited a natural 3-4% annual improvement in performance prior to FIX. For LOS (Figure 4-1), the time-series model identified a subtle but statistically significant increase in the rate of improvement during FIX, which was sustained through the post-intervention period.

This was in contrast to in-hospital and 30-day mortality, which showed no aggregate improvements associated with FIX. In-hospital mortality (Figure 4-2) showed no statistically significant departures from the pre-established trend in FY07 through FY09. For 30-day mortality there was a slight decline in performance in FY07, although, as seen in Figure 4-3, this decline does not mean 30-day mortality rates were rising; instead it only signified a leveling of 30-day mortality rates. Most likely this simply reflects that 30-day mortality rates were reaching their optimal potential, leaving few achievable improvements. The other two outcomes in this study, discharges before noon and 30-day all-cause readmission, were both statistically flat prior to FIX. The aggregate results for discharges before noon are perhaps the most intriguing in this study. As shown in Figure 4-4, there is a clear improvement during and after FIX, with discharges before noon jumping to near 24% from a baseline of 17%. Unfortunately, part way through FY08 the percentage of patients discharged before noon began to decline, reaching a rate around 20% at the end of the study. While this level of performance is still improved at the end of the study compared to the baseline, it is unclear whether performance will level off at 20% or continue to decline back to baseline. Lastly, 30-day readmissions (Figure 4-5) showed highly variable performance with an overall worsening of performance during FIX.

Figure 4-1: Aggregate results for LOS (FY05 - FY09); observed values and time-series model, in days
Figure 4-2: Aggregate results for in-hospital mortality (FY05 - FY09); observed values and time-series model, as a percent of discharges

Figure 4-3: Aggregate results for 30-day mortality (FY05 - FY09); observed values and time-series model, as a percent of discharges
Figure 4-4: Aggregate results for discharges before noon (FY05 - FY09); observed values and time-series model, as a percent of discharges

Figure 4-5: Aggregate results for 30-day readmissions (FY05 - FY09); observed values and time-series model, as a percent of discharges

Facility Analysis

Working from this initial introduction of how FIX impacted VA performance, the focus now shifts to classifying individual hospitals using the classification approach outlined in Chapter 3. The breakdown in performance for all 130 hospitals across each of the 5 outcomes is listed in Table 4-1. These results suggest there was considerable variation both within each hospital on individual outcomes and also across each of the five outcomes. Beginning with LOS, there were 45 (35%) hospitals that made an initial improvement, with 27 (60%) able to sustain the improvements. Further, 14 of the 45 (31%) hospitals classified as improvers had a delayed onset of improvements, which means they were not evaluated for whether they sustained the improvements. Of course, these successes are balanced by 36 hospitals (28%) in which there were no statistical changes over the entire study and 49 (38%) that saw a decline or showed no benefit associated with FIX.

Table 4-1: Hospital classification across the 5 outcome measures (N = 130)
(counts of hospitals in categories A.1-A.3 (No Change), B.1-B.3 (Improve), C.1-C.3 (Sustain), and D.1-D.2 (No Benefit) for LOS, Noon Discharge, 30-Day Readmission, 30-Day Mortality, and In-Hospital Mortality)

This breakdown across categories contrasts with how hospitals performed on efforts to improve the percent of patients discharged before noon. Interestingly, exactly the same number of hospitals, 36, showed no statistical changes. However, these were not the same hospitals, as only 13 were categorized as No Change for both LOS and discharge before noon. For improvements, a greater number made initial improvements, 60 (46%), but fewer were able to sustain them (19 out of 60, 32%). Once again there were a fair number of hospitals, 21 out of 60 improvers (16%), which exhibited flat performance during FY07 but recorded improvements in FY08.

Lastly, 34 (26%) of the facilities did not record any benefit from their participation in FIX related to increasing the rate of discharges before noon. The secondary outcomes were included mainly to determine whether there were declines during FIX; there was less expectation that hospitals would improve these outcomes in response to FIX. This expectation was supported, as few hospitals showed improvements and a substantial percentage recorded either no statistical changes over the study or no benefit from FIX. Of the hospitals not recording any benefit from FIX, those classified as D.2 would be the most concerning, as that would signify a decline in performance during FIX that might mean FIX had a negative impact. For the mortality rates, 39 (30%) had declining in-hospital mortality performance and 42 (32%) had declining 30-day mortality performance. While these are concerning numbers, perhaps the more telling result would be a strong association between improvement on the primary outcomes and a subsequent decline on the secondary outcomes. As shown in Table 4-2 (LOS) and Table 4-3 (discharges before noon), the distribution of facilities that improved on either of these outcomes is no different from the overall distribution of all facilities, suggesting that improvements attributable to FIX were not associated with direct declines on the secondary outcomes. The last feature to notice in Table 4-1 was the high proportion of hospitals (65%) that showed no statistical change on 30-day readmission. This fits with the aggregate readmission graph (Figure 4-5), which suggests hospital readmissions are highly variable and potentially even associated with a random process.

Table 4-2: LOS Improvers classification (N = 45)
(distribution of the 45 LOS improvers across categories A.1-D.2 for Noon Discharge, 30-Day Readmission, 30-Day Mortality, and In-Hospital Mortality)
Chi-square p-value (df): 0.73 (10), 0.96 (9), 0.93 (9), 0.55 (9)

The other sub-group analyses showed that performance did not vary by hospital size or region. Table 4-4 displays the results (p-values) of the chi-square tests comparing the hospital size or region sub-group to the overall population distribution. None of the comparisons were statistically significant at the p < 0.05 level, which given the number of comparisons may have been inappropriately conservative. The full breakdown showing the number of hospitals classified into each performance category by hospital size and region is available in Appendix C.

Table 4-3: Discharge before noon Improvers classification (N = 60)
(distribution of the 60 discharge-before-noon improvers across categories A.1-D.2 for LOS, 30-Day Readmission, 30-Day Mortality, and In-Hospital Mortality)
Chi-square p-value (df): 0.87 (9), 0.75 (9), 0.94 (9), 0.93 (9)

Table 4-4: P-values from chi-square tests examining facility performance in subgroups by size and regional location
(p-values for LOS, Noon Discharge, 30-Day Readmission, 30-Day Mortality, and In-Hospital Mortality within subgroups by size: Small (54), Medium (60), Large (16); and by region: Northeast (23), Southeast (26), Central (25), Midwest (29), West (27))

Evaluation of the Specific Aims

The first specific aim evaluated in this study was whether FIX positively impacted quality and efficiency as measured by the five selected outcomes. An evaluation of both the aggregate results and the individual facility results leads to the conclusion that FIX did result in a reduction in LOS and an increase in the percent of patients discharged before noon, and that these improvements were not associated with any systematic negative impacts as measured by mortality or readmission rates. The aggregate results for both primary outcomes showed improvements in FY07 that were greater than expected given pre-existing trends. Both of the mortality rates had promising aggregate results, with in-hospital mortality showing a continuation of the pre-existing trend during FIX. The observed leveling of 30-day mortality rates during FIX likely reflects that there was little room to improve on that outcome. The 30-day readmission rate did show a slight increase during FIX, but given the high variability of this outcome (at the individual hospital level 65% had no statistical changes over the entire study) and the lack of association between high performance on LOS or discharges before noon and poor performance on readmissions, this increase was unlikely to be a direct effect of FIX. This conclusion is supported by prior work showing no increase in hospital readmissions with lower hospital LOS.34 Although the aggregate results are impressive, the analyses at the individual hospital level provide a more complex evaluation of FIX. At the individual hospital level only 35% of hospitals improved LOS and 46% improved discharges before noon.

Looking at it from another perspective, 50 hospitals (39%) did not improve on either of the primary outcomes and 30 (23%) did not improve on any of the five outcomes. These results are similar to, though somewhat lower than, other published reports of hospital success with collaboratives. The most likely explanation for this difference is that most of the other reports focused on team self-reports of success. Certainly some teams that believed they succeeded would not have produced any measurable improvements. Overall, the conclusion is that FIX was successful, based on the aggregate results, but it is important to recognize that, despite receiving the same training, individual hospital performance was quite variable. This variation suggests that, while successful, there are components of QI collaboratives that can be improved in order to help all hospitals gain measurable benefits from the effort. The second specific aim evaluated whether those hospitals that achieved initial improvements as part of FIX sustained the improvements for two years post-intervention. This evaluation only considers the results from the primary outcomes, given that, as predicted, few hospitals recorded improvements on readmission or mortality rates during the intervention period. The two primary outcomes paint distinctly different pictures. For LOS, considering only those hospitals that improved in FY07 (i.e., those classified as B.1, B.2, C.1, C.2, or C.3), 87% of them sustained improvements (27 out of 31). Further, 59% (16 out of 27) of the sustaining hospitals were classified as high sustainers (C.1), meaning not only did they sustain a new rate of improvement, but they exhibited additional improvements after FIX.

From these results it would appear the collaborative was successful, with some individual variation, in creating sustained quality. In contrast, for discharges before noon, only 49% of hospitals improving in FY07 sustained the improvements (19 out of 39). Although fewer hospitals sustained improvements, those that did sustain were frequently high sustainers (13 out of 19, 68%). The results for this outcome paint a less promising picture about sustainability. With only about half of hospitals sustaining improvements and a clear declining trend in the aggregate results, it is hard to conclude that any solutions developed during the collaborative were specifically designed for sustained improvements. Considering the results of these two outcomes, the overall picture suggests that it is possible to improve and sustain quality after a collaborative; however, there may be some important lessons to learn from the observation that more facilities improved and sustained for LOS compared to discharges before noon.

Discussion

This analysis found that a selection of hospitals achieved sustained improvements as part of their participation in FIX. However, individual hospital performance was highly variable, suggesting an opportunity to improve on the success of hospitals participating in a QI collaborative. Since variation in performance was consistent across all 5 regions and across hospital size categories, it appears the collaborative was successfully implemented; it is just that not all hospitals had measurable benefits from the experience.

Given this mixed evaluation, it is important to remember the complexity of these outcomes and that many different factors impact the final measurement. So while FIX strove to take a system-wide approach to improving patient flow, there may still be factors that the framework of the collaborative did not address, which would explain the limited success at some hospitals. Further, it may have been difficult to widely disseminate improvements across all medical patients in the course of a single year. Despite these inherent limitations to the effort to evaluate FIX, the results of the study still uncovered some interesting challenges to achieving high-quality healthcare. These challenges are perhaps best highlighted by the overall performance on the efforts to increase the number of discharges before noon. While not every hospital was successful, it is evident that improvements were achieved system-wide (Figure 4-4). Yet, once the collaborative ended and focus on the performance metric was reduced, only 49% of those that made improvements in FY07 sustained that performance. If many QI initiatives have a similar response profile (initial success that regresses back to the mean over time), that would explain why there have been limited measurable improvements in quality. In contrast, the other primary outcome, LOS, had 87% of improving hospitals go on to sustain improvements. However, there may be inherent differences between these two outcomes which explain the greater success in sustaining LOS compared to discharges before noon.

Perhaps the most significant difference is the long history in healthcare of LOS as a performance metric. As such, providers generally accept the premise that they should work to shorten LOS, recognize a potential personal benefit from shortened LOS, know their average LOS (at least physicians do), and know how their performance compares to others. The major benefit of these features is that providers are likely to be less resistant to change, suggesting there would be a low barrier for teams to overcome in implementing and sustaining an intervention designed to shorten LOS. The only real obstacle would be ensuring the intervention was well designed and did not create unmanageable burden. This environment stands in stark contrast to the environment around increasing the rate of discharges before noon, which was a newly introduced performance metric. In that case providers had not considered it before, knew nothing or very little about current performance, and had no basis for comparing performance. If the hospital culture is not otherwise accepting of change, this is likely to be a change-averse situation. Successful sustainment of improvement would therefore require a solution that not only improves outcomes but also works to help providers accept and maintain the change. It may be this last part, how to handle and maintain change, where implementation teams were most likely to be unsuccessful in sustaining improvements related to discharges before noon during FIX. It is not surprising that teams would have difficulty achieving sustained improvements related to discharging before noon considering that many morning activities work in direct conflict with the process of trying to discharge patients.

The morning is when physician teams round on patients, nurses provide medications, phlebotomy collects blood, and the labs run tests. Not only do these activities represent a significant effort for providers, but many of them, particularly the results of morning lab tests, provide critical information for deciding whether or not to discharge a patient. With all of these barriers, a proposed solution must not only be effective but must also reduce workload burden and address information needs. If the proposed solutions did not achieve all of these necessities, it is reasonable to predict that providers trialed the proposed solution, found it unacceptable, and then returned to the old way of caring for patients in the morning. Such a response certainly fits with the overall observed aggregate profile, where initial improvements are quickly lost with a trend back to pre-implementation performance levels. Despite these concerns, it cannot be forgotten that at the aggregate level the observed percentage of patients discharged before noon was still above the baseline level. The final observed rate of 20% of patients discharged before noon is a 3% absolute increase, or about a 15% relative increase, compared to the baseline of 17%. It is worth considering the possibility that a decline from a high around 23% of patients to a final rate of 20% may not indicate worse care or poorer hospital flow. Instead, particularly if performance levels off and does not continue to decline, a final rate of 20% may mean that hospitals have achieved an appropriate balance between provider workload burden and meeting the flow needs of their hospital. Other measures that would better capture the flow concerns of a hospital could be emergency department (ED) diversion rates, ED-to-medicine admission times, or the amount of fee-basis care for medical admissions.

These, however, were not considered as measured outcomes during FIX, nor are they systematically collected. An important lesson here is that while the metric of discharges before noon was potentially useful for driving improvements, it needed to be evaluated in tandem with a more clinically or business-relevant metric to determine true success in improving flow. A secondary consideration from this analysis was the higher than expected percentage (28%) of hospitals classified as No Change on the primary outcomes. In fact, more hospitals recorded No Change on both primary outcomes than recorded Sustain on both (13 compared to 5). While discharges before noon did have a flat baseline in aggregate, having so many hospitals exhibit no statistical change in LOS, which has a distinct trend, was particularly surprising for hospitals participating in a QI collaborative. These data serve as a stark reminder that improvements in quality require a standardized process that can be analyzed and improved. QI teams should remember that they first must understand the relevant process, or lack of process, before trying to make changes. In the end, whether the implemented solutions were ineffective, not needed, or unacceptable, the result is that individual performance varied considerably despite all participants receiving the same training and having access to national resources. Not only did just a fraction of hospitals show sustained improvements through FY09, but 50 (39%) hospitals did not show any improvement on the two primary outcomes.

This leads to the concern that perhaps the collaborative approach to QI does not add any additional value compared to a more individual hospital approach to QI (i.e., a QI project not associated with a collaborative). The main reason for making this comparison is that a collaborative can be an expensive undertaking; the estimated cost of FIX for VA was $5.8 million.32 When hospitals have to pay for this directly out of their budgets (it is not clear who bore the individual costs of FIX), they may not want to participate if they recognize that anywhere from a third to a half of participating hospitals would not improve on measured patient outcomes. However, there are two points worth considering when evaluating the tradeoff between in-house QI projects and QI collaboratives. First, collaboratives likely provide many benefits beyond measurable improvements in outcomes. A key purported benefit of a collaborative is that it brings hospitals together to learn skills, coordinate activities, and share knowledge. For a hospital with limited experience with QI, a collaborative may provide many worthwhile benefits even if that collaborative cannot be directly associated with improved quality. These sorts of benefits have been noted in prior analyses of collaboratives, which often acknowledge important cultural changes.17 Unfortunately, no data were collected on these types of benefits during FIX, so it was not possible to factor benefits of this nature into the analysis. While hospitals could achieve some of these benefits from an in-house QI effort, if they have to bring in outside resources to provide initial training, the cost is likely to be the same if not more than the cost of training at a collaborative.

Second, there is no good basis for understanding the individual success rate of in-house QI efforts. Further, there is little data about the costs of these QI projects. Considering that individual QI efforts are not uniformly successful and have many associated costs as well, investing in a collaborative may represent little additional risk. Given the generally poor knowledge about QI success rates and costs, there is no clear conclusion about whether a QI collaborative is a worthwhile investment. However, given the potential for hospitals to work together, there should be a general benefit from participating in a collaborative. Therefore, the second half of this study works to develop an understanding of what factors may predict an ability to succeed in a collaborative. This understanding can help hospitals decide if they can succeed in a collaborative and, if they cannot, identify issues they should focus on in order to create an environment that will support a successful QI collaborative.

Limitations

While this study generated some intriguing results, it is important to remember that these were exploratory analyses that were subject to some key limitations. First, these results are based on administrative data. This means there are many unmeasured and unaccounted-for variables. A key consideration is that the analyses could not be tied to specific areas of a hospital if improvements were initially trialed on specific units before dissemination. In the case of FIX, a hospital-wide approach is supported and in some ways most appropriate. Since FIX aimed to improve flow throughout the entire hospital, improvement projects should have targeted broad initiatives that improved flow for all patients, not just a small subset.

This is also why the internal evaluation of FIX considered all patients, not just those on targeted wards. So if teams only made small improvements during FIX, while this would be beneficial, there is reason to argue that this would not have been a fully successful collaborative experience. Second, these results cannot isolate the impact of FIX. FIX was not a one-time, isolated QI initiative, but rather the first of many systems redesign collaboratives (some examples are Patient Flow Center, Transitioning Levels of Care, and Bedside Care Collaborative), some of which occurred during the two-year follow-up period. Additionally, VA hospitals have been encouraged to conduct numerous local QI efforts each year. Some of these other projects are likely to impact the measured outcomes (LOS and 30-day readmission in particular), meaning the detected improvements can only be weakly attributed to FIX. However, the impact of these other QI projects is of limited concern for two reasons. First, the time-series analysis accounts for baseline trends in the outcomes. So to the degree VA hospitals maintain a regular focus on QI projects, the national focus on FIX represents a single increase in effort, and all other QI projects would be accounted for by the baseline trends. Second, it is reasonable to expect that for complex outcomes, such as LOS and discharge before noon, sustained quality will not come out of a single QI project. Instead, the importance of any single QI project may be the attention it brings to a topic, the training it provides team members, and its contribution to a greater culture focused on QI.

With these considerations, sustained results generated by a continuous cycle of improvement initiated in response to FIX would be just as meaningful. The final limitation of this analysis was the lack of information at the individual team level. Key metrics such as team leadership quality, support from hospital leadership, and actual team engagement with FIX would provide critical information for distinguishing high and low performers. FIX was a mandated QI collaborative; thus much of the variation in performance may simply be due to varying levels of engagement by teams or hospitals with the collaborative. Even if this is the reason for non-success, it is telling for VA and other policy makers that presence at a QI collaborative alone did not ensure success.

Conclusions

This chapter brings to a conclusion the first half of this study, which utilized a five-year time-series analysis to evaluate whether a large QI collaborative led to sustained improvements in quality as measured by two primary outcomes, LOS and discharges before noon. The analyses found that in aggregate there were improvements in LOS and discharges before noon. However, performance at individual hospitals was quite variable and not all hospitals showed improvements. For those hospitals that improved, there was a high likelihood of sustaining LOS improvements but a low likelihood of sustaining discharge-before-noon improvements. Some of the decline may just reflect a balance between patient flow and provider workload. However, if many other newly introduced quality metrics see a similar post-implementation decline, it will be difficult to achieve substantial improvements in quality.

The study also considered three secondary outcomes which, as expected, showed little change and little impact associated with FIX. Based on this analysis, there are four important findings. First, in comparison to the traditional pre-post study involving team-reported success, an analysis that accounts for pre-existing temporal trends in patient outcomes leads to the identification of a smaller-than-expected group of QI teams that made initial improvements. Second, there may be significant loss of quality, or regression to the mean, after the completion of QI projects. Third, this novel classification approach highlighted that many hospitals operate with processes that lead to highly variable performance. These hospitals likely need to focus on creating a standardized process before undertaking serious efforts to improve any of those processes. Fourth, this analysis showed that success can be achieved across multiple hospital settings, but given the overall variation there needs to be a better understanding of what factors predict success in a collaborative.

CHAPTER 5
SUPPORTING QUALITY IMPROVEMENT

This chapter begins the second half of this project, which considers another body of literature, develops an analytic framework, and analyzes survey data in conjunction with the results from the prior analysis to meet the goals of the study's third specific aim. This specific aim was to describe how selected components of an organization's structure were associated with an ability to sustain improvements in quality. The first half of this chapter reviews the extensive literature that evaluates the relationships between different organizational characteristics and high-quality healthcare. The second half then works from the conclusions reached in this literature to develop a guiding analytic framework. The goal of this analytic framework is to posit how different classes of organizational characteristics interact to generate an environment that may or may not support successful QI initiatives. The next chapter in this section then discusses how the framework was applied to analyze FIX and the methods used to generate hypotheses based on those results. The third chapter of this section then presents and discusses the results of the analysis as well as their implications for QI and the overall framework.

Relationships with Healthcare Quality

There have been a number of studies that evaluated whether different features or characteristics of an organization were associated with higher-quality healthcare. This literature has been nicely summarized in three systematic reviews. The first of these systematic reviews evaluated 81 publications that examined the relationship between a measured organizational variable and mortality rates.50

While mortality rates were the primary outcome of interest, the review also included studies that evaluated other adverse healthcare outcomes, such as nosocomial infections, falls, and medication errors. The review considered structural variables (professional expertise, professionalization, nurse-to-patient ratio, care team mix, not-for-profit status, teaching status, hospital size, technology use, and location), organizational process variables (measures of caregiver interaction, patient volumes), and clinical process variables (implicit quality, explicit quality, and computer decision support). The general conclusion of the review was that the body of evidence for each of the organizational variable categories was equivocal at best. The only organizational variable with a consistently positive impact on mortality rates was having high levels of technology, which at the time of these studies meant having access to equipment such as ventilators and pacemakers.50,51 The second review built on this first review by focusing on how each study operationally defined the outcome of interest. The objective was to determine whether the operational definitions for the studied adverse events identified a mechanism through which altering an organizational characteristic could realistically improve a care process and result in fewer adverse events.52 Based on the lack of consistent evidence showing an association between any single organizational characteristic and improved quality, the authors theorized that perhaps adverse events were too broadly defined, meaning there were too many factors impacting quality, and thus the measured characteristics could not reasonably be expected to improve quality. This review analyzed 42 articles that provided 67 measures of different organizational characteristics and their association with medical errors and patient safety outcomes.

The measured organizational characteristics broke down into 13 groups: team quality, implementation of standard operating procedure use, feedback, technology, training, leadership, staffing, communication, simplification of the work process, culture, organization structure, employee empowerment, and group decision making. The operational definitions for adverse events in the studies included medication errors, medication complications, diagnostic errors, treatment-related errors, patient falls, specimen labeling errors, and other non-specified patient safety concerns. The authors noted that while most of the studies focused on medication errors and complications, there was no consistency across studies in how to define and measure a medication error or complication. This made drawing any systematic conclusions about organizational characteristics and adverse events a challenge. Additionally, the authors noted that only 9 of the studies provided sufficient detail to allow the reader to identify a specific relationship between an organizational variable and the measured adverse event. Given these limitations, as well as others, the review concluded that no generalizable statements could be made about how a specific organizational factor could address errors or safety in healthcare.52 The third of these systematic reviews continued to refine the process, this time by using Donabedian's structure-process-outcome model as a framework for structuring the analysis.53 This review identified 92 articles and analyzed them to understand whether sequentially close Donabedian relationships (e.g., process-outcome) had more consistent and positive findings than distant relationships (e.g., structure-outcome).54

83 72 outcome) had more consistent and positive findings than distant relationships (e.g. structure outcome). 54 The review also examined whether studies considered definitions of quality that included improving services rather than simply defining quality as a reduction in negative events. The study evaluated 19 structure-process, 58 structure-outcome, 20 process-outcome, and 9 processprocess relationships. Much like the prior reviews, this systematic review found that the preponderance of organizational factors studied were associated with non-significant findings. 54 These non-significant findings were most frequently found when examining the distant structure outcome relationships, which was the most commonly examined relationship in the literature. A general concern with these studies was that they did not consider or evaluate any of the intervening process variables which would help enlighten the understanding of why some studies identified positive impacts while others had negative or nonsignificant outcomes. When studies examined the sequential Donabedian relationships of structure-process or process-outcome, cross study results were more consistent and there were greater odds of detecting a statistically significant relationship between an organizational variable and a measure of improved quality. The review of this literature highlights that components of organizational structure and care quality have a complex relationship that was difficult to analyze. Those components that have a direct cause-effect relationship (e.g. certain forms of technology, nurse-patient ratios) quite frequently have positive effects on quality. However, more peripheral factors (e.g. affiliation with a medical

84 73 university) that do not have that direct linear relationship will show contradictory results across studies leading to a conclusion of non-significant impact when analyzed in aggregate. One key conclusion from this research was that multiple organizational characteristics contribute to any single measure of quality. Therefore, any analysis that does not appropriately model the complex relationships between organizational characteristics and quality outcomes cannot expect to ascertain a strong relationship between factors. This approach would likely require a multilevel analysis that could test how different variables interact and mediate each other to support quality. Few studies have the data for this type of analysis, but when such data was available it did help identify meaningful relationships, even helping identify how important intervening factors could inhibit quality. For example, an analysis of reengineering efforts across 497 hospitals initially found that reengineering was detrimental from a cost competitive standpoint. 55 However, when using a multivariable analysis that adjusted for indicators of organizational support and quality of the implementation, the study identified trends showing that if successfully implemented the reengineering efforts were beneficial. 55 Of course, as potentially indicated by the variability in performance with FIX, the question of how to successfully implement QI is an important and little examined topic. Before addressing the literature related to the implementation of QI, there were some key limitations associated with these reviews and the studies they summarized. The first limitation was the difficulty associated with defining and
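To make the idea of such an adjusted, multilevel analysis concrete, the sketch below shows one way an interaction between an intervention indicator and a measure of organizational support could be tested with a random intercept for each hospital. The variable names and data are hypothetical illustrations, not the reengineering study's data or the models used elsewhere in this thesis.

```python
# Minimal sketch (hypothetical data) of a multilevel model testing whether
# an organizational characteristic modifies the effect of an intervention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_hospitals, units_per_hospital = 40, 12

df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hospitals), units_per_hospital),
    "reengineered": np.repeat(rng.integers(0, 2, n_hospitals), units_per_hospital),
    "org_support": np.repeat(rng.normal(0, 1, n_hospitals), units_per_hospital),
})
# Hypothetical outcome: the intervention helps only where support is high.
hospital_effect = np.repeat(rng.normal(0, 0.5, n_hospitals), units_per_hospital)
df["quality"] = (0.1 * df["reengineered"]
                 + 0.4 * df["reengineered"] * df["org_support"]
                 + hospital_effect
                 + rng.normal(0, 1.0, len(df)))

# Random intercept per hospital; the interaction term tests whether
# organizational support modifies the effect of the intervention.
model = smf.mixedlm("quality ~ reengineered * org_support",
                    data=df, groups=df["hospital"])
print(model.fit().summary())
```

The coefficient on the interaction term is what distinguishes "the intervention helps when well supported" from a simple main effect, which is the kind of relationship a single-level, main-effects-only analysis would miss.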

Before addressing the literature on the implementation of QI, there were some key limitations associated with these reviews and the studies they summarized. The first limitation was the difficulty of defining and measuring quality. Early studies focused on efforts to reduce mortality rates, which, as generally rare and complex events, were difficult for any broad organizational characteristic to significantly impact. 50 Later efforts identified more modifiable targets of quality (e.g., reducing adverse events, improving patient satisfaction) and were able to uncover some relationships. However, the operational definition of the same outcome frequently varied between studies, making it difficult to determine whether any relationships existed across healthcare institutions or only in those where the studies occurred. Some of these same problems plagued the analysis of FIX: LOS and discharges before noon represented composite outcomes that likely did not measure the true quality goals, and this limitation will affect the results of the analyses in this study. Some recent efforts have addressed these issues and will lead to better and more consistent measures of quality. As one example, the National Healthcare Quality Report, published annually since 2003, promotes the systematic collection of quality measures allowing comparisons between hospitals. 2

A second limitation identified in the reviews was weak methodology. One weakness of the early studies was that they did not adjust for patient severity; now that risk adjustment is an accepted standard in health services research, the more recent studies all used appropriate risk-adjustment procedures. Even with risk adjustment, however, these studies often suffered from methodologically weak designs. Most employed an observational design and could not address characteristics that varied between healthcare institutions or how those variables might confound any observed relationships. In fact, given the number of postulated organizational factors that may impact quality, with each individual study considering only a few organizational characteristics, they all potentially suffered from significant unmeasured confounding. A few studies did use a stronger methodology and an interventional design, with quality measured before and after a change in the organizational characteristic. These studies, however, used pre-post designs, did not consider any natural trends in the outcomes, frequently analyzed distant structure-outcome relationships, and reported results from only a single site. A number of biases, particularly historical bias and regression to the mean, threaten the validity of these studies.
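As a hedged illustration of this last point, the sketch below contrasts a naive pre-post comparison with a segmented regression that models the baseline trend. The data are synthetic and constructed so that the outcome improves steadily with no true intervention effect; the variable names are illustrative only.

```python
# Illustrative sketch: naive pre-post comparison vs. segmented regression
# on synthetic data with a secular trend and no true intervention effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
month = np.arange(48)                 # 24 months pre, 24 months post
post = (month >= 24).astype(int)
# Synthetic LOS that declines steadily regardless of any intervention.
los = 6.0 - 0.03 * month + rng.normal(0, 0.15, month.size)
df = pd.DataFrame({"month": month, "post": post,
                   "months_post": np.where(post == 1, month - 24, 0),
                   "los": los})

# Naive pre-post comparison: the secular trend looks like an effect.
print(df.groupby("post")["los"].mean())

# Segmented regression: baseline trend (month), level change (post),
# and slope change (months_post) are estimated separately.
its = smf.ols("los ~ month + post + months_post", data=df).fit()
print(its.params)
```

In the naive comparison the post-period mean is lower purely because of the secular trend, while the segmented model attributes that change to the baseline slope and estimates little additional level or slope change, which is why designs that ignore natural trends risk misattributing improvement to the intervention.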

While not an inherent limitation of these systematic reviews, one final consideration was that the reviews examined only how the presence or absence of different organizational characteristics was associated with quality. It may be more important to evaluate how an organizational characteristic supports the process of improving quality. This concept moves away from efforts focused on identifying distant relationships between features and instead explores how QI teams conduct improvement projects and how they use resources and otherwise interact with their surrounding environment. The first step in this process was to examine whether different organizational characteristics were associated with successful QI initiatives.

Relationships with Quality Improvement Efforts

The relationship between organizational characteristics and quality improvement efforts has been studied less, but there are three notable studies to consider. The first considers the process of organizational learning in neonatal intensive care units (NICUs). 56 This paper synthesizes theories from best-practice transfer, team learning, and process change to develop hypotheses testing the relationship between concepts such as learn-what (activities related to learning what the best practice is), learn-how (activities related to operationalizing or implementing the best practice), and psychological safety with success in a QI initiative. The data in the study represent 1,440 survey respondents spread over 23 NICUs. The results of the survey indicated that perceived implementation success was associated with respondents feeling there was a greater body of evidence supporting the intervention, a greater sense of psychological safety at the institution, and high use of learn-how activities. The authors did not find any association with learn-what activities, nor did any of the control variables measuring structural characteristics have any impact. Among the study's limitations were that it included only 23 NICUs, all of which had self-selected into the collaborative, and that response rates were low both among NICUs invited into the study and among providers at the NICUs that did participate. Although this study did not examine more traditional organizational characteristics, it did establish that certain characteristics are associated with perceived success at implementing a QI collaborative.

The next critical article was a systematic review that examined how organizational context was related to quality improvement success. The majority of the 47 studies in the review examined QI projects associated with the Total Quality Management (TQM) or Continuous Quality Improvement (CQI) approaches. 57 The analyzed studies most frequently measured QI success with pre-post data; a small selection reported only team-perceived success. Factors associated with improvement were management leadership, organizational culture, use of information systems, and prior experience with QI. There was additional support for physician involvement, microsystem motivation to change, available resources, and the quality of QI team leadership. The findings of this review were difficult to interpret, since it could only measure the factors included in the original reports, none of which had the specific goal of testing the role of particular organizational characteristics. As a result, any individual factor was mentioned in only about 20% of articles, leaving small sample sizes from which to draw conclusions. The strength of the paper is that it begins to identify a collection of variables that studies should evaluate when working to identify which organizational characteristics best support QI.

The last article to consider reported on 99 interviews conducted at 12 hospitals that participated in the Door-to-Balloon (D2B) Alliance. 17 The hospitals were recruited into this study based on the reported influence of the D2B Alliance on improving care at their hospital, with 6 reporting a strong influence and 6 a limited influence. The qualitative analysis of the interviews was based on a realistic evaluation framework focused on identifying the contextual environment that led to each hospital's perceived impact of the D2B Alliance. This analysis revealed that a perceived need to change, openness to external sources of information, and a strong champion for change were contextual factors consistently associated with the D2B Alliance having a strong impact. While this study considered only a small number of hospitals, the interviews provided a wealth of information on various organizational characteristics, offering the best assurance that the identified associations between organizational characteristics and QI success were at least true associations at those individual hospitals.

This collection of articles suggested that a number of factors can impact a team's success with a QI effort. In contrast to the prior section, the supported organizational characteristics are generally closely associated along the causal pathway with the measured outcome of interest. The most notable exceptions were the more broadly defined features such as psychological safety and organizational culture. While it is important to recognize that the identified organizational characteristics were associated with successful QI efforts, there is little available information on what constituted a good organizational culture or supportive leadership. The next challenge for healthcare QI may be determining how best to create the environment and the support structures necessary for effective QI.

In concluding this literature synthesis on the relationship between organizational characteristics and healthcare quality, three key concepts stand out. Future studies should focus on these concepts as they work to overcome the limitations of this prior work and begin to develop an understanding of how best to improve healthcare quality. First, there should be consideration of how organizational features and processes interact to support quality. Second, the overall context of an organization impacts its QI efforts; as such, analyses need to compare across multiple organizations in order to best understand the relationships between organizational characteristics and outcomes. Third, longitudinal analyses tied to specific interventions will help establish a causal relationship showing how structures support quality.

This study's analysis of the results from the FIX collaborative addresses some of these limitations. The analyses use survey data collected during FIX to understand how a large collection of organizational characteristics was associated with performance during FIX. The focus was to identify whether any modifiable organizational characteristics were part of a collection of characteristics commonly associated with success in FIX. The identified characteristics would then be potential targets for intervention, allowing an unsuccessful hospital to adopt changes that would help support future QI efforts.

Analytic Framework

In order to best understand how organizational characteristics related to FIX performance, the first step was to develop an analytic framework to structure the analyses. The starting point in this process was to identify a theoretical approach to guide its development. Based on the literature review, there was no established theoretical approach guiding the field. After surveying a selection of organizational theories, realistic evaluation was selected as the approach that best matched the purpose of this analysis. Realistic evaluation theory, originally developed to improve the quality of evaluation for public policy interventions, focuses on understanding the context of the situation where an intervention occurs and how factors interact to produce the observed result. 58 A common quote that succinctly summarizes the theory is to understand what works for whom in what circumstances. 58 In effect, the work argues that success in one situation will not always translate to another, and that it is a complex interaction of factors that results in improvement or failure.

This theory contributes two important characteristics to this analysis. First, it led to the decision to use a data mining approach to analyze the data; the support for this decision is discussed in the next chapter. Second, it provides the superstructure for the framework. This superstructure conceptualizes a QI effort (in this case FIX) as an external stimulus applied to a specific organizational context. This organizational context responds to the QI effort and produces a set of measurable outcomes. A model of this framework is outlined in Figure 5-1.
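As a brief, hypothetical illustration of the kind of data-mining analysis this decision implies, the sketch below fits a shallow classification tree relating a few contextual variables (one drawn from each of the four classes described below) to a four-level performance class. The variable names, the synthetic data, and the choice of a decision tree here are illustrative assumptions, not the analysis reported in later chapters.

```python
# Minimal sketch: a shallow decision tree relating hypothetical contextual
# variables to a hypothetical four-level QI performance class.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 120  # hypothetical hospitals
X = pd.DataFrame({
    "bed_count": rng.integers(50, 800, n),        # facility structure
    "dedicated_qi_staff": rng.integers(0, 2, n),  # QI structure
    "weekly_data_review": rng.integers(0, 2, n),  # QI process
    "team_tenure_years": rng.uniform(0, 10, n),   # team character
})
# Hypothetical performance class (0 = no change ... 3 = sustained improver).
y = rng.integers(0, 4, n)

# A shallow tree with a minimum leaf size keeps each split readable as a
# candidate hypothesis rather than a definitive predictor.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

Constraining tree depth and leaf size keeps the resulting splits interpretable as hypotheses about which combinations of contextual factors travel together with performance, which matches the hypothesis-generating intent of this analysis.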

This superstructure, however, does not address the key objective of realist evaluation, which is to thoroughly understand the characteristics of the organizational context and how those characteristics interact to generate the outcomes. Doing so required developing a more detailed model of the organizational context that shapes a QI effort. This process began with a consideration of the organizational characteristics covered in the literature, which revealed no succinct list of factors but instead suggested that factors may be grouped into specific classes. A further refinement of this concept came from a review of the SQUIRE (Standards for QUality Improvement Reporting Excellence) Guidelines. 59 These publication guidelines encourage authors to describe various aspects of the organizational context that might impact a QI project. Together, these two considerations led to the identification of four classes of contextual factors that may impact success with QI efforts: 1) facility structure, 2) QI structure, 3) QI processes, and 4) team character.

Figure 5-1: Analytic framework for how organizational context impacts QI

The first class, facility structure, represented factors describing the basic structural characteristics of the healthcare institution. These factors were conceptualized as generally unmodifiable variables (e.g., facility size). Despite their unmodifiable nature, these factors create a critical foundation that not only
