Emergency Department Wait Time Modelling and Prediction at North York General Hospital


Emergency Department Wait Time Modelling and Prediction at North York General Hospital

by

Amanda Y. S. Bell

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Department of Mechanical and Industrial Engineering
University of Toronto

Copyright by Amanda Bell 2015

Abstract

Emergency Department Wait Time Modelling and Prediction at North York General Hospital

Amanda Y. S. Bell
Master of Applied Science
Department of Mechanical and Industrial Engineering
University of Toronto
2015

At North York General Hospital's Yellow Zone (YZ), providers seek real-time forecasts of waiting times to improve communication and prescribe actions to relieve patient crowding. Time series of patient waits to physician assessment (PIA) and to discharge from the YZ (PTE) were collected. Multivariable regression with ARIMA errors was fit to determine the patient-based and systematic predictors of wait times for YZ patients at time of arrival. A regression on occupancy with ARIMA(1,0,0)(0,1,1)_24 errors yielded the lowest error in forecasting PIA. Accuracy improved when grouping patients by acuity and complaint. ARIMA methods yielded low accuracy in forecasting PTE, and no correlations were found between PTE and acuity. 90% of patients waited up to minutes more than predicted for PIA and up to 118 minutes more for PTE. The results suggest that ARIMA models have utility in forecasting PIA but that alternative models should be used to improve PTE prediction.

Acknowledgements

Foremost, I would like to express my heartfelt gratitude to Professor Michael Carter, who first opened my eyes to the field of healthcare engineering and gave me a multitude of opportunities to pursue the work that I love. His continued support, patience, sense of humour, and wealth of knowledge were integral to the completion of this thesis. Thank you for everything. I extend my sincere thanks to the members of my thesis committee, Professor Dionne Aleman and Dr. Amy Liu, for their generously given time, helpful insights, and kind encouragement. Thank you as well to the team at North York General Hospital, namely Andrea Ennis, George Paraghamian, Dr. Kuldeep Sidhu, Sandy Marangos, and Stephanie Robinson, for their expertise and effort in supporting the research conducted in this thesis. Thank you to my labmates Carly, Tsoleen, and Jacqueline, for providing support, friendship, and for keeping me sane. Finally, thank you to both my London and Toronto families for helping me keep my head above water, for being patient when times were difficult, and for not asking me to explain my research too often.

Contents

List of Figures
List of Tables
1. Context of Wait Time Reporting Initiatives at North York General Hospital Emergency Department
2. Literature Review
   2.1. Factors affecting Patient Wait Time in the ED
   2.2. Precedence in Wait Time Modelling
        2.2.1. Online Emergency Wait Time Reporting in Ontario
        2.2.2. Online Emergency Wait Time Reporting in Canada
        2.2.3. Criticism of Online Emergency Wait Time Reporting for Canada
   2.3. Techniques for Wait Time Prediction
        2.3.1. Discrete Event Simulation
        2.3.2. Queuing Theory
        2.3.3. Time Series Modelling and Dynamic Regression Models
   2.4. Summary of Findings from Literature
3. Methodology
   3.1. Overview of Research Methodology
   3.2. Data Collection, Preparation, and Partitioning
   3.3. Model Validation and Prediction Accuracy
4. Analysis and Results for Regression Models fitted to PIA Time
   4.1. Sample ARIMA Model Fitting Methodology
   History of Attempted Methodologies
   Descriptive Analysis of PIA Time
   Results of Regression on PIA Time
   Forecasting for All Yellow Zone Patients
   Forecasting for Patients by Acuity and Complaint
   Results Summary and Validation for PIA Time Models
   Discussion of Results for PIA Time Models
   Precision and Accuracy of PIA Time Models

   Significance of the Order of ARIMA Errors in PIA Time Models
   Occupancy by CTAS as a Regressor for PIA Time Forecasting
5. Analysis and Results for Regression Models fitted to PTE Time
   Descriptive Analysis of PTE Time
   Results of Regression on PTE Time
   Forecasting for All Yellow Zone Patients
   Forecasting for Patients by Acuity and Complaint
   Results Summary and Validation for PTE Time Models
   Discussion of Results for PTE Time Models
6. Discussion
   Implications of Forecasting Waits for YZ Operational Improvement
   Implications for the Design of Real-Time Wait Reporting and Patient Satisfaction
   Research Limitations and Future Work
7. Conclusion
Appendix A: Yellow Zone Current State Documentation
Appendix B: Data Sources
Appendix C: Results Descriptive Analysis for PIA Time
Appendix D: Results Descriptive Analysis for PTE Time
Appendix E: ARIMA Results Supplemental Figures for PIA Time
Appendix F: Early Forecasting Attempts Results for Yellow Zone Patient Cohorts without Seasonal Component
Appendix G: Early Forecasting Attempts for All Yellow Zone Patients by Monthly Interval
Appendix H: Time Series Approximation
Appendix I: ARIMA Results Supplemental Figures for PTE Time

List of Figures

Figure 1. High Level Process Map of the Yellow Zone treatment process, highlighting the two major wait time durations and where process timestamps are recorded
Figure 2. Frequency of Presenting Complaints to the Yellow Zone
Figure 3. Simplified process of the Yellow Zone patient pathway, illustrating where timestamps are created and durations can be derived
Figure 4. Correlation Matrix showing strength of correlation between response values and available external regressors. A darker shade of blue indicates a stronger positive correlation
Figure 5. The autocorrelation found in the unchanged PIA Time series (a, b) is retained in the errors and residuals (d) of a linear regression of occupancy on the time series (c)
Figure 6. The average hourly PIA Time plotted against the time of day
Figure 7. ACF plot for the residuals of the optimal model for fitting the PIA Time series (all patients, one year)
Figure 8. Length of stay stratified by patient cohort
Figure 9. Significant changes in wait time means over the course of the yearly PIA, with average daily arrival volumes noted per PIA interval
Figure 10. Rolling PIA Time forecasts for All, CTAS 2, and CTAS 3 patients
Figure 11. Weekly Pattern of PIA Time showing Weekend vs. Weekday variation
Figure 12. Correlation Matrix showing strength of correlation between response values and available external regressors. A darker shade of blue indicates a stronger positive correlation
Figure 13. The average hourly PTE Time plotted against the time of day
Figure 14. Length of stay stratified by patient cohort
Figure 15. Breakout analysis shows no significant change in the median PTE time of 2.09 hours over the course of the year surveyed
Figure 16. Rolling PTE Time forecasts for All, CTAS 2, and CTAS 3 patients
Figure 17. Map of the Yellow Zone in the context of the Emergency Department
Figure 18. Process Map for Yellow Zone Patient Treatment and Intake
Figure 19. Boxplot showing PIA Time by Total Occupancy Volume
Figure 20. Boxplots showing partial linearity and homoscedasticity for PIA Time grouped by CTAS 2, 3, and 4 Occupancy
Figure 21. Boxplots showing partial linearity and homoscedasticity for PIA Time grouped by Pre-PIA CTAS 2, 3, and 4 Occupancy

Figure 22. Boxplots showing linearity and homoscedasticity for PIA Time grouped by Pre-PIA and Post-PIA Occupancy
Figure 23. Boxplot showing PTE Time by Total Occupancy Volume
Figure 24. Boxplots showing linearity and homoscedasticity for PTE Time grouped by Pre-PIA and Post-PIA Occupancy
Figure 25. Boxplot showing linearity and homoscedasticity for PTE Time grouped by Volume of Admissions to the hospital
Figure 26. Time Series Display of residuals for model fitting PIA Time, all patients in Interval 1. ACF and PACF show possible autocorrelation of errors at lag 13, but portmanteau test rejects autocorrelation hypothesis
Figure 27. Fitted model (green) shown over actual time series (black) for CTAS 2 Patients, Interval
Figure 28. Fitted model (green) shown over actual time series (black) for CTAS 3 Patients, Interval
Figure 29. Fitted model (green) shown over actual time series (black) for Abdominal Pain Patients, Interval
Figure 30. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation (Blue), CTAS 2 patients
Figure 31. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation (Blue), CTAS 3 patients
Figure 32. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation (Blue), Abdominal Pain patients
Figure 33. Time Series Display of residuals for model fitting PTE Time, all patients in Interval 1. ACF and PACF show possible autocorrelation of errors at lag 24, but portmanteau test rejects autocorrelation hypothesis
Figure 34. Fitted model (green) shown over actual time series (black) for CTAS 2 Patients, Interval
Figure 35. Fitted model (green) shown over actual time series (black) for CTAS 3 Patients, Interval
Figure 36. Fitted model (green) shown over actual time series (black) for Abdominal Pain Patients, Interval

List of Tables

Table 1. Guidelines for categorizing presenting complaints into the major complaint categories, estimated to comprise 80% of Yellow Zone presenting complaints
Table 2. Yellow Zone Covariates as correlated with PIA Time
Table 3. Descriptive Analysis for patient population for September 2013 to August 2014
Table 4. Summary of distinct intervals found in the studied PIA time series
Table 5. Coefficient values for ARIMA model for Interval 1 PIA time series
Table 6. Size and stationarity of patient subgroups by acuity and presenting complaint for Interval
Table 7. Model results for patient acuity and complaint subgroups
Table 8. Summary of PIA Time Model Results
Table 9. Analysis of Residuals and Validation Criteria by Model
Table 10. Validation results of PIA Time models, in minutes
Table 11. Summary of ARIMA Error Orders in all regression models
Table 12. Yellow Zone Covariates as correlated with PTE Time
Table 13. Descriptive Analysis for patient population for September 2013 to August 2014
Table 14. Coefficient values for ARIMA model for Interval 1 PTE time series
Table 15. Model results for patient acuity and complaint subgroups
Table 15. Summary of PTE Time Model Results
Table 16. Analysis of Residuals and Validation Criteria by PTE Time Model
Table 18. Validation results of PTE Time models, in hours and minutes
Table 20. Voice of Business Analysis with Yellow Zone Project Team
Table 21. Yellow Zone Individual Patient Record Data Field Definitions
Table 22. AC Zone (Acute Care Zone) Data Field Definitions
Table 23. Acute Zone Data Field Definitions
Table 24. Clinical Decisions Unit (ADU) Data Field Definitions
Table 25. Ambulatory Care Zone Data Field Definitions
Table 26. Emergency Department (all) Data Field Definitions
Table 27. Sample language variations in input for Presenting Complaint (Curr Imp / CC)
Table 28. Top 20 Complaints by Frequency, with iterative correction of variations in entry
Table 30. Forecasting confidence bounds for non-seasonal models by patient cohort

Table 31. Results of KPSS tests of Stationarity for each month in September 2013 to August 2014
Table 32. ARIMA Model Results for Monthly Time Series Segments
Table 33. ARIMA Results for PIA Time, Early Seasonal Forecasting Attempts

1. Context of Wait Time Reporting Initiatives at North York General Hospital Emergency Department

With wait times for emergency care in Ontario exceeding provincial targets, there is frequent demand by patients for information about the duration and cause of waiting times for care [1]. Concurrently, there is demand among emergency medicine clinicians for real-time information to support patient demand for care with resource allocation and capacity management strategies [2]. At North York General Hospital (NYGH), this demand for information about emergency wait times is highest among mid-level acuity patients [3]. Accordingly, emergency care providers at NYGH have expressed the desire to a) improve both patient and provider understanding of emergency department wait times, b) improve the flow of wait time information to and from patients and between Emergency Department staff, and c) prescribe actions to manage patient demand for emergency services [3]. It was proposed that a mathematical model be formulated to describe historical and projected wait times for individual patients and patient populations in NYGH's Emergency Department (ED). Furthermore, a literature review was proposed to summarize and compare possible applications of a wait time model, and to develop recommendations to NYGH management about best practices for ED wait time information usage.

At NYGH, real-time wait time reporting for the ED has been motivated by St. Mary's General Hospital's (SMGH) use of Oculys Health Informatics' ED Wait Times Information (EDWIN) Module to advertise the current wait time and the projected wait times for a patient to see a practitioner. The goal of the SMGH project is to level-load patient demand to times in the day when the wait time is shorter, and to do so in real time [4]. There have also been other efforts by Ontario hospitals and hospital networks to provide low-acuity patients with wait time estimates prior to arrival at the ED [5]. Empirical evidence has shown improved patient satisfaction upon provision of wait time information, which aligns with NYGH's current strategic focus on patient satisfaction [6, 7]. The NYGH team assigned to the project comprised Sandy Marangos (Director of Mental Health and Emergency Services), Stephanie Robinson (Quality Improvement Specialist), Andrea Ennis (ED Manager), and Dr. Kuldeep Sidhu (Emergency Medicine). As of April 2008, the Ontario Ministry of Health and Long-Term Care (MoHLTC) has been utilizing monthly reports from across the province to analyze and monitor ED treatment times in Ontario hospitals. Treatment time encompasses both the wait time to see a physician and the wait time from seeing the physician to leaving the ED, whether by discharge or admission [5]. NYGH's May 2014 reports to the MoHLTC show that NYGH is surpassing Ontario's target of treating 90% of all low-acuity ED patients in under four hours, with a 90th percentile value of 3.3 hours. For high-acuity patients, NYGH performs better than the provincial average, but remains 1.2 hours over the 90th percentile provincial target of 8 hours. This treatment time results in 90% of high-acuity patients staying in the ED for up to 9.2 hours, and leads to crowding in the ED while patients await discharge or admission instructions [8]. Accordingly, long wait times for emergency care continue to be a frequent complaint for patients across Canada, and are

correlated with adverse outcomes, reduced quality of care, patients leaving without being treated, and provider losses [3, 9]. Before investing in a system like Oculys, it is necessary to assess how such a system might perform at NYGH, and whether the investment would constitute an improvement to patient experience in the ED. Therefore, it is necessary to prove a) that individual patient wait times can be predicted for patients entering the ED, given real-time knowledge of the current demand and service capacity; and b) that wait time information can be used strategically to improve the patient experience. These two hypotheses will be tested in the Yellow Zone (YZ) area of the ED, an area for sub-acute patients to undergo diagnostic testing and assessment by a physician to determine the severity of their disposition [10]. The YZ was created in 2007 with the aim of adding capacity and improving patient flow in the ED by re-directing certain sub-acute patients (those able to use a chair rather than a bed while waiting for test results) to a separate waiting area. Approximately 30% of the population arriving at the ED is triaged to the YZ. Once in the YZ, a patient is assessed by a registered nurse while seated, and then seen by a physician in one of four assessment rooms (Figure 1). Since its creation, the YZ schedule has been updated to include three nurses at peak hours, dedicated physicians, and additional coverage hours to reflect known demand for sub-acute care [10]. A detailed map of the Yellow Zone and a process map of a patient's experience in the Yellow Zone can be found in Appendix A (Figure 18).

[Figure 1 depicts the Yellow Zone process flow: Arrival (timestamp: Status "Arrival") → wait to be called to Triage (<10 min) → triage as YZ patient → wait → Registration (patient given Yellow Folder) → wait → Nurse Assessment → wait → Physician Assessment in one of four YZ rooms (timestamp: Status "MD ED Name Entered") → wait → any of medication, imaging, consult → wait → Disposition Decision → patient leaves YZ (timestamp: Status "Discharge" or Status "Admit"). The two major durations are the wait from Arrival to first Physician Assessment and the wait from first Physician Assessment to Disposition Decision.]

Figure 1. High Level Process Map of the Yellow Zone treatment process, highlighting the two major wait time durations and where process timestamps are recorded.

The wait time reporting precedent set by SMGH is only one of a number of rationales for analyzing patient experience in the YZ. There is agreement among many sources that a) EDs are overcrowded, b) overcrowding negatively impacts wait times and length of stay, and c) mathematical models provide a valuable tool for understanding crowding and improving resource allocation [11, 12, 13]. However, these sources further indicate that the healthcare community's understanding of the factors that impact patient wait times and experience is still incomplete, and varies largely from ED to ED [14]. The challenges of informational gaps and inadequate knowledge transfer are estimated to cost up to 30% of Ontario's healthcare budget in errors, losses, and inefficiencies [15, 16]. Due to these challenges, the published body of research on ED patient experience deals primarily with length of stay (LOS) and wait time retroactively. No studies found thus far have

successfully made wait time predictions for individual patients at time of presentation to the ED or at a given point during the patient's stay. The knowledge created from these studies describes the average patient experience during a discrete time period, such as one day or one shift, and does not provide a method for estimating the costs of variation for individual patients. While these studies may inform future improvement efforts at one or more EDs, they cannot be used continuously to improve patient experience for individuals at the time of presentation to the ED [12].

2. Literature Review

The high complexity of each ED environment requires an evolving approach to knowledge modelling and management, which integrates known information about the factors affecting patient experience with applicable operations research and statistical techniques.

2.1. Factors affecting Patient Wait Time in the ED

While Ontario and Alberta lead other provinces for provincial wait time reporting, Canadians overall continue to wait longer in EDs than in 10 other developed countries, and there has been no trend of improvement in the last decade [17]. The factors which may affect emergency care wait times are frequently explored in healthcare improvement literature in an attempt to design interventions to manage and improve long wait times. Asplin et al. developed a widely used and adapted conceptual model that categorizes potential factors as Input, Throughput, or Output variables [18]. The American Academy of Emergency Medicine likens the categories of the model to Demand, Supply, and Capacity factors, respectively, and proposes operations research and management techniques for their measurement and improvement [11]. While the AAEM suggests over 10 measures per category, Hoot and Aronsky report that the most commonly studied inputs were non-urgent patients, frequent flyers, and seasonal influenza spread [9]. The most commonly studied throughput factor was ED staffing availability. The most commonly studied output factors were the number of available beds and the number of patients in alternate level of care (ALC), two bottlenecks that may affect the hospital's ability to remove patients from the ED. Reports from the Canadian Institute for Health Information (CIHI) corroborate that staff shortages, high patient volumes, high volumes of ALC patients, and slow discharge processes are

correlated with longer wait times in Ontario EDs [19, 20, 21]. CIHI adds that large teaching hospitals with larger service areas see longer wait times in the ED despite carrying fewer ALC patients than smaller hospitals. Patient input factors such as old age, high degree of urgency or illness, and decision to admit are also correlated with a longer overall stay in the ED, though high-urgency patients see a physician more quickly. Lastly, CIHI reports that certain times and dates correlate with higher patient volumes and longer wait times in the ED: service times are longest from 8:00am to 8:00pm, on weekdays rather than weekends, and during the spring and autumn. In recent studies of wait time, the impactful factors suggested by CIHI and by Hoot and Aronsky's review are demonstrated to hold, not only at the larger population level, but at individual hospitals [9, 19, 20]. A study of one Belgian hospital found that the number of patients present in the ED at any given time was well correlated with how long a population of medium-acuity to high-acuity patients would wait for assessment by a physician [22]. The study has the caveat that the correlation does not hold for other patient acuities in this setting, and occupancy should not be the only factor considered in a model of wait time. A Hong Kong-based study found that ED patients with similar treatment pathways also showed similar lengths of stay when grouped with a clustering algorithm [23].

2.2. Precedence in Wait Time Modelling

While ED wait time reporting has not yet been adopted nationally in Canada, a number of pilot projects have demonstrated the manner in which reported ED factors affecting wait time can be used to provide real-time information to patients. As of December 2014, online ED reporting websites exist for certain hospitals and health systems in Ontario, Alberta, and British Columbia.

The publication of wait time estimates is intended to influence the choices of low-urgency patients and to function as a decision-making tool for hospital management and clinicians [24, 25, 26]. The literature describing these projects may be an early indicator of the potential impact of wait time modelling on both patient populations and health care providers.

2.2.1. Online Emergency Wait Time Reporting in Ontario

Saint Mary's General Hospital was the first hospital in Ontario to report emergency wait times online, with the website launching in April. SMGH's website utilizes Oculys Health Informatics' EDWIN Module, which purports to provide real-time information about emergency department activity for prospective patients, and is updated every 20 minutes. EDWIN displays the current estimates for wait time to see physicians or nurse practitioners, the number of people waiting for care, the number being treated, and a six-hour wait time trend forecast [27]. Designed with input from hospital staff, patients, and visitors to the hospital, EDWIN aims to make the ED wait time situation more transparent. According to the SMGH website, EDWIN helps prospective patients choose when and where they seek care and helps clinicians recognize and mitigate high-risk patient occupancy scenarios [25]. The developers of the model behind EDWIN concluded that system-dependent factors had a greater impact on LOS than patient-dependent factors. Accordingly, the developers state that the model uses the ED's basic queuing behavior and a crowding factor to calculate wait time estimates [28]. Though EDWIN overestimates individual wait times 87% of the time, St. Mary's has reported high website traffic and positive outcomes since EDWIN's launch on their website [29]. Two years after the launch, SMGH has seen a sustained trend of 12% fewer low-acuity patients presenting to the ED, and observed shorter wait times for both low- and high-acuity patients [30, 31]. In response to criticisms that

the system would encourage high-acuity patients to avoid the ED, SMGH reports that volumes of high-acuity patients have not dropped since the launch of the website. At Grand River Hospital (GRH) in Kitchener and Hotel-Dieu Grace Hospital (HDGH) in Windsor, online wait time reporting is in a pilot stage at the time of this writing. Encouraged by the success of Oculys at SMGH, GRH launched the EDWIN module in Spring 2014, though the hospital has not yet confirmed whether it is experiencing the same benefits that SMGH observed [31]. HDGH currently uses the Oculys VIBE product internally, a decision-support tool which reports real-time data on patient flow measures such as bed utilization, admissions, discharges, transfers, alternate level of care volumes, and emergency room and operating room volumes [32]. Since adopting the technology, HDGH has reported improved speed in admissions to the hospital from the ED, as well as more efficient bed meetings. As of early 2013, the hospital expected to see further improvements in capacity management due to improved access and use of information, though no more recent reports are available [33].

2.2.2. Online Emergency Wait Time Reporting in Canada

Beyond Ontario, real-time wait time reporting is a recent development in Canada. Alberta Health Services began reporting wait times for four Calgary EDs in June 2011, and has since added five Urgent Care Centres in Calgary and ten Edmonton Area EDs [1]. The Alberta Health Services website automatically draws wait information from each hospital's ED software and shows a simpler picture of wait time than the EDWIN module: a list of the EDs and an estimate of the current wait time to see a physician [24]. The estimate is calculated based on the service rate of the available staff and the number and acuities of patients in the ED, and is updated every two minutes [24]. Despite fair website-usage rates and favourable public reception since the

website's launch, Alberta Health Services has not confirmed a significant improvement to wait times or ED arrivals, which remain on par with national averages. Providers and patients have speculated that the lack of improvement is representative of resource shortages and other barriers to primary care in the greater healthcare system [34, 35]. In April 2013, a dashboard similar to that of Alberta Health Services was launched by Vancouver Coastal Health and Providence Health Care for six Emergency Departments in British Columbia's Lower Mainland [36]. In addition to displaying the average and 90th percentile estimated wait time to see a physician, British Columbia's dashboard includes a trendline showing the trend of average wait times from the previous two hours [26]. As of December 2014, no improvements to wait times or low-acuity arrival rates have been reported for the six EDs reporting to the dashboard.

2.2.3. Criticism of Online Emergency Wait Time Reporting for Canada

Amid announcements that more Ontario hospitals will begin publishing real-time ED wait estimates, critics of real-time wait time reporting maintain that the initiative could be harmful to Canada's healthcare system. In addition to the aforementioned concern that advertising busy EDs may cause high-acuity patients to delay or altogether avoid necessary treatment, critics warn that shifting the responsibility to evaluate medical urgency from the system to the patient is at odds with Canada's aims to improve access to care [35, 37, 38]. In the United States, many hospital websites now provide updating estimates for ED wait time, including hospitals in California, Missouri, Michigan, Nevada, New Jersey, South Carolina, Utah, and Virginia. However, in the United States' competitive healthcare system, advertising wait time serves not only to divert patients away from crowded EDs, but as a marketing tool to

attract new patients to EDs with short wait times [39]. Critics state that in Canada, the added transparency only serves to advertise that current wait times are high and does little to incentivize internal ED process improvement, which would lead to lower turnaround times and a higher quality patient experience [38, 40]. Even in light of SMGH's successes with Oculys, SMGH President Don Shilton agrees that technologies which divert patients from the ED only work when supported by accurate wait time estimates, hospital-wide patient flow and process improvement, and timely access to primary care, home care, and mental health services [1, 35, 37].

2.3. Techniques for Wait Time Prediction

The majority of research on ED modelling has used discrete event simulation, queuing methods, and time series regression methods. These methods use data and parameters derived from common statistical methods based on normal or skewed distributions and generalized linear models [12, 41]. For each approach, it is necessary to determine how complex healthcare data is utilized to produce accurate and valid models of the ED, and to assess the approach's feasibility for use in the current study.

2.3.1. Discrete Event Simulation

When applied to healthcare, discrete event simulation (DES) can represent the ongoing ED operation as a series of individual patient events, such as an arrival or a start to treatment, that change the state of the entire ED system over time [42]. The valid application of DES requires a large amount of probabilistic data about the system being modelled. However, given valid data, simulation can model uncertainty, as well as describe the system's sensitivity to dynamic demand, process interdependencies, and capacity constraints [11, 43]. DES is well-suited to

modelling the ED and other healthcare settings because of the randomness of patient arrivals, highly dynamic resources, process performance variation, and the complexity of the rules and protocols which are used to determine patient treatment priorities. Accordingly, simulations can be useful for assessing the utilization of expensive ED resources, locating systemic bottlenecks and process inefficiencies, and evaluating ED improvement alternatives [44]. However, DES is a costly and time-consuming activity, which can limit its usefulness in fast-paced and high-risk environments [43]. Despite the many studies applying DES to the ED, the proportion that have had their recommendations implemented and tested in the ED is small [12]. In order to be useful for generating real-time wait time predictions for YZ patients, a DES model would require a costly information technology implementation to supplement NYGH's Emergency Department Information System (EDIS), which does not currently capture detailed process-timing data. The current study's time restrictions on data collection make DES modelling infeasible, but simulation could be used on a smaller scale to test interventions in future work.

2.3.2. Queuing Theory

Queuing theory is used in ED modelling for predicting steady-state queue lengths and population waiting times [45]. Queuing theory algorithms in healthcare attempt to find the shortest total wait time by estimating probabilistic waits at each stage of the process being modelled, using the arrival distribution of patients to the ED, the service time distribution, and the number of clinicians in the system [13]. The output of a successful queuing theory application is a description of the demand for ED services at equilibrium, which can be used to inform resource allocation and process design decisions. Currently, many prefabricated queuing models exist for

commercial application, encompassing a variety of possible service environments and model objectives, but expert practitioners warn that applying the wrong model could be very costly [13]. Reports from the Institute of Medicine advocate the use of queuing theory in EDs because the cost-saving aims of queuing models align with the conflicting goals of healthcare delivery: minimizing the costs and wait times of healthcare delivery while maximizing the utilization of healthcare resources, both human and machine [44]. However, the low number of instances of queuing theory's use in healthcare might be attributable to the restrictive assumptions of queuing models, which conflict with the realities of the ED environment [13]. Queuing models assume that queues can be infinitely long, that all patients can be served by the existing capacity, and that patients do not leave the queue. These assumptions are all empirically untrue in the emergency department setting, which has limited service capacity and restrictions on physical space. These assumptions, paired with the over-simplicity of assuming that clinicians have system-independent service times [11], make queuing theory ill-suited for use in the research at NYGH.

2.3.3. Time Series Modelling and Dynamic Regression Models

Time series models utilize parameter estimates and statistical principles to characterize the distribution of a dependent variable, e.g. service time, over a finite period. Where data follows the normal distribution, researchers may make inferences about the mean value and standard deviation of a sample of wait times, as well as make estimates of the proportion of patients experiencing a given wait time [41]. In the ED, where factors such as increasing hospital occupancy have been found to directly correlate with rising wait times, regression models may

be able to describe a mathematical relationship between the occupancy variable and LOS [46, 47]. Models developed using regression methods provide a more robust estimate of variance and mean than descriptive statistics alone, since regression calculates variance as the difference from an underlying trend. Accordingly, regression is less sensitive to outlying, skewed, and heavy-tailed data. Health care LOS data recorded over a long period of time will often contain a set of atypically long patient cases, meaning its empirical distribution is non-normal [48]. As well, LOS observations are autocorrelated, in that sequential recordings of wait time indicate rising and falling patterns over the course of a day rather than a random series [9]. In the case of autocorrelated data, linear regression models are inapplicable, since the assumption that error terms are independent is violated. Valid application of regression models to healthcare is contingent on understanding which models apply best to the data set available, and what data tendencies may impact the validity of the derived correlations. In many of the studies aiming to determine the correlation between time-ordered observations of length of stay and ED supply and demand variables, multivariable regression and autoregression terms are used in conjunction. In studies of ED LOS time series data, autoregressive integrated moving average (ARIMA) models are commonly applied to the errors of a linear regression model to account for the autocorrelation in the errors in the data [18, 49]. When fitted to time series data, the model asserts that the next value in the series can be predicted based upon a starting value, autoregressive (AR) terms, and moving average (MA) terms [50]. The combination of multivariable regression and ARIMA, also known as dynamic regression, allows healthcare researchers to forecast LOS in terms of an input variable, such as occupancy, as well as in terms of the systematic and random variation in the data.

In healthcare-specific studies using ARIMA to model wait times, ARIMA models were concluded to be useful for informing management about the generalized impacts of various ED factors on increasing wait times and crowding [49, 51, 52]. Separate regressions were used for distinct patient stratifiers, usually patient acuity, identified by the researchers [48]. However, a major shortcoming of the time series models in the studies reviewed is that the forecasts were made to predict an average wait time over a long interval of time. Among the studies found applying ARIMA to healthcare, the shortest interval used for wait time prediction was 8 hours, and high-accuracy (p-value < 0.05) forecasts are often made only at the daily level [49, 53]. This forecasting horizon means that such models cannot be used for real-time decision support or to make accurate wait-time predictions for individual patients, whose waits may vary by minutes or hours from the average. Another shortcoming is the common practice of modelling the wait time as a single duration, rather than as a series of sequential waits, each with differing correlations to covariates [48, 53, 51]. This type of model ignores the reality of the ED patient experience, which often contains distinct phases of care. Another limitation of using ARIMA in the healthcare setting is that the resultant models may be less extensible because of the specificity of the model formulation. In the formulation of a regression model, the factors which correlate to wait time are highly dependent on the setting. For example, hospital admissions were correlated with wait time in the ED studied by Rathlev et al. in 2011, but not at the ED studied by Forster et al. in 2003 [53, 51]. Furthermore, while time series and regression models may describe factors associated with high wait times, it is difficult to claim causality based on a study's findings.

2.4. Summary of Findings from Literature

All of the studies reviewed agree that there is no universal fix for poor ED operational performance, and that the use of models must be tempered with insight from management and with clinician expertise. This limitation is evident from the large number of distinct publications attempting to achieve similar patient flow improvements in different EDs [12]. Where ARIMA modelling methods are used, the studies rarely assert new theories, formulae, or principles of the method employed. The methods used by these studies imply that the ARIMA model is well-suited for time series modelling (it has been used successfully in other service industries for decades) and that the difficulty lies in the application and adaptation of the techniques to the new healthcare service settings. Differences between ED settings require the modelling strategy to be tailored to the specific issues present in the ED, and to be based on the input, throughput, and output measures available to the modeler. Recent studies of ED overcrowding show that, in a variety of settings, patient arrival and occupancy volumes are fair indicators of wait times for medium-acuity to high-acuity patients, the target population of the current study [3]. Patient volumes fluctuate by time of day, by weekday, and occasionally by season, and therefore a forecasting model of ED wait time must incorporate the effects of time intervals into the predictions made. Measures of patient age, acuity, type of illness, treatment pathway, and admission decision are also likely to be covariates of LOS at NYGH, and stratifications of the patient population may be drawn along these lines. Since the availability of resources affects service times, and high patient volumes put strain on resources, staff and material resource availability, such as inpatient beds, should be measured alongside the aforementioned demand variables. In order to make the model useful for real-time decision support, the model must make accurate short-term predictions and should indicate the

measurable thresholds at which action should be taken by clinical staff to mitigate crowding risks. The successes and limitations of existing online ED wait time models set a benchmark for the use of ED information to improve patient experience. Anecdotal evidence shows that using real-time data to describe and publicly report the ED's performance improves operational transparency for both patients and clinicians, and improves patient perception of the ED. However, precautions must be taken to ensure that the information created does not conflict with the aims of the hospital or cause harm to the patient population. The forecasts made must not only be understandable and useful to clinicians, but must also complement the delivery of safe and high-quality health care in the ED.

3. Methodology

NYGH's goal to improve both patient and provider understanding of emergency department wait times in the Yellow Zone aligns the current study with past efforts to model the ED. The assertion that individual patient wait times can be predicted for patients entering the YZ is based upon the concept that autoregression and time series can be used to find meaningful correlations between ED indicators and the wait time that individual patients experience. The commonly held ED model of input, throughput, and output factors and their effects on length of stay was used to influence which independent factors were tested against the dependent result of wait time in the YZ. In order to model the Yellow Zone, we assume that there exists a controlled process within the YZ from which service and wait time distributions can be derived. Additionally, the wait time of YZ patients must be viewed as the downstream effect of occupancy and other independent factors. In order to be useful to YZ clinicians, the information created by the model must be shown to be reliable through low variance from the true process. Variance is assessed as the proportion of wait time estimates falling within a given number of minutes of the empirical value. This theoretical framework asserts that a validated, reliable model can be used to report on ED performance, forecast demand for ED services, aid in ED capacity planning, and improve patient satisfaction through the creation and distribution of information. Based on the desire for information about both trends in wait times and the impacts of systematic and individual factors on wait times, a multivariable regression model with ARIMA errors was fit to the available wait time data. The model is intended to estimate the correlation between the current wait time of an individual observation and previous observations of wait times, previous

estimations of wait times, seasonal effects, and system covariates such as patient occupancy. Usage of the model is proposed to inform ED management of the status of YZ operations, which in turn can be used to mitigate high wait time scenarios that increase clinician workload and cause patient dissatisfaction. Furthermore, the model is expected to provide guidance for strategies in communicating wait time estimates to patients. Beyond the applications in the Yellow Zone, the current study aims to build upon the current framework for understanding patient experience by adding new knowledge about the dependencies, policies, and practices which determine patient wait times in the ED. An accurate and validated model may be able to predict wait times at EDs other than NYGH that utilize similar processes or treat a similar patient cohort.

3.1. Overview of Research Methodology

The ARIMA model is an extension of the Autoregressive Moving Average (ARMA) model in which one or more first differences are taken from the series to create a stationary trend that can be forecasted more easily [54]. ARIMA models without seasonal factors are commonly denoted as ARIMA(p,d,q), where p, d, and q are the number of AR terms, first differences, and MA terms to be used in the formulation of the model. The suffix (p,d,q) is referred to as the order of the ARIMA model. An undifferenced time series can be represented as an ARMA model or as an ARIMA model with d = 0, as seen in the generalized ARMA forecasting equation (Equation 1). The formulation posits that if the series is stationary and does not contain seasonal trends, a prediction of the next value in the series can be made based upon a constant, zero or more AR terms, and zero or more MA terms [50].

x_t = c + \sum_{i=1}^{p} \varphi_i x_{t-i} + \sum_{k=1}^{q} \theta_k \varepsilon_{t-k} + \varepsilon_t \quad (1)

In Equation 1, x_t is the value at time t in the time series, \varepsilon_t is the error term at time t, c is a constant at the process mean, p and q are the orders of the AR and MA terms, and \varphi_i and \theta_k are coefficients for the i-th AR term and the k-th MA term, respectively. Where the time series has a non-stationary mean or linear trend, which can be tested statistically in a number of ways, a stationary, differenced series can be created by subtracting the previous observation from the latest observation, x'_t = x_t - x_{t-1}. In the case that the series has seasonal trends, the differenced series is created by subtracting the one-season-lagged observation, x_{t-s}, from the latest observation. This differenced series allows for the creation of a model based on period-to-period or season-to-season change, the numbers of differences taken being denoted by d and D respectively, rather than on distribution around a constant or mean [50, 53]. Once differenced, AR terms can be calculated via maximum likelihood estimation, which describes the degree to which the predicted value is linearly dependent on the previous sequence of values. An MA term describes the degree to which unexpected shocks in the process are smoothed to fit the linear model [55]. The seasonal ARIMA model with season length s is denoted as ARIMA(p,d,q)(P,D,Q)_s, where the suffix (P,D,Q)_s is added to describe the number of seasonal terms to be added to the non-seasonal ARIMA model. A seasonal trend in a time series is removed by taking one seasonal difference (D = 1), and orders of autoregression (P) and smoothing (Q) for the one-season-lagged difference can be added to the model to improve the forecast. Autocorrelation function (ACF) and partial autocorrelation function (PACF) plots can be used to determine whether to add AR terms, MA terms, or both to the model [51].
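To make the differencing and order-selection steps concrete, the following is a minimal R sketch using the forecast package (the tool named later in this chapter); the input series pia_hourly is a hypothetical name standing in for an hourly wait time series, not a field from the thesis data.

```r
# Minimal order-selection sketch; pia_hourly is a hypothetical numeric vector
# of hourly mean wait times.
library(forecast)

pia_ts <- ts(pia_hourly, frequency = 24)  # 24-hour season

nsdiffs(pia_ts)  # suggested number of seasonal differences (D)
ndiffs(pia_ts)   # suggested number of first differences (d)

# Inspect the seasonally differenced series for AR/MA signatures:
diffed <- diff(pia_ts, lag = 24)
acf(diffed)   # spikes cutting off after lag q suggest MA(q) terms
pacf(diffed)  # spikes cutting off after lag p suggest AR(p) terms
```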

In practice, healthcare researchers derive the number of AR and MA terms by incrementing the order of p and q until a model with low mean-squared error is found [46, 48, 54]. The goodness of fit of one ARIMA model over another can be assessed by the Akaike Information Criterion (AIC), an estimate of the distance between the probability density function of the selected model and the probability density function of the time series data [55, 56]. The overall capability of the model can be evaluated based on the root mean squared error (RMSE), which functions as an estimate of the white noise standard deviation [52].

y_t = c + \beta x_t + \eta_t \quad (2)

\left\{\left(1 - \sum_{i=1}^{p} \varphi_i B^i\right)(1 - B)^d\right\} \left\{\left(1 - \sum_{m=1}^{P} \Phi_m B^{ms}\right)(1 - B^s)^D\right\} \eta_t = \left\{1 + \sum_{k=1}^{q} \theta_k B^k\right\} \left\{1 + \sum_{n=1}^{Q} \Theta_n B^{ns}\right\} \varepsilon_t \quad (3)

The multivariable regression with seasonal ARIMA errors model formulae are depicted above as a cascade of a multivariable regression model on the series y_t and a seasonal ARIMA model on the series of autocorrelated regression errors, \eta_t [54]. For ease of understanding, in Equation 3, ordinary ARIMA terms are shown in the first set of brackets on each side, and seasonal ARIMA terms in the second; B denotes the backshift operator, B x_t = x_{t-1}. In Equations 2 and 3, y_t is the wait time at time t, \beta is the constant coefficient of the independent covariate, x_t is the value of the independent covariate at time t, s is the length of one season, \eta_t is the autocorrelated error term of y_t at time t, \varepsilon_t is the independent error term of the model at time t, p, d, and q are the orders of the AR, I, and MA terms, respectively,

P and Q are the orders of the Seasonal AR and MA terms, respectively, and \varphi_i, \theta_k, \Phi_m, and \Theta_n are coefficients for the i-th AR term, k-th MA term, m-th SAR term, and n-th SMA term, respectively. The regression equation was fit to time-series data from the Yellow Zone process using the package forecast, created by Rob J. Hyndman for the R statistical programming language. This method uses maximum likelihood estimation to determine the ARIMA and independent variable coefficients for the given time series and independent regressors [56]. The function was run iteratively to test whether better models could be found by incrementing or decrementing the orders of the ARIMA terms. The model's forecasting capability was assessed using the variance of residuals, AIC, and the root-mean-squared-error (RMSE) value. Seasonal ARIMA models were fit separately to the two major phases in the care process in order to account for the differing effect of YZ factors on wait time before and after seeing a physician [48, 53]. In order to account for the effect of the factors that are patient-based as opposed to process-based, patients were clustered into groups based on characteristics found to be significant. For each distinct population of patients found, a separate regression was calculated, with the aim of reducing the total variance of all regressions.
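As an illustration of this fitting procedure, the sketch below calls the forecast package's Arima function with an external regressor. The names pia_ts and occupancy are hypothetical stand-ins for the hourly PIA series and the occupancy series, and the order shown is the ARIMA(1,0,0)(0,1,1)_24 formulation that the abstract reports as the best-performing PIA model; this is not a verbatim excerpt of the thesis code.

```r
# Dynamic regression sketch: PIA time regressed on occupancy, with seasonal
# ARIMA errors; pia_ts (frequency 24) and occupancy are hypothetical names.
library(forecast)

fit <- Arima(pia_ts,
             order    = c(1, 0, 0),                            # (p, d, q)
             seasonal = list(order = c(0, 1, 1), period = 24), # (P, D, Q)_24
             xreg     = occupancy,
             method   = "ML")                                  # maximum likelihood

AIC(fit)                 # compare candidate orders by AIC
accuracy(fit)[, "RMSE"]  # in-sample RMSE
checkresiduals(fit)      # residual ACF plot and Ljung-Box portmanteau test

# Forecast ahead; the horizon equals the number of future occupancy values given:
fc <- forecast(fit, xreg = occupancy_future, level = 80)
```

Incrementing or decrementing the entries of order and seasonal and re-fitting reproduces the iterative order search described above.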

3.2. Data Collection, Preparation, and Partitioning

The dataset used for analysis was drawn from the Wellsoft Information System used by the ED, and from Yellow Zone and ED reports prepared by members of the Patient Experience and Quality team at NYGH. Two years of historical data about the ED and YZ were collected, spanning September 2011 to August 2013, as well as two years of individual patient visit data, spanning September 2012 to August 2014. The historical data contained daily or weekly values of summary statistics, such as the cumulative number of patient visits to a given ward, and the patient visit data contained more detailed information about each patient, such as acuity, wait time, and presenting complaint. The complete description of the data collected can be found in Appendix B. As mentioned in the methodological overview, the variations to the process introduced by patient-based factors were accounted for in order to make more accurate individual predictions. Therefore, the patient visit records were clustered according to easily identifiable, yet significant indicators to reduce the variation of all regressions. Utilizing the Pareto Principle, or 80/20 rule, along with guidance from NYGH, it was assumed that a small number of distinct presenting complaints accounted for the majority of patients treated in the YZ (Table 1) [57]. The 12,300 distinct plaintext entries describing the patient's chief health complaint (titled Curr Imp/CC in Appendix B) were manually reviewed, first for spelling errors in entry, then for categorization according to the guidelines. Fields were created in order to store the new information while keeping the original data. For example, the sample entry "Ruq Pian/Weak" was first corrected to read "Right Upper Quadrant Pain/Weak" and then categorized as the major complaint Abdominal Pain. When the patient had multiple complaints which fell into more than one category, abdominal pain took precedence over other symptoms. If the patient did not experience abdominal pain, then the first complaint was taken to be the primary complaint.

Table 1. Guidelines for categorizing presenting complaints into the major complaint categories, estimated to comprise 80% of Yellow Zone presenting complaints.

Abdominal Pain: Right Lower Quadrant (RLQ) Pain, Right Upper Quadrant (RUQ), Left Lower Quadrant (LLQ), Left Upper Quadrant (LUQ), Flank Pain, Epigastric Pain, Lower Abdominal Pain, Possible Appendicitis, Returning Ultrasound / CT Scan

Bleeding: Per Vaginal bleeding, Pregnant bleeding, Possible Ectopic Pregnancy, Hematuria, Urinary Retention, Per Rectum bleeding, Blood in Stool, Epistaxis

Shortness of Breath: Asthma, Cough, Foreign Body in Throat, Difficulty Breathing, Swallowing, Sore throat, Possible Peritonsillar abscess, Gasping, Spinning, Chest Pain, Wheezing

Headache: Headache, Migraine, Double vision, Vertigo, Numbness, Facial droop, Weakness, Dizziness, Fainting, Fatigue, Neck pain, Multiple Complaints

Fever: Dysuria, Back Pain, Cellulitis, Rash, Hives, Leg swelling

For example, of the 25,500 individual patient records in 2013, 16,939 were encompassed by the five major complaint categories specified by NYGH (66.4%). Ergo, more major categories were required to constitute 80% of the patient population. An additional category was created for Ambulatory complaints, such as falls, lacerations, and musculoskeletal pain, which are low-urgency complaints directed to the YZ when the Ambulatory unit is closed at night. Inspection of the most commonly presented complaints led to the creation of the additional major categories Allergic Reaction, Bowel Symptoms, Injury (Ambulatory), Rectal Issue, Specific Pain, and others. Inspection of the Pareto Chart of the revised major complaints (Figure 2) indicated which major complaints should be selected in order to encompass 80% of patients in the YZ. 80% of the population is attained after the

inclusion of the Fever category. Additional figures showing the iterative correction of the presenting complaints can be found in Appendix B. In order to prepare the materials received from NYGH for descriptive and time series analysis, a number of transformations and aggregations of the raw data were required. For daily measures of ED and YZ factors, the daily values were joined by date index for the two years of data collected. Furthermore, numerical values were converted to decimal format in order to make process timings directly comparable to one another.

Figure 2. Frequency of Presenting Complaints to the Yellow Zone.
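Although the complaint review itself was manual, the precedence rule described above can be illustrated with a short R sketch; the data frame visits, the column CurrImpCC, and the keyword patterns are all hypothetical and are not the actual NYGH categorization guidelines.

```r
# Illustrative complaint categorization with abdominal-pain precedence;
# visits$CurrImpCC is a hypothetical cleaned plaintext complaint column.
categorize_complaint <- function(text) {
  x <- toupper(text)
  # Testing Abdominal Pain first gives it precedence over other categories:
  if (grepl("RUQ|RLQ|LUQ|LLQ|ABDOMINAL|FLANK|EPIGASTRIC|APPENDIC", x)) return("Abdominal Pain")
  if (grepl("BLEED|HEMATURIA|EPISTAXIS|BLOOD IN STOOL", x))            return("Bleeding")
  if (grepl("BREATH|ASTHMA|COUGH|WHEEZ|CHEST PAIN", x))                return("Shortness of Breath")
  if (grepl("HEADACHE|MIGRAINE|VERTIGO|DIZZ|FAINT", x))                return("Headache")
  if (grepl("FEVER|DYSURIA|CELLULITIS|RASH|HIVES", x))                 return("Fever")
  "Other"  # remaining complaints fall into the additional categories
}

visits$MajorComplaint <- vapply(visits$CurrImpCC, categorize_complaint, character(1))
```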

For each individual patient visit record, three timestamps are recorded: patient arrival, initial assessment by physician, and time leaving the YZ. From these timestamps, durations for Time to Physician Initial Assessment (PIA), Time from PIA to Admission or Discharge Decision (PTE), and total LOS were calculated (Figure 3). Additionally, in order to convert the continuous timestamp data into discrete intervals for time series modelling, each of the three timestamp fields was replicated and rounded down to the nearest day and hour. For example, the field Arrival contains the precise minute of a patient's arrival; the calculated fields ArrivalDay and ArrivalHour hold that timestamp rounded down to the day and the hour, respectively. The arrival times of the patients were rounded to the preceding hour in order to aggregate patient wait times into groups by Arrival Hour, and a time series of hourly YZ statistics was created. Additionally, the connection of patient wait times to hour of arrival facilitates the prediction of future wait times according to the status of the YZ at the patient's hour of arrival, rather than by factors appearing downstream in the treatment process.

[Figure 3 depicts three timestamps T1 (arrival), T2 (first physician assessment), and T3 (leaving the YZ), with PIA Time = T2 - T1, PTE Time = T3 - T2, and LOS = T3 - T1 (target: 90% < 4 hrs).]

Figure 3. Simplified process of the Yellow Zone patient pathway, illustrating where timestamps are created and durations can be derived.
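A brief R sketch of these derivations follows, assuming POSIXct timestamp columns Arrival, PIA, and LeftYZ in a data frame visits; all field names are hypothetical.

```r
# Durations in minutes from the three recorded timestamps (T1, T2, T3):
visits$PIA_Time <- as.numeric(difftime(visits$PIA,    visits$Arrival, units = "mins"))
visits$PTE_Time <- as.numeric(difftime(visits$LeftYZ, visits$PIA,     units = "mins"))
visits$LOS      <- as.numeric(difftime(visits$LeftYZ, visits$Arrival, units = "mins"))

# Round each arrival down to the preceding hour, then build the hourly series:
visits$ArrivalHour <- as.POSIXct(trunc(visits$Arrival, units = "hours"))
pia_hourly <- tapply(visits$PIA_Time, visits$ArrivalHour, mean, na.rm = TRUE)
```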

For each time interval, the instantaneous patient occupancy was calculated based on the prior observation of occupancy and the volume of patient arrivals, discharges, LWBS, and admittances. Since the presumed occupancy at the start of the time series is zero, the first two days of the time series shall be excluded from the forecast in order to account for the data initialization period. It is important to note that, for an unknown reason, LWBS patient visits were not captured in the individual patient visit data for the span of January 2013 to April. For this reason, patient occupancy could not be calculated accurately for these months, and this time span was omitted from regressions including occupancy as a covariate. The 938 (3.4%) patients who left the YZ without being seen (LWBS) were removed from the data frame, as service durations cannot be determined for patients without a Time Leaving YZ timestamp. However, the volume of LWBS patients was recorded per time interval and retained as a possible covariate of LOS. Further outliers were removed from the individual patient records based on atypical patient characteristics and patient processes. Based on the low prevalence of CTAS 1 and CTAS 5 patients in the YZ, 0.03% and 0.2% of the total population respectively, patients of these acuities were removed from the data frame. Additionally, patients who visited the YZ but received care from other wards during their time in the ED (3.6%) were excluded on the supposition that they were improperly triaged to the YZ or required resources beyond those available to the YZ.
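The occupancy recursion described at the start of this subsection reduces to a running sum, as in the sketch below; arrivals, discharges, admits, and lwbs are hypothetical hourly count vectors aligned to the same sequence of hours.

```r
# occupancy[t] = occupancy[t-1] + arrivals[t] - discharges[t] - admits[t] - lwbs[t],
# with occupancy presumed to be zero before the first hour:
occupancy <- cumsum(arrivals - discharges - admits - lwbs)

# Exclude the first two days (48 hourly observations) as the initialization period:
occupancy <- occupancy[-(1:48)]
```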

the effect of weekday on waiting time, was tested in each model but found to have negligible significance. The continued validity of the regression models was tested by applying the regression formulae to a recent sample of ED data and assessing the goodness of fit on the new data.

The residuals of each model are used to assess goodness of fit, and each model's prediction capability can be ranked by the model variance and AIC. The distribution of the residuals is used to estimate the capability of the model to predict future wait times. Since NYGH bases its wait time targets on a 90th percentile measurement, an 80% confidence interval (spanning the 10th to 90th percentile estimation of the predicted value) was used to assess the accuracy of the model. The proportion of actual wait times exceeding the values of the calculated 90% prediction bounds is used to assess each model's validity and utility for forecasting. Since underestimation of the expected wait time is more detrimental to patient satisfaction than overestimation, the proportion of patients waiting within 15, 30, and 60 minutes over the predicted value was also calculated [59].
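To make the assessment criteria concrete, the following is a minimal R sketch of the residual-based accuracy measures described above; the residual vector `res` (in hours, actual minus predicted) is an assumed input, and the percentile bounds and minute thresholds mirror the text.

```r
# Sketch: accuracy criteria computed from model residuals.
# Assumes `res` holds regression residuals in hours (actual - predicted).
bounds <- quantile(res, probs = c(0.10, 0.90))  # 10th-90th percentile interval

prop_below <- mean(res < bounds[1])  # waits overestimated beyond the lower bound
prop_above <- mean(res > bounds[2])  # waits underestimated beyond the upper bound

# Proportion of patients overestimated, or underestimated by X minutes or less
for (x_min in c(15, 30, 60)) {
  p <- mean(res <= x_min / 60)       # residuals are in hours
  cat(sprintf("Overestimated or underestimated by %d min or less: %.1f%%\n",
              x_min, 100 * p))
}
```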

4. Analysis and Results for Regression Models fitted to PIA Time

This section presents the results of the design methodology as applied to a time series of the first major time interval, PIA Time, derived from the set of individual patient data spanning September 2013 to August 2014. The characteristics of the chosen patient population are described, followed by a summarization of the model results. The model fitted for the entire Yellow Zone population is considered and contrasted with models applied to specific patient cohorts. The validation section presents the results of validation tests for the whole population and sub-populations together, and the discussion section analyzes the numerical results as they pertain to precision and accuracy for forecasting. The detailed model result tables and graphs can be found in Appendices C and E.

4.1. Sample ARIMA Model Fitting Methodology

In order to select the appropriate external regressors and ARIMA model for fitting Yellow Zone time series, a trial regression model was fit to the PIA time series for the whole patient population from September 2013 to August 2014; i.e., no distinction is made between patients of differing acuities or complaints. To determine which external regressors to include in the regression, it was necessary to determine the covariates that are most highly correlated with PIA, as well as with other covariates. A correlation matrix of the collected variables (Figure 4) shows that the volume of patients yet to see the physician (before-md.all) has the strongest positive correlation with PIA time, and that Total Occupancy, Occupancy of CTAS 2 patients, and Occupancy of CTAS 3 patients have a lesser, but still significant, positive correlation with PIA Time. Accordingly, the external regressors of occupancy were emphasized for inclusion in the linear model, though other available external regressors were tested for significance.

Figure 4. Correlation matrix showing strength of correlation between response values and available external regressors (occupancy.all, oc.2, oc.3, oc.4, before-md.all, before-md.2, before-md.3, before-md.4, after-md.all, Arrivals.All, Arrivals.2, Arrivals.3, Arrivals.4, Admits.All, Dsch.All, PIA.All, PTE.All). A darker shade of blue indicates a stronger positive correlation.

In order to assess the suitability of each covariate for inclusion in the model, the assumptions of linearity and homoscedasticity were tested using boxplots of the PIA Time against the predictive covariate, as well as the Breusch-Pagan test for constant variance (Table 2). As a general rule, occupancy was found to correlate linearly with PIA, whereas arrivals, discharges, and admissions in the YZ did not. Because the number of observations for extremely high occupancy scenarios is low, a number of the occupancy-wait time boxplots show non-linear trends as the value of the predictor increases. However, where the trend is strong for the majority of the observations approaching the non-conforming segment, and correlation is found in the correlation matrix (Figure 4), the trend is assumed to continue despite graphical evidence. Where the boxplot and Breusch-Pagan tests indicated conflicting results, the covariate was marked to be tested empirically through inclusion in the regression model. The boxplot graphs for measures of occupancy for PIA Time can be found in Appendix C.
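As an illustration of this screening step, the R sketch below applies the Breusch-Pagan test and a binned boxplot to one candidate covariate; the data frame `yz` and its column names are assumed for illustration and follow the regressor labels in Figure 4.

```r
# Sketch: screening a candidate covariate for linearity and constant variance.
# Assumes an hourly data frame `yz` with columns PIA.All and occupancy.all.
library(lmtest)   # provides bptest()

fit <- lm(PIA.All ~ occupancy.all, data = yz)   # simple linear fit
bptest(fit)       # Breusch-Pagan: p < 0.05 rejects constant variance

# Boxplot of PIA time against binned occupancy to judge linearity by eye
boxplot(PIA.All ~ cut(occupancy.all, breaks = 10), data = yz,
        xlab = "Yellow Zone occupancy (binned)", ylab = "PIA Time (hours)")
```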

Table 2. Yellow Zone covariates as correlated with PIA time.

| Covariate (by hour) | Sub-group covariate | Linear trend w.r.t. PIA Time | Breusch-Pagan statistic | Suitable for regression |
|---|---|---|---|---|
| Yellow Zone Occupancy | All | Strong positive | | Yes |
| | Pre-PIA Occupancy | Strong positive | <0.05 | To be tested in model |
| | Post-PIA Occupancy | Weak positive | <0.05 | No |
| | CTAS 2 Occupancy | Positive | | Yes |
| | CTAS 3 Occupancy | Strong positive | | Yes |
| | CTAS 4 Occupancy | Positive | | Yes |
| | Pre-PIA CTAS 2 Occupancy | Positive | <0.05 | To be tested in model |
| | Pre-PIA CTAS 3 Occupancy | Positive | <0.05 | To be tested in model |
| | Pre-PIA CTAS 4 Occupancy | Weak positive | | To be tested in model |
| Yellow Zone Arrivals | All | Non-linear | <0.05 | No |
| | CTAS 2 Arrivals | Weak positive | <0.05 | No |
| | CTAS 3 Arrivals | Weak positive | <0.05 | No |
| | CTAS 4 Arrivals | Non-linear | | No |
| Admissions from Yellow Zone | | Non-linear | <0.05 | No |
| Discharges from Yellow Zone | | Weak positive | <0.05 | No |

The sub-group covariates, such as occupancy stratified by the acuity of the patients, were included based upon feedback from NYGH clinicians and on the information easily obtained by clinicians from the Wellsoft EDIS in real time. Since sub-group occupancy measures are logically collinear, the covariates suitable for regression were sorted into three separate tests: All Occupancy, Occupancy by Acuity (3 variables), and Occupancy by Pre-PIA, the count of patients in the YZ who have not yet seen a physician.

Though the Arima() function automatically computes a multivariable regression with ARIMA errors, Figure 5 is included to demonstrate how a simple linear regression of occupancy on the autocorrelated PIA time series creates a series of autocorrelated residuals. The lags with bars exceeding the confidence bounds (shown in dashed blue lines in panels b and d) contain some correlation with the most recent observation, thereby

violating the assumption of independent error terms. Therefore, these residuals must be fit with an ARIMA model to account for the autocorrelation and seasonality in the series before making forecasts.

Figure 5. The autocorrelation found in the unchanged PIA Time series (panels a, b) is retained in the errors and residuals (panel d) of a linear regression of occupancy on the time series (panel c).
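A minimal sketch of the diagnostic shown in Figure 5, assuming the hourly PIA series `pia` and a matching occupancy vector `occ` are available; `Acf()` and `Pacf()` are the forecast package's autocorrelation plots.

```r
# Sketch: a plain linear regression leaves autocorrelated residuals.
library(forecast)

ols <- lm(pia ~ occ)                 # ignores the time structure entirely
Acf(residuals(ols), lag.max = 48)    # spikes beyond the bounds => autocorrelation
Pacf(residuals(ols), lag.max = 48)   # used later to choose AR/MA orders
```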

Figure 6. The average hourly PIA Time plotted against the time of day, by weekday.

In order to apply ARIMA, the series must first be made stationary through differencing. Examination of the PIA over the course of a week shows a daily oscillating pattern of PIA Time, indicating some amount of periodicity to the data (Figure 6). The non-stationarity of the time series is confirmed using Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests, which strongly reject the hypothesis of a constant mean and variance for the series [54]. Since the daily pattern implies a period of 24 hours and the ACF plot of the series and its residuals shows a significant 24th lag, a season of 24 hours' length is assumed to be present in the time series. When the 24th difference is taken from the time series ($y'_t = y_t - y_{t-24}$), the series becomes stationary. Once the series is stationary, the order of the ARIMA(p,d,q)(P,D,Q)[m] model

is adjusted according to inspection of the ACF and PACF plots of the regression model residuals. Conventionally, where the autocorrelation of residuals is positive, an AR term is added; where the autocorrelation is negative (i.e., over-differenced), an MA term may be added to compensate for the error [50]. After finding the lowest AIC through iteration, the independence of the regression errors is confirmed with a Portmanteau test. The detailed regression results of the fitted model can be found in Appendix E.

In modelling the PIA time series for September 2013 to August 2014, the empirically best regression was found with ARIMA(1,1,2)(0,1,1)[24] errors, using CTAS 2, CTAS 3, and CTAS 4 Occupancy as external regressors. The ACF plot of the model's residuals demonstrates that the first ARIMA errors are no longer autocorrelated (Figure 7). Though there remains a small positive autocorrelation at lags 24 and 26, the independence of the apparently significant lags was not rejected by the Portmanteau test.

Figure 7. ACF plot for the residuals of the optimal model for fitting the PIA Time series (all patients, one year).
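A minimal sketch of this fitting and checking loop using the forecast package's Arima(), with the orders reported above; the series `pia` and the occupancy matrix `occ234` (CTAS 2, 3, and 4 columns) are assumed inputs.

```r
# Sketch: regression with seasonal ARIMA errors, plus a Portmanteau check.
library(forecast)

pia_ts <- ts(pia, frequency = 24)             # 24-hour seasonal period
fit <- Arima(pia_ts,
             order    = c(1, 1, 2),           # (p,d,q)
             seasonal = c(0, 1, 1),           # (P,D,Q) with period 24
             xreg     = occ234)               # external occupancy regressors

AIC(fit)                                      # compared across candidate orders
Box.test(residuals(fit), lag = 24, type = "Ljung-Box")  # independence of errors
```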

The coefficients of the model are listed in Table 3. The linear regression implies an addition of 3.5 minutes, 2.4 minutes, and 1.7 minutes in PIA Time per CTAS 2, 3, and 4 patient in the YZ, respectively.

Table 3. Coefficient values for the ARIMA model for the September 2013-August 2014 PIA time series (full values in Appendix E).

| Term | Coefficient value | Standard error |
|---|---|---|
| AR(1) | | |
| MA(1) | | |
| MA(2) | | |
| SMA(1) | | |
| CTAS 2 Occupancy (hrs) | | |
| CTAS 3 Occupancy (hrs) | | |
| CTAS 4 Occupancy (hrs) | | |

The forecasting formula for PIA Time in hours, written in backshift form and not including the standard deviations of the coefficients, combines the first and seasonal differences, the AR term, the MA terms, the seasonal MA term, and the linear coefficient terms:

$(1 - \phi_1 B)(1 - B)(1 - B^{24})\left(y_t - \beta_2 x_{2,t} - \beta_3 x_{3,t} - \beta_4 x_{4,t}\right) = (1 + \theta_1 B + \theta_2 B^2)(1 + \Theta_1 B^{24})\,\varepsilon_t$

where $y_t$ is the PIA time at hour $t$, $x_{2,t}$, $x_{3,t}$, and $x_{4,t}$ are the CTAS 2, 3, and 4 occupancies, and $B$ is the backshift operator.

This iterative method was used to develop models for both PIA Time and PTE Time to test the hypothesis that wait times can be predicted for patients entering the YZ.

4.2. History of Attempted Methodologies

Before all significant factors were incorporated into the modelling methodology, simpler ARIMA models were fit to several time series intervals and patient cohorts. These models were deemed unsuitable for forecasting due to their omission of significant regression parameters, but served to build an understanding of the characteristics of the time series.

Prior to the incorporation of seasonal effects, non-seasonal ARIMA models were fit to each acuity level and each of the seven major complaint categories. While these models yielded poor wait time estimates, they served to illustrate that the models based on complaint category yielded smaller variance on average than the models based on acuity. A detailed description of the cohort-based analysis for PIA time can be found in Appendix F.

Prior to the decision to select stationary intervals for modelling, ARIMA models were fit to the PIA time series based on calendar-based delimiters. ARIMA models were fit to the full calendar year, a four-month span, and four individual months in the data set which represented a stationary series, an increasing series, a decreasing series, and a nonlinear series. The stationary series yielded the lowest AIC and variance, supporting the intuition that model accuracy improves for series with lower complexity. As well, it was discovered that modelling shorter spans of time did not decrease the confidence of PIA time prediction. A detailed description of the monthly analysis for PIA time can be found in Appendix G.

4.3. Descriptive Analysis of PIA Time

Stratification of the patient population by acuity and complaint illustrates the composition of the patient population and the attendant demand on YZ resources. Generally speaking, patients of CTAS 3 make up the majority of the Yellow Zone population, with abdominal pain being the

most prevalent complaint. Patients are seen by a physician (PIA time) in less than two hours on average, and 90% of patients are seen in 3.38 hours or less. The results of descriptive analysis for the top eight complaints and for each acuity can be viewed in Table 3.

Table 3. Descriptive analysis for the patient population, September 2013 to August 2014.

| Cohort | Pct. of all patients | Pct. of cohort admitted | Mean PIA (hours) | 90% PIA (hours) | Mean LOS (hours) | 90% LOS (hours) |
|---|---|---|---|---|---|---|
| All | 100% | | | 3.38 | | |
| CTAS 2 | | | | | | |
| CTAS 3 | | | | | | |
| CTAS 4 | | | | | | |
| Abdominal Pain | 32.32% | | | | | |
| Bleeding | 12.95% | | | | | |
| Headache | 11.46% | | | | | |
| Fever | 6.86% | | | | | |
| Shortness of Breath | 5.50% | | | | | |
| Specific Pain | 5.18% | | | | | |
| Injury | 4.65% | | | | | |
| Other (Non-Categorized) | 21.08% | | | | | |
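The cohort summaries above can be reproduced with a short aggregation; the sketch below assumes a patient-level data frame `visits` with a `complaint` column and PIA durations in hours.

```r
# Sketch: descriptive statistics by presenting complaint, as in Table 3.
library(dplyr)

visits %>%
  group_by(complaint) %>%
  summarise(volume   = n(),
            pct      = 100 * n() / nrow(visits),
            mean_pia = mean(PIA, na.rm = TRUE),
            p90_pia  = quantile(PIA, 0.90, na.rm = TRUE))
```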

Figure 8 illustrates the PIA means compared with the PTE and total LOS means for each ailment and acuity. In graphical form, it can be seen that acuities with shorter PIA times do not necessarily have the shortest total LOS, which supports the decision to model each phase of care separately. Boxplot graphs illustrating the effects of occupancy measures on PIA can be found in Appendix C.

Figure 8. Length of stay (mean PIA and mean PTE, in hours) stratified by patient cohort.

Figure 9. Significant changes in wait time means over the course of the yearly PIA time series (breakout analysis, September 2013 to August 2014), with average daily arrival volumes noted per PIA interval.

When examining PIA, it is apparent that the duration is not only distinctly affected by acuity, complaint, and system covariates, but also by time of year. In order to identify where long-term changes occur in the time series, above and beyond the periodic effect of the hour of the day, the time series was analyzed using a breakout detection algorithm [58]. The algorithm detects shifts in the median wait time of the series, which are evidence of long-term changes in the average wait time over the course of the year analyzed. Each series was divided along its median shifts into steady-state intervals with a constant mean.

The breakout analysis suggests that the PIA time contains two intervals of higher-than-average wait times and one interval of lower wait times (Figure 9). The level stationarity of the intervals identified by the algorithm was confirmed using KPSS tests, and the differences in means for consecutive intervals were confirmed using Welch two-sample t-tests. The two intervals of higher PIA Time occur from December 24 to December 31 and from March 16 to April 16, and are both

preceded and followed by longer intervals of steady-state wait time. The three longer intervals of steady-state wait time (1, 3, and 5) were found to have the same mean wait time. The findings imply that the PIA Time distribution remained in control for the majority of September 2013 to August 2014, with outlying periods due to extraneous factors. Speculation on the presence of non-conforming intervals is detailed in the Description field of Table 4. The findings of descriptive analysis on the PIA time series support the incorporation of patient acuity and complaint, instantaneous YZ occupancy, daily pattern, and time of year into the ARIMA models; a sketch of the interval-detection step follows Table 4.

Table 4. Summary of distinct intervals found in the studied PIA time series.

| Interval | Span | Length (days) | Mean PIA (hours) | Description |
|---|---|---|---|---|
| 1 | Sep 1 - Dec 23 | 114 | | Steady-state process |
| 2 | Dec 24 - Dec 31 | 8 | | Wait time increase. Possible cause: holiday week, lower physician staffing. No significant change to arrival volume |
| 3 | Jan 1 - Mar 15 | 74 | | Return to steady-state process |
| 4 | Mar 16 - Apr 16 | 32 | | Wait time increase. Possible cause: increase in daily arrival volume to the YZ (see Figure 9) |
| 5 | Apr 17 - Aug 8 | 114 | | Steady-state process. No decrease in arrival volume from the Interval 4 spike indicates possible acclimatization to new arrival volumes |
| 6 | Aug 9 - Aug 31 | 23 | | Wait time decrease. Possible cause: long-term improvement in YZ service due to expansion of outpatient care clinic [59] |
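The interval-detection step can be sketched as below; BreakoutDetection is the R package released by Twitter for the cited algorithm, and the index ranges used for the KPSS and Welch tests are illustrative (Interval 1 spans roughly 114 days x 24 hours).

```r
# Sketch: detecting median shifts, then confirming interval properties.
library(BreakoutDetection)   # breakout(), E-Divisive-with-Medians algorithm
library(tseries)             # kpss.test()

bo <- breakout(pia, min.size = 24 * 7, method = "multi", plot = TRUE)
bo$loc                                   # indices of detected median shifts

kpss.test(pia[1:2736], null = "Level")   # level stationarity of Interval 1
t.test(pia[1:2736], pia[2737:2928])      # Welch two-sample t-test (R default)
```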

4.4. Results of Regression on PIA Time

In order to simplify and improve the accuracy of forecasting PIA Time, models were fit to a stationary interval of the series. With prior knowledge that the series is level stationary, the series only requires seasonal differencing (and not first-order differencing) before an ARMA model can be fit [54]. For all patients in the YZ and specific patient cohorts, the model is fit to Interval 1 (September 1 - December 23) and validated against Intervals 4 (March 16 - April 16) and 5 (April 17 - August 8). The time series consisting of the PIA Time for all patients entering the YZ for the 114-day span between September 1, 2013 and December 23, 2013 was fit with a regression model with ARIMA errors according to the methodology outlined in Section 4.1.

4.4.1. Forecasting for All Yellow Zone Patients

Using the methodology described in the previous section, a model with ARIMA(2,0,0)(0,1,1)[24] errors was selected for the Interval 1 PIA time series. The model was selected based on the lowest found AIC and a variance of s² = 0.4238. The occupancies of CTAS 2 and CTAS 3 patients were found to be the best external regressors for this time period, compared to the other measures of occupancy in Table 2. The Interval 1 variance is lower than that found for the entire span of the PIA time series for all patients modeled in Section 4.1. The linear regression implies an addition of approximately 4.1 minutes to PIA Time per CTAS 2 patient in the YZ, and an addition of approximately 3.5 minutes to PIA Time per CTAS 3 patient in the YZ. The dimensions of the model are detailed in Table 5. The detailed regression results of the fitted model can be found in Appendix E. The properties of the model and process capability are further discussed in Section 4.6.

Table 5. Coefficient values for the ARIMA model for the Interval 1 (September 1 - December 23, 2013) PIA time series (full values in Appendix E).

| Term | Coefficient value | Standard error |
|---|---|---|
| AR(1) | | |
| AR(2) | | |
| SMA(1) | | |
| CTAS 2 Occupancy (hrs) | | |
| CTAS 3 Occupancy (hrs) | | |

4.4.2. Forecasting for Patients by Acuity and Complaint

As shown in Table 3, the average PIA time differs for different types of patients. In order to create individualized predictions for patients of differing needs, models were fitted to subgroup time series representing the PIA Time for patients of CTAS levels 2 and 3, and for the largest presenting complaint, Abdominal Pain. Due to the relatively smaller size of the subgroups compared to the complete patient set, there exist gaps in the time series for hours in which no patient arrived that fit the subgroup being modeled ("Percent of Hours without Arrival" in Table 6). In order to fill these gaps, wait time estimates were approximated by polynomial spline interpolation between known wait time observations; a sketch of this step follows below. This method of approximation was chosen in order to emulate the seasonal pattern observed to occur throughout the course of 24 hours in the YZ (Figure 6) [60]. While Table 6 shows statistics for all acuities and the three largest presenting complaints, only patient groups with more than 50% real observations in the time series were modelled. Any negative data points created by the spline were rounded up to zero. Graphical depictions of this approximation method for the three largest time series modeled can be found in Appendix H. The approximation method successfully retains the statistical distribution of the original time series.
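A minimal sketch of the gap-filling step, assuming `sub_pia` is an hourly subgroup series with NA entries for hours without an arrival; na.spline() from the zoo package performs the polynomial spline interpolation.

```r
# Sketch: spline interpolation of missing hourly observations.
library(zoo)

filled <- na.spline(zoo(sub_pia))   # spline through the known wait times
filled <- pmax(filled, 0)           # round negative interpolants up to zero
```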

As a means of comparison to the model for the entirety of the YZ population, the patient subgroups were evaluated over the time span of September 1, 2013 to December 23, 2013 (Interval 1).

Table 6. Size and stationarity of patient subgroups by acuity and presenting complaint for Interval 1.

| Subgroup | Size (no. of patients) | Percent of hours without arrival | Longest gap w/o arrival (hours) | Level p-value | Level stationary | Trend p-value | Trend stationary |
|---|---|---|---|---|---|---|---|
| CTAS 2 | | | 13 | > 0.1 | Yes | - | - |
| CTAS 3 | | | 4 | > 0.1 | Yes | - | - |
| CTAS 4 | | | 30 | > 0.1 | Yes | - | - |
| Abdominal Pain | | | 10 | > 0.1 | Yes | - | - |
| Bleeding | | | | | No | | No |
| Headache | | | | | No | > 0.1 | Yes |

Following the same iterative method as for forecasting all patients, models were fit to the three largest patient subgroups. Though all external regressors were considered for the models, models using the Occupancy of CTAS 2 Patients and Occupancy of CTAS 3 Patients in conjunction yielded the lowest AIC for all three series. The summarized model results are found in Table 7 and are further discussed in the following section. The detailed regression results of the fitted models can be found in Appendix E. Figure 10 shows the rolling forecasts for CTAS 2 and CTAS 3 patients for the 24-hour period following the interval used for the model, using occupancy data from the same period; a sketch of this forecasting step follows below. The forecasts show the degree to which the acuity-based models account for occupancy differently in the projected service time of CTAS 2 and CTAS 3 patients throughout the course of a day, while capturing an overall descending trend in the actual PIA time.
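A sketch of the rolling forecast behind Figure 10, assuming `fit` is one of the subgroup models above and `next_occ` holds the CTAS 2 and CTAS 3 occupancy observed over the forecast day.

```r
# Sketch: 24-hour-ahead forecast conditional on observed occupancy.
library(forecast)

fc <- forecast(fit, h = 24, xreg = next_occ, level = 90)
plot(fc)     # point forecasts with 90% prediction bounds
fc$mean      # hourly PIA Time forecasts for the next day
```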

Table 7. Model results for patient acuity and complaint subgroups (full values in Appendix E).

| Term | CTAS 2 | CTAS 3 | Abdominal Pain |
|---|---|---|---|
| AR(1) | | | |
| AR(2) | | | |
| MA(1) | | | |
| SMA(1) | | | |
| CTAS 2 Occupancy | | | |
| CTAS 3 Occupancy | | | |
| AIC | | | |
| Variance (s²) | | | |

Figure 10. Rolling PIA Time forecasts for All, CTAS 2, and CTAS 3 patients (24-hour forecast following 24 hours of actual time-to-PIA data).

4.5. Results Summary and Validation for PIA Time Models

Models were fit to four partitions of the PIA Time series for the Yellow Zone from September 1, 2013 to December 23, 2013. The optimal model for each time series partition was selected based on minimizing AIC and variance. The external regressors which most commonly yielded the lowest AIC were Occupancy by CTAS 2 and Occupancy by CTAS 3 in combination. The model results are numbered and presented together in Table 8. In examining the models together, all share the same seasonal period of 24 hours and a seasonal ARIMA order of (0,1,1), indicating one seasonal difference (D = 1) and one seasonal moving average term (Q = 1). All models are autoregressive models with seasonal exponential smoothing, with orders of one to two autoregressive terms. Model 1.2 differs in that it also contains an order of ordinary (period-to-period) smoothing (q = 1). The lowest AIC, variance, and RMSE were found for the model fitting the CTAS 2 PIA Time.

Table 8. Summary of PIA Time model results (Interval 1).

| Model | p | d | q | P | D | Q | AIC | s² | RMSE | Seasonal model type |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 All Patients | 2 | 0 | 0 | 0 | 1 | 1 | | 0.4238 | | 2nd-order AR |
| 1.2 CTAS 2 | 1 | 0 | 1 | 0 | 1 | 1 | | | | ARMA |
| 1.3 CTAS 3 | 1 | 0 | 0 | 0 | 1 | 1 | | | | 1st-order AR |
| 1.4 Abdominal Pain | 2 | 0 | 0 | 0 | 1 | 1 | | | | 2nd-order AR |

Validity and prediction accuracy of the models were assessed based on the distribution of regression residuals for each fitting. The first through thirtieth residuals for each Interval 1 model passed a Portmanteau test for independence, confirming that the residuals are random and the models may be used for forecasting. The validated Interval 1 models were then applied to new data

in two stationary intervals of the PIA time series, Intervals 4 (March 16 - April 16) and 5 (April 17 - August 8), to assess the models' utility for ongoing prediction. The proportion of actual wait times exceeding the values of the estimated 90% (2.78-sigma) prediction confidence interval (CI) was calculated and can be found in Table 9. A model's accuracy for forecasting is assessed according to the range of the bounds, where a smaller range indicates a smaller margin of error for forecasts. The Table 9 column "Percent Overestimated or Underestimated by X or Less" describes the percentage of patients who either wait less than the PIA Time prediction or who wait less than 15, 30, or 60 minutes longer than the prediction.

Table 9. Analysis of residuals and validation criteria by model. The 90% confidence interval spans (-1.39σ, +1.39σ) of the prediction estimation error.

| Model | 90% CI (hours) | 90th percentile (hours) | Percentage < CI (%) | Percentage > CI (%) | X = 15 min | X = 30 min | X = 60 min |
|---|---|---|---|---|---|---|---|
| Interval 1 | | | | | | | |
| All Patients | (-0.943, +0.851) | | | | | | |
| CTAS 2 | (-0.864, +0.754) | | | | | | |
| CTAS 3 | (-0.953, +0.830) | | | | | | |
| Abdominal Pain | (-0.902, +0.767) | | | | | | |
| Interval 1 model applied to Interval 4 | | | | | | | |
| All Patients | (-1.067, +1.021) | | | | | | |
| CTAS 2 | (-1.082, +0.920) | | | | | | |
| CTAS 3 | (-1.056, +0.997) | | | | | | |
| Abdominal Pain | (-1.074, +1.012) | | | | | | |
| Interval 1 model applied to Interval 5 | | | | | | | |
| All Patients | (-0.955, +0.851) | | | | | | |
| CTAS 2 | (-0.830, +0.702) | | | | | | |
| CTAS 3 | (-0.947, +0.819) | | | | | | |
| Abdominal Pain | (-0.905, +0.793) | | | | | | |

The analysis of residuals shows that the models fitted to the acuity- and complaint-specific time series consistently have confidence intervals with smaller ranges and a smaller proportion of

patients exceeding the 90% CI bound when compared to the All Patients model. This result is supported by the F-test to compare two variances, which asserts that the prediction confidence for the sub-models is either equal to or better than the All Patients model in all three intervals. Additionally, the acuity- and complaint-specific models have larger proportions of patients being seen by the physician in less than 15 and 30 minutes over the estimate, though the CTAS 3 model underperforms the All Patients model when X = 60 in Interval 1. The result of a two-sample Kolmogorov-Smirnov (K-S) test of the residuals from the All Patients model and the CTAS 3 model indicates that this result may not be significant. The K-S test fails to reject the hypothesis that the residuals from the two models come from the same underlying distribution in Interval 1, Interval 4, and Interval 5 (p = 0.744 in Interval 5).

The models' utility for ongoing prediction is assessed by comparing the variance of residuals in Interval 1 to the variance of residuals for the same model in Intervals 4 and 5. Low variance is considered to be an indicator of lower average prediction error. The F-test to compare two variances indicates that the model variances are higher for all models in Interval 4, indicating a lower prediction accuracy on the new data. However, when the models are applied to Interval 5, the variance of residuals is equal to or lower than the Interval 1 variance, indicating equal or better capability on Interval 5 data. The implications of these results are elaborated upon in the discussion section.

4.6. Discussion of Results for PIA Time Models

The current study asserts that any use of real-time wait time reporting, whether internally among clinicians or to patients, hinges on confidence in the ability of a model to predict wait times for patients entering the YZ. Further, the model should make strides towards overcoming the

limitations of existing models. In testing the hypothesis of whether individualized patient waiting times can be predicted for patients entering the YZ, 90% confidence intervals can be drawn around the residuals of the dynamic regression models with ARIMA errors. The distributions of these residuals, in addition to the orders and coefficients of the dynamic regression models, are used to draw conclusions about the results of the analysis.

4.6.1. Precision and Accuracy of PIA Time Models

In the analysis of residuals for the models fit to PIA Time, the CTAS 2- and Abdominal Pain-specific model residuals consistently have 90% confidence intervals with smaller ranges than the model for All Patients and the model for CTAS 3 (Table 10). The two superior models, which also have the lower standard deviations, both encompass smaller patient groups than the other two models. This result supports the assertion that predictions improve when the patient population is stratified into smaller, characteristic subgroups [23]. The process steps involved in PIA Time are uniform and serial for all patients (Triage, Registration, Nurse Assessment, and Lab Draw), and ordering priority is determined by a judgment of urgency and complaint. Consequently, the correlation observed between PIA time and acuity or complaint logically aligns with process structure and YZ business rules. Early forecasting attempts (Appendix F) support this result, though the ARIMA models used in early testing did not contain a seasonal component.

The improvement in prediction accuracy is less evident when comparing the CTAS 3 model with the All Patients model; a K-S test of the two residual distributions indicates that there is no significant improvement in performance when modelling CTAS 3 patients alone. This result may be due to the fact that CTAS 3 patients comprise approximately 62% of the patient population of the YZ, and the variation of patient need within the CTAS 3 group is too great for

the acuity to be considered a specific characteristic of the sub-group which would determine the treatment length.

Table 10. Validation results of PIA Time models, in minutes (Interval 1 residuals).

| Model | 90% error confidence interval (min) | 90th percentile error (min) | RMSE (min) | Effect per CTAS 2 patient (min) | Effect per CTAS 3 patient (min) |
|---|---|---|---|---|---|
| All Patients | (-56.58, +51.06) | | | 4.10 | 3.53 |
| CTAS 2 | (-51.84, +45.24) | | | | |
| CTAS 3 | (-57.18, +49.80) | | | | |
| Abdominal Pain | (-54.12, +46.02) | | | | |

Examination of the occupancy coefficients in each model indicates that the YZ has a consistent treatment-priority discipline that creates different wait experiences for patients of different subgroups. Every model for PIA Time uses CTAS 2 and CTAS 3 Occupancy as independent regressors, but the magnitude of the effects of occupancy differs for each patient subgroup. For the All Patients model, each CTAS 2 patient adds 4.10 minutes and each CTAS 3 patient adds 3.53 minutes to the wait time of the typical patient being modelled. Since the largest proportion of patients is CTAS 3, it is logical to assume that CTAS 2 patients present a greater increase to a patient's projected wait time than CTAS 3 patients. This result is also seen in the CTAS 2 and Abdominal Pain models. However, in the CTAS 2 model, the magnitude of added wait time per CTAS 2 and CTAS 3 patient is considerably smaller, reflecting the CTAS 2 patients' overall higher treatment priority in the waiting room and resultant lower wait time to see a physician.

The CTAS 3 model presents an interesting divergence from the trend of the other three models, in that the magnitude of the effect of CTAS 3 patients is larger than the effect of CTAS 2 patients. This result reflects that the CTAS 3 patient, as part of the largest cohort, competes more

with other CTAS 3 patients for treatment priority than it does with the smaller cohort of higher-priority CTAS 2 patients.

When examining the PIA time series from a statistical process control perspective, it appears as though the YZ experiences three long, stationary intervals of typical PIA Time and two segments of higher wait to PIA throughout the year: December 24-December 31 (Interval 2) and March 16-April 16 (Interval 4). When applied to two differing stationary segments of the time series, the four PIA time models had lower accuracy on Interval 4 and equal or better accuracy on Interval 5 (Table 4). The models' performance on Interval 5, a 114-day span with similar length and mean PIA Time to the interval on which the models were fit, indicates that a model's prediction ability does not decrease when applied to future data when the YZ is behaving typically. Interval 4, a 32-day span with a higher-than-average PIA Time and a higher-than-average daily arrival volume, represents a period of increased demand on clinical resources, to which the model that applies for typical YZ behavior does not generalize as well. Interval 4, which coincides with a period of increased Influenza B activity reported by Public Health Ontario, likely requires that the model be recalibrated at this time to reflect the changed effects of patient occupancy on wait to PIA or the magnitude of autoregression [61]. Once flu season has ended, the typical YZ PIA Time forecasting model can be used with the same confidence as in the previous typical interval (January 1-March 15).

One of the proposed contributions of the current study is to improve on the capability of existing wait time reporting tools. In studies that reported residual errors for forecasting wait time in the ED, the standard deviation was approximately 38.4 minutes with a minimum forecast interval of 8 hours, and 36 minutes with a minimum forecast interval of 2 hours [53, 62]. The PIA Time ARIMA models provide forecasts at an hourly rate with comparable RMSE,

enabling the models to provide more frequent forecast updates with similar accuracy to existing ARIMA models with longer forecast horizons. This improvement over currently published ED wait time forecasting models provides regularly updating forecast information at a rate that can be used to support hour-to-hour decision making in the ED. Additionally, the current study's method allows for prediction for subgroups with improved forecast accuracy, marking an improvement over methods that treat all ED patients as the same entity [48, 53, 51].

Models based on patient subgroups also improve prediction performance when compared to Ontario's benchmark for ED wait time reporting, SMGH's EDWIN module. The 90th percentile PIA Time prediction for All Patients overestimates the average YZ patient's wait by approximately 49 minutes. In comparison, the EDWIN module's 90th percentile prediction may overestimate the average PIA Time by approximately 47 minutes, based on data collected from the EDWIN site and recent publications by the module's developers [63, 64]. Since the difference of the 90th percentile residual from the mean reflects the variance of the model residuals, the CTAS 2 and Abdominal Pain models may make more precise predictions for the YZ than EDWIN does for SMGH's ED, while the CTAS 3 and All Patients models likely would not.

When evaluating the overall appropriateness of the selected ARIMA methodology for fitting and predicting PIA Time, the high proportion of patients falling within the category of "Overestimated or Underestimated by 15 Minutes or Less" was regarded as a satisfactory indicator that the model could be used in the YZ. The numerical results of the models, the logical validity of the covariate coefficients, and the stability of the error rate over time reflect that ARIMA models not only show utility in forecasting PIA Time, but also in capturing the wait time differences that patients of varying acuity and complaint experience.

4.6.2. Significance of the Order of ARIMA Errors in PIA Time Models

There are consistencies in the order of the seasonal ARIMA errors across all models that indicate how current wait times can predict future performance in the YZ. All models contain one order of seasonal differencing (D = 1), a seasonal period of 24 hours, and one seasonal moving average (SMA) parameter (Table 11). Daily seasonal differencing lowered the AIC in all regressions, implying that the change in wait time and its covariates between two consecutive days yields more information about future wait times than the latest observation of wait time alone. In the example of PIA Time, the seasonal period and the highs and lows of wait time are well coordinated with the daily occupancy pattern. However, since linear regression errors in time series forecasting will compound over time, the SMA term present in all models shows that correcting for the previous day's forecasting error greatly improves the confidence of the regression. Particularly in the ED, where patient demand tends to spike on Monday and decrease until Sunday, capturing daily differences in wait time is essential to forecasting the current day's wait time (Figure 11).

Table 11. Summary of ARIMA error orders in all PIA regression models (coefficient values and standard errors are reported in Appendix E; ✓ marks terms present in each model).

| Parameter | All Patients (2,0,0)(0,1,1) | CTAS 2 (1,0,1)(0,1,1) | CTAS 3 (1,0,0)(0,1,1) | Abdominal Pain (2,0,0)(0,1,1) |
|---|---|---|---|---|
| AR(1) | ✓ | ✓ | ✓ | ✓ |
| AR(2) | ✓ | | | ✓ |
| MA(1) | | ✓ | | |
| SMA(1) | ✓ | ✓ | ✓ | ✓ |

The previous day's observation acts as a starting point for forecasts. However, each model also contains at least one parameter for autoregression, which shows that the forecast is dependent on the previous consecutive hourly observations of day-to-day change. For example, in the PIA

Time model for Abdominal Pain patients, if the difference between the previous day's error and the current day's error has been close to zero in recent observations, the forecast will mirror the previous day's seasonal trend. However, if the rate of change is increasing or decreasing in recent forecasts, the model will capture the acceleration of the trend as 84.58% of the previous observation and 10.95% of the twice-previous observation. In this way, the models capture not only daily changes in forecasting accuracy but also hourly changes.

Figure 11. Weekly pattern of average PIA Time showing weekend vs. weekday variation.

Since all PIA Time models contained at least one and at most two autoregressive terms, a sensitivity analysis was run on all models to gauge the average increase in variance when an autoregressive term was removed. The addition of AR terms to models where one AR term proved optimal creates an AR(2) parameter approximately equal to zero, so any change to variance from adding terms was considered negligible. In the two PIA Time models with regular ARIMA orders of (2,0,0), the variance of the All Patients model increased by 8.3 seconds and the variance of the Abdominal Pain model increased by 14.4 seconds. Running a similar analysis on removal of the MA(1) term for the CTAS 2 model yields a variance increase of 16.9 seconds. If this change in accuracy is deemed negligible by NYGH, for simplicity, a general model of ARIMA(1,0,0)(0,1,1)[24] could be used for forecasting PIA Time in the YZ for each patient subgroup, along with the linear coefficients for CTAS 2 and CTAS 3 Occupancy; a sketch of this simplified model follows below.

4.6.3. Occupancy by CTAS as a Regressor for PIA Time Forecasting

Generally speaking, the correlation of occupancy with PIA Time is logical based on the proven downstream effects of ED crowding [11, 22, 49]. Hourly occupancy in the YZ positively correlates with the number of admissions to the hospital, the number of outside consultations required, the number of ALC patients, and wait times in all areas of the ED. Lower occupancies are associated with fewer patients leaving without being seen and a higher number of discharges occurring per hour. Therefore, measures of the downstream effects of occupancy, such as outside consultations, are a symptom of crowded EDs rather than a reliable independent covariate of wait time. Occupancy functions well as a regressor for forecasting PIA Time because it not only provides a robust measure of patient demand on hospital resources, but is easily measured in real time from the EDIS. This conclusion is supported by the work of Hoot and Aronsky, who found that occupancy performs as well as more complex indicators of ED crowding [9].
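Returning to the simplified general model suggested by the sensitivity analysis, a minimal sketch of its specification is below; the subgroup series `y` and the two-column occupancy matrix `occ23` (CTAS 2 and CTAS 3 counts) are assumed inputs.

```r
# Sketch: the simplified general PIA model, ARIMA(1,0,0)(0,1,1)[24] errors
# with CTAS 2 and CTAS 3 occupancy as linear regressors.
library(forecast)

general_fit <- Arima(ts(y, frequency = 24),
                     order    = c(1, 0, 0),
                     seasonal = c(0, 1, 1),
                     xreg     = occ23)
summary(general_fit)
```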

In the case of all four PIA Time models, the independent covariates with the greatest impact on the RMSE of the regression were the number of CTAS 2 patients and the number of CTAS 3 patients at the time of arrival. In all cases, inclusion of the number of CTAS 4 patients as a regressor was insignificant. Of the different measures of occupancy tested in regression, it is surprising to note that Occupancy by CTAS out-performed Pre-PIA Occupancy and Pre-PIA Occupancy by CTAS, which account for the number of patients yet to see a physician. Considering that patients draw from different hospital resources after the PIA, Pre-PIA Occupancy was thought by the NYGH research team to correlate more closely with PIA Time. However, in practice, regressing on the number of Pre-PIA patients yielded a higher AIC than other covariates. A possible reason why Occupancy by CTAS was most effective at forecasting PIA Time, and also for the omission of the number of CTAS 4 patients, is that patients with higher acuity are more likely to return to the physician for reassessment than those with more simple complaints. Therefore, each CTAS 2 and CTAS 3 patient in the YZ waiting room may have an impact on the PIA Time of arriving YZ patients, even after they have already seen the physician once.

5. Analysis and Results for Regression Models fitted to PTE Time

The descriptive analysis for PTE time influences the models considered for fitting the PTE time series (Appendix D). The series appears to contain a 24-hour seasonal pattern and is modeled as a seasonal ARIMA model. Only the hourly volume of admissions from the Yellow Zone was considered as an external regressor, as no other available covariates were found to correlate with PTE Time (Figure 12, Table 12). As the PTE time series was found not to contain a mean shift between September 1, 2013 and August 31, 2014, the time series was arbitrarily divided into three four-month intervals for analysis. For all patients in the YZ and specific patient cohorts, the model is fit to Interval 1 (September 1 - December 31, 2013) and validated against Interval 2 (January 1 - April 30, 2014) and Interval 3 (May 1 - August 31, 2014).

5.1. Descriptive Analysis of PTE Time

Sample model fitting and descriptive analysis were conducted on PTE time for the entire patient population from September 2013 to August 2014, in the same manner as was done for PIA time. A correlation matrix of the collected covariates (Figure 12) shows that PTE time is not strongly correlated with any particular covariate, but does have a slight positive correlation with the volume of hourly admissions. Accordingly, admissions were emphasized for inclusion in the linear model.

Figure 12. Correlation matrix showing strength of correlation between response values and available external regressors. A darker shade of blue indicates a stronger positive correlation.

Table 12. Yellow Zone covariates as correlated with PTE Time.

| Covariate (by hour) | Sub-group covariate | Linear trend w.r.t. PTE Time | Breusch-Pagan statistic | Suitable for regression |
|---|---|---|---|---|
| Yellow Zone Occupancy | All | Level | <0.05 | No |
| | Pre-PIA Occupancy | Level | | No |
| | Post-PIA Occupancy | Level | <0.05 | No |
| | CTAS 2 Occupancy | Level | | No |
| | CTAS 3 Occupancy | Level | <0.05 | No |
| | CTAS 4 Occupancy | Level | | No |
| | Pre-PIA CTAS 2 Occupancy | Level | | No |
| | Pre-PIA CTAS 3 Occupancy | Level | | No |
| | Pre-PIA CTAS 4 Occupancy | Level | | No |
| Yellow Zone Arrivals | All | Weak positive | <0.05 | No |
| | CTAS 2 Arrivals | Weak positive | <0.05 | No |
| | CTAS 3 Arrivals | Weak positive | <0.05 | No |
| | CTAS 4 Arrivals | Non-linear | | No |
| Admissions from Yellow Zone | | Weak positive | | To be tested |
| Discharges from Yellow Zone | | Weak positive | <0.05 | No |

The assumptions of linearity and homoscedasticity for regression were tested using boxplots of the PTE time against all possible predictive covariates, as well as the Breusch-Pagan test for constant variance (Table 12). As was suggested by the correlation matrix, no covariates were found to have a strong linear correlation with PTE, though boxplots of the hourly number of admissions from the YZ showed a slight positive correlation (Appendix D, Figure 26). The complete boxplot graphs for PTE time covariates can be found in Appendix D.

As seen in the PIA time series, the PTE time series contains a strong daily pattern that requires seasonal differencing to be made stationary (Figure 13). The non-stationarity of the time series is confirmed using Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests, which strongly reject the hypothesis of a constant mean and variance for the series [54].

Figure 13. The average hourly PTE Time plotted against the time of day, by weekday.

Stratifying the patient population by acuity and complaint illustrates that after seeing the physician, patients wait an average of two hours and five minutes for a disposition decision (PTE time), and 90% of patients receive their disposition decision in 4.93 hours or less (Table 13). CTAS 2 patients and Abdominal Pain patients experience far longer PTE times than the rest of the patient cohort, which can also be seen in Figure 14. It can also be seen that patients presenting with low-acuity complaints or external injury experience both shorter PIA and PTE times.

Table 13. Descriptive analysis for the patient population, September 2013 to August 2014.

| Cohort | Pct. of all patients | Mean PTE (hours) | 90% PTE (hours) | Mean LOS (hours) | 90% LOS (hours) |
|---|---|---|---|---|---|
| All | 100% | 2.08 | 4.93 | | |
| CTAS 2 | | | | | |
| CTAS 3 | | | | | |
| CTAS 4 | | | | | |
| Abdominal Pain | 32.32% | | | | |
| Bleeding | 12.95% | | | | |
| Headache | 11.46% | | | | |
| Fever | 6.86% | | | | |
| Shortness of Breath | 5.50% | | | | |
| Specific Pain | 5.18% | | | | |
| Injury | 4.65% | | | | |
| Other (Non-Categorized) | 21.08% | | | | |

Figure 14. Length of stay (mean PIA and mean PTE, in hours) stratified by patient cohort.

In order to identify where long-term changes occur in the PTE time series, the breakout detection algorithm was again used to detect shifts in the median of the series. The breakout analysis suggests that the median PTE Time does not shift significantly in the year observed (Figure 15). The findings imply level stationarity of the series throughout the year. However, as the variance of PTE time is much higher than the variance of PIA time, true shifts in median PTE time may be obscured. For the purposes of fitting and validating the ARIMA model, the PTE series was split into three intervals of equal length and no statistical difference in means: Interval 1 (September 1 - December 31, 2013), Interval 2 (January 1 - April 30, 2014), and Interval 3 (May 1 - August 31, 2014). The models for the whole patient population and patient subgroups were fit to Interval 1 and tested on Intervals 2 and 3.

Figure 15. Breakout analysis shows no significant change in the median PTE time of 2.09 hours over the course of the year surveyed.

5.2. Results of Regression on PTE Time

The time series consisting of the PTE time (Physician Initial Assessment to Disposition Decision duration) for all patients entering the YZ in Interval 1 (September 1 - December 31, 2013) was fit with a regression model with seasonal ARIMA errors according to the methodology outlined in Section 4.1.

5.2.1. Forecasting for All Yellow Zone Patients

A model of ARIMA(1,0,0)(0,1,1)[24] was selected for the Interval 1 PTE time series. The model was selected based on the lowest obtainable AIC and a variance of s² = 2.924. No external regressor tested was found to reduce the RMSE of the regression, and as such, the model is a first-order autoregressive model with seasonal exponential smoothing. The autoregressive model implies that there exists an underlying trend to the series: the difference between the current day's and the previous day's observation is approximately 0.2 times the magnitude of the previous difference, with a correction for the error observed the day before. However, the variance of the PTE Time model is nearly seven times as large as the variance for the Interval 1 model for PIA Time (s² = 0.4238), indicating a much less accurate fit. The dimensions of the model are detailed in Table 14. The detailed regression results of the fitted model can be found in Appendix I. The properties of the model and process capability are further discussed in Section 5.3.

Table 14. Coefficient values for the ARIMA model for the Interval 1 (September 1 - December 31, 2013) PTE time series (full values in Appendix I).

| Term | Coefficient value | Standard error |
|---|---|---|
| AR(1) | | |
| SMA(1) | | |
| AIC | | |
| Variance | 2.924 | |

5.2.2. Forecasting for Patients by Acuity and Complaint

Using the same reasoning and methodology as the PIA Time analysis, models were fitted to the PTE time series for CTAS 2, CTAS 3, and Abdominal Pain patients. As with the gaps in the PIA time series, the observation gaps in the subgroup series of PTE Time were estimated by polynomial spline interpolation, with negative values rounded up to zero [60]. Models were fit to the three patient subgroups for the time span of September 1, 2013 to December 31, 2013, as a means of comparison to the model encompassing all YZ patients. Though the external regressors of Number of Patients Admitted and Number of Patients Discharged were tested for correlation with PTE Time, no significant regression was found using either covariate. The summarized model results are found in Table 15 and are further discussed in the following sections. The detailed regression results of the fitted models can be found in Appendix I.

Figure 16 shows the rolling forecasts for CTAS 2 and CTAS 3 patients for the 24-hour period following December 23rd, 2013. The forecasts illustrate that CTAS 2 patients are expected to experience longer PTE Times on average than CTAS 3 patients, who comprise the majority of the population, but the forecasts do not match well to the hourly actual PTE times.

Table 15. Model results for patient acuity and complaint subgroups (full values in Appendix I).

| Term | CTAS 2 | CTAS 3 | Abdominal Pain |
|---|---|---|---|
| AR(1) | | | |
| MA(1) | | | |
| SMA(1) | | | |
| AIC | | | |
| Variance (s²) | | | |

Figure 16. Rolling PTE Time forecasts for All, CTAS 2, and CTAS 3 patients (using data from December 23-24, 2013).

5.3. Results Summary and Validation for PTE Time Models

Models were fit to four partitions of the PTE time series for the Yellow Zone from September 1, 2013 to December 31, 2013. The optimal model for each time series partition was selected based on minimizing AIC and variance. No external regressors were found to yield a lower AIC than ARIMA methods alone. The model results are numbered and presented together in Table 16. In examining the models together, all share the same seasonal period of 24 hours and a seasonal ARIMA order of (0,1,1), indicating one seasonal difference (D = 1) and one seasonal

moving average term (Q = 1). All models are autoregressive models with seasonal exponential smoothing, with one autoregressive term. Models 2.2 and 2.4 also contain an order of ordinary (period-to-period) smoothing (q = 1). The model for the whole YZ population, with no accounting for patient characteristics, yielded the lowest AIC, variance, and RMSE.

Table 16. Summary of PTE Time model results (Interval 1).

| Model | p | d | q | P | D | Q | AIC | s² | RMSE | Seasonal model type |
|---|---|---|---|---|---|---|---|---|---|---|
| 2.1 All Patients | 1 | 0 | 0 | 0 | 1 | 1 | | 2.924 | | 1st-order AR |
| 2.2 CTAS 2 | 1 | 0 | 1 | 0 | 1 | 1 | | | | ARMA |
| 2.3 CTAS 3 | 1 | 0 | 0 | 0 | 1 | 1 | | | | 1st-order AR |
| 2.4 Abdominal Pain | 1 | 0 | 1 | 0 | 1 | 1 | | | | ARMA |

Validity and prediction accuracy of the models were assessed based on the distribution of regression residuals for each fitting. The first through thirtieth residuals for each Interval 1 model passed a Portmanteau test for independence, confirming that the residuals are random and the models may be used for forecasting. The validated Interval 1 models were then applied to new data in the PTE time series, Interval 2 (January 1 - April 30, 2014) and Interval 3 (May 1 - August 31, 2014), to assess the models' utility for ongoing prediction; a sketch of this step follows below. The proportion of actual wait times exceeding the values of the estimated 90% (2.78-sigma) prediction confidence interval (CI) was calculated and can be found in Table 17. A model's accuracy for forecasting is assessed according to the range of the bounds, where a smaller range indicates a smaller margin of error for forecasts. The Table 17 column "Percent Overestimated or Underestimated by X or Less" describes the percentage of patients who either wait less than the PTE Time prediction or who wait less than 15, 30, 60, or 120 minutes longer than the prediction.
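The re-application step can be sketched as follows; passing model = fit1 to the forecast package's Arima() re-uses the Interval 1 coefficients on the new series without re-estimation (the PTE models carry no external regressors), so the resulting residuals measure out-of-sample performance. The series name is an assumption.

```r
# Sketch: applying the validated Interval 1 model to Interval 2 data.
library(forecast)

refit <- Arima(ts(pte_interval2, frequency = 24), model = fit1)
sd(residuals(refit))    # residual spread on the new interval
```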

Table 17. Analysis of residuals and validation criteria by PTE Time model. The 90% confidence interval spans (-1.39σ, +1.39σ) of the prediction estimation error.

| Model | 90% CI (hours) | 90th percentile (hours) | Percentage < CI (%) | Percentage > CI (%) | X = 15 min | X = 30 min | X = 60 min | X = 120 min |
|---|---|---|---|---|---|---|---|---|
| Interval 1 | | | | | | | | |
| All Patients | (-2.63, +2.10) | | | | | | | |
| CTAS 2 | (-3.19, +2.88) | | | | | | | |
| CTAS 3 | (-2.84, +2.27) | | | | | | | |
| Abdominal Pain | (-3.43, +2.87) | | | | | | | |
| Interval 1 model applied to Interval 2 | | | | | | | | |
| All Patients | (-2.82, +2.12) | | | | | | | |
| CTAS 2 | (-3.18, +2.63) | | | | | | | |
| CTAS 3 | (-3.02, +2.34) | | | | | | | |
| Abdominal Pain | (-3.47, +2.95) | | | | | | | |
| Interval 1 model applied to Interval 3 | | | | | | | | |
| All Patients | (-2.53, +1.88) | | | | | | | |
| CTAS 2 | (-3.24, +2.45) | | | | | | | |
| CTAS 3 | (-2.40, +1.89) | | | | | | | |
| Abdominal Pain | (-3.16, +2.69) | | | | | | | |

The analysis of residuals implies that the model fitted to all YZ patients generally outperforms the acuity- and complaint-specific models, though the CTAS 3 model has a tighter confidence interval than All Patients in Interval 3. This result may not be significant, as the K-S test fails to reject the hypothesis that the residuals from the two models come from the same underlying distribution in Interval 1 (p = 0.592) and Interval 3. The K-S test for All Patients vs. CTAS 3 patients in Interval 2 rejects the hypothesis that the two residual distributions are the same with 95% confidence. However, the p-value may be large enough, especially when compared to CTAS 2 and Abdominal Pain, to regard this result as an outlier rather than a significant result.
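The two comparison tests used here are one-liners in base R; `res_all` and `res_ctas3` are assumed to hold residuals from the All Patients and CTAS 3 models over the same interval.

```r
# Sketch: comparing residual distributions and variances between models.
ks.test(res_all, res_ctas3)    # two-sample Kolmogorov-Smirnov test
var.test(res_all, res_ctas3)   # F-test to compare two variances
```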

The 90% confidence interval of the residuals, in combination with an F-test to compare two variances, is used to assess the confidence of prediction of the models in comparison to one another, as well as their performance on new data. The results indicate that the confidence of prediction for the All Patients, CTAS 3, and Abdominal Pain models is lower in Interval 2 than in Interval 1. However, the confidence increases for all models in Interval 3. The implications of these results are elaborated upon in the discussion section.

5.4. Discussion of Results for PTE Time Models

In contrast to the results of the PIA time models, the models for PIA to Disposition Decision (PTE time) yield much lower accuracy in forecasting wait times for YZ patients. In examining the optimal dynamic regression with ARIMA errors models for PTE time, the standard deviation of the fitted residuals varies from 2 hours and 55 minutes to 5 hours and 10 minutes, a more than four-fold increase over any PIA time model (Table 18). As well, in contrast to the PIA time models, the lowest AIC and variance are found with the All Patients model, followed by the CTAS 3, CTAS 2, and Abdominal Pain models. The results imply that neither CTAS level nor complaint serves to improve the accuracy of the PTE time forecasting model using ARIMA. Though the final models selected for modeling PTE time yielded independent regression errors, the magnitudes of the errors are too large to be of any practical use in the YZ. The same large variance found naturally in the All Patients time series likely impedes the statistical detection of mean shifts in the data over the course of the year studied, when in fact there may be special variation occurring over time that would be visible if the population were stratified properly.

While the process steps involved in PIA time do not vary, the steps involved in PTE time may vary greatly based on the physician's initial assessment of the patient. Timestamps for

additional process steps are not recorded in the EDIS, thereby not allowing the current study to account for the number and types of laboratory tests ordered or additional physician assessments required. Though characteristics such as presenting complaint theoretically should act as a surrogate for the patient's most likely treatment pathway, which is a proven determinant of wait time, within the YZ this level of stratification is not sufficient to create confident predictions of PTE Time [23, 65].

Table 18. Validation results of PTE Time models, in hours and minutes (Interval 1 residuals).

| Model | 90% error confidence interval (h:mm) | 90th percentile error (h:mm) | Standard deviation (h:mm) |
|---|---|---|---|
| All Patients | (-2:37, +2:06) | 1:58 | 2:55 |
| CTAS 2 | (-3:11, +2:53) | 2:19 | 4:49 |
| CTAS 3 | (-2:50, +2:16) | 1:53 | 3:24 |
| Abdominal Pain | (-3:26, +2:52) | 2:30 | 5:10 |

In another divergence from the results of the PIA time models, no independent covariates were found to correlate well with PTE time. In preliminary analysis, measures of occupancy showed no correlation with the mean and variance of PTE time, and when included in ARIMA models, yielded lower confidence in the regression than models without an external regressor. Since PTE time depends not only on the availability of YZ clinical resources, but also on the availability and workload of resources shared with other parts of the hospital, covariates reflecting the workload of laboratory technicians might provide a useful predictor of PTE time. However, data enumerating the number and types of tests ordered from the YZ were not readily available at the time of the current study's data collection period.

The mean PTE time appears to follow a sinusoidal daily trend, peaking at 8:00 am and reaching a low at 8:00 pm. This daily trend, reflected in the fact that including an order of seasonal differencing and a seasonal moving average term improved the AIC of the PTE time regression models, is likely attributable to an unobserved seasonal factor, such as laboratory staffing. Unlike physicians and nurses, who may regularly work overnight hours, laboratory technicians are likely staffed at considerably lower levels between 6 pm and 6 am. As such, the seasonal trend observed in PTE time may be due to a build-up of laboratory orders in the evening and night that are serviced only once more technicians arrive in the morning.

Generally speaking, the ARIMA model's utility for forecasting is low, owing to the large standard deviations of the forecasting residuals. However, there may be conclusions to be drawn from the stability of the models over time. The PTE time models built on Interval 1 (September 1 to December 31, 2013) were applied to two subsequent intervals spanning January 1 to April 30 (Interval 2) and May 1 to August 31, 2014 (Interval 3). The decrease in forecasting accuracy in Interval 2 may reflect the events that caused mean shifts in the PIA time series at the same time of year, such as the onset of flu season. Similarly, the increase in forecasting performance in Interval 3 may reflect a long-term improvement in YZ service, since PIA time forecasting accuracy improves over the same span. The correlation between PIA time forecasting accuracy and PTE time forecasting accuracy suggests, by transitivity, that factors that correlate with PIA time, such as occupancy, may also correlate with PTE time. However, the existence or extent of this correlation is not apparent in the analysis conducted.

The results of the YZ PTE time analysis are corroborated by recent studies of the factors associated with ED waits. While factors such as acuity, presenting complaint, and time of day have been shown to affect the distribution of PTE time for patients, the standard
deviations within the subgroups are still on the order of hours rather than minutes [51, 66]. Crowding factors, such as occupancy, were also found not to correlate well with PTE time at a number of ED sites. These studies similarly conclude that PTE time forecasting cannot be used until the variations in ED throughput are accounted for. The validation results show that, for forecasting PTE time for ailment and acuity subgroups in the YZ, dynamic regression with seasonal ARIMA errors has no practical application at this time. However, lower-variance stratifications of the patient population may exist, as may independent covariates that correlate well with PTE time. Suggested methods for accounting for PTE time process and patient variation when forecasting are discussed in detail in Section 6.3, Research Limitations and Future Work.

6. Discussion

The chapter is divided into three sections. First, the results are considered in terms of their utility for improving internal operations within the Yellow Zone. Second, the results are considered in terms of their utility for improving patient experience through better communication of wait time estimates between clinicians and patients. Last, the limitations of the methodology are noted and proposals for future work are made.

6.1 Implications of Forecasting Waits for YZ Operational Improvement

The study hypothesizes that if individual wait times can be predicted for patients arriving at the YZ, the forecasts can be used to strategically improve the patient experience. The implementation of a real-time wait time forecasting model depends on the viability of applications that improve the YZ's capacity to manage patient demand for services. This section describes how and where the models could act as facilitators in the YZ treatment process.

It is proposed that the addition of salient informational cues to the current EDIS would promote improved situational awareness for clinicians and support decision making for both patient management and resource utilization. In their studies of ED crowding, Hoot and Aronsky postulate that creating reliable information in the ED about current and future patient demand allows for the introduction and study of intervention policies [67]. As of the time of this study, there are few well-evidenced studies on the impacts of interventions that improve situational awareness in the ED. However, real-time hospital performance reporting technologies, such as the Oculys VIBE and Piedmont Healthcare's Report Portal, have been anecdotally well received by clinicians upon implementation [33, 68, 69]. Given that ED crowding is known to correlate with high wait times
and increased strain on ED resources, access to information about future demand allows clinicians to take actions to maintain the quality of care despite high occupancy [9, 11, 18]. Maintaining the quality of care is necessary to reduce the costs of variation in patient experience, which is seen by many as the root of quality control issues in the ED [11, 64, 70, 71]. Through modifications to the current EDIS, YZ clinicians can be given information about the status of current and future YZ operations that may be used to reduce the costs of long wait times and improve service quality.

The regression models proposed in this study were developed with the goal of using information easily obtainable from the Wellsoft EDIS or by visual inspection in the YZ. The parameters required for making wait time forecasts for patients at the time of arrival are past observations of wait time, a running log of forecasting errors, and current counts of occupancy in the YZ. The model can draw the number of CTAS 2 and CTAS 3 patients currently waiting or in treatment in the YZ from the EDIS, but an ongoing record of actual PIA time waits, forecasts, and forecasting errors for the previous 24 hours would need to be created to support the model. In order to make forecasts for future hours in the YZ, the counts of CTAS 2 and CTAS 3 occupancy would also need to be recorded on an hourly basis, so that future occupancies can be projected. Using this information, the model is able to project wait times for arriving patients, which can be used proactively to inform clinicians of which patients are at risk of long waits and when a population-level risk may occur. For example, if the target PIA time is under 2 hours, indicators can be added to the EDIS interface that flag arriving patients whose forecast wait may exceed the target.
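To make this concrete, the following is a minimal sketch in R using the forecast package, in the spirit of the occupancy-based model identified in this study; the objects pia_hourly and occ, and the projected occupancy counts, are illustrative placeholders rather than values from the study's data.

library(forecast)

# Dynamic regression with ARIMA(1,0,0)(0,1,1)[24] errors on the hourly PIA
# series, with CTAS 2 and CTAS 3 occupancy counts as external regressors
fit <- Arima(pia_hourly, order = c(1, 0, 0),
             seasonal = list(order = c(0, 1, 1), period = 24),
             xreg = occ)                     # occ: matrix with columns oc.2, oc.3

# One-hour-ahead forecast, given the projected occupancy for the next hour
fc <- forecast(fit, xreg = cbind(oc.2 = 7, oc.3 = 12), h = 1)

# Flag a population-level risk when the point forecast exceeds a 2-hour target
if (fc$mean[1] > 2) message("Arriving patients at risk of exceeding the PIA target")

# What-if: estimated relief if one CTAS 3 patient were redirected out of the YZ
fc_alt <- forecast(fit, xreg = cbind(oc.2 = 7, oc.3 = 11), h = 1)
fc$mean[1] - fc_alt$mean[1]                  # projected change in wait, in hours

The same mechanism underlies the decision-support uses discussed next: re-running the forecast under altered occupancy counts quantifies the projected effect of an intervention.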

Beyond improving situational awareness through reporting current and future wait times for patients and patient subgroups, the wait time model can be used to support decision-support applications at the YZ's specification. Using the model as a mechanism for calculating projected wait times, an application can be developed that allows clinicians to test the impact of options to decompress the YZ, as in the what-if comparison sketched above. For example, if the current wait time is high for arriving patients, the model can be used to calculate the potential impact of adding or removing certain patients from the queue. In this way, the model can be used to test various interventions and provide decision support for a variation-reducing patient treatment priority. In cases where patient reprioritization will not ease the YZ workload, the predictions may support YZ clinicians in resource planning decisions. While the model does not account for the impact of resource availability on waiting and treatment times, when presented with projected wait times, an experienced YZ clinician may seek out the resources required to mitigate an impending risk scenario.

6.2 Implications for the Design of Real-Time Wait Reporting and Patient Satisfaction

In addition to using the models to improve decision-making by YZ clinicians, forecasting information can be used to support provider-to-patient communication about wait times. Many studies have discussed the positive correlation between patients' satisfaction with their wait and the availability of information about the length, cause, and justification for waiting [72, 73, 74]. In the absence of certainty or explanation, patients tend to perceive wait times to be longer than they are. Further, patients whose wait time expectations align poorly with actual wait times tend to report lower satisfaction overall [72, 75]. Accordingly, the most promising interventions for improving patient satisfaction with wait times are those that target patient perceptions of wait time, rather than those that attempt to reduce wait times through process redesign [76]. It is proposed that the models' wait time projections can be used both to meet patient demand for wait time information and to manage patient expectations of YZ processes. In combination with
the recommendations for improving situational awareness, the model could provide insight into the systematic factors contributing to long wait times as well as a prediction of the wait time itself.

The addition of wait time forecasting information may also reduce the effects of two other sources of patient dissatisfaction: waiting alone and having unoccupied time [73]. When presented with reliable information about wait times, patients may make different decisions about how, where, and with whom they choose to wait. SMGH has reported that, with frequent use of its wait time reporting website, it has seen improved patient satisfaction and a decline in the number of low-acuity arrivals to the ED [31]. If the wait time estimates and alternate care options provided by St. Mary's General Hospital's wait time website can be inferred to influence patient decision-making about whether or not to seek emergency services, a wait time communication tool for the YZ may also influence patient behaviour and attitudes.

The quality of provider-to-patient communication is frequently found to be the best determinant of patient satisfaction, but patients do not consistently express higher satisfaction when provided an expected waiting time verbally by a physician [73, 76]. With emergency resources in increasingly high demand, especially when wait times are long, visual media have shown promise in communicating ED information to patients. Utilizing non-human sources of information gives patients control over when and where they can access information about their waits, and may reduce disruption to clinicians, who would otherwise be asked to provide estimates and justifications for wait times.

Using information from the EDIS and the forecasting models together, information could be drawn from the Yellow Zone to support real-time reporting of acuity- and ailment-specific
wait time predictions for arriving patients. The model currently provides the data required to match the capabilities of other real-time reporting sites in Canada: 50th and 90th percentile estimates of the wait time for a physician, as well as the number and types of patients waiting for emergency care. Potential methods for communicating wait times to patients include an automated call-in number, digital whiteboards or screens in the YZ, or online applications accessible by computer or smartphone. In the literature surveyed by the author, there were no available case reports on the implementation of these technologies for the purpose of reporting wait times in the ED, nor were there comparisons of the efficacy of one mode of information delivery over another. However, online wait time reporting tools in Ontario have received positive reviews from the public, and the implementation of digital whiteboards for clinician use has been reported to improve patient flow in a number of ED case studies [29, 33, 77]. These cases may indicate that the technology shows promise for improving patient perceptions of wait time length and causes, but more data in this field of study are required before a recommendation for a wait time information delivery method can be made.

6.3 Research Limitations and Future Work

The results of the current study are subject to a number of methodological and informational limitations. First, the population subgroups based on ailment were created based on the author's interpretation of plaintext entries in the EDIS, and without medical expertise; there is no standardized indicator of ailment currently used in the YZ. While the ailment subgroups showed correlation with PIA time in preliminary analysis, there was no correlation between ailment groupings and PTE time. As well, it was found that there is too much variation within the CTAS 3 subgroup for acuity level to be a useful grouping of the YZ patient population. It is possible that superior groups for forecasting could be found using a quantitative clustering method, such as self-organizing maps or k-means analysis [23].
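As an indication of how such a grouping might be constructed, the following is a minimal sketch of k-means clustering in R; the data frame yz_patients and its columns are illustrative placeholders, not fields from the study's data set.

# Cluster patients on standardized arrival characteristics (columns illustrative)
features <- scale(yz_patients[, c("ctas", "complaint_code", "arrival_hour", "age")])
set.seed(42)                           # for reproducible cluster assignments
clusters <- kmeans(features, centers = 6, nstart = 25)
table(clusters$cluster)                # subgroup sizes
# A separate hourly wait-time series could then be built and modelled per cluster.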

Second, in light of the large variation found in the PTE time model residuals, it is not recommended that those models be used for forecasting. In addition to reducing the variation within patient subgroups through clustering analysis, the PTE time study may benefit from a study of the process following PIA to determine the durations and frequencies of specific treatment activities in the YZ. Descriptive analysis of imaging, reassessment, consultations, and laboratory requisitions would illuminate whether PTE time can be predicted more confidently. Depending on the number of different patient types drawn from further research, discrete event simulation may be more capable than time series modelling of predicting PTE wait times [65].

Third, the time series used for developing the ARIMA forecasting models used hourly averages of wait time and, as such, contained missing data points in hours when no patients arrived. Polynomial spline interpolation was used to fill the missing values, as it probabilistically mimics the seasonal pattern of wait times [60]. However, since the interpolated values are based on the values before and after the missing value, the magnitude of autocorrelation found in the ARIMA errors may be inflated. This effect is likely to be larger in the smaller patient subgroup models, where longer spans of time were interpolated because of the time between arrivals.
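For reference, a minimal sketch of this interpolation step in R, assuming the zoo package and illustrative objects pia_means (hourly mean waits, with NA in hours with no arrivals) and hours (the corresponding time index):

library(zoo)

# Hourly mean PIA series indexed by hour; empty hours contain NA
pia_hourly <- zoo(pia_means, order.by = hours)

# Polynomial (cubic) spline interpolation of the missing hours
pia_filled <- na.spline(pia_hourly)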

Fourth, the extensibility of the results of this study is limited by the variability between ED settings. The current study builds upon the framework for understanding patient experience by asserting that the number and CTAS levels of patients occupying the YZ impact arriving patients more than other measures of occupancy. As well, the models assert that day-old and hour-old data can be used to forecast future PIA time. As with many applications of mathematical modelling to the ED, the extensibility of the models is uncertain without validation on similar data sets from other EDs or ED units. However, based on the systematic correlations with PIA time that many EDs share, there is reason to believe that dynamic regression using occupancy by CTAS level with ARIMA(1,0,0)(0,1,1)[24] errors may work in forecasting for other EDs, when calibrated to new data [2, 9].

Finally, the results of any real-time wait time reporting service must be interpreted with respect to the standard deviation of the forecasts. Changes in forecast smaller than the standard deviation should not be taken as certainty that wait time is increasing or decreasing, and any mode of publishing forecasts should convey this uncertainty in a simple and clear way. In a study of the accuracy of surgery wait time information services, it is suggested that models such as the ones in the current study be used only to avoid excessive waits [78]. Using the model only to detect special variation in the YZ process avoids scenarios where resource allocation decisions are made based on projections of normal variation, and may also reduce alarm fatigue in clinicians who are monitoring wait times.
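One simple way to respect this guidance is to alert only when a forecast departs from the target by more than the model's own residual spread. A minimal sketch, reusing the illustrative fit and fc objects from the sketch in Section 6.1:

# Alert only on departures larger than the residual standard deviation,
# so that normal variation does not trigger resource decisions
resid_sd <- sd(residuals(fit), na.rm = TRUE)
target   <- 2                          # illustrative 2-hour PIA target
if (fc$mean[1] > target + resid_sd) {
  message("Special variation detected: forecast wait well above target")
}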

7. Conclusion

The current study used multivariable regression with seasonal ARIMA errors to test the hypothesis that individualized wait times can be predicted for patients in NYGH's Yellow Zone. Literature was reviewed to test the secondary hypothesis that real-time wait time reporting can be used strategically to manage patient demand for information and services. The predictive models created in this study are the first to demonstrate that hourly forecasting of wait times for physician assessment using dynamic regression can be conducted with the same accuracy as models with longer forecasting horizons.

By comparing the variation in the residuals of the wait time forecasting models, conclusions can be drawn about the suitability of individual and systematic characteristics for forecasting the wait time to see a physician in the Yellow Zone. The models developed in the current study are able to forecast for characteristic patient subgroups with more accuracy than when all patients are treated as a homogeneous group, and with comparable accuracy to existing real-time wait reporting tools used in Ontario. It is suggested that a quantitative clustering method be used to create the characteristic subgroups. In minimizing two measures of goodness of fit, the root mean squared error and the Akaike Information Criterion, the study identified the multivariable measure of occupancy by CTAS level as a superior predictor of PIA time. These results support the hypothesis that individualized wait times can be estimated frequently and with satisfactory accuracy.

Consideration of the quantitative results and the relevant literature together led to recommendations for applying the predictive models in the YZ to support resource allocation and capacity management. Implementation of forecasting into the EDIS may provide YZ clinicians with cues that identify current or future patient risks and provide support for
reducing unnecessarily long wait times. Forecasting information may also be used to manage patient dissatisfaction with wait times by improving process transparency for both providers and patients.

The study contributes to the published work on forecasting ED wait times by providing statistical support for creating patient-centred, rather than population-centred, forecasting tools. As well, the study further explores the relationship of occupancy to emergency department crowding, adding evidence of the significance of CTAS level when measuring occupancy. In application to healthcare settings, predictive models should be used only as needed to detect process irregularities and avoid overlong wait times. It is recommended that further research be conducted to improve predictive accuracy and to create more tightly clustered patient subgroups, especially in predicting the time from physician assessment to disposition decision.

Works Cited

[1] K. Born and M. Tierney, "Should hospitals post emergency department waits online?," 19 July. [Online]. Available: [Accessed 8 March 2013].
[2] Canadian Association of Emergency Physicians and National Emergency Nurses Affiliation, "Joint Position Statement on emergency department overcrowding," Canadian Journal of Emergency Medicine, pp. ,
[3] M. L. MacMaster, Interviewee, Thesis Work at NYGH. [Interview]. December.
[4] M. Ogilvie, "Toronto area hospitals may soon post live ER wait times online," The Toronto Star, 22 May.
[5] Ontario Ministry of Health and Long-Term Care, "Ontario Wait Times," 21 March. [Online]. Available: [Accessed 20 June 2014].
[6] B. Bursch, J. Beezy and R. Shaw, "Emergency department satisfaction: what matters most?," Annals of Emergency Medicine, pp. ,
[7] C. Prong Parkhill, "St. Mary's celebrates high hospital rankings," Kitchener Post, 2 January.
[8] Ontario Ministry of Health and Long-Term Care, "Emergency Room Search Results," Queen's Printer for Ontario, 20 April. [Online]. Available: &str=N&view=0&period=0&expand=0. [Accessed ].
[9] N. R. Hoot and D. Aronsky, "Systematic Review of Emergency Department Crowding: Causes, Effects, and Solutions," Health Policy and Clinical Practice, pp. , August.
[10] G. Paraghamian, Interviewee, Yellow Zone Info. [Interview]. 8 May.
[11] D. R. Eitel, S. E. Rudkin, M. A. Malvehy, J. P. Killeen and J. M. Pines, "Improving Service Quality by Understanding Emergency Department Flow: A White Paper and Position Statement Prepared for the American Academy of Emergency Medicine," The Journal of Emergency Medicine, vol. 38, no. 1, pp. ,
[12] K. N. McKay, J. E. Engels, S. Jain, L. Chudleigh, D. Shilton and A. Sharma, "Emergency Departments: 'Repairs While You Wait, No Appointments Necessary'," Handbook of Healthcare Operations Management, vol. 184, pp. ,
[13] V. Singh, "Use of Queuing Models in Health Care," in The Selected Works of Vikas Singh, United States, Selected Works.
[14] J. Wennberg and A. Gittelsohn, "Small area variations in health care delivery," Science, vol. 182, no. 4117, pp. , 1973.
[15] Ontario Ministry of Finance, "Chapter 5: Health," 15 February. [Online]. Available: [Accessed 8 May 2014].
[16] D. V. Plant, "Healthcare Support through Information Technology Enhancements," Montreal.
[17] J. Hall, "Canadians face longest emergency room waits in developed world, survey finds," The Toronto Star, 29 November.
[18] B. R. Asplin, D. J. Magid, K. V. Rhodes, L. I. Solberg, N. Lurie and C. A. Camargo Jr., "A conceptual model of emergency department crowding," Annals of Emergency Medicine, pp. , August.
[19] Canadian Institute for Health Information, "Understanding Emergency Department Wait Times: Access to Inpatient Beds and Patient Flow," CIHI, Ottawa.
[20] Canadian Institute for Health Information, "Understanding Emergency Department Wait Times: How Long Do People Spend in Emergency Departments in Ontario?," CIHI, Ottawa.
[21] Canadian Institute for Health Information, "Understanding Emergency Department Wait Times: Who Is Using Emergency Departments and How Long Are They Waiting?," CIHI, Ottawa.
[22] J. Bergs, S. Verelst, J.-B. Gillet, P. Deboutte, C. Vandoren and D. Vandijck, "The number of patients simultaneously present at the emergency department as an indicator of unsafe waiting times: A receiver operated curve-based evaluation," International Emergency Nursing, 28 January.
[23] M. Xu, T. Wong and K. Chin, "A medical procedure-based patient grouping method for an emergency department," Applied Soft Computing, pp. , January.
[24] Alberta Health Services, "Emergency Department Wait Times," [Online]. Available: [Accessed 15 December 2014].
[25] St. Mary's General Hospital, "News," 11 April. [Online]. Available: [Accessed August 2013].
[26] Vancouver Coastal Health; Providence Health Care, "Wait Times," [Online]. Available:
[27] Global News, "Kitchener hospital displays ER wait times on its website," Globalnews.ca, 22 May.
[28] B. Wang, J. Jewer, K. M. McKay and A. Sharma, "Case Study: Exploratory Data Analysis on Length of Stay in an Emergency Department," in INFORMS Healthcare 2013, Chicago.
[29] Torstar News Service, "Ontario hospitals ponder posting ER wait times online," Metro News, 22 May.
[30] J. Frketich, "Posting ER wait-times online shortens queues," The Hamilton Spectator, 03 August.
[31] J. Weidner, "St. Mary's award-winning online clock deters hospital ER visits for patients with minor issues in Kitchener," Waterloo Region Record, 25 November.
[32] Oculys Health Informatics, "Staff and patients get a brand-new VIBE at Windsor's Hôtel-Dieu Grace Hospital," Oculys Health Informatics, Kitchener, Ontario.
[33] Hotel-Dieu Grace Hospital, "Staff feeling the VIBE at HDGH," Hotel-Dieu Grace Hospital, Windsor, Ontario.
[34] B. Wittmeier, "Edmonton-area emergency room wait times now available online," edmontonjournal.com, 5 June.
[35] CBC News, "Cutting ER wait times in Alberta part of larger problem, says health official," CBC News Calgary, 19 September.
[36] T. Lyon, "ER wait times for Vancouver, Richmond, and North Shore are now online," 16 April. [Online]. Available: [Accessed 15 December 2014].
[37] CBC News, "High-tech solutions not the answer to ER wait times, experts say," CBC.ca, 13 April.
[38] L. C. Taylor, "Emergency room wait times: Online, real-time listings get mixed reviews," thestar.com, 29 April.
[39] R. Michel, "More Hospitals Advertise Shorter Patient Wait Times for Their Emergency Departments," 29 November. [Online]. Available: [Accessed December 2014].
[40] T. Blackwell, "Hospitals turn to Internet to fight emergency room wait times," National Post, 17 April.
[41] B. Mihaylova, A. Briggs, A. O'Hagan and S. G. Thompson, "Review of Statistical Methods for Analysing Healthcare Resources," Health Economics, vol. 20, no. 8, pp. ,
[42] H. Arsham, "Systems Simulation: The Shortest Route to Applications," University of Baltimore, Baltimore.
[43] K. Awuah-Offei, "Can discrete event simulation help you improve your operation?," 10 May. [Online]. Available: [Accessed 1 May 2013].
[44] S. Fomundam and J. Herrman, A Survey of Queuing Theory Applications in Healthcare, College Park, MD: The Institute for Systems Research.
[45] V. Sundarapandian, "7. Queueing Theory," in Probability, Statistics and Queueing Theory, PHI Learning.
[46] N. Rathlev, J. Chessare, J. Olshaker, D. Obendorfer, S. Mehta, T. Rothenhaus, S. Crespo, B. Magauran, K. Davidson, R. Shemin, K. Lewis, J. Becker, L. Fisher, L. Guy, A. Cooper and E. Litvak, "Time Series Analysis of Variables Associated With Daily Mean Emergency Department Length of Stay," Annals of Emergency Medicine, vol. 49, no. 3, pp. ,
[47] A. Field, Discovering Statistics Using SPSS, London: Sage.
[48] M. Schull, A. Kiss and J. Szalai, "The Effect of Low-Complexity Patients on Emergency Department Waiting Times," Annals of Emergency Medicine, pp. 1-9.
[49] F. Kadri, F. Harrou, S. Chaabane and C. Tahon, "Time Series Modelling and Forecasting of Emergency Department Overcrowding," Journal of Medical Systems, vol. 38, no. 107.
[50] R. Nau, "Introduction to ARIMA: nonseasonal models," [Online]. Available: [Accessed 22 December 2014].
[51] A. Forster, I. Stiell, G. Wells, A. Lee and C. van Walraven, "The Effect of Hospital Occupancy on Emergency Department Length of Stay and Patient Disposition," Academic Emergency Medicine, vol. 10, no. 2, pp. ,
[52] D. Lin, J. Patrick and F. Labeau, "Estimating the waiting time of multi-priority emergency patients with downstream blocking," Health Care Management Science, pp. ,
[53] N. Rathlev, D. Obendorfer, L. White, C. Rebholz, B. Magauran, W. Baker, A. Ulrich, L. Fisher and J. Olshaker, "Time Series Analysis of Emergency Department Length of Stay per 8-Hour Shift," The Western Journal of Emergency Medicine, vol. 13, no. 2, May.
[54] R. J. Hyndman and G. Athanasopoulos, Forecasting: Principles and Practice, OTexts.org.
[55] NIST/SEMATECH, "Common Approaches to Univariate Time Series," April. [Online]. Available:
[56] R. J. Hyndman, "Package 'forecast'," 27 January. [Online]. Available:
[57] K. Sidhu, Interviewee. [Interview]. 19 August.
[58] A. Kejariwal, "Breakout detection in the wild," Twitter Engineering Blog. [Online]. Available:
[59] North York General Hospital, "Newly expanded outpatient clinic meets growing community needs," [Online]. Available: [Accessed ].
[60] S. Guerrero, "2. Dealing with Missing Data in R: Omit, Approx, or Spline Part 1," 11 December. [Online]. Available:
[61] Public Health Ontario, "Ontario Respiratory Virus Bulletin, Season Summary," [Online]. Available: Virus_Bulletin_Season_Summary.pdf.
[62] N. R. Hoot, L. J. LeBlanc, I. Jones, S. R. Levin, C. Zhou, C. S. Gadd and D. Aronsky, "Forecasting Emergency Department Crowding: A Prospective, Real-time Evaluation," Journal of the American Medical Informatics Association, pp. , May.
[63] St. Mary's General Hospital, "ED Wait Times," [Online]. Available:
[64] B. Wang, K. McKay, J. Jewer and A. Sharma, "Physician Shift Behavior and Its Impact on Service Performances in an Emergency Department," in Winter Simulation Conference, Savannah, GA.
[65] Y. Marmor and D. Sinreich, "Emergency department operations: The basis for developing a simulation tool," vol. 37, no. 3.
[66] R. Ding, M. L. McCarthy, J. S. Desmond, J. S. Lee, D. Aronsky and S. L. Zeger, "Characterizing Waiting Room Time, Treatment Time, and Boarding Time in the Emergency Department Using Quantile Regression," Academic Emergency Medicine, vol. 17, no. 8, pp. ,
[67] N. R. Hoot, C. Zhou, I. Jones and D. Aronsky, "Measuring and Forecasting Emergency Department Crowding in Real Time," Annals of Emergency Medicine, vol. 49, no. 6, pp. ,
[68] M. Jackson, "Visualizing Data at Piedmont Healthcare," Institute of Industrial Engineers.
[69] A. J. Forster, "An Agenda for Reducing Emergency Department Crowding," Health Policy and Clinical Practice, pp. ,
[70] P. Asaro, L. Lewis and S. Boxerman, "The impact of input and output factors on emergency department throughput," Academic Emergency Medicine, vol. 14, no. 3, pp. ,
[71] M. McHugh, K. Van Dyke, M. McClelland and D. Moss, "Improving Patient Flow and Reducing Emergency Department Crowding: A Guide for Hospitals," October. [Online]. Available: [Accessed ].
[72] D. A. Thompson, P. R. Yarnold, S. L. Adams and A. B. Spacone, "How Accurate Are Waiting Time Perceptions of Patients in the Emergency Department?," Annals of Emergency Medicine, pp. ,
[73] J. C. Mowen, J. W. Licata and J. McPhail, "Waiting in the emergency room: How to improve patient satisfaction," Journal of Health Care Marketing.
[74] S. Krishel and L. Baraff, "Effect of emergency department information on patient satisfaction," Annals of Emergency Medicine, pp. , March.
[75] T. N. Cassidy-Smith, B. M. Baumann and E. D. Boudreaux, "The disconfirmation paradigm: Throughput times and emergency department patient satisfaction," The Journal of Emergency Medicine, pp. 7-13, January.
[76] E. D. Boudreaux and E. L. O'Hea, "Patient satisfaction in the Emergency Department: a review of the literature and implications for practice," The Journal of Emergency Medicine, pp. , January.
[77] J. L. Wiler, C. Gentle, J. M. Halfpenny, A. Heins, A. Mehrotra, M. G. Mikhail and D. Fite, "Optimizing Emergency Department Front-End Operations," Annals of Emergency Medicine, pp. , February.
[78] D. A. Cromwell, "Waiting time information services: An evaluation of how well clearance time statistics can forecast a patient's wait," Social Science & Medicine, pp. ,
[79] I. S. MacKenzie, "Modeling Interaction: Descriptive and Predictive Techniques," York University. [Online]. Available: [Accessed ].
[80] Ontario Ministry of Health and Long-Term Care, "Your Health Care Options," Queen's Printer for Ontario, 1 November. [Online]. Available: on.ca/english/search/search-results/?sort=1&start=0&end=9&lati= &longi= &service=&servtype=&newPat=&&asvs. [Accessed 26 May 2014].
[81]
[82] E. S. Gardner and E. McKenzie, "Why the damped trend works," 22 October. [Online]. Available:
[83] H. Akaike, "A New Look at the Statistical Model Identification," IEEE Transactions on Automatic Control, pp. , December.
[84] S. Hu, "Akaike Information Criterion," [Online]. Available: [Accessed ].
[85] K. N. McKay et al., "Emergency Departments: 'Repairs While You Wait, No Appointments Necessary'," Handbook of Healthcare Operations Management, vol. 184, pp. ,

Appendix A: Yellow Zone Current State Documentation

Table 19. Voice of Business Analysis with Yellow Zone Project Team (Plan impact measures: TBD)

Item 1
- Voice of Business: Want to develop a predictive model.
- Translation (Thesis Statement): Want decision support for increasing ED capacity and/or reducing demand.
- Innovation Process / IB: New process and service enablers.
- Impact: Resource utilization.
- Measures: Wait times.
- Questions: How do we use an ED model for resource and capacity planning? If by 9 pm the wait will be 5 h, what can I do to speed it up?

Item 2
- Voice of Business: Many complaints [in the Yellow Zone] are about waiting, not being involved in the process; we want to communicate internal reasons for waiting.
- Translation (Thesis Statement): Mid-level acuity patients care primarily about short waits and access to information. We should improve information flow between ED staff and patients, both to and from.
- Innovation Process / IB: Transparency; new process and service enablers.
- Impact: Resource utilization.
- Measures: Wait times; number of patient-staff touch-points.
- Questions: How do we better communicate wait times to patients? How do we use the ED model to improve patient satisfaction with care delivery?

Figure 17. Map of the Yellow Zone in the context of the Emergency Department (labelled areas: Yellow Zone, Triage, ED Waiting Room, ED Entrance)

Figure 18. Process Map for Yellow Zone Patient Treatment and Intake

Appendix B: Data Sources

Table 20. Yellow Zone Individual Patient Record Data Field Definitions

- ID: An anonymized identifier for the patient.
- Age: The age of the patient, denoted numerically by yr or mo.
- Sex: The sex of the patient, denoted M or F.
- Acuity: Patient CTAS level: 1, 2, 3, 4, or 5.
- Curr Imp / CC: The presenting complaint of the patient, as described by the patient to the triage nurse, and transcribed in freeform English into Wellsoft by the nurse.
- Area of Care: The care pathway taken by the patient, denoted by which rooms the patient has visited. The most common pathway is "YZ, Wtng", which denotes that the patient was admitted directly to the YZ and then moved to a waiting room until leaving the hospital.
- Rm (last): The treatment room used by the patient. For all Yellow Zone records, the entry is S-Yz, indicating that the patient was seen by a Yellow Zone physician.
- Admit Dx: The diagnosis given to the patient, if the patient was admitted to the hospital from the Yellow Zone.
- Arrival: The time of the patient's arrival to the ED, in format yyyy-mm-dd hhmm.
- MD ED Name Entered: The time of the patient's initial assessment by the physician (PIA), in format yyyy-mm-dd hhmm.
- Status Admt/ntfd: If the patient was admitted to the hospital, the time and date of admission. Format yyyy-mm-dd hhmm.
- Status Dsch: If the patient was discharged from the hospital, the time and date of discharge. Format yyyy-mm-dd hhmm.
- Removed from Pt Track: The time at which the patient is removed from the Yellow Zone roster. Only recorded from January 2013-April. Format yyyy-mm-dd hhmm.

Table 21. AC Zone (Acute Care Zone) Data Field Definitions

- Week: The start and end dates for the week for the observation.
- avg daily volume: The average daily volume of patients in the AC Zone, for the week.
- avg LOS (hrs): The average length of stay of patients in the AC Zone, in hours, for the week.
- avg PIA (hrs): The average PIA time of patients in the AC Zone, in hours, for the week.

Table 22. Acute Zone Data Field Definitions

- Week: The start and end dates for the week for the observation.
- avg LOS (hrs): The average length of stay of patients in acute beds, in hours, for the week.
- PIA (hrs): The average PIA time of patients in acute beds, in hours, for the week.

Table 23. Clinical Decisions Unit (CDU) Data Field Definitions

- Date: The date of the observation.
- CDU volume (# of pts): The number of patients from the whole ED in the Clinical Decisions Unit for the date in question.

Table 24. Ambulatory Care Zone Data Field Definitions

- Weekday: Weekday for the date of the observation.
- DATE: Date of the observation.
- Pt Vol: Number of patients in ambulatory care for the given date.
- Avg LOS (mins): The average length of stay of patients in ambulatory care, in minutes, for the given date.
- Avg LOS (hrs): The average length of stay of patients in ambulatory care, in hours to one decimal point, for the given date.
- Avg PIA Time (mins): The average PIA time of patients in ambulatory care, in minutes, for the given date.
- Avg PIA Time (hrs): The average PIA time of patients in ambulatory care, in hours to one decimal point, for the given date.

Table 25. Emergency Department (all) Data Field Definitions

- Date: Date of the observation.
- Vol: Total volume of patients in the ED for the given date.
- CTAS Level 1: Total volume of CTAS 1 patients in the ED for the given date.
- CTAS Level 2: Total volume of CTAS 2 patients in the ED for the given date.
- CTAS Level 3: Total volume of CTAS 3 patients in the ED for the given date.
- CTAS Level 4: Total volume of CTAS 4 patients in the ED for the given date.
- CTAS Level 5: Total volume of CTAS 5 patients in the ED for the given date.

- Admitted: The volume of patients admitted to the hospital from the ED for the given date.
- Discharges: The volume of patients discharged from the ED for the given date.
- LWBS: The volume of patients who left without being seen from the ED for the given date.
- Avg LOS (mins): The average length of stay of patients, in minutes, for the given date.
- Avg LOS (hrs): The average length of stay of patients, in hours to one decimal point, for the given date.
- Avg PIA Time (mins): The average PIA time of patients, in minutes, for the given date.
- Avg PIA Time (hrs): The average PIA time of patients, in hours to one decimal point, for the given date.

Table 26. Sample language variations in input for Presenting Complaint (Curr Imp / CC)

- Abdominal Pain: Abdo Pain; Abd Pain; Abdo Pains; Abdo Pains,; Abdo. Pain; A.P; A.P.; A/P; Abd. Pain; 15 Weeks Preg, Rlq Pain; ?Appe; Appendicitis; R/O Appy; Mvc - Abdo Pain - 11 Wks Pregnant; Mvc - Abdo Pain
- Allergic Reaction: ?Allergy; Rash Allergic Reaction; Rash/Allergy; A.R to Peanut; F.B. Throat, Nut?; A.R.; A/R; Rash? Allergic Reaction;
- Headache: Dizziness And Nausea Post Mvc; Headache, Feeling Unwell; Ha Post Mvc, Vomited X 1, Wants A Ct; Ha, Nausea, Fainting Spells X 2
- Pain/Injury: Mvc X 2 Days Ago- Back Pain/Neck Pain; Mvc, Lac To The Lip; Mvc, Lt Hip Pain; Mvc, Right Sided Rib Pain; Mvc-Left Arm Pain; Mvc,Body Pain; Mvc/4 Months Pregnant
- Shortness of Breath: Sob; Sob X 24 Hrs, Hyperventilating, Chest Tightness; ? Fb Throat-Difficulty Swallowing; Diff Breathing; Breathing Fast; Breathing Trouble; hard to breathe; Prob Breathing; Wheezing; Wheezy;
- Vaginal Bleeding: Vb; Vag. Bleed; Vag Spotting; Vag Bleeding; Vag Bleed; V.B.; Spotting; 10 Weeks Pregnant Vag Bleed; 14 Weeks Pregnant, Vb;
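The mapping in Table 26 suggests a simple rule-based normalization. A minimal sketch in R, with purely illustrative patterns that are far less complete than the study's manual corrections:

# Map free-text complaint entries to categories (patterns illustrative only)
complaint_map <- list(
  "Abdominal Pain"      = "abdo|abd pain|a\\.p|appy|appendicitis",
  "Allergic Reaction"   = "allerg|a\\.r|rash",
  "Shortness of Breath" = "sob|breath|wheez",
  "Vaginal Bleeding"    = "vag|v\\.b|spotting"
)
normalize_complaint <- function(x) {
  x <- tolower(x)
  for (category in names(complaint_map)) {
    if (grepl(complaint_map[[category]], x)) return(category)
  }
  "Other"
}
normalize_complaint("Abdo Pains,")   # returns "Abdominal Pain"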

Table 27. Top 20 Complaints by Frequency, with iterative correction of variations in entry
(Columns: Rank | No Alteration | First Corrections | Final Corrections; "-" indicates no entry)

1 Abdo Pain | Abdominal Pain | Abdominal Pain
2 Headache | Headache | Bleeding
3 Abdominal Pain | Bleed - PV | Headache
4 Rlq Pain | HA - Dizzy | Fever
5 Urinary Retention | Pain | SOB
6 Right Flank Pain | Fever - Back Pain | Injury
7 Left Flank Pain | SOB | Specific Pain
8 Hematuria | AP - Flank | Pain
9 Allergic Reaction | Bleed - UR | Allergic Rxn
10 Sob | Injury | -
11 Urinary Symptoms | Fever | -
12 Epistaxis | Bleed - Hematuria | -
13 Vag Bleed | Fever - Rash | -
14 Epigastric Pain | Bleed - Epistaxis | -
15 Head Injury | SOB - FB | -
16 Vaginal Bleeding | Bleed - PR | -
17 Fever | HA - Multiple | -
18 Llq Pain | Urinary Symptoms | -
19 Dizziness | Allergic Rxn | -
20 Dysuria | AP - Epigastric | -

Appendix C: Results Descriptive Analysis for PIA Time

Figure 19. Boxplot showing PIA Time by Total Occupancy Volume

Figure 20. Boxplots showing partial linearity and homoscedasticity for PIA Time grouped by CTAS 2, 3, and 4 Occupancy

Figure 21. Boxplots showing partial linearity and homoscedasticity for PIA Time grouped by Pre-PIA CTAS 2, 3, and 4 Occupancy

Figure 22. Boxplots showing linearity and homoscedasticity for PIA Time grouped by Pre-PIA and Post-PIA Occupancy

STATIONARITY

PIA timeline divisions using breakout detector:
- 1:2719; Fall to December 23 (Christmas): mean = 1.88; kpss level 0.1
- 2720: December 24 to December 31 (low season): level mean = 2.51
- 2922:4691; January 1 to March 15 (Winter rates): mean = 1.85; kpss level
- 4692:5470; March 15 to Apr 16 (2.397): spike month; kpss level 0.1
- 5471:8172; Apr 16 (Summer) to August 8: mean = 1.87; kpss level 0.09
- 8173:8735; August 9 to August 31: mean = 1.49; kpss trend
- Week minimum span
- PTE: Level stationary
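For reference, a minimal sketch of the KPSS tests behind these notes, assuming the tseries package and an illustrative hourly PIA series hpia; the segment indices mirror the first breakout division above.

library(tseries)

# Level and trend stationarity tests on one breakout segment
segment <- hpia[1:2719]
kpss.test(segment, null = "Level")   # null: series is level stationary
kpss.test(segment, null = "Trend")   # null: series is stationary around a trend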

Appendix D: Results Descriptive Analysis for PTE Time

Figure 23. Boxplot showing PTE Time by Total Occupancy Volume

Figure 24. Boxplots showing linearity and homoscedasticity for PTE Time grouped by Pre-PIA and Post-PIA Occupancy

Figure 25. Boxplot showing linearity and homoscedasticity for PTE Time grouped by Volume of Admissions to the hospital

Appendix E: ARIMA Results Supplemental Figures for PIA Time

Test Model: All Yellow Zone Patients in one Year

Iteration #1: First difference and seasonal difference, with 1 MA term
Series: as.numeric(hpia)
ARIMA(0,1,1)(0,1,1)[24]
Coefficients: ma1 sma1 oc.2 oc.3 oc
s.e.
sigma^2 estimated as : log likelihood= AIC= AICc= BIC=

Iteration #2: First difference and seasonal difference, with 1 AR term and 1 MA term
Series: as.numeric(hpia)
ARIMA(1,1,1)(0,1,1)[24]
Coefficients: ar1 ma1 sma1 oc.2 oc.3 oc
s.e.
sigma^2 estimated as 0.439: log likelihood= AIC= AICc= BIC=

Iteration #3: First difference and seasonal difference, with 1 AR term and 2 MA terms, and Occupancy by CTAS as external regressor (lowest AIC)
Series: as.numeric(hpia)
ARIMA(1,1,2)(0,1,1)[24]
Coefficients: ar1 ma1 ma2 sma1 oc.2 oc.3 oc
s.e.
sigma^2 estimated as : log likelihood= AIC= AICc= BIC=

Iteration #4: First difference and seasonal difference, with 1 AR term and 2 MA terms, and Pre-PIA Occupancy as external regressor
Series: as.numeric(hpia)
ARIMA(1,1,2)(0,1,1)[24]
Coefficients: ar1 ma1 ma2 sma1 xreg
s.e.
sigma^2 estimated as : log likelihood= AIC= AICc= BIC=

Model 1. All Yellow Zone Patients, Interval 1 (September 1-December 23, 2013)
Series: as.numeric(pia1.all)
ARIMA(2,0,0)(0,1,1)[24]
Coefficients: ar1 ar2 sma1 oc.2 oc.3
s.e.
sigma^2 estimated as : log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MPE MAPE MASE ACF1

Figure 26. Time Series Display of residuals for model fitting PIA Time, all patients in Interval 1. ACF and PACF show possible autocorrelation of errors at lag 13, but portmanteau test rejects autocorrelation hypothesis.

Model 2. Yellow Zone CTAS 2 Patients, Interval 1 (September 1-December 23, 2013)
Series: as.numeric(pia1.2)
ARIMA(1,0,1)(0,1,1)[24]
Coefficients: ar1 ma1 sma1 oc.2 oc.3
s.e.
sigma^2 estimated as 0.345: log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 27. Fitted model (green) shown over actual time series (black) for CTAS 2 Patients, Interval 1.

Model 3. Yellow Zone CTAS 3 Patients, Interval 1 (September 1-December 23, 2013)
Series: as.numeric(pia1.3)
ARIMA(1,0,0)(0,1,1)[24]
Coefficients: ar1 sma1 oc.2 oc.3
s.e.
sigma^2 estimated as : log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 28. Fitted model (green) shown over actual time series (black) for CTAS 3 Patients, Interval 1.

Model 4. Yellow Zone Abdominal Pain Patients, Interval 1 (September 1-December 23, 2013)
Series: as.numeric(pia1.ap)
ARIMA(2,0,0)(0,1,1)[24]
Coefficients: ar1 ar2 sma1 oc.2 oc.3
s.e.
sigma^2 estimated as 0.367: log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 29. Fitted model (green) shown over actual time series (black) for Abdominal Pain Patients, Interval 1.

Appendix F: Early Forecasting Attempts

Results for Yellow Zone Patient cohorts without Seasonal component

Model Comparison 1: AIC for models based on Phase of Care
- LOS, all: ARIMA(2,1,3) with Hourly Arrivals, Pts in YZ. AIC:
- PIA, all: ARIMA(5,0,1) with Hourly Arrivals, Pts in YZ. AIC:
- PTE, all: ARIMA(3,0,0) with Hourly Arrivals, Pts in YZ. AIC:

Model Comparison 2: AIC for models based on Acuity
- PIA, AC2: ARIMA(3,0,0). AIC:
- PIA, AC3: ARIMA(2,0,0) with Pts in YZ. AIC:
- PIA, AC4: ARIMA(3,0,0). AIC:
- Average AIC:

Model Comparison 3: AIC for models based on Major Complaint Category
- PIA, Specific, Ambulatory (tssp): ARIMA(1,0,0) with Arrivals and Pts in YZ. AIC:
- PIA, Headache (tsha): ARIMA(1,0,0) with Arrivals and Pts in YZ. AIC:
- PIA, AP&BL (tsa): ARIMA(2,0,0). AIC:
- PIA, FV&SOB (tsb): ARIMA(1,0,0) with Arrivals and Pts in YZ. AIC:
- Average AIC:

Table 28. Forecasting confidence bounds for non-seasonal models by patient cohort
(Columns: Patient Cohort | ARIMA Model | 90% error)

Acuity 2 | Based on the previous three hours | 73 min underestimation
Acuity 3 | Based on the previous two hours and Occupancy | 75 min underestimation
Acuity 4 | Based on the previous three hours | 58 min underestimation
Abdominal Pain or Bleeding | Based on the previous two hours | 76 min underestimation
Fever or Shortness of Breath | Based on the previous hour, Arrivals, Occupancy | 67 min underestimation
Headache | Based on the previous hour, Arrivals, Occupancy | 62 min underestimation
Ambulatory or Non-urgent Pain | Based on the previous hour, Arrivals, Occupancy | 63 min underestimation
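For context, a minimal sketch of the kind of AIC comparison tabulated above; the cohort series and regressor objects are illustrative placeholders rather than objects from the study's code.

# Compare candidate non-seasonal cohort models by AIC (names illustrative)
fit_ac2 <- Arima(pia_ac2, order = c(3, 0, 0))
fit_ac3 <- Arima(pia_ac3, order = c(2, 0, 0), xreg = pts_in_yz)
c(AIC(fit_ac2), AIC(fit_ac3))   # lower AIC indicates the preferred model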

Appendix G: Early Forecasting Attempts for All Yellow Zone Patients by Monthly Interval

The strength of the daily seasonal trend in the data was tested by fitting regression models to four non-consecutive calendar months from the September 2013 to August 2014 time series. The months were chosen in order to test the efficacy of different occupancy covariates on time series of differing stationarity and trend. The level and trend stationarity of each month is described in Table 29.

Table 29. Results of KPSS tests of stationarity for each month in Sept. 2013-Aug. 2014
(Columns: Month | Level p-value | Level stationary | Trend p-value | Trend)

September 2013 | > 0.1 | Yes | - | -
October | > 0.1 | Yes | - | -
November | > 0.1 | Yes | - | -
December | | No | < 0.01 | Non-linear
January | | No | < 0.01 | Non-linear
February | > 0.1 | Yes | - | -
March | < 0.01 | No | > 0.1 | Increasing
April | < 0.01 | No | > 0.1 | Decreasing
May | > 0.1 | Yes | - | -
June | | No | > 0.1 | Increasing
July | > 0.1 | Yes | - | -
August | > 0.1 | Yes | - | -

Following the same iterative method as the previous model, models were fit to the months of November 2013 (level stationary), January 2014 (non-stationary), March 2014 (trend stationary, increasing), and April 2014 (trend stationary, decreasing). The results of the fitted models are detailed in Table 30. The lowest AIC was found for the level stationary month, November, and the highest AIC was found for the decreasing-trend month, April. With the exception of March, which uses three autoregressive terms, all months tested required only two
autoregressive terms and no moving average terms. To represent seasonality, a seasonal difference of D=1 and a seasonal moving average (SMA) term of Q=1 are used in all models, with a seasonal length of 24 hours. For the months of November and March, CTAS 2 and CTAS 3 Occupancy were used together as external regressors, with CTAS 4 Occupancy eliminated because the confidence interval of its linear regression coefficient included zero. January, the non-linear month, used the measure of all YZ Occupancy as external regressor. The model for April used Pre-PIA Occupancy as an external regressor.

Table 30. ARIMA Model Results for Monthly Time Series Segments
(Columns: coefficient and s.e. values for the November (level), January (non-linear), March (increasing), and April (decreasing) models)

Autoregressive terms: AR(1), AR(2), AR(3)
Seasonal MA terms: SMA(1)
Linear terms (hrs): CTAS 2 Occupancy, CTAS 3 Occupancy, All YZ Occupancy, Pre-PIA Occupancy
Model statistics: AIC, Variance (s^2)

Table 31. ARIMA Results for PIA Time, Early Seasonal Forecasting Attempts
(Columns: Model | Non-seasonal Order (p, d, q) | Seasonal Order (P, D, Q) | Model Statistics (AIC, s^2, RMSE) | Seasonal Model Type)

Full-Year Models
1 All Patients | 3rd Order AR
2 CTAS 2 | 2nd Order AR
3 CTAS 3 | ARMA
4 Abdominal Pain | 1st Order AR

Four-Month Model
5 May-Aug | ARMA

One-Month Models
6 November | 2nd Order AR
7 January | 2nd Order AR
8 March | 3rd Order AR
9 April | 2nd Order AR

Appendix H: Time Series Approximation

Figure 30. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation, CTAS 2 patients.

Figure 31. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation, CTAS 3 patients.

Figure 32. Distribution of Original Time Series (Blue) and Time Series after Spline Interpolation, Abdominal Pain patients.

Appendix I: ARIMA Results Supplemental Figures for PTE Time

Model 1. All Yellow Zone Patients, Interval 1 (September 1-December 31, 2013)
Call: Arima(x = as.numeric(pte1.all), order = c(1, 0, 0), seasonal = list(order = c(0, 1, 1), period = 24))
Coefficients: ar1 sma1
s.e.
sigma^2 estimated as 2.924: log likelihood = , aic =

Figure 33. Time Series Display of residuals for model fitting PTE Time, all patients in Interval 1. ACF and PACF show possible autocorrelation of errors at lag 24, but portmanteau test rejects autocorrelation hypothesis.

Model 2. CTAS 2 Patients, Interval 1 (September 1-December 31, 2013)
Series: as.numeric(pte1.2)
ARIMA(1,0,1)(0,1,1)[24]
Coefficients: ar1 ma1 sma1
s.e.
sigma^2 estimated as 4.812: log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 34. Fitted model (green) shown over actual time series (black) for CTAS 2 Patients, Interval 1.

Model 3. CTAS 3 Patients, Interval 1 (September 1-December 31, 2013)
Series: as.numeric(pte1.3)
ARIMA(1,0,0)(0,1,1)[24]
Coefficients: ar1 sma1
s.e.
sigma^2 estimated as 3.408: log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 35. Fitted model (green) shown over actual time series (black) for CTAS 3 Patients, Interval 1.

Model 4. Abdominal Pain Patients, Interval 1 (September 1-December 31, 2013)
Series: as.numeric(pte1.ap)
ARIMA(1,0,1)(0,1,1)[24]
Coefficients: ar1 ma1 sma1
s.e.
sigma^2 estimated as 5.159: log likelihood= AIC= AICc= BIC=
Training set error measures: ME RMSE MAE MASE ACF1

Figure 36. Fitted model (green) shown over actual time series (black) for Abdominal Pain Patients, Interval 1.


More information

Let s Talk Informatics

Let s Talk Informatics Let s Talk Informatics Discrete-Event Simulation Daryl MacNeil P.Eng., MBA Terry Boudreau P.Eng., B.Sc. 28 Sept. 2017 Bethune Ballroom, Halifax, Nova Scotia Please be advised that we are currently in a

More information

MINISTRY/LHIN ACCOUNTABILITY AGREEMENT (MLAA) MLAA Performance Assessment Dashboard /10 Q3

MINISTRY/LHIN ACCOUNTABILITY AGREEMENT (MLAA) MLAA Performance Assessment Dashboard /10 Q3 MINISTRY/LHIN ACCOUNTABILITY AGREEMENT (MLAA) MLAA Performance Assessment Dashboard - 29/1 Q3 README The 29/1 MLAA Dashboard has been designed to reflect various reporting fiscal periods as well as the

More information

Health System Performance and Accountability Division MOHLTC. Transitional Care Program Framework

Health System Performance and Accountability Division MOHLTC. Transitional Care Program Framework Transitional Care Program Framework August, 2010 1 Table of Contents 1. Context... 3 2. Transitional Care Program Framework... 4 3. Transitional Care Program in the Hospital Setting... 5 4. Summary of

More information

Emergency Medicine Programme

Emergency Medicine Programme Emergency Medicine Programme Implementation Guide 8: Matching Demand and Capacity in the ED January 2013 Introduction This is a guide for Emergency Department (ED) and hospital operational management teams

More information

In order to analyze the relationship between diversion status and other factors within the

In order to analyze the relationship between diversion status and other factors within the Root Cause Analysis of Emergency Department Crowding and Ambulance Diversion in Massachusetts A report submitted by the Boston University Program for the Management of Variability in Health Care Delivery

More information

Proceedings of the 2010 Winter Simulation Conference B. Johansson, S. Jain, J. Montoya-Torres, J. Hugan, and E. Yücesan, eds.

Proceedings of the 2010 Winter Simulation Conference B. Johansson, S. Jain, J. Montoya-Torres, J. Hugan, and E. Yücesan, eds. Proceedings of the 2010 Winter Simulation Conference B. Johansson, S. Jain, J. Montoya-Torres, J. Hugan, and E. Yücesan, eds. BI-CRITERIA ANALYSIS OF AMBULANCE DIVERSION POLICIES Adrian Ramirez Nafarrate

More information

The development and testing of a conceptual model for the analysis of contemporry developmental relationships in nursing

The development and testing of a conceptual model for the analysis of contemporry developmental relationships in nursing University of Wollongong Research Online University of Wollongong Thesis Collection 1954-2016 University of Wollongong Thesis Collections 1992 The development and testing of a conceptual model for the

More information

Indicator Definition

Indicator Definition Patients Discharged from Emergency Department within 4 hours Full data definition sign-off complete. Name of Measure Name of Measure (short) Domain Type of Measure Emergency Department Length of Stay:

More information

The significance of staffing and work environment for quality of care and. the recruitment and retention of care workers. Perspectives from the Swiss

The significance of staffing and work environment for quality of care and. the recruitment and retention of care workers. Perspectives from the Swiss The significance of staffing and work environment for quality of care and the recruitment and retention of care workers. Perspectives from the Swiss Nursing Homes Human Resources Project (SHURP) Inauguraldissertation

More information

A Measurement Guide for Long Term Care

A Measurement Guide for Long Term Care Step 6.10 Change and Measure A Measurement Guide for Long Term Care Introduction Stratis Health, in partnership with the Minnesota Department of Health, is pleased to present A Measurement Guide for Long

More information

The Impact of Increased Number of Acute Care Beds to Reduce Emergency Room Wait Times

The Impact of Increased Number of Acute Care Beds to Reduce Emergency Room Wait Times The Impact of Increased Number of Acute Care Beds to Reduce Emergency Room Wait Times JENNIFER MCKAY Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements

More information

Improving ED Flow through the UMLN II

Improving ED Flow through the UMLN II Improving ED Flow through the UMLN II Good Samaritan Hospital Medical Center West Islip, NY 437 beds, 50 ED beds http://www.goodsamaritan.chsli.org Good Samaritan Hospital Medical Center, a member of Catholic

More information

Publication Year: 2013

Publication Year: 2013 THE INITIAL ASSESSMENT PROCESS ST. JOSEPH'S HEALTHCARE HAMILTON Publication Year: 2013 Summary: The Initial Assessment Process (IAP) was developed collaboratively by the emergency physicians, nursing,

More information

Hospital Patient Journey Modelling to Assess Quality of Care: An Evidence-Based, Agile Process-Oriented Framework for Health Intelligence

Hospital Patient Journey Modelling to Assess Quality of Care: An Evidence-Based, Agile Process-Oriented Framework for Health Intelligence FLINDERS UNIVERSITY OF SOUTH AUSTRALIA Hospital Patient Journey Modelling to Assess Quality of Care: An Evidence-Based, Agile Process-Oriented Framework for Health Intelligence Lua Perimal-Lewis School

More information

BRIGHAM AND WOMEN S EMERGENCY DEPARTMENT OBSERVATION UNIT PROCESS IMPROVEMENT

BRIGHAM AND WOMEN S EMERGENCY DEPARTMENT OBSERVATION UNIT PROCESS IMPROVEMENT BRIGHAM AND WOMEN S EMERGENCY DEPARTMENT OBSERVATION UNIT PROCESS IMPROVEMENT Design Team Daniel Beaulieu, Xenia Ferraro Melissa Marinace, Kendall Sanderson Ellen Wilson Design Advisors Prof. James Benneyan

More information

A Primer on Activity-Based Funding

A Primer on Activity-Based Funding A Primer on Activity-Based Funding Introduction and Background Canada is ranked sixth among the richest countries in the world in terms of the proportion of gross domestic product (GDP) spent on health

More information

Palomar College ADN Model Prerequisite Validation Study. Summary. Prepared by the Office of Institutional Research & Planning August 2005

Palomar College ADN Model Prerequisite Validation Study. Summary. Prepared by the Office of Institutional Research & Planning August 2005 Palomar College ADN Model Prerequisite Validation Study Summary Prepared by the Office of Institutional Research & Planning August 2005 During summer 2004, Dr. Judith Eckhart, Department Chair for the

More information

Quality Improvement Plan (QIP): 2015/16 Progress Report

Quality Improvement Plan (QIP): 2015/16 Progress Report Quality Improvement Plan (QIP): Progress Report Medication Reconciliation for Outpatient Clinics 1 % complete medication reconciliation on outpatient clinic visit assessments ( %; Pediatric Patients; Fiscal

More information

How to deal with Emergency at the Operating Room

How to deal with Emergency at the Operating Room How to deal with Emergency at the Operating Room Research Paper Business Analytics Author: Freerk Alons Supervisor: Dr. R. Bekker VU University Amsterdam Faculty of Science Master Business Mathematics

More information

Gantt Chart. Critical Path Method 9/23/2013. Some of the common tools that managers use to create operational plan

Gantt Chart. Critical Path Method 9/23/2013. Some of the common tools that managers use to create operational plan Some of the common tools that managers use to create operational plan Gantt Chart The Gantt chart is useful for planning and scheduling projects. It allows the manager to assess how long a project should

More information

2017/18 Quality Improvement Plan Improvement Targets and Initiatives

2017/18 Quality Improvement Plan Improvement Targets and Initiatives 2017/18 Quality Improvement Plan Improvement Targets and Initiatives AIM Measure Change Effective Effective Care for Patients with Sepsis % Eligible Nurses who have Completed the Sepsis Education Bundle

More information

3M Health Information Systems. 3M Clinical Risk Groups: Measuring risk, managing care

3M Health Information Systems. 3M Clinical Risk Groups: Measuring risk, managing care 3M Health Information Systems 3M Clinical Risk Groups: Measuring risk, managing care 3M Clinical Risk Groups: Measuring risk, managing care Overview The 3M Clinical Risk Groups (CRGs) are a population

More information

Meeting Date: July 26, 2017 Action: Decision Topic: Item 13.0 Grand River Hospital MRI and Nuclear Medicine Replacement Pre-Capital Submission

Meeting Date: July 26, 2017 Action: Decision Topic: Item 13.0 Grand River Hospital MRI and Nuclear Medicine Replacement Pre-Capital Submission BRIEFING NOTE Mission: To make it easy for you to be healthy and to get the care and support you need. Vision: Healthy People. Thriving Communities. Bright Futures. Core Value: Acting in the best interest

More information

Using discrete event simulation to improve the patient care process in the emergency department of a rural Kentucky hospital.

Using discrete event simulation to improve the patient care process in the emergency department of a rural Kentucky hospital. University of Louisville ThinkIR: The University of Louisville's Institutional Repository Electronic Theses and Dissertations 6-2013 Using discrete event simulation to improve the patient care process

More information

EXECUTIVE SUMMARY. Introduction. Methods

EXECUTIVE SUMMARY. Introduction. Methods EXECUTIVE SUMMARY Introduction University of Michigan (UM) General Pediatrics offers health services to patients through nine outpatient clinics located throughout South Eastern Michigan. These clinics

More information

Building a Smarter Healthcare System The IE s Role. Kristin H. Goin Service Consultant Children s Healthcare of Atlanta

Building a Smarter Healthcare System The IE s Role. Kristin H. Goin Service Consultant Children s Healthcare of Atlanta Building a Smarter Healthcare System The IE s Role Kristin H. Goin Service Consultant Children s Healthcare of Atlanta 2 1 Background 3 Industrial Engineering The objective of Industrial Engineering is

More information

MINISTRY OF HEALTH AND LONG-TERM CARE. Summary of Transfer Payments for the Operation of Public Hospitals. Type of Funding

MINISTRY OF HEALTH AND LONG-TERM CARE. Summary of Transfer Payments for the Operation of Public Hospitals. Type of Funding MINISTRY OF HEALTH AND LONG-TERM CARE 3.09 Institutional Health Program Transfer Payments to Public Hospitals The Public Hospitals Act provides the legislative authority to regulate and fund the operations

More information

AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE

AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE Yu-Li Huang, Ph.D. Assistant Professor Industrial Engineering Department New Mexico State University 575-646-2950 yhuang@nmsu.edu

More information

University of Michigan Health System. Final Report

University of Michigan Health System. Final Report University of Michigan Health System Program and Operations Analysis Analysis of Medication Turnaround in the 6 th Floor University Hospital Pharmacy Satellite Final Report To: Dr. Phil Brummond, Pharm.D,

More information

PG snapshot Nursing Special Report. The Role of Workplace Safety and Surveillance Capacity in Driving Nurse and Patient Outcomes

PG snapshot Nursing Special Report. The Role of Workplace Safety and Surveillance Capacity in Driving Nurse and Patient Outcomes PG snapshot news, views & ideas from the leader in healthcare experience & satisfaction measurement The Press Ganey snapshot is a monthly electronic bulletin freely available to all those involved or interested

More information

INDEPENDENT ASSESSMENT COMMITTEE REPORT SUMMARY

INDEPENDENT ASSESSMENT COMMITTEE REPORT SUMMARY INDEPENDENT ASSESSMENT COMMITTEE REPORT SUMMARY Employer: Lakeridge Health Oshawa, Emergency Department (Oshawa Site) Board: Chair: Leslie Vincent; ONA Nominee: Cindy Gabrielli; Employer Nominee: Susan

More information

Emergency care workload units: A novel tool to compare emergency department activity

Emergency care workload units: A novel tool to compare emergency department activity Bond University epublications@bond Faculty of Health Sciences & Medicine Publications Faculty of Health Sciences & Medicine 10-1-2010 Emergency care workload units: A novel tool to compare emergency department

More information

A Canadian Perspective: Implementing Tiered Licensing in the Province of Ontario

A Canadian Perspective: Implementing Tiered Licensing in the Province of Ontario A Canadian Perspective: Implementing Tiered Licensing in the Province of Ontario NARA Licensing Seminar September 20, 2016 Ministry of Education Province of Ontario, Canada Ontario s Geography Ontario

More information

PUBLIC HEALTH PERFORMANCE INDICATORS 2013 YEAR-END RESULTS. August 2014

PUBLIC HEALTH PERFORMANCE INDICATORS 2013 YEAR-END RESULTS. August 2014 PUBLIC HEALTH PERFORMANCE INDICATORS 2013 YEAR-END RESULTS August 2014 Table of Contents Introduction... 1 Considerations for Interpretation... 2 Health Protection Indicators... 5 Indicator # 1. % of high-risk

More information

Community Performance Report

Community Performance Report : Wenatchee Current Year: Q1 217 through Q4 217 Qualis Health Communities for Safer Transitions of Care Performance Report : Wenatchee Includes Data Through: Q4 217 Report Created: May 3, 218 Purpose of

More information

Cost-Benefit Analysis of Medication Reconciliation Pharmacy Technician Pilot Final Report

Cost-Benefit Analysis of Medication Reconciliation Pharmacy Technician Pilot Final Report Team 10 Med-List University of Michigan Health System Program and Operations Analysis Cost-Benefit Analysis of Medication Reconciliation Pharmacy Technician Pilot Final Report To: John Clark, PharmD, MS,

More information

Forecasts of the Registered Nurse Workforce in California. June 7, 2005

Forecasts of the Registered Nurse Workforce in California. June 7, 2005 Forecasts of the Registered Nurse Workforce in California June 7, 2005 Conducted for the California Board of Registered Nursing Joanne Spetz, PhD Wendy Dyer, MS Center for California Health Workforce Studies

More information

Title: The Parent Support and Training Practice Protocol - Validation of the Scoring Tool and Establishing Statewide Baseline Fidelity

Title: The Parent Support and Training Practice Protocol - Validation of the Scoring Tool and Establishing Statewide Baseline Fidelity Title: The Parent Support and Training Practice Protocol - Validation of the Scoring Tool and Establishing Statewide Baseline Fidelity Sharah Davis-Groves, LMSW, Project Manager; Kathy Byrnes, M.A., LMSW,

More information

Scenario Planning: Optimizing your inpatient capacity glide path in an age of uncertainty

Scenario Planning: Optimizing your inpatient capacity glide path in an age of uncertainty Scenario Planning: Optimizing your inpatient capacity glide path in an age of uncertainty Scenario Planning: Optimizing your inpatient capacity glide path in an age of uncertainty Examining a range of

More information

Thank you for joining us today!

Thank you for joining us today! Thank you for joining us today! Please dial 1.800.732.6179 now to connect to the audio for this webinar. To show/hide the control panel click the double arrows. 1 Emergency Room Overcrowding A multi-dimensional

More information

Critique of a Nurse Driven Mobility Study. Heather Nowak, Wendy Szymoniak, Sueann Unger, Sofia Warren. Ferris State University

Critique of a Nurse Driven Mobility Study. Heather Nowak, Wendy Szymoniak, Sueann Unger, Sofia Warren. Ferris State University Running head: CRITIQUE OF A NURSE 1 Critique of a Nurse Driven Mobility Study Heather Nowak, Wendy Szymoniak, Sueann Unger, Sofia Warren Ferris State University CRITIQUE OF A NURSE 2 Abstract This is a

More information

Waterloo Wellington Community Care Access Centre. Community Needs Assessment

Waterloo Wellington Community Care Access Centre. Community Needs Assessment Waterloo Wellington Community Care Access Centre Community Needs Assessment Table of Contents 1. Geography & Demographics 2. Socio-Economic Status & Population Health Community Needs Assessment 3. Community

More information

Access to Health Care Services in Canada, 2003

Access to Health Care Services in Canada, 2003 Access to Health Care Services in Canada, 2003 by Claudia Sanmartin, François Gendron, Jean-Marie Berthelot and Kellie Murphy Health Analysis and Measurement Group Statistics Canada Statistics Canada Health

More information

The Economic Cost of Wait Times in Canada

The Economic Cost of Wait Times in Canada Assessing past, present and future economic and demographic change in Canada The Economic Cost of Wait Times in Canada Prepared for: British Columbia Medical Association 1665 West Broadway, Suite 115 Vancouver,

More information

Comparing the Value of Three Main Diagnostic-Based Risk-Adjustment Systems (DBRAS)

Comparing the Value of Three Main Diagnostic-Based Risk-Adjustment Systems (DBRAS) Comparing the Value of Three Main Diagnostic-Based Risk-Adjustment Systems (DBRAS) March 2005 Marc Berlinguet, MD, MPH Colin Preyra, PhD Stafford Dean, MA Funding Provided by: Fonds de Recherche en Santé

More information

August 25, Dear Ms. Verma:

August 25, Dear Ms. Verma: Seema Verma Administrator Centers for Medicare & Medicaid Services Hubert H. Humphrey Building 200 Independence Avenue, S.W. Room 445-G Washington, DC 20201 CMS 1686 ANPRM, Medicare Program; Prospective

More information

Summary of Findings. Data Memo. John B. Horrigan, Associate Director for Research Aaron Smith, Research Specialist

Summary of Findings. Data Memo. John B. Horrigan, Associate Director for Research Aaron Smith, Research Specialist Data Memo BY: John B. Horrigan, Associate Director for Research Aaron Smith, Research Specialist RE: HOME BROADBAND ADOPTION 2007 June 2007 Summary of Findings 47% of all adult Americans have a broadband

More information

Case-mix Analysis Across Patient Populations and Boundaries: A Refined Classification System

Case-mix Analysis Across Patient Populations and Boundaries: A Refined Classification System Case-mix Analysis Across Patient Populations and Boundaries: A Refined Classification System Designed Specifically for International Quality and Performance Use A white paper by: Marc Berlinguet, MD, MPH

More information

Family and Community Support Services (FCSS) Program Review

Family and Community Support Services (FCSS) Program Review Family and Community Support Services (FCSS) Program Review Judy Smith, Director Community Investment Community Services Department City of Edmonton 1100, CN Tower, 10004 104 Avenue Edmonton, Alberta,

More information

University of Michigan Health System. Program and Operations Analysis. CSR Staffing Process. Final Report

University of Michigan Health System. Program and Operations Analysis. CSR Staffing Process. Final Report University of Michigan Health System Program and Operations Analysis CSR Staffing Process Final Report To: Jean Shlafer, Director, Central Staffing Resources, Admissions Bed Coordination Center Amanda

More information

Applying Critical ED Improvement Principles Jody Crane, MD, MBA Kevin Nolan, MStat, MA

Applying Critical ED Improvement Principles Jody Crane, MD, MBA Kevin Nolan, MStat, MA These presenters have nothing to disclose. Applying Critical ED Improvement Principles Jody Crane, MD, MBA Kevin Nolan, MStat, MA April 28, 2015 Cambridge, MA Session Objectives After this session, participants

More information

THE USE OF SIMULATION TO DETERMINE MAXIMUM CAPACITY IN THE SURGICAL SUITE OPERATING ROOM. Sarah M. Ballard Michael E. Kuhl

THE USE OF SIMULATION TO DETERMINE MAXIMUM CAPACITY IN THE SURGICAL SUITE OPERATING ROOM. Sarah M. Ballard Michael E. Kuhl Proceedings of the 2006 Winter Simulation Conference L. F. Perrone, F. P. Wieland, J. Liu, B. G. Lawson, D. M. Nicol, and R. M. Fujimoto, eds. THE USE OF SIMULATION TO DETERMINE MAXIMUM CAPACITY IN THE

More information

Cost analysis of emergency department

Cost analysis of emergency department J prev med hyg 2010; 51: 157-163 Original article Cost analysis of emergency department P. Cremonesi, E. di Bella *, M. Montefiori * Complex Structure of Medicine and Surgery of Acceptance and Urgency,

More information

FRENCH LANGUAGE HEALTH SERVICES STRATEGY

FRENCH LANGUAGE HEALTH SERVICES STRATEGY FRENCH LANGUAGE HEALTH SERVICES STRATEGY 2016-2019 Table of Contents I. Introduction... 4 Partners... 4 A. Champlain LHIN IHSP... 4 B. South East LHIN IHSP... 5 C. Réseau Strategic Planning... 5 II. Goal

More information

University of Michigan Comprehensive Stroke Center

University of Michigan Comprehensive Stroke Center University of Michigan Comprehensive Stroke Center Improving the Discharge and Post-Discharge Process Flow Final Report Date: April 18, 2017 To: Jenevra Foley, Operating Director of Stroke Center, jenevra@med.umich.edu

More information

Improving Hospital Performance Through Clinical Integration

Improving Hospital Performance Through Clinical Integration white paper Improving Hospital Performance Through Clinical Integration Rohit Uppal, MD President of Acute Hospital Medicine, TeamHealth In the typical hospital, most clinical service lines operate as

More information

Developing ABF in mental health services: time is running out!

Developing ABF in mental health services: time is running out! Developing ABF in mental health services: time is running out! Joe Scuteri (Managing Director) Health Informatics Conference 2012 Tuesday 31 st July, 2012 The ABF Health Reform From 2014/15 the Commonwealth

More information

Emergency-Departments Simulation in Support of Service-Engineering: Staffing, Design, and Real-Time Tracking

Emergency-Departments Simulation in Support of Service-Engineering: Staffing, Design, and Real-Time Tracking Emergency-Departments Simulation in Support of Service-Engineering: Staffing, Design, and Real-Time Tracking Yariv N. Marmor Advisor: Professor Mandelbaum Avishai Faculty of Industrial Engineering and

More information

Methodology Notes. Identifying Indicator Top Results and Trends for Regions/Facilities

Methodology Notes. Identifying Indicator Top Results and Trends for Regions/Facilities Methodology Notes Identifying Indicator Top Results and Trends for Regions/Facilities Production of this document is made possible by financial contributions from Health Canada and provincial and territorial

More information

A Publication for Hospital and Health System Professionals

A Publication for Hospital and Health System Professionals A Publication for Hospital and Health System Professionals S U M M E R 2 0 0 8 V O L U M E 6, I S S U E 2 Data for Healthcare Improvement Developing and Applying Avoidable Delay Tracking Working with Difficult

More information

The Pennsylvania State University. The Graduate School ROBUST DESIGN USING LOSS FUNCTION WITH MULTIPLE OBJECTIVES

The Pennsylvania State University. The Graduate School ROBUST DESIGN USING LOSS FUNCTION WITH MULTIPLE OBJECTIVES The Pennsylvania State University The Graduate School The Harold and Inge Marcus Department of Industrial and Manufacturing Engineering ROBUST DESIGN USING LOSS FUNCTION WITH MULTIPLE OBJECTIVES AND PATIENT

More information

Organisational factors that influence waiting times in emergency departments

Organisational factors that influence waiting times in emergency departments ACCESS TO HEALTH CARE NOVEMBER 2007 ResearchSummary Organisational factors that influence waiting times in emergency departments Waiting times in emergency departments are important to patients and also

More information

UTILIZATION MANAGEMENT FOR ADULT MEMBERS

UTILIZATION MANAGEMENT FOR ADULT MEMBERS UTILIZATION MANAGEMENT FOR ADULT MEMBERS Quarter 2: (April through June 2014) EXECUTIVE SUMMARY & ANALYSIS BY LEVEL OF CARE Submitted: September 2, 2014 CONNECTICUT DCF CONNECTICUT Utilization Report

More information

Improving patient satisfaction by adding a physician in triage

Improving patient satisfaction by adding a physician in triage ORIGINAL ARTICLE Improving patient satisfaction by adding a physician in triage Jason Imperato 1, Darren S. Morris 2, Leon D. Sanchez 2, Gary Setnik 1 1. Department of Emergency Medicine, Mount Auburn

More information

Enhancing Sustainability: Building Modeling Through Text Analytics. Jessica N. Terman, George Mason University

Enhancing Sustainability: Building Modeling Through Text Analytics. Jessica N. Terman, George Mason University Enhancing Sustainability: Building Modeling Through Text Analytics Tony Kassekert, The George Washington University Jessica N. Terman, George Mason University Research Background Recent work by Terman

More information

Children s Hospital of Eastern Ontario

Children s Hospital of Eastern Ontario Children s Hospital of Eastern Ontario April 1, 2011 Children s Hospital of Eastern Ontario 1 Part A: Overview of Our Hospital s Quality Improvement Plan 1. Overview of our quality improvement plan for

More information

University of Michigan Health System Analysis of Wait Times Through the Patient Preoperative Process. Final Report

University of Michigan Health System Analysis of Wait Times Through the Patient Preoperative Process. Final Report University of Michigan Health System Analysis of Wait Times Through the Patient Preoperative Process Final Report Submitted to: Ms. Angela Haley Ambulatory Care Manager, Department of Surgery 1540 E Medical

More information

Running Head: READINESS FOR DISCHARGE

Running Head: READINESS FOR DISCHARGE Running Head: READINESS FOR DISCHARGE Readiness for Discharge Quantitative Review Melissa Benderman, Cynthia DeBoer, Patricia Kraemer, Barbara Van Der Male, & Angela VanMaanen. Ferris State University

More information

Mental Health Accountability Framework

Mental Health Accountability Framework Mental Health Accountability Framework 2002 Chief Medical Officer of Health Report Injury: Predictable and Preventable Contents 3 Executive Summary 4 I Introduction 6 1) Why is accountability necessary?

More information

Alberta Health Services. Strategic Direction

Alberta Health Services. Strategic Direction Alberta Health Services Strategic Direction 2009 2012 PLEASE GO TO WWW.AHS-STRATEGY.COM TO PROVIDE FEEDBACK ON THIS DOCUMENT Defining Our Focus / Measuring Our Progress CONSULTATION DOCUMENT Introduction

More information