Tilburg University

Numbers telling the tale?
Krol, M.W.

Document version: Publisher's PDF, also known as Version of record
Publication date: 2015
Link to publication

Citation for published version (APA):
Krol, M. W. (2015). Numbers telling the tale? On the validity of patient experience surveys and the usability of their results. Zutphen: CPI Koninklijke Wöhrmann.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
- Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
- You may not further distribute the material or use it for any profit-making activity or commercial gain.
- You may freely distribute the URL identifying the publication in the public portal.

Take down policy
If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Numbers telling the tale?
On the validity of patient experience surveys and the usability of their results

Maarten Krol

ISBN: …
© Maarten Krol
Cover design: Frank Roose
Word processing/lay out: Christel van Well / Doortje Saya, Utrecht
Printing: CPI Koninklijke Wöhrmann, Zutphen

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the author. Exceptions are allowed in respect of any fair dealing for the purpose of research, private study or review.

Numbers telling the tale?
On the validity of patient experience surveys and the usability of their results

Is meten weten?
Over de validiteit van patiëntervaringsvragenlijsten en de bruikbaarheid van hun resultaten

PROEFSCHRIFT

ter verkrijging van de graad van doctor aan Tilburg University op gezag van de rector magnificus, prof. dr. E.H.L. Aarts, in het openbaar te verdedigen ten overstaan van een door het college voor promoties aangewezen commissie in de aula van de Universiteit op vrijdag 12 juni 2015 om … uur

door

Maarten Watse Krol

geboren op 14 augustus 1984 te Herwen en Aerdt

Promotiecommissie

Promotor: Prof. dr. D.M.J. Delnoij
Copromotores: Dr. D. de Boer, Dr. J.J.D.J.M. Rademakers
Overige leden: Prof. dr. R. Huijsman, Prof. dr. R.T.J.M. Janssen, Prof. dr. J. Kievit, Prof. dr. J.J. Polder, Prof. dr. C. Wagner

The research presented in this thesis was conducted at NIVEL, the Netherlands Institute for Health Services Research, Utrecht, The Netherlands. NIVEL participates in the Netherlands School of Primary Care Research (CaRe), acknowledged by the Royal Dutch Academy of Science (KNAW). The Dutch Ministry of Education, Culture and Science (OC&W) provided financial support for this thesis. Printing of this book has been supported financially by NIVEL.

'The search for easy ways to measure a highly complex phenomenon such as medical care may be pursuing a will-o'-the-wisp.'
Avedis Donabedian

'As no better man advances to take this matter in hand, I hereupon offer my own poor endeavours.'
Ishmael (Moby-Dick, Herman Melville)


Contents

1. General introduction
2. Exploring young patients' perspectives on rehabilitation care: methods and challenges of organizing focus groups for children and adolescents
3. Consumer Quality Index Chronic Skin Diseases (CQI-CSD): a new instrument to measure quality of care from patients' perspective
4. Complementary or confusing: comparing patient experiences, patient-reported outcomes and clinical indicators of hip and knee surgery
5. Overall scores as an alternative to global ratings in patient experience surveys: a comparison of four methods
6. The Net Promoter Score: an asset to patient experience surveys?
7. Patient experiences of inpatient hospital care: a department matter and a hospital matter
8. Discussion
References
Summary
Samenvatting (summary in Dutch)
Dankwoord (acknowledgements in Dutch)
About the author
List of publications


1
General introduction

Healthcare policy and quality information

In 2006, both the Healthcare (Market Regulation) Act (WMG) and the Health Insurance Act (ZVW) came into force in the Netherlands (Staten-Generaal, 2005; NZa, 2006). The purpose was to introduce a system of regulated competition between three main stakeholders: healthcare providers, patients and health insurers (Enthoven and Van de Ven, 2007). The system of competition involves three markets; in each of these markets, two of the three main stakeholders interact with each other. As a fourth stakeholder, Dutch governmental bodies (e.g. the healthcare inspectorate and the Dutch healthcare authority) act as regulators of the healthcare system, continuously monitoring the healthcare market and intervening when needed. The figure below, taken from the 2014 Zorgbalans (the national report on the performance of the Dutch healthcare system), shows how the stakeholders are related to each other in this system (Van den Berg et al., 2014a).

Figure 1.1 Regulated healthcare competition in the Netherlands [diagram: health insurers, healthcare users and healthcare providers, linked pairwise by the health insurance market, the healthcare purchasing market and the healthcare market]
Source: Van den Berg et al., 2014a

The central idea is that these markets should lead to a more efficient and sustainable healthcare system: quality improvement and lower costs of care. Valid, reliable and usable information about the quality of care is deemed central to the system, as a lack of such information is thought to result in competition based on prices, at the expense of quality of care. However, each stakeholder has different information needs, according to their role within the healthcare system and the markets shown above (Van den Berg et al., 2014b).

Patients. One mechanism that is central to the policy of the WMG is the supposed role of the patient as an active consumer in healthcare: as they would when purchasing a particular commercial service or product, patients

are expected to actually choose the best care for themselves (healthcare market). The same goes for choosing a health insurer; every individual is required to have basic health insurance, but is free to choose their own health insurer and to switch at the end of each year (health insurance market) (Enthoven and Van de Ven, 2007). Patient choice is supposed to be one of the drivers of healthcare quality. Patients therefore need information about the quality of the care delivered. Similarly, they need information to let them choose a health insurer that fits their needs and interests.

Healthcare providers. Performance information can show healthcare providers how they are performing compared to their fellow providers (and competitors), and therefore also which elements of care need improvement (Porter, 2010). A cycle of monitoring care, interpreting performance information and adapting the care process accordingly can be used by healthcare providers to improve quality of care (healthcare market) (Berwick et al., 2003; Zuidgeest, 2011). Healthcare providers are also accountable for the quality of care they deliver: they have to publish annual quality reports on several quality indicators to inform the healthcare inspectorate and the public about their performance.

Health insurers. Health insurance companies are private corporations operating under an extensive regulatory system; most have commercial interests in healthcare. It is in their interest to purchase the most efficient care: the best possible care at the lowest possible price. To this end, health insurers negotiate prices with healthcare providers to purchase healthcare for their clients. In these negotiations, they are expected to weigh up not only the costs, but also the quality of healthcare (Grol, 2006). The providers that perform best may receive better contracts, or health insurers may choose to selectively contract specific healthcare providers (healthcare purchasing market). It is also in insurers' interest to acquire a large market share; this strengthens their purchasing position and their influence. To recruit clients, insurance companies try to present themselves as attractively as possible, for example by offering an attractive premium or by showing that they have contracted the best healthcare providers (health insurance market) (Enthoven and Van de Ven, 2007).

Government. In addition to these three central stakeholders, governmental bodies such as the healthcare inspectorate and the Dutch healthcare authority may use information to assess the quality and safety of healthcare.

Table 1.1 gives an overview of the functions of quality of care information for each of the four stakeholders.

Table 1.1 Functions of care quality information for the various healthcare stakeholders
- Patients/healthcare users: choosing a healthcare provider; choosing a health insurer
- Healthcare providers: internal quality improvement; accountability towards health insurers, government and society
- Health insurers: healthcare purchasing; advertising the quality of the healthcare contracted
- Government: monitoring quality and safety; encouraging quality, affordability and equity of care
Source: Van den Berg et al., 2014b

In short, information about the quality of care is seen as a vital component in the Dutch healthcare system. There are three main sources for obtaining this information: healthcare institutions, individual healthcare providers, and patients (healthcare users). Healthcare institutions can collect information about how they have organized their care, such as accessibility, facilities, processes and outcomes of care. This may include information collected and recorded by healthcare providers themselves or by independent observers, e.g. by recording treatments, complications, and the prescription and provision of medications (Luce et al., 1994). Patients can also be a source of information. For example, patients with chronic conditions such as diabetes may be deeply involved in their own treatment and report their own observations on their condition. In this respect, patient self-care may provide healthcare providers with useful information for tailoring treatments (Toobert et al., 2000; Swan, 2009). Patients may also be involved in evaluating the care they have received. To this end, they can report their experiences with a specific healthcare provider and the extent to which a treatment has had an effect on their health problem. This thesis focuses on the latter: patient experiences as a source of information on the quality of care.

Patient evaluations have become increasingly important over the last decade due to the changing legal position of patients and growing attention to patient empowerment (Zastowny et al., 1995; Fung et al., 2008; Delnoij, 2009; Mold, 2010; De Boer et al., 2013; Siegrist, 2013). How do patients, as healthcare consumers, experience and evaluate the care they have received? Input from patients is deemed essential in achieving good quality of healthcare, because each of them has their own preferences and interests (Institute of Medicine, 2001). Patients are a unique source of

information because they are the only ones who can report on aspects of care such as patient-centeredness. In addition, patient experiences do not necessarily concur with those of healthcare providers (Sitzia and Wood, 1997; Burney et al., 2002; Zuidgeest et al., 2011).

Before we move on to how to obtain, measure and analyse patient experiences, a few other issues will be discussed first. For instance, what constitutes quality of care? How can it be operationalized? And which parameters or variables should be taken into account?

Measuring quality of care from the patients' perspective

Defining quality of care indicators
The World Health Organization (WHO) states that quality of healthcare can be divided into six domains (WHO, 2006). According to the WHO, healthcare should be:
- effective (evidence-based, resulting in improved health and based on need);
- efficient (maximizing the use of resources and avoiding waste);
- accessible (timely, geographically reasonable, at an appropriate location);
- acceptable/patient-centred (taking into account preferences, aspirations and cultures of healthcare users);
- equitable (not varying because of personal characteristics of healthcare users);
- safe (minimizing risks and harm to healthcare users).

The fourth domain makes it clear that the WHO believes that the patients' perspective is highly important. This underlines the relevance of studying the patients' perspectives on the quality of care. Evaluations and measurements are needed in order to examine whether healthcare is in accordance with these six domains. To this end, aspects of quality of healthcare can be translated into measurable units, commonly referred to as quality of care indicators. Quality of care indicators can be categorized into three types: structure, process, and outcome (Donabedian, 1980) (see also Table 1.2).

Structural indicators show whether certain preconditions for safe and effective care are present in a healthcare setting (e.g. a GP's practice or hospital). Examples are the training levels of staff, having a sound quality management system, and the availability of relevant medical equipment. Structural indicators are normally dichotomous (either the criterion is met or

it is not) and are often considered prerequisites for good care. The data needed for determining a structural indicator are often collected by the healthcare providers themselves or by an (independent) observer, but are sometimes reported by patients (for example, the accessibility of a healthcare facility).

Process indicators concern the actions of healthcare professionals and the care process. Examples are how often a professional complies with certain guidelines, how easily patients can get in touch with a care professional, whether patients were provided with clear and accurate information on possible treatments, etc. Process indicators may show how professionals interact and communicate with their patients (and possibly with other healthcare professionals). This can be evaluated either by healthcare providers, by patients, or both.

Outcome indicators, finally, report on the outcome or result of the provided care. This may, for example, be the success of an operation, the status of the health problem of the patient at the end of the treatment, or even the overall quality of life. Outcome measurements may focus not only on technical, biological and physiological outcomes, but also on psychosocial outcomes for the patient, such as quality of life (Wilson and Cleary, 1995). Patient surveys may also include satisfaction measures, such as a global rating of the quality of care that patients received, or whether they would recommend their healthcare provider to other patients. Clinical outcomes are often reported by healthcare providers themselves, e.g. mortality rates and complications in treatment (McIntyre et al., 2001). Where outcome indicators measure adverse outcomes that could point to safety issues, the Healthcare Inspectorate (IGZ) requires healthcare providers to record the information. Increasingly, however, the various stakeholders are looking for ways in which the patient can also indicate whether a treatment has improved their functional status.

There is some debate about the terminology used for identifying quality indicators from the patients' perspective. Some claim that all patient evaluations of care (structure, process and outcome indicators) can be seen as outcomes and can therefore be called "PROMs" (Patient-Reported Outcome Measures). However, a more commonly used definition of PROMs is that they focus exclusively on the outcome of the care process, for instance the health or functional status after treatment (U.S. Department of Health, 2006; Black, 2013). Correspondingly, patient evaluations of structure and process indicators are usually referred to as "PREMs" (Patient-Reported Experience Measures) (Gibbons and Fitzpatrick, 2012). In this thesis, we will use these definitions of PREMs and PROMs (Table 1.2).

Table 1.2 Structure, process and outcome indicators from healthcare providers' and patients' perspectives

Structure indicators
- What do they measure (healthcare providers): organization of care, preconditions for safe and effective care
- Examples (healthcare providers): quality management systems, organization of patients' privacy, staff training levels
- Specific term for measurement from the patient's perspective: Patient-Reported Experience Measure (PREM)
- Examples of indicators (patients): accessibility of the healthcare facility

Process indicators
- What do they measure (healthcare providers): practice of healthcare professionals and the care process
- Examples (healthcare providers): guideline adherence
- Specific term for measurement from the patient's perspective: Patient-Reported Experience Measure (PREM)
- Examples of indicators (patients): communication with professionals, providing clear and understandable information, shared decision-making

Outcome indicators
- What do they measure (healthcare providers): outcome or result of provided care
- Examples (healthcare providers): mortality rates, complications
- Specific term for measurement from the patient's perspective: Patient-Reported Outcome Measure (PROM)
- Examples of indicators (patients): success of operation, status of the health problem at the end of the treatment, quality of life, appraisal of outcome (satisfaction)

Measuring quality of care indicators from the patients' perspective

The CQ-index
A commonly used method for letting patients evaluate the quality of care is to use standardized patient surveys. In the search for more systematic (and transparent) ways of measuring patient experiences (PREMs) with Dutch healthcare, the Consumer Quality Index (CQ-index or CQI) was created in the Netherlands in 2006 (Box 1.1). This was done by combining the Dutch QUOTE surveys with the CAHPS methodology from the USA (Sixma et al., 1998; Hargraves et al., 2003; Delnoij et al., 2006; CMS, 2014).

Box 1.1 The Consumer Quality Index

What is the Consumer Quality Index (CQ-index or CQI)?
- National standard for measuring healthcare quality from the perspective of healthcare users.
- Based on the American CAHPS (Consumer Assessment of Healthcare Providers and Systems) and Dutch QUOTE (QUality Of care Through the patient's Eyes) instruments.
- Collection of instruments (surveys or interview protocols).
- Collection of protocols and guidelines for sampling, data collection, analysis, and reporting formats.

What is measured by the CQ-index?
- What healthcare users find important in healthcare.
- What their actual experiences are.
- How they rate the overall quality of care.

What types of questions are included in the CQ-index?
- Frequency with which quality criteria are met: never, sometimes, usually, always.
- Importance of quality criteria: not important, fairly important, important, extremely important.
- Access to care and the degree to which lack of access is perceived as a problem: a big problem, a small problem, not a problem.
- General rating of the quality of care: scale from 0 (worst possible) to 10 (best possible); or likelihood to recommend: scale from 0 (not at all likely) to 10 (extremely likely).
- Effects of care and adherence to professional guidelines.
- Background characteristics: age, gender, ethnicity, education, and general health status.

Source: Sixma et al., 2008

Years of research have led to a substantial list of conditions and requirements to ensure that surveys in the CQI system produce valid, reliable and useful information for all stakeholders. The development of a new CQI questionnaire can be initiated by various stakeholders in search of an instrument to systematically investigate patient experiences within a healthcare sector. A supervisory committee with representatives of relevant stakeholders (including healthcare providers, health insurers and patients) watches over the development phase of each new CQI questionnaire (Delnoij et al., 2010). They are also involved in determining the content of the questionnaires, although patients remain the central source of information. Qualitative research (interviews, focus groups), supplemented by (scientific) literature, is used for identifying the most relevant aspects of care for patients. Subsequently, survey questions on these care aspects are formulated so that they produce valid and reliable answers. To test whether the questionnaire is understandable and acceptable to the patient group concerned, cognitive interviews are conducted (Buers et al., 2014). These interviews test whether

the most relevant questions have been included, whether participants understand what is meant by each survey item, and whether the response categories reflect their actual experiences. Next, a quantitative study is carried out using the survey among larger samples of patients to establish the psychometric properties of the questionnaire. The survey data are used for comparing the performance of care providers, using multilevel analyses controlled for case mix (Zaslavsky et al., 2001). This not only takes account of the clustering of experiences within each healthcare provider, but also of differences in the numbers of respondents per provider (Damman et al., 2009a). The results are then presented to all parties involved, focusing on the information needs of the various stakeholders and their respective questions:
- Are there significant differences between providers? (De Boer et al., 2011)
- Which providers perform best and which are underperforming?
- What are the main areas for quality improvement?
To this end, the overall results are publicly reported and healthcare providers may receive individual performance reports. Depending on the owner of the data, these reports may be made available on request.
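As a purely illustrative aside, the case-mix-adjusted multilevel comparison described above can be sketched in a few lines of code. The example below is a minimal sketch under assumed, hypothetical column names (experience_score, provider_id, age, education, self_rated_health); it is not the actual CQI analysis protocol, which prescribes its own case-mix adjusters and estimation details per questionnaire.

```python
# Minimal sketch of a case-mix-adjusted multilevel comparison of providers.
# Column names and the input file are hypothetical, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # one row per respondent (hypothetical file)

# Random intercept per healthcare provider; age, education and self-rated
# health serve as case-mix adjusters (fixed effects).
model = smf.mixedlm(
    "experience_score ~ age + C(education) + C(self_rated_health)",
    data=df,
    groups=df["provider_id"],
)
result = model.fit()
print(result.summary())

# The estimated random intercepts are the case-mix-adjusted deviations of
# each provider from the overall mean; these can be ranked or inspected
# for meaningful differences between providers.
provider_effects = pd.Series(
    {provider: effects.iloc[0] for provider, effects in result.random_effects.items()}
).sort_values()
print(provider_effects)
```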

PROMs

Outcome indicators from the patient's perspective (PROMs) have been around for several years, but have increasingly sparked interest among various stakeholders over the last decade, particularly when measuring the quality of care is involved. PROMs are a way of measuring patient-reported outcomes (PROs), consisting of self-reported health and functioning of patients (US Department of Health, 2006). PROMs may be specific or generic, depending on the concept being measured. A number of items are often used to cover this concept; the item scores are subsequently used to calculate the PROM score. Generic PROMs, such as the SF-36 and the EQ-5D, may look at overall health-related quality of life (Kind et al., 2005). Specific PROMs assess how well patients are functioning with respect to a specific health problem or treatment, for instance hip or knee surgery (Roos et al., 1998; Davis et al., 2008; McKenna, 2011). At first, PROMs were used by clinicians, often in clinical trials or other studies, to monitor the health status and functioning of their patients before, during and after treatment. In recent years, PROMs are being included in patient surveys, including CQI surveys, more and more often in order to compare healthcare provider performance (Black, 2013). In the view of many people, progress in (physical or mental) functioning of patients is the key indicator of healthcare quality. In the United Kingdom, PROMs are considered a compulsory part of NHS measurements. In the Netherlands, including PROMs in patient experience surveys is also advocated, and they have been implemented in surveys on elective surgery (e.g. CQI Hip/Knee Replacement and CQI Varicose Veins (Miletus, 2014)). The use of PROMs in combination with PREMs may provide more information about the quality of healthcare: not only about the care process and communication with professionals, for instance, but also about the outcomes of care.

Current issues in patient experience research

Eight years after the implementation of the 2006 Healthcare (Market Regulation) Act in the Netherlands, the first evaluations have been made of its policy assumptions and of whether quality of care information has lived up to its expected value in the relationships between the main stakeholders. One of these evaluations focused specifically on the CQ-index. An overview of the studies in its first five years of existence gave a rather positive picture of the patient experience surveys of the Dutch CQ-index, as well as some points of concern and opportunities for improvement (Hopman et al., 2011). For instance, in some cases there were conflicts of interest between stakeholders regarding the contents of the survey and the purpose of the research. The results were also not always understandable for all stakeholders, which impeded their use of the data. In this respect, some issues arise regarding two major criteria in survey methodology: the validity of patient experience surveys and the usability of their results.

The validity of the surveys hinges on the extent to which the surveys include relevant aspects of quality of care (face and content validity) and whether the survey results are in concordance with corresponding measures (convergent validity) (Streiner and Norman, 1999a; Gravetter and Forzano, 2012). The usability of patient experience survey results is about whether stakeholders are able to use these results. For example, are they able to interpret the results that are presented? Can they act upon these results to choose a healthcare provider (patients), to improve care (healthcare providers) or to purchase good quality care (health insurers)? In this section, we will briefly consider these issues for the current Dutch healthcare system.

Validity of patient experience research
A major development regarding the validity of patient experience surveys is that they are increasingly tailored to specific target groups, such as migrant groups (cultural validation (Asmoredjo et al., 2013)) and people with specific health problems for whom participation in research is particularly difficult

(e.g. people with dyslexia or aphasia, or patients in paediatric oncology) (Tates et al., 2009; Ruijter et al., 2014). This development emerged partly from the criticism that patient surveys were not specific enough for patients to fully report their experiences, or for healthcare providers to identify possibilities for quality improvement. It was also suggested that there could be differences in the perspectives of patients from differing cultural backgrounds and, in turn, differences in the aspects of care that they think are most relevant (Asmoredjo et al., 2013). This implies that a generic survey may not cover all the aspects of care relevant to subgroups in the population. The insights gained by developing these specialized surveys may be used as examples for future research, as they often involve innovative methods for engaging patients in research.

Over recent years, a number of Dutch patient surveys used in nationwide measurements have been shortened considerably (Triemstra et al., 2008). A fear often expressed in discussions about the CQ-index is that the length of the surveys makes them too demanding for patients; a shorter survey may also be sufficient to get a rough indication of the level of quality within a certain healthcare sector. Furthermore, shorter questionnaires are cheaper to send and to analyse. For health insurers, an important reason for shortening surveys is to primarily include items that are able to show statistical differences in performance between healthcare providers. However, this raises some issues regarding content validity; it remains important to include the quality of care aspects most important and relevant to the specific patient group.

The addition of PROMs to CQI questionnaires is partly driven by the use of patient experiences in healthcare purchasing by health insurers. Since 2012, a number of nationwide studies using CQI surveys on elective care (Hip/Knee arthroplasty, Varicose vein treatment and Cataract treatment) have included PROMs, both generic (EQ-5D) and treatment-specific (Kind et al., 2005; Miletus, 2014). On average, process indicators from CQI surveys have shown limited differences in healthcare provider performance (Hopman et al., 2011). In this respect, health insurers in particular hope that PROMs will yield more differences between healthcare providers.

Use and usability of patient experience research

Healthcare market
According to the policy of the WMG, patients are assumed to make informed decisions when choosing a healthcare provider. Victoor et al. investigated the policy theory used in the Netherlands to promote patient choice of providers

(Victoor et al., 2012a). Their research showed that a great deal of effort and funding by the government was directed at presenting information about quality of care, including patient experiences. Damman et al. investigated how patients handle and interpret such quality information, leading to clear recommendations on how to present this information to the public (Damman et al., 2009b; 2012). Nonetheless, presenting results in a clear and understandable way to patients, using suitable media, remains a challenge (Zwijnenberg et al., 2012).

In order to determine whether these efforts are worthwhile, it is important to know more about the willingness of patients to actually use this information (Victoor et al., 2012a). Research shows that patients use quality information very sparingly and selectively when choosing a caregiver (Faber et al., 2009). And when they do, it is not necessarily decisive; patients seem to rely especially on advice from their general practitioner and from friends or family for the actual choice of a healthcare provider (Bes and Van den Berg, 2013). The proximity of the healthcare provider is also important in their choice (Victoor et al., 2012b; 2014a). The fact that many patients do not use quality information does not necessarily have anything to do with reluctance or disinterest. In fact, not all patients are able to interpret or use the information effectively (Rademakers et al., 2014). A recent study shows distinct differences in the characteristics of patients who sought and/or actively used information in choosing a provider and those who did not (Victoor et al., submitted). As research so far suggests, not all patients are willing or able to act as informed consumers who make active choices. There are claims that the amount of information presented to the public and the multitude of data sources make it difficult for people to effectively search for and interpret information. More than half (55%) of the Dutch population have difficulty finding comparative information about the quality of hospitals on the Internet (Nijman et al., 2014). Summarizing information and guiding people to appropriate websites might help improve this situation (Damman et al., 2012).

Internal improvement and accountability
There are indications that patient evaluations have improved in some healthcare settings since nationwide patient experience measurements were introduced (Ikkersheim and Koolman, 2012; Zuidgeest et al., 2012; Winters-Van der Meer et al., 2013). Furthermore, several healthcare sectors have worked collectively on improving the logistics of patient experience research and the interpretation of results, such as Dutch rehabilitation care and the Santeon hospital consortium. Nonetheless, some points of concern remain for the use of patient experiences by healthcare providers (Hopman et al., 2011).

The results are often considered too abstract (e.g. limited survey topics) to identify possibilities for improvement, or not detailed enough (e.g. only available at institutional level) to subsequently tailor interventions. Although most healthcare providers are highly motivated to deliver high-quality care, they may need guidance to interpret and use information from quality of care research (ActiZ et al., 2011). Consequently, targeted action aimed at quality improvement is often difficult to implement in healthcare organizations and requires additional effort (Bosch et al., 2007; Winters et al., 2014).

Healthcare purchasing market
One of the aspects of regulated competition is that health insurers may choose to selectively contract healthcare providers when purchasing care. Selective purchasing of care is based on the insurer's own criteria, in which quality indicators can play a role. Information about the quality of care is also used to discuss potential quality improvements with healthcare providers. These improvement plans and their results are used to differentiate fees for healthcare providers. Even though selective purchasing has increased in the past two years (NZa, 2014), it has been established that the use of quality of care information is still limited compared to other factors. Negotiations are primarily about costs, but also to some extent about healthcare volume (number of treatments) as a proxy for quality (Westert et al., 2010; NZa, 2014; Van Kleef et al., 2014). As mentioned earlier, one of the reasons for the limited use of quality of care information is that the results of many patient experience surveys show little difference in the quality of care between providers (Hopman et al., 2011). Consequently, it is difficult for health insurers to use this information for selective purchasing purposes. Also, rewarding the providers who perform best is irrelevant if all scores are similar.

It should be noted that the competition between health insurance companies makes it difficult to gain full insight into the way these companies use information about quality of care. A recent thesis on the use of pay-for-performance in healthcare suggests that health insurers remain cautious when it comes to reporting their use of quality of care research (Eijkenaar, 2013a). Nonetheless, health insurance companies do call for more useful information about quality of care for the purposes of healthcare purchasing; the first steps towards using this information are apparent (NZa, 2014; Ruwaard et al., 2014). A recent scoping review suggests that health insurers need more relevant information and encouragement to involve quality of care (improvement) in their contract purchasing (Bouwhuis et al., submitted). Especially among health insurers, there is a call for the results of the (sometimes extensive) patient surveys to be summarized or aggregated at higher levels, for instance to get an impression of the worst-performing and

best-performing providers. The same goes for items used to get an overall view of the patient's experience of quality of care, for instance a global rating. By carefully analysing and presenting such data, researchers may help the work of health planners in purchasing the best care (Zema and Rogers, 2001). Additional incentives might potentially improve the case for quality of care in healthcare purchasing (Custers et al., 2007).

Health insurance market
Some health insurance companies advertise their ability to support patients in choosing healthcare providers, or to enable patients to receive the best possible care. In order to provide sound advice, many insurers use information about the quality of care. However, this service provided by health insurers does not seem to be a major determining factor for Dutch people when choosing a health insurer. Even though the 2006 WMG made it possible for people to change their insurer every year (sparking competition between health insurers), the annual percentage of people changing insurer since 2006 has been between 4 and 10% (Romp and Merkx, 2013; Reitsma-Van Rooijen and De Jong, 2014). Freedom to choose a healthcare provider is an important issue for many people when selecting a health insurer (Reitsma-Van Rooijen et al., 2011). But more importantly for the system, the reason most widely stated for switching during recent years was the premium (approx. 40%), with less than 1% of people changing insurers because of the quality of contracted care (Brabers et al., 2012). Six out of ten Dutch people are hardly aware of the differences between health insurers, and 43% are unable to find comparative quality information on insurers on the Internet (Nijman et al., 2014). Therefore, in the competition between health insurers to enrol people into their schemes, quality of care does not (yet) seem to play a major role.

As with the healthcare market, there are initiatives to facilitate comparisons of health insurance policies for the general public, by gathering and summarizing quality information (Consumentenbond, 2014; Independer, 2014). These mostly involve (commercial) websites through which members of the public can enrol in a particular health insurance scheme. Such endeavours to help people select an appropriate insurance policy may not be in vain: the Consumentenbond (Dutch organisation for the protection of consumers) reported that Dutch people had over 1,300 different health insurance policies to choose from at the end of 2014 (VARA, 2014).

In short, it is clear that there are several important issues with regard to measuring and improving quality of care from the perspective of patients. It seems that the quality of care information does not yet meet the demands of

stakeholders. This underlines the relevance of further research to better understand and then improve the validity of patient experience surveys and the usability of their results.

This thesis

The studies in this thesis are intended to contribute to the understanding, or even the improvement, of the validity of patient surveys, the usability of their results, or both. This thesis seeks to answer two general questions:
1. How can the validity of patient experience surveys be improved?
2. How can the usability of patient survey results be improved for stakeholders?

This thesis includes six studies on these subjects, as shown in Table 1.3. Validity being a broad concept in itself, it may be useful to define the dimensions of validity considered in this thesis: face validity, content validity and construct validity. Face validity is the degree to which a survey item at first glance seems to cover the concept it aims to measure (Streiner and Norman, 1999b; Mokkink et al., 2012). This may involve a subjective judgement, which can vary according to the specific individual. Content validity concerns the relevance of survey items regarding the concept being measured, and whether the survey as a whole covers this concept (Streiner and Norman, 1999c; Mokkink et al., 2012). As an element of construct validity, convergent validity can be assessed by examining how closely an item or measure is related to other measures to which it could theoretically be related (Streiner and Norman, 1999a). Usability concerns the extent to which the results of patient survey research are comprehensible and interpretable for stakeholders, enabling them to use the results. We will illustrate below how each of the studies contributes to answering the research questions.

Table 1.3 Relating the study aims of the thesis studies to the two general research questions

Ch2. Gathering survey content: focus group meetings and online focus groups (rehabilitation care)
- Validity: Organizing focus groups to obtain relevant information for modifying a survey for adult patients to create valid measurements for children and adolescents.
- Usability: -

Ch3. Developing, evaluating and optimizing a patient experience survey (chronic skin disease care)
- Validity: To underpin the content validity of a patient experience survey, in development, psychometric testing and optimization.
- Usability: -

Ch4. PREMs, PROMs, and clinical indicators (hip/knee arthroplasty)
- Validity: Inclusion of PROMs in a patient experience survey, assessment of their relationship with PREMs, for construct validity.
- Usability: To determine associations between PREMs, PROMs and clinical indicators; linking patient experiences, effectiveness and safety of care.

Ch5. Constructing overall scores from patient experiences (nursing home care)
- Validity: To construct an overall score to summarize patient survey results; does this provide a valid representation of patient experiences?
- Usability: Do multiple methods to summarize patient experiences lead to different results? And which are most usable?

Ch6. Including the NPS in a patient experience survey (inpatient and outpatient hospital care)
- Validity: Construct and convergent validity: inclusion of an alternative summary score (NPS) in a patient experience survey.
- Usability: To examine response patterns of summarizing measures, in search of improved differentiation.

Ch7. Specificity of data and data analysis from patient experience surveys (inpatient hospital care)
- Validity: Level of detail in measuring experiences: the appropriate level for obtaining valid measurements and analyses (department vs. hospital).
- Usability: To determine the specificity of data/results: the use of multilevel structures in analyses to identify the appropriate level of influence.

Chapter 2 focuses on the content validity of a patient experience survey. This particular study attempted to enhance content validity by tailoring the survey to the experiences, needs and preferences of the specific patient group. This chapter presents the organization of focus groups with children and adolescents, used in developing a patient experience survey questionnaire about their treatment in rehabilitation centres. Numerous studies have been

published in the past looking at the development of patient surveys, including CQI questionnaires. However, it is not always evident how the contents of these questionnaires were determined; many studies do not describe the qualitative data collection used to gather the aspects included in the questionnaire. Also, as mentioned earlier, there is increasing interest in tailoring surveys to specific patient groups, of which Chapter 2 is an example.

Chapter 3 describes the entire process of development and psychometric testing of a patient experience survey, in this case the CQI on Chronic Skin Diseases: how are the contents of the survey obtained, and which tests are used to investigate its validity and reliability?

Traditionally, patient surveys consisted mainly of patient evaluations of the care process (PREMs), but there is increasing interest in the evaluation of treatment outcomes by patients themselves. In this respect, it is interesting to look at the addition of outcome indicators (PROMs) to patient surveys. It is important to note that a PROM score may depend not only on various aspects of care, such as the organization of the care process, but also on patient characteristics. Therefore, in order to obtain a more comprehensive view of quality of care, a combination of structure, process and outcome indicators (i.e. PREMs and PROMs) may be useful. So what do PROMs specifically add to the questionnaires, and how do PREMs and PROMs relate to each other? By examining this, we can get an idea of their construct validity. Chapter 4 provides an example for a patient survey on Hip/Knee arthroplasty. Moreover, this study also assesses the associations between PREMs, PROMs and clinical indicators reported by healthcare providers themselves, thus attempting to link patient experiences, effectiveness and safety of care.

Patient surveys generally produce numerous results: multiple performance scores (both detailed and aggregated), figures, tables, and so on. Is it perhaps possible to summarize these results by calculating an overall score? If so, this information could be used to get a quick view of healthcare quality, without having to go into too much detail. If such overall scores are used as summary measures for patient experiences, it is an important condition that these scores should be representative of actual patient experiences, which relates to their construct validity. In Chapter 5, the construct validity and usability of a number of potential overall scores are investigated, using data from the CQI on Nursing Home Care.
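Purely as an illustration of the idea of an overall score: one naive possibility (not necessarily one of the four methods compared in Chapter 5) is to rescale each experience scale to a common 0-10 range, average the scales per respondent, and then aggregate per provider. The column and scale names below are hypothetical.

```python
# Naive overall-score sketch: rescale hypothetical 1-4 experience scales to
# 0-10, average them per respondent, then average per provider.
# Illustrative only; not the thesis's methods.
import pandas as pd

scales = ["communication", "information", "privacy", "autonomy"]  # hypothetical scale names

df = pd.read_csv("nursing_home_survey.csv")  # hypothetical file, one row per respondent

rescaled = (df[scales] - 1) / 3 * 10          # map 1-4 responses onto 0-10
df["overall_score"] = rescaled.mean(axis=1)   # unweighted mean over the scales

provider_scores = (
    df.groupby("provider_id")["overall_score"]
      .agg(["mean", "count"])
      .sort_values("mean")
)
print(provider_scores)
```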

In the search for a simple summary measure, use of the 'Net Promoter Score' (NPS) has also been on the rise in the Netherlands, not only in management and retail research, but also in determining client satisfaction with healthcare. Because the NPS is being considered as a replacement for some existing questions in patient surveys as an outcome indicator, their respective relationships with actual patient experiences are investigated. Potentially, the NPS allows for more differentiation in measuring the willingness to recommend a healthcare provider. Also, its methodology includes the calculation of a single score, which is supposed to represent the loyalty of clients or patients (a schematic calculation is sketched at the end of this introduction). Chapter 6 contains this study on the construct validity of the NPS, using data from the CQI Inpatient Hospital Care and CQI Outpatient Hospital Care.

Another point about the level of detail of survey results is the level at which patient experiences are measured, analysed and presented. The validity of results depends in part on whether the right unit of observation is being described. In the case of hospital care, it may be important to know the quality of care in different departments, in addition to aggregated information at the hospital level. It has already been mentioned that the specificity of the data is very important for its usability. In this case, this was examined for data from the CQI Inpatient Hospital Care, as presented in Chapter 7. This included a measurement issue concerning multilevel structures: is it relevant to include the department level in analyses, in addition to the hospital level? And are there perhaps structural differences between types of departments? If so, this might provide more specific and therefore more useful information.

Finally, Chapter 8 summarizes the results of this thesis and covers the main points for discussion. The results and conclusions of the previous chapters and opportunities for their implementation will be discussed, including suggestions for future research that could further increase the validity and usability of patient experience research.
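For reference, the conventional NPS calculation referred to above for Chapter 6 derives a single score from a 0-10 'would you recommend' item: respondents scoring 9-10 count as promoters, 0-6 as detractors, and the NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up ratings:

```python
# Conventional Net Promoter Score calculation from 0-10 recommendation ratings.
def net_promoter_score(ratings):
    """Return the NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example with made-up ratings: 4 promoters, 3 passives (7-8), 3 detractors -> NPS = 10.
ratings = [10, 9, 9, 9, 8, 7, 7, 6, 5, 3]
print(net_promoter_score(ratings))  # 10.0
```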

2
Exploring young patients' perspectives on rehabilitation care
Methods and challenges of organizing focus groups for children and adolescents

This article was published as: Krol M, Sixma H, Meerdink J, Wiersma H, Rademakers J. Exploring young patients' perspectives on rehabilitation care: methods and challenges of organizing focus groups for children and adolescents. Child: Care, Health and Development, 2014; 40(4).

Introduction

The importance of the patients' perspective in quality of care research is widely recognized (Fung et al., 2008; Delnoij, 2009). Patients can provide information that cannot be collected otherwise, for example about how they experience the care process, or about the perceived effectiveness of care. Although many studies have focussed on the preferences and experiences of patients in healthcare, not every patient group is heard. Children, for instance, are often not included in patient surveys. Even though many studies claim to focus on the experiences of children in healthcare, information reported by parents, as proxies for their children, is often used instead (House et al., 2009; Lindeke et al., 2009). Children themselves, however, have their own healthcare preferences and experiences, and these do not necessarily concur with those of their parents (Van Beek, 2007; Knopf et al., 2008). Fortunately, the importance of involving children in quality of care research is increasingly recognized (Lightfoot et al., 1999; Siebes et al., 2007; Watson et al., 2007; Lindeke et al., 2009; Pelander et al., 2009). Children and adolescents are perfectly able to give their own opinions about their healthcare, if they are given the right opportunity to do so. Children as young as 8 years old are capable of participating individually in (online) surveys (Borgers et al., 2000; Lindeke et al., 2009). Before this age, however, the cognitive skills necessary for self-reflection and the question-answering process are usually not yet developed (Piaget and Inhelder, 1969; Borgers et al., 2004).

Our research aimed to give young patients an opportunity to speak up for themselves in the development process of two new patient experience surveys on rehabilitation care [new additions to the Consumer Quality Index (CQI), a family of surveys measuring patient experiences in Dutch healthcare (Delnoij et al., 2006)]. Rehabilitation care covers a variety of specialized care (such as physical, occupational and speech therapy), aimed at enhancing the functional abilities of patients. The causes of the functional disabilities can also be diverse, such as congenital disorders, (traffic) accidents, sports injuries or cognitive disorders. In the case of children and adolescents, the first two causes are most common in rehabilitation care. In the Netherlands, each year about 18,000 young patients (<18 years old) receive rehabilitation care, constituting about 23% of the total number of rehabilitation patients (Revalidatie Nederland, 2011).

In order to develop a valid patient survey, relevant quality items for the specific patient group have to be identified. Our research distinguished two age groups of young patients: children (aged 8-11 years old) and

(pre)adolescents. The division of these age groups was based on developmental differences: in adolescence, young people become more autonomous and may hold different views than they did in their childhood (Kyngäs, 2004; Livingston et al., 2007). Because of these developmental differences, Dutch law actually makes a distinction between the responsibilities of healthcare providers regarding these age groups; adolescents should be more actively involved by healthcare providers in their care than children (Law on Medical Treatment Agreement, 1994).

To explore the preferences and experiences relevant to patients, focus group research can be used (Sofaer, 2002). Focus group meetings have proved to be a useful and suitable method for involving a specific group of people, such as patients who visit the same healthcare provider or suffer from the same specific illness (Krueger and Casey, 2000). Focus groups aim to provide an encouraging and safe situation for participants to freely discuss their experiences and opinions, for instance regarding healthcare. This is also true, with some modifications, for children and adolescents (Peterson-Sweeney, 2005). More recently, online focus groups have also proved to be a popular and accessible way of involving patients in quality of care research (Moloney et al., 2003; Tates et al., 2009). An online forum might prove a useful alternative to a focus group meeting; participating in an online forum can be done from the comfort of one's own house (or any other place that has an Internet connection), regardless of the time of day. It also provides more anonymity. Adolescents are usually very familiar with online forums through social media, and nowadays the same increasingly applies to children (Kennedy et al., 2003; Kenny, 2005). There have been encouraging experiences in using online focus groups for obtaining children's and adolescents' views on healthcare (Zwaanswijk et al., 2007; Tates et al., 2009).

In this article, we will present the organization and design of our focus group meetings and online focus groups. In doing so, we will discuss the usefulness and challenges of both types of focus groups in our research and aim to answer the following research question: to what extent are focus group meetings and online focus groups feasible and applicable strategies for exploring the preferences and experiences of children and adolescents in rehabilitation care?

Methods

Recruitment of participants
The research took place during the summer. Young patients from two Dutch rehabilitation centres were recruited to participate. They could choose to participate either in a focus group meeting or in an online focus group. Patients were eligible for selection based on their age (8-11 years for the children's groups; older for the adolescents' groups). They also had to have had at least one appointment at the rehabilitation centre in the past 12 months. As the research involved minors, the invitation was addressed to the children and their parents. The invitation included an outline of the research for the parents and a specific letter for the young patients, which stressed the importance of the research and explained that participants would receive a gift certificate. The meetings were held at the rehabilitation centres. The aim was to organize two meetings per age group in both centres. In addition, online focus groups were constructed, one for each age group. The online focus groups consisted of a 1-week online forum.

Under the Dutch Medical Research Involving Human Subjects Act, this study did not require ethics approval. An explanatory statement of the governmental agency overseeing compliance with the Act defines medical research (in short) as research aimed at acquiring generalizable results about diseases and health (aetiology, pathogenesis, symptoms, treatment regimes, etc.). Moreover, research that does not subject people to certain procedures or require them to act in a certain way is not medical research in the sense of the Act. Nevertheless, informed consent was obtained from the participants' parents.

Tailoring focus groups to children and adolescents
Although there are similarities, a focus group involving children or adolescents demands a slightly different approach than a standard (adult) focus group (Peterson-Sweeney, 2005). The focus group meetings for children were led by a female professional from the WESP foundation, an organization specialized in involving children as research participants (WESP foundation, 2013). During the WESP meetings, she was assisted by one of the researchers. Focus groups with adolescents were led by one of the researchers (H.W.). Meetings were scheduled to last a maximum of 90 minutes, including a 15-minute break, because it was expected that such meetings would be more tiring for young patients (Heary and Hennessy, 2002); in contrast, meetings for adults usually last two hours or more. In standard focus groups, 10 to 12 participants are common. However, because of the

potentially intimidating setting of a focus group meeting, we decided to form smaller groups, aiming to recruit six participants for each children's group and eight for each adolescents' group. The moderator and the assistant were also alert to any stress, fear or agitation in the participants (Heary and Hennessy, 2002). In order to put the children and adolescents at ease, refreshments, cookies and crisps were available, and the atmosphere of the meetings was kept as informal as possible (Peterson-Sweeney, 2005). The participants were also shown beforehand in which room their parent(s) would be during the meeting.

For adolescents, separate meetings were organized for boys and girls. Puberty marks a period in which adolescents become very self-conscious of themselves and their body. In order not to deter participants from discussing personal subjects such as relationships and sexuality, it was decided to organize same-sex focus groups (Heary and Hennessy, 2002; Wiegerink et al., 2006).

Design of focus group meetings
The meetings were designed by the WESP foundation. To start off, the moderator gave the participants a short outline of the meeting and stressed the confidentiality of what was said during the meeting (Horner, 2000; Heary and Hennessy, 2002). She also emphasized that the participants were 'experts by experience'; the meeting was exclusively about their perceptions and there were no such things as good or bad answers. Participants should feel that their opinions matter and that their input is taken seriously. In order to build trust and inspire self-confidence, the meetings continued with a short round of introductions (Horner, 2000).

Subsequently, participants were asked to form pairs and interview each other, using a sheet of questions handed to them by the moderator. The interview covered four different dimensions, pretested by WESP. The dimensions and their associated questions were presented in a logical order so that participants could give their answers and opinions more easily. The four dimensions were: Exploration (e.g. 'Of what use is a rehabilitation centre to a child?'), Feeling ('What do you feel like when you are there?'), Opinion ('What do you think about what happens there?') and finally Advice ('What should be changed?'). All questions are specified in the appendix. Answers to the questions were written down on slips of coloured paper by the children who were interviewing, using separate colours for separate subjects. At the end of the interview, the roles were reversed. Subsequently, the children glued the paper slips with the answers onto flip charts. This provided an overview of the subjects, useful for both the participants and the researchers.

33 After the interviewing sessions, there was a joint discussion about which aspects of rehabilitation care are important to children or adolescents. For example, the moderator asked the participants to imagine their rehabilitation centre wanted to know how well it performed according to their young patients. Which questions should the centre then ask the children or adolescents who are being treated there? The answers from the participants were written down by the moderator on a flipchart. To conclude the meeting, participants received a gift certificate, and an evaluation form about the meeting, which they could return by mail anonymously. Design of online focus groups For each of the age groups an online focus group (i.e. online forum) was organized, in the same manner as Tates and colleagues (2009). Applicants received the URL of the online forum and a personal username and password. Considerable attention was paid to making the texts on the website clear and comprehensible. Also, some rules of conduct were published on the site, for instance about language (e.g. no profanities) and anonymity. The forums were accessible for a week. Applicants received a reminder a few days before the research began and also on the starting day. On the first 5 days, a question was posted each day by the researchers. These were questions also included in the focus group meetings, as presented in the appendix (Wednesday s topic being the concluding question of the focus group meetings). Participants were invited to answer the questions and to comment on both the questions and each other s answers. The researchers monitored the discussion and asked additional questions if necessary. Results Response and participation Below, Table 2.1 describes the response on both types of focus groups. Across the two centres, 359 children and 395 adolescents were invited to participate. In the end, 41 (6%) of these young patients actually participated in either a focus group meeting or an online forum, which was limited. Participation rates were equal for children and adolescents: 13 children and 17 adolescents participated in one of the seven focus group meetings; five children and six adolescents participated in the online forums, having posted at least one reaction on the discussion board. 32 Numbers telling the tale?

34 Feasibility and applicability of the focus groups The duration of the meetings proved to be adequate. After 90-min, no new information was obtained from both children and adolescents. A shorter duration, however, would not have been sufficient to complete the programme. Most children and adolescents were fully motivated to participate, some needed extra encouragement. With regard to the children s focus groups, a few children needed reassurance from the researchers, but most of them gradually showed more enthusiasm. The mutual interviewing strategy suited most participants. However, some participants resorted to literally repeating the answers their interviewing partner had just given, probably because of insecurity. But for the most part, participants went ahead enthusiastically and seemed to enjoy the experience. This was also reflected by the evaluation forms that were returned afterwards; nine children returned the form and seven of them stated they had liked participating (two said they don t know ). With regard to the adolescents focus groups, the results were more extensive and detailed than those of the children s. With regard to the group meetings, however, both boys groups proved to be far less informative than the girls focus groups. This was observable during the meetings, but was also reflected by the results; the girls lists of answers were far more extensive than those of the boys. Also, in one of the boys focus groups, participants sought to outdo each other by bragging, resulting in limited responses. Nonetheless, of the 10 evaluation forms that were returned, eight (including four boys) were positive about participating, the other two being neutral. Also, participants were asked on this form what they thought of the same-sex composition of the focus groups. Eight of them (five girls, three boys) stated they had no preference whether it was a same-sex group or a combined group. Only one girl preferred a same-sex group. With regard to the online forums, these proved less successful than the meetings, both in terms of participation and in results. After several days, many of the applicants had not yet posted answers, or even visited the website. To remedy this, additional reminders were sent. In the end, though, many of the adolescents had not posted any answers or comments on the website. Also, the majority of the answers to our questions proved to be either very specific or very general. Spurring the participants to clarify their answers, or to elaborate on them, only led to a few additional reactions. Chapter 2: Organizing focus groups for young patients on rehabilitation care 33

Table 2.1 Response to the focus group meetings and online forums per centre and age group. For each centre and age group (children, adolescent girls and adolescent boys) the table lists the number of invitations, applications and participants and the participants' mean age (range), separately for the meetings and for the online groups (participants posting at least once). Across the two centres, 229 and 130 children received an invitation (a); one children's meeting was cancelled; the ages shown range from 8 to 12 for the children's groups and from 12 to 15 for the adolescents' groups.
(a) Applicants could choose from two dates.

Discussion

A limited number of children and adolescents participated in the focus groups or the online forums. It proved very difficult to recruit young patients, despite sending the invitations and reminders through the rehabilitation centres and mentioning the gift certificate, although some participants did mention the certificate as an extra reason to participate. It should be noted that our issues regarding response rates seem not uncommon for qualitative studies involving children (Goodenough et al., 2003; Kendall et al., 2003; Siebes et al., 2007). Several reasons, both organizational and personal, might account for the low response rate. First, the meetings and online forums took place during the last week of the summer holiday, so many children might have been away shortly before or during the focus groups and therefore unavailable. Organizing these focus groups during regular school weeks, however, may have overburdened these young patients. Organizing meetings during school hours is an option, for instance at a school for physically disabled children,

36 although this would include only a part of the target population, i.e. the most severely disabled children and adolescents. Second, the number of treatments was very limited in the summer period; it was not possible to recruit participants through the employees of the rehabilitation centres. Being asked to participate by their own physician or physiotherapist might have improved the participation rate of young patients. Also, there are some possible reasons at a personal level. First of all, the letters were sent to the parents, and they might object to their child participating. Also, it is possible that a number of young patients did not feel the research was relevant to them, in case their rehabilitation treatment had ended, or if they did not regularly visit the centre. On the other hand, it may also be that rehabilitation treatment is highly relevant to patients, but they are reluctant to reflect on it, as was also suggested by Siebes and colleagues (2007). This may be because of the profound impact of their health problem on their daily life, such as it is. Especially for children, it may be quite a big step to visit the centre on a day off to talk to strangers about their rehabilitation. We expected the online forums to be an effective way to involve adolescents in particular, as was found in previous research (Zwaanswijk et al., 2007; Tates et al., 2009). Tates et al. obtained a 23% response rate from paediatric cancer patients (8 17 years old). Despite following the exact same research methodology, the response on our online forums was much lower (2%) and actually proved to be lower than for the focus group meetings, despite all its advantages. A possible explanation for this is that participants in the Tates et al. study were more involved in their healthcare, cancer being such a serious and life-threatening illness. In rehabilitation care, patients are treated for a wide variety of illnesses and health problems, ranging from minor defects to complex trauma. Of course, in case of the latter, rehabilitation care also has tremendous consequences for the life of a child, so this explanation does not apply to all patients. It is important to note that qualitative research is used to explore subjects and not to generate generalizable information. Therefore, a limited response does not have to necessarily be problematic, as long as saturation of the information is obtained, and research participants are representative of the population (Strauss and Corbin, 1998). However, the low recruitment rates in our research did limit the generalizability of the focus group results. It would be interesting to investigate the use of social media (twitter, creating a buzz) or other recruitment strategies to increase the awareness of the research. Perhaps by asking children or adolescents to join a research panel. These strategies could increase the commitment. Chapter 2: Organizing focus groups for young patients on rehabilitation care 35

37 The setting of a focus group meeting suited most of the participants, but seemed a bit awkward for some of the children. However, the WESP strategy of letting participants interview each other did lead to active participation in the meetings. Letting children talk about their ideas and experiences during activities seems an appropriate and useful strategy. For more complete data collection, it could be considered to audiotape (or even videotape) the meetings, in addition to the written answers of the participants. These recordings could help to identify subjects mentioned by the participants that were not written down, but also more spontaneous remarks made by the children. Also, it may have been useful to organize multiple sessions for each group. In this way, participants get to know each other and the researchers better, which will probably let them open up more. Another suggestion is to perform individual interviews with the youngest patients (aged 11 years and younger). This would avoid the excitement of a group meeting and may make the children feel more at ease in discussing all subjects important to them. Also, this would give the researcher the opportunity to explore answers in depth. During the current meetings, this was not possible. Furthermore, it may be considered to interview children at a location of their own choice, for instance at home. The rehabilitation centre was a familiar location for most participants, but it may have raised negative associations for some of them. With respect to the same-sex focus groups for adolescents, it is difficult to judge whether the usability of focus group results would benefit from mixed focus groups. On the downside, it might slow down the enthusiasm shown by the girls or increase the bragging by the boys. It should be noted, though, that this concerned an exceptionally young adolescents focus group (i.e. all three participants were 12 years old). On the upside, it might lead to more balanced results, and most participants stated they had no preference regarding the gender composition of the group. Another reason for the same-sex groups was to make the participants feel comfortable enough to maybe even discuss the way their rehabilitation care affected relationships and sexuality (Wiegerink et al., 2006; 2011). This subject was not mentioned by any of the participants, however. The relevance of sensitive subjects, such as sexuality and social exclusion, could be investigated more thoroughly, perhaps by using a different strategy such as interviews (De Graaf and Rademakers, 2011). Practical implications A few recommendations can be made for future research seeking the opinions and preferences of children and adolescents about rehabilitation care, but also 36 Numbers telling the tale?

38 in other healthcare disciplines. First, sufficient attention should be paid to maximizing participation rates, for instance regarding planning, location and arousing the interest of young patients. Second, a single meeting is probably too short to create a sufficiently safe environment for young patients to voice their opinions, especially for children. Repeated meetings may provide them with more confidence and enable the researchers to explore topics more thoroughly. Also, individual interviews could be considered, in addition to focus groups. The use of online focus groups in the current patient groups has not proved its value. Conclusion The current design proved useful in researching the opinions and preferences of adolescents regarding healthcare, but less so for children. For both age groups, focus group meetings proved more feasible and useful than online forums. With some adjustments, the (online) focus group design could provide a method for actively involving both age groups in quality of care research. Chapter 2: Organizing focus groups for young patients on rehabilitation care 37

Appendix
Questions for interviews in focus group meetings and online focus groups

Focus group meetings

Exploration
- What is a rehabilitation centre? What people are there?
- What are you doing in a rehabilitation centre?
- Of what use is a rehabilitation centre to a child?
- At the beginning, what did you expect from visiting the rehabilitation centre? Did that come true? What did, what did not? Why was that so?
- How could children be helped if there was no rehabilitation centre?

Feeling
- What does a child feel like, visiting the rehabilitation centre for the first time? What did you feel like, then? And how do you feel at the moment?
- What helps if you feel bad when you are at the centre?
- What makes you feel comfortable when you are at the centre?

Opinion
- What do you like about the rehabilitation centre? What do you like best? What do you like least?
- What would you want to take home with you? What would you remove?
- What do you think of the building?
- What do you think of the people who work there?
- What do you think of rehabilitating itself?
- What do you think of the contact with other children?

Advice
- What would the rehabilitation centre look like, if you were the boss?
- How can the people at the centre make sure that children/adolescents are as comfortable as possible? How can your parents make sure that you are as comfortable as possible? What can the other children/kids do?
- What would you change about the building?
- What would you change about the people who work there?
- What would you change about the rehabilitation program?
- What would you change about the contact with other children?

Online focus groups

Tuesday: What do you think about the rehabilitation centre? What do you like best and what do you like least?
Wednesday: Imagine you get to choose a rehabilitation centre to visit for your treatment. What would you want to know of each centre before you made your choice?
Thursday: What helps if you feel bad when you are at the centre?
Friday: How can the people at the centre make sure that children/adolescents are as comfortable as possible?
Saturday: What would you change about the rehabilitation centre, if you were the boss? (This may be anything, for instance something about the building, about the people who work there, or your own treatment.)
Sunday: No new statement/question
Monday: No new statement/question


3 Consumer Quality Index Chronic Skin Diseases (CQI-CSD): a new instrument to measure quality of care from patients' perspective

This article was submitted as: Van Cranenburgh O, Krol M, Hendriks M, De Rie MA, Smets EMA, De Korte J, Sprangers MAG. Consumer Quality Index Chronic Skin Diseases (CQI-CSD): a new instrument to measure quality of care from patients' perspective.

43 Introduction Chronic skin diseases, such as psoriasis, atopic dermatitis, and hidradenitis suppurativa, have a relatively strong, negative impact on patients physical, psychological and social functioning, and well-being (Rapp et al., 1999; Ongaene et al., 2006; Wolkenstein et al., 2007; Hong et al., 2008), i.e. patients health-related quality of life (HRQoL) (WHOQOL Group, 1993). Dermatological treatment may result in temporary suppression or remission of symptoms, but chronic skin diseases cannot be cured. Therefore, patients with a chronic skin disease require prolonged use of dermatological care. Needless to say, high quality of dermatological care is of paramount importance (Kirsner and Federman, 1997). To achieve a high standard of quality of care, patient-centred care is increasingly advocated (Groene, 2011). In addition to indicators based on expert consensus and clinical measures (Renzi et al., 2001; Augustin et al., 2008; 2011), patient satisfaction is considered to be a relevant indicator to measure quality of care from patients' perspective (Williams, 1994; Van Campen et al., 1995; Kirsner and Federman, 1997; Leung et al., 2009). Concerning psoriasis, patient surveys in the U.S.A. and in Europe (Krueger et al., 2001; Stern et al., 2004; Nijsten et al., 2005; Christophers et al., 2006; Dubertret et al., 2006; Wu et al., 2007; Augustin et al., 2008; Ragnarson et al., 2012; Van Cranenburgh et al., 2013a) have suggested that patients are dissatisfied with the management of their psoriasis, despite national and international treatment guidelines (Nast et al., 2007; Pathirana et al., 2009; Zweegers et al., 2011). Dissatisfaction can lead to poor adherence and consequently suboptimal health outcomes (Finlay and Ortonne, 2004; Renzi et al., 2011; Barbosa et al., 2012), whereas higher satisfaction is found to improve HRQoL (Renzi et al., 2005). Nowadays, questions about patients actual experiences are preferred to questions about satisfaction, as the answers to these questions are less influenced by subjective expectation and provide a more discriminating measure of a hospital s performance (Salisbury et al., 2010). Information on patient experiences can be used by different stakeholders and for multiple purposes (Koopman et al., 2011). For instance, healthcare providers can use the information to monitor their provided healthcare and initiate improvement projects. In a system of regulated competition transparency of healthcare enables patients to make a well-informed choice between healthcare providers (Delnoij et al., 2006; 2010; Stubbe et al., 2007a; 2007b; De Boer et al., 2011). Insurance companies can use the information in their negotiations with healthcare providers (Koopman et al., 2011). A standardized instrument to measure patients' experience with dermatological care is currently lacking. In the Netherlands, the national standard for the measurement and comparison 42 Numbers telling the tale?

44 of patient experiences in healthcare is the Consumer Quality Index (CQ-index or CQI) (Koopman et al., 2011). A CQ-index may consider a general level (e.g. CQI Healthcare and Insurances), a sector in healthcare (e.g. CQI Physiotherapy), a specific disease (e.g. CQI Diabetes), or a specific treatment (e.g. CQI Hip and Knee Replacement). A CQ-index consists of two questionnaires: one to assess patient experiences with respect to relevant quality aspects (CQI Experience) and one to measure the importance patients attach to these aspects (CQI Importance). We developed an Experience and Importance questionnaire regarding chronic skin disease care: CQ-index Chronic Skin Disease (CQI-CSD). This new instrument is intended to provide reliable information about patient experiences with dermatological care and to reveal differences between hospitals based on patient experiences. The aims of this cross-sectional study were: 1. to evaluate the dimensional structure of the CQI-CSD; 2. to assess its ability to distinguish between hospitals according to patients experiences with quality of care; 3. to explore patient experiences with dermatological care and priorities for quality improvement according to patient; and 4. to optimize the questionnaire based on psychometric results and input of stakeholders. Materials and Methods Measurements Questionnaire development In concordance with CQI protocols (Koopman et al., 2011), the CQI-CSD was constructed in cooperation with various stakeholders: dermatologists, nurses, skin therapists, and psychologists specialised in dermatology, representatives of patient organizations and representatives of health insurance companies. Based on the literature and two focus group discussions with 13 patients we constructed a pilot version of the CQI-CSD: CQI-CSD Experience and CQI- CSD Importance. The development of the pilot version of the CQI-CSD is described in more detail elsewhere (Van Cranenburgh et al., 2013b). CQI-CSD Experience The pilot version of the CQI-CSD Experience consisted of 74 items of which 53 items referred to patients experiences with and evaluations of dermatological care: 46 items were formulated as an experience item ('Yes/No' or Chapter 3: Measuring quality of dermatological care from patients perspective 43

45 'Never/Sometimes/Usually/Always'), 2 as a problem item ('Not a problem/a small problem/a big problem') and 5 as a global rating item ('0-10' or 'Definitely not/probably not/probably/definitely'). Examples of items are included in Table 3.2. The remaining 21 items were five skip-items to screen eligibility of respondents to answer specific items, 15 items on patients' background characteristics, and one item on questionnaire improvement. The questionnaire comprised the following sections: Healthcare provided by general practitioner, Accessibility of hospital, Waiting times, Hospital facilities, Information about care process, Healthcare provided by physician, Healthcare provided by nurses, Cooperation of healthcare providers, Information provision by healthcare providers, Patient participation, Safety, Global rating of hospital, Skin complaints, About the respondent. CQI-CSD Importance For each experience/problem item in the CQI-CSD Experience, a corresponding Importance item was formulated. Quality aspects represented more than once, such as conduct of dermatologist and nurse, were converted into one item, e.g. How important is it to you that healthcare providers treat you with respect? (1= Not important at all to 4= Extremely important ). The CQI-CSD Importance consisted of 48 items. Subjects and data collection Three health insurance companies randomly selected 5,647 patients in 20 hospitals for whom costs of dermatological care were claimed between September 2011 and September 2012, according to previously identified declaration codes. These codes differentiate between diagnostic groups, but cannot distinguish between chronic and acute skin diseases. Inclusion criteria were: 1) one or more chronic skin disease diagnosis (self-reported), 2) healthcare received for this diagnosis during the past 12 months, 3) 18 years or older. We purposely included twenty hospitals with the highest patient volumes meeting our inclusion criteria, covering both academic and peripheral clinics in various regions of the Netherlands. We aimed to invite approximately 300 patients per hospital, based on the recommendation to invite at least 200 patients per hospital (Koopman et al., 2011) and our expectation that a proportion of patients would not meet our inclusion criteria (no chronic skin disease) due to our sampling strategy. In September 2012, invitations to complete the CQI-CSD Experience online were sent to the selected patients by postal mail on behalf of the health insurer. Following the Dillman protocol (Dillman et al., 2009), reminders were sent after one week to all patients and in the fifth and seventh week to non- 44 Numbers telling the tale?

respondents. The second reminder included a paper version of the questionnaire and a prepaid return envelope. We randomly invited a subset of patients (one out of four) to complete the CQI-CSD Importance online immediately after they completed the CQI-CSD Experience online. We aimed to attain at least 150 completed CQI-CSD Importance questionnaires, as this number was assumed to provide sufficient information on importance at an aggregated level. The study was conducted according to the Declaration of Helsinki Principles. The study was exempted from ethical approval, as research by means of once-only surveys that are not intrusive for patients is not subject to the Dutch Medical Research Involving Human Subjects Act.

Statistical analyses
Analyses were performed in SPSS 19.0 and MLwiN. All analyses were performed at a significance level of 0.05. For each analysis, we restricted analyses to patients with complete data on the particular variables involved. First, Chi-square tests were performed to examine whether respondents differed from non-respondents in gender, age or diagnosis.

CQI-CSD Experience: dimensional structure
We performed Principal Component Analyses with oblique rotation, given the expected correlation between factors, after checking whether the following criteria were met: 1) Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) >0.60, and 2) Bartlett's test of sphericity. These criteria were not met when analysing all items simultaneously. Therefore, we performed analyses for each questionnaire section separately. The number of factors was determined by Kaiser's criterion (eigenvalue >1) (Kaiser, 1960) and scree plots. Factor loadings had to be 0.3 or higher for items to belong to a factor (Floyd and Widaman, 1995). To evaluate the reliability of each scale, we calculated Cronbach's α and accepted α ≥ 0.60, according to the criteria of Cohen (Hammond, 1995). To obtain insight into the multidimensionality of the questionnaire, we calculated inter-scale correlations. Pearson's correlations of <0.70 indicate that the constructed factors can be seen as measuring separate constructs (Carey and Seibert, 1993).
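To make the scale-construction criteria above concrete, the fragment below sketches Kaiser's eigenvalue criterion and Cronbach's α for one hypothetical questionnaire section. The analyses in this chapter were carried out in SPSS; this Python version is only an illustration, and the column names (item1-item5) and simulated responses are assumptions rather than study data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated 1-4 responses of 200 respondents to five items of one section.
rng = np.random.default_rng(0)
section = pd.DataFrame(rng.integers(1, 5, size=(200, 5)),
                       columns=[f"item{i}" for i in range(1, 6)])

# Kaiser's criterion: retain as many factors as there are eigenvalues > 1
# in the item correlation matrix (scree plots would be inspected alongside).
eigenvalues = np.linalg.eigvalsh(section.corr().to_numpy())
n_factors = int((eigenvalues > 1).sum())

alpha = cronbach_alpha(section)  # a scale is accepted here if alpha >= 0.60
print(n_factors, round(alpha, 2))
```

With uncorrelated simulated responses like these, α will be close to zero, which illustrates why the 0.60 threshold screens out item sets that do not hang together.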

CQI-CSD Experience: discriminative power
To examine the discriminative power of the questionnaire, we performed multilevel analyses, which take into account the correlation between the experiences of patients treated in the same hospital. We used the Iterative Generalized Least Squares (IGLS) method (Goldstein, 1987; Bryk and Raudenbusch, 1992; Woodhouse et al., 1993; Snijders and Bosker, 1994) and calculated intra-class correlations (ICC) to examine whether response patterns of patients within hospitals were correlated. A higher ICC means that more of the variance in patient experiences can be attributed to differences between hospitals. When comparing hospitals, differences in respondent characteristics (age, sex, diagnosis, self-reported health status and education), so-called case-mix adjusters (Zaslavsky, 1998), were taken into account. These characteristics may influence responses in their own right, and an uneven distribution of these characteristics across hospitals can unfairly influence comparisons between hospitals.

Exploration of patient experiences and priorities for quality improvement
To explore patient experiences with dermatological care, we calculated mean scores of scales and global rating items. To explore priorities for quality improvement according to patients, we calculated quality improvement scores for each separate item (Damman et al., 2009c; Zuidgeest et al., 2009; Triemstra et al., 2010). Quality improvement scores were computed by multiplying a quality aspect's mean importance score with the valid percentage of patients reporting a negative experience ('Never/Sometimes', 'No/A little' or 'A small problem/A big problem') and dividing this score by 100 (illustrated in the sketch following Table 3.4). Quality improvement scores could vary between 0 and 4, with higher scores suggesting more urgency for improvement.

Optimizing the CQI-CSD
Items were considered for removal if they decreased the reliability of the relevant scale, belonged to the 10 least important quality aspects according to patients, had ≥10% missing data, and/or showed a significantly high inter-item correlation with another item (Pearson's r>0.80, p<0.001). Stakeholders discussed whether each item should be included in or excluded from the revised version. The opinion of the stakeholders was leading in deciding which items to maintain.
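The multilevel model behind the discriminative-power analysis can be illustrated with a random-intercept model. The study itself fitted these models with IGLS in MLwiN; the sketch below uses the REML estimator of statsmodels' MixedLM as an approximation, and the file and column names (score, age, sex, education, health, hospital) are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent, with a scale score,
# case-mix adjusters and a hospital identifier.
df = pd.read_csv("cqi_csd_scores.csv")

# Random intercept per hospital, fixed effects for the case-mix adjusters.
model = smf.mixedlm("score ~ age + C(sex) + C(education) + C(health)",
                    data=df, groups=df["hospital"])
result = model.fit()

between_var = float(result.cov_re.iloc[0, 0])  # variance of the hospital intercepts
residual_var = float(result.scale)             # within-hospital (residual) variance
icc = between_var / (between_var + residual_var)
print(f"ICC = {icc:.1%}")  # share of variance attributable to hospitals
```

The χ2 likelihood-ratio tests reported in Table 3.3 can be read as comparing such a model with one without between-hospital variance.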

Results

Sample
1,658 of the 5,647 selected patients were not eligible for inclusion because they did not have a chronic skin disease (N=1,354), did not receive care for their skin disease in the past 12 months (N=277), the invitation was returned undeliverable (N=21) or returned because the patient was deceased (N=6). Of the remaining 3,989 patients, 704 declined to participate, 646 completed fewer than five questions, and 1,453 did not respond. Subsequently, 26 patients were excluded because they had not completed the questionnaire themselves. Therefore, 1,160 patients (1,160/3,989 = 29.0%) remained for further analyses. 166 of the 175 completed Importance questionnaires (94.9%) were valid for analyses. Respondents' background characteristics are presented in Table 3.1. Respondents were comparable with non-respondents for sex, but not for age and diagnosis.

Table 3.1 Background characteristics. The table compares respondents (N=1,160) and non-respondents (N=4,487) on sex, age (five age bands from 18 years upwards) and diagnosis (N=4,332: acneiform dermatoses, allergological problems, eczema, hair and nail disorders, inflammatory dermatoses, pigment disorders, premalignant dermatoses, psoriasiform dermatoses and leg ulcers), with Cramér's V as effect size; age and diagnosis differed significantly (* p<0.001). For respondents only, the table additionally reports educational status (N=1,080: no education or primary education only 12.8%, lower or senior secondary education 44.4%, secondary vocational education 19.6%, higher secondary education or higher), global perceived health (N=1,125: very good/excellent 19.0%, good 52.4%, moderate 25.5%, poor 3.1%), time since the diagnosis was established (N=1,122: four categories from less than 1 year ago to more than 15 years ago, the two longest-standing categories accounting for 27.0% and 21.1%), and the healthcare professionals contacted in the past 12 months regarding the chronic skin disease (multiple answers allowed: dermatologist 97.8%, general practitioner (GP) 54.4%, nurse 7.9%, other healthcare professional 7.0%, assistant to GP 4.2%, skin therapist 2.8%).

CQI-CSD Experience: dimensional structure
30 of the 53 items of the Experience questionnaire could be divided into seven reliable scales (Cronbach's α reported in Table 3.2): 1. Information about care process; 2. Healthcare provided by physician; 3. Healthcare provided by nurses; 4. Cooperation of healthcare providers; 5. Information provision by healthcare providers; 6. Patient participation; 7. Safety. The remaining 23 items did not fit into any of these scales statistically and/or by content. Inter-scale correlations ranged from 0.37 to 0.69, indicating that the constructed scales measure separate aspects of dermatological care (Table 3.2).

Table 3.2 Dimensional structure, reliability of scales and inter-scale correlations. For each of the seven scales the table gives an example item, the number of respondents, the mean experience score (range 1-4, with higher scores indicating more positive experiences) and its standard deviation, the number of items, Cronbach's alpha, and the correlations with the other six scales (which ranged from 0.37 to 0.69). Example items per scale: 1. Information about care process, '18. Did the staff tell you beforehand what a treatment or examination entailed?'; 2. Healthcare provided by physician, '27. Did the doctor listen carefully to you?'; 3. Healthcare provided by nurse, '37. Did the nursing staff take enough time for you?'; 4. Cooperation of healthcare providers, '44. Did the staff cooperate well with each other?'; 5. Information provision by healthcare providers, '48. Did you get clear answers to your questions from the healthcare providers?'; 6. Patient participation, '51. Were you allowed to have a say in the (continuation of) your treatment?'; 7. Safety, '55. At the start of a treatment, was it verified that you were the right person?'

CQI-CSD Experience: discriminative power
Multilevel analyses were performed on the seven constructed scales and 16 separate items. Seven remaining items were excluded from the analyses due to high non-response and/or low importance scores. A model correcting for age, education, self-reported health status and sex fitted the data best (Table 3.3). Likelihood-ratio analyses revealed that the instrument was able to discriminate the performance of hospitals on the scale 'Cooperation of healthcare providers' and four items (waiting time until consultation, information about waiting time in the waiting area, facilities in the waiting area and cleanness of the institution).

Table 3.3 Discriminative power. For the seven scales (S1. Information about care process; S2. Healthcare by physician; S3. Healthcare by nurse; S4. Cooperation of healthcare providers; S5. Information provision of healthcare providers; S6. Patient participation; S7. Safety), 13 separate quality-aspect items and three global ratings (physician, nurse, healthcare organization), the table reports the number of respondents, the intra-class correlation (ICC, in %) under a model correcting for age, education, global perceived health and sex, and the p-value of the χ2 likelihood-ratio test, with significant results (p<0.05) marked in bold. The separate items concern: reaching the hospital by phone being a problem, waiting time until consultation being a problem, waiting time in the waiting area, information about waiting time, facilities in the waiting area, cleanness of the hospital, privacy in the hospital, the nurse's attention for consequences of the disease, conflicting information from healthcare providers, recommending the hospital to friends/family, willingness to choose this hospital again, decrease of skin complaints in the past 12 months, and negative consequences of the skin disease in the past 12 months.

Exploration of patient experiences and priorities for quality improvement
Patients reported the most positive experiences on the scales 'Healthcare provided by nurses' and 'Cooperation of healthcare providers' (Table 3.2). Global ratings of the physician (N=954, mean 8.2, s.d. 1.5), the nurse (N=421, mean 8.0, s.d. 1.4) and the hospital (N=1,109, mean 8.0, s.d. 1.4) were all high. Almost all patients would definitely or probably recommend the hospital to friends and family (N=1,108; 94.4%) and would themselves definitely or probably choose this hospital again (N=1,104; 94.3%). The ten most relevant areas for quality improvement according to the quality improvement scores are shown in Table 3.4. Major topics for improvement concerned information provision (e.g. information on patient associations, side effects, waiting time), accessibility (e.g. through e-mail, in case of urgency) and patient involvement (e.g. taking into account patients' expectations, shared decision making).

Table 3.4 Top-10 quality improvement scores, including the mean importance score (with standard deviation) and the percentage of patients with negative experiences. Higher quality improvement scores indicate a higher need for improvement; 8 items were excluded from these analyses due to missing values. The ten quality aspects with the highest quality improvement scores were: healthcare providers inform the patient about patient associations; healthcare providers ask the patient about side effects; possibility to ask questions through e-mail; healthcare providers ask about the patient's expectations; waiting time is shown in the waiting area; corresponding information is provided by different healthcare providers; healthcare providers tell the patient how to reach them in case of urgency; shared decision making about treatment; others (e.g. a partner) are involved during the consultation; healthcare providers are aware of other medication the patient uses.

Optimizing the CQI-CSD
Based on psychometric characteristics, 16 items were considered for removal from the questionnaire because they met one or more of the following criteria: they decreased the reliability of the relevant scale (12 items), belonged to the 10 least important quality aspects according to patients (9 items), had ≥10% missing data (4 items), or resembled another question (2 items). In consultation with stakeholders, it was agreed to maintain six items, to remove six items and to rephrase four items. Additionally, stakeholders suggested four other items to remove, two items to rephrase and one question about self-management to add. In total, ten items were removed, six items were rephrased and one item was added. This resulted in the final version of the CQI-CSD Experience questionnaire containing 65 items, of which 41 assess patient experiences.
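The quality improvement scores in Table 3.4 follow the computation described in the Methods section: the mean importance score of a quality aspect is multiplied by the percentage of patients reporting a negative experience and divided by 100. A minimal sketch of that calculation, with hypothetical input values rather than study data, is given below.

```python
def quality_improvement_score(mean_importance: float, pct_negative: float) -> float:
    """Quality improvement score: mean importance (1-4) * % negative experiences / 100.

    The result ranges from 0 to 4; higher scores suggest more urgency for improvement.
    """
    return mean_importance * pct_negative / 100.0

# Hypothetical example: an aspect rated 3.2 on importance on average,
# with 45% of patients reporting a negative experience.
print(round(quality_improvement_score(mean_importance=3.2, pct_negative=45.0), 2))  # 1.44
```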

55 Discussion In this study, we aimed to evaluate the dimensional structure and discriminative power of the CQI-CSD, to explore patient experiences with dermatological care and priorities for quality improvement according to patients, and to optimize the questionnaire. Our results indicate that the CQI- CSD consists of seven independent, reliable scales: 1. Information about care process 2. Healthcare provided by physician 3. Healthcare provided by nurses 4. Cooperation of healthcare providers 5. Information provision by healthcare providers 6. Patient participation 7. Safety The instrument's ability to distinguish between hospitals according to patients experiences, i.e. its discriminative power, is limited. The instrument was able to detect differences in performance of hospitals on the scale 'Cooperation of healthcare providers' and four items (waiting time until consultation, information about waiting time in waiting area, facilities in waiting area and cleanness of institution). Patients were positive about the care provided by nurses and doctors, but the provision of information by healthcare providers, accessibility of care and patient involvement could be improved. We optimized the CQI-CSD based on psychometric results and stakeholders input, resulting in a revised questionnaire containing 65 items. The limited discriminative power of the CQI-CSD is not unique. Previous studies on other CQI instruments also reported limited discriminative power (Koopmans and Rademakers, 2008; Hammink and Giesen, 2010; Zuidgeest et al., 2010; De Boer et al., 2011). Differences between hospitals were mainly found in available hospital facilities and not in aspects concerning doctorpatient contact (Bensing, 1991; Swenson et al., 2004; Dibbelt et al., 2009), as was the case in our study. Lack of discriminative power may be explained in several ways. First, quality of care might be equally high in all hospitals. Second, patients experiences with doctor-patient contact might differ as much as or even more between healthcare providers within a hospital than between hospitals, leading to comparable scores at hospital level (Zandbelt et al., 2006). Third, the wording of questions might have been too generic and/or the available response formats might not have been sensitive enough to detect differences between hospitals. Last, for statistical reasons it can be questioned whether differences can be detected in as few as 20 hospitals (Koopman et al., 2011). However, since approximately 25% of all Dutch 54 Numbers telling the tale?

56 hospitals were included, covering both academic and peripheral hospitals in various regions, we feel our data are representative for all Dutch hospitals. As in previous studies, information provision and patient involvement were identified as priorities for improving the quality of dermatological care (Kirsner and Federman, 1997; Renzi et al., 2001). Printed information could aid in the transfer of information and in enhancing patients' satisfaction and outcome. Also, dermatologists' interpersonal skills, in particular the dermatologists' ability to answer a patient s questions, to give explanations about the skin problem and to demonstrate concern for the patient s health, have been associated with patient satisfaction and may be improved10. In our study, for instance, the majority of patients stated that their provider did not ask them about their expectations. A patient-centred approach, involving patients in their care, may lead to increased patient satisfaction, more treatment adherence, improved recovery and better health outcomes (Swenson et al., 2004; Hahn, 2009; Meterko et al., 2010; Groene, 2011). Limitations and strengths of the study Our study has several limitations. First, due to our sampling strategy we invited many patients who did not belong to our target group (chronic skin disease): we selected patients through registration of health insurers based on declaration codes which are categorized into diagnostic groups. For future studies we suggest that patients will be selected by the hospitals themselves, based on specific diagnoses. Second, the response rate of 29% is low. Unfortunately, we have no information on reasons for non-response. Respondents were older than non-respondents and differed in diagnoses, setting limits to generalizability of our results with respect to patient experiences and priorities for quality improvement. However, this limited representativeness is not likely to harm the psychometric results. The invitations to participate were sent on behalf of insurance companies. Although the patient association and hospital were both mentioned in the invitational letter, their involvement might not have been clear to patients. Patients may be more willing to respond when senders are more familiar or when the doctor invites them (Edwards et al., 2009). However, another study concluded that varying senders had no effect on response rates (Koopman et al., 2013). Other ways to increase response rates should be examined for future studies. Our study also has several strengths. First, the instrument was developed according to a strict methodology, consisting of both qualitative and quantitative methods, involving various stakeholders (Delnoij et al., 2010) and a substantial number of Dutch hospitals and patients. Second, patients with abroad range of diagnoses were included. Third, we were able to develop the Chapter 3: Measuring quality of dermatological care from patients perspective 55

57 first standardized instrument to reliably measure the quality of care for chronic skin diseases from patients perspective. The instrument may be internationally used after cross-cultural adaptation and a forward-backward translation procedure (Beaton et al., 2000). Conclusion The CQI-CSD provides reliable information about patient experiences with dermatological care on several quality aspects. The questionnaire may be used by healthcare providers to monitor provided healthcare in their hospital, to identify priorities for quality improvement, and to make comparisons between hospitals. 56 Numbers telling the tale?

4 Complementary or confusing: comparing patient experiences, patient reported outcomes and clinical indicators of hip and knee surgery

This article was submitted as: Krol MW, De Boer D, Rademakers JJDJM, Delnoij DMJ. Complementary or confusing: Comparing patient experiences, patient reported outcomes and clinical indicators of hip and knee surgery.

59 Introduction In quality of care research, many instruments are used to measure healthcare provider performance. Among these, both clinical information and patient evaluations are considered highly important (Mainz, 2003; Gibbons and Fitzpatrick, 2012; Doyle et al., 2013; Black, 2013). Collection and recording of clinical information relating to quality of care is done mainly through registrations by clinicians themselves or by independent observers (McIntyre et al., 2001). Patient evaluations of quality of care have been of growing importance in the last decades (Zastowny et al., 1995; Delnoij et al., 2006; Fung et al., 2008; Mold, 2010; De Boer et al., 2013). As healthcare users, patients are more and more encouraged to report their experiences, especially through survey research. Patients may perceive the care process differently from health care providers and value aspects of care quality differently, which makes them a unique and important source of information, especially in assessing patient-centeredness of care (Sitzia and Wood, 1997; Burney et al., 2002; Van Empel et al., 2011; Anhang Price et al., 2014). Information on the performance of healthcare providers is used for different purposes by stakeholders; for example for patient choice, quality improvement and healthcare purchasing (Delnoij et al., 2010). However, developing, measuring and interpreting performance indicators remains challenging, and continues to give rise to debates (Brook et al., 2000; Mainz, 2003; Biering et al., 2006; IAoP, 2012). These issues are even more prominent if performance indicators from different perspectives are used to measure quality of care. This is the case, for instance, for patient evaluations, which incorporate quality of care aspects from the patients perspective, and clinical indicators, which are registered by healthcare providers. One way to increase our understanding of how to interpret performance indicators is to study associations between indicators, by means of triangulation. Although combining indicators from different data sources may potentially provide stakeholders with a more comprehensive view of quality of care, it may also be confusing rather than complementary if their relationships prove contradictory. Patient evaluations commonly involve either patient experiences with the healthcare process, or patient-reported outcomes of the treatment. Patient experiences concern aspects of care that can be observed by patients, for example, how they perceive the interaction with their doctor, or whether staff members provide patients with clear and accurate information on treatments. These types of questions on patients experiences are also referred to as 58 Numbers telling the tale?

60 "PREMs" (Patient Reported Experience Measures) (Gibbons and Fitzpatrick, 2012). Outcomes of treatments are often reported by healthcare providers, but the result of a treatment can also be reported by patients themselves. This type of outcome is called "PRO" (Patient Reported Outcome), which can be measured with a specific outcome measure; a "PROM" (Patient Reported Outcome Measure) (US Department of Health, 2006; Black, 2013). PROMs can involve both physiological and psychological outcomes of care, such as quality of life (Wilson and Cleary, 1995). Clinical indicators may include a wide variety of aspects, such as the number of treatments carried out, the number of staff members, their qualifications, guideline adherence, or outcomes of care such as the incidence of adverse events and mortality rates (Mainz, 2003; Sequist et al., 2008). An important issue when looking at associations between performance indicators is the level at which these associations are examined. It is intuitively appealing to focus on associations at the institutional level (e.g. hospital), as this is the level at which performance scores are calculated and used. However, there is a risk in only assessing relationships at this level, called ecological fallacy. This refers to a situation in which associations found at an aggregate level (state, neighborhood, or hospital, for that matter) may lead to the wrong inferences at the individual level (inhabitants, or patients) (Robinson, 1950; Te Grotenhuis et al., 2011). In these situations, associations found at the aggregate level differ from the actual underlying associations at the individual level, and are sometimes even completely opposite. The opposite, where associations at the individual level are mistakenly attributed to a higher level of aggregation, may also occur. Therefore, it may be useful to make a distinction between associations at individual and at institutional level. Various studies have examined associations between patient evaluations (PREMs/PROMs) and clinical indicators of effectiveness and healthcare safety at the institutional level, mainly showing positive associations (Doyle et al., 2013; Anhang Price et al., 2014). These reported, for instance, that positive PREM scores were associated with higher disease screening rates (Sequist et al., 2008) and with lower rates of surgical complications (Isaac et al., 2010), hospital readmissions (Boulding et al., 2011), and mortality (Jaipaul and Rosenthal, 2003). At the individual level, it has been shown that higher PREMs were associated with more positive PROM scores in primary care (Safran et al., 1998) and in orthopedic care (Black et al., 2014). So far, there has not been an integrated, comprehensive study of the associations between all three types of indicators: PREMs, PROMs and clinical Chapter 4: Comparing PREMs, PROMs and clinical indicators 59

61 indicators. This concerns, for instance, the potential differences between relationships assessed at individual and at institutional level. Furthermore, we are unaware of any studies that have yet examined the associations between clinical indicators and both PREMs and PROMs. This paper seeks to add new insights, by assessing the relationships between PREMs, PROMs and clinical indicators of quality of care in Dutch hospitals for two elective treatments: total hip and total knee arthroplasty. Results from quality of care research in hospital care are usually summarized and publicly reported at the level of hospitals, in order to compare their performances. Therefore, our first research question is: 1. What are the associations between PREMs, PROMs and clinical indicators for hip/knee arthroplasty in Dutch healthcare on the level of hospitals? It is prudent, however, to also examine indicator associations at the level of individual patients, as associations at the individual level may differ from associations at the hospital level. This phenomenon may easily lead to incorrect conclusions about hospital level associations based on individual level analyses and the other way around. Clinical indicators are recorded for individual patients, but are only reported at hospital level. Therefore, it is not possible to link clinical indicator scores to patient evaluations of individual patients. As a result, we can only compare PREMs and PROMs at patient level. Although a weak but positive association was found between PREMs and PROMs in other research, it may be informative to see if these findings can be replicated in Dutch hospital care (Black et al., 2014). Our second research question is: 2. What are the associations between PREMs and PROMs for hip/knee arthroplasty at patient level in Dutch healthcare? Methods Data Data for this study came from two sources. First, PREMs and PROMs came from a 2013 nationwide study in the Netherlands, using the Consumer Quality Index (CQI) patient survey for total hip/knee arthroplasty. CQI surveys are the Dutch standard method for studying patients experiences in quality of care research, partly based on the American CAHPS methodology (Delnoij et al., 2006; 2010; Stubbe et al., 2007b). From 73 participating hospitals, patients 60 Numbers telling the tale?

who underwent total hip or knee arthroplasty in the 12 months prior to the research were invited to fill in the survey. Second, clinical indicators on total hip and total knee arthroplasty were reported by hospitals themselves for the Dutch national program on transparency of healthcare (Zichtbare Zorg) in 2012 (De Vos et al., 2007; Zorginstituut, 2012). For the purpose of our analyses, only hospitals were selected for which PREMs, PROMs and clinical indicators were available. In total, 45 hospitals (34% of Dutch hospitals) could be included, involving the experiences of 5,055 patients (63% of the patient group invited to participate), of whom 2,720 underwent hip surgery and 2,335 knee surgery. The selected hospitals were widely distributed across the country. Most were general hospitals; only one academic hospital and one clinic specialized in orthopedics were included. Patient characteristics are presented in Table 4.1.

Table 4.1 Patient characteristics (CQI survey Hip/Knee arthroplasty; N=5,055). The table reports the numbers and percentages of patients by age category, sex, education (low, medium, high), self-reported physical health (moderate/poor, good, very good/excellent), self-reported mental health (moderate/poor, good, very good/excellent) and smoking status (N=4,777), as well as the mean and standard deviation of the Body Mass Index (BMI; N=4,805).

PREMs
Eight PREMs were included in our analyses. Four of these were quality indicators constructed from combinations of survey items: Communication with doctors (3 items), Communication with nursing staff (3 items), Pain control (2 items) and Clinical information (4 items). Two other PREMs were based on single items from the survey (Cleanliness of the room and Privacy during care or consultations). The 14 items involved are presented in the appendix. From these 14 items, an overall score was also calculated (the seventh PREM). Respondents were only included in the calculation of the overall score if they had valid responses for at least 11 of these 14 items (Krol et al., 2013a; Black et al., 2014). The survey items, and after recoding also the constructed indicators, originally ranged from 1 to 4. For the purpose of interpretability, we converted all item and indicator scores to percentages: the scores were reduced by 1 (the lowest possible score) and subsequently divided by 3. In this way, scores ranged from 0% (original score 1) to 100% (original score 4). Last, patients were also asked whether they would recommend the hospital to other patients in need of hip or knee surgery, on an 11-point scale ranging from 0 (not at all likely) to 10 (extremely likely). This item may be seen as a measure of patient satisfaction, as patients can take both the process and the outcome of their treatment into account when answering this question. The items and raw scores are presented in Table 4.2. Overall, the experiences of patients with hip or knee surgery were positive; all included indicators and items scored between 85% and 90%. Also, patients were highly likely to recommend the hospital.
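The conversion of the 1-4 item and indicator scores to percentages described above amounts to subtracting the minimum score and dividing by the width of the scale. A minimal sketch, using hypothetical item names and values:

```python
import pandas as pd

def to_percentage(score_1_to_4: pd.Series) -> pd.Series:
    """Map a 1-4 experience score to 0-100%: subtract 1, divide by 3, times 100."""
    return (score_1_to_4 - 1) / 3 * 100

# Hypothetical item scores for three respondents.
items = pd.DataFrame({"comm_doctor": [4, 3, 2], "pain_control": [4, 4, 1]})
print(items.apply(to_percentage).round(1))
# comm_doctor becomes 100.0, 66.7, 33.3; pain_control becomes 100.0, 100.0, 0.0
```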

PROMs
In the survey, patients reported on their health status and how they had felt both before treatment (pre-treatment score) and at the moment of filling in the survey (post-treatment score). This way of obtaining PROM scores is called the quasi-indirect method (also called the 'then-test') (Meyer et al., 2013). Two PROMs were disease-specific: the Hip Osteoarthritis Outcome Score (HOOS-PS) (Klassbo et al., 2003; Davis et al., 2008) and the Knee Osteoarthritis Outcome Score (KOOS-PS) (Roos et al., 1998; Perruccio et al., 2008). The EuroQol 5D (EQ-5D) was included as a generic PROM (Lamers et al., 2005). The PROM scores used in our analyses were the differences between post- and pre-treatment scores; positive scores express a positive change in health status. Necessary adjustments of the PROM scores are described in the analyses section.

Table 4.2 Patient experiences (PREMs) and patient outcomes (PROMs) for hip and knee surgery. For the PREMs (Communication with doctors, Communication with nursing staff, Pain control, Clinical information, Cleanliness of room, Privacy and the overall mean score, all expressed in %, plus the recommendation item; N=3,622-5,017) the table reports Cronbach's alpha (where applicable), the mean, the standard deviation and the range; for the PROMs (HOOS-PS, KOOS-PS, EQ-5D, self-reported complications in % and the Global Perceived Effect; N=1,839-5,010) it reports the mean, the standard deviation and the range. HOOS-PS, KOOS-PS and EQ-5D scores are the average differences between post-treatment and pre-treatment scores, uncorrected for case mix.

Furthermore, patients reported the complications they experienced after the treatment, and answered a transitional item on the improvement of their health status compared to the period before surgery. This type of item is also known as a global perceived effect (GPE) (Kamper et al., 2010). All PROM items are presented in the appendix.
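The PROM scores entered into the analyses are simple change scores: the post-treatment score minus the retrospective ('then-test') pre-treatment score. A minimal sketch with hypothetical EQ-5D values:

```python
import pandas as pd

# Hypothetical survey extract: retrospective pre-treatment ('then-test') scores
# and current post-treatment scores for one PROM per respondent.
prom = pd.DataFrame({
    "eq5d_pre":  [0.52, 0.61, 0.70],
    "eq5d_post": [0.81, 0.69, 0.74],
})

# Change score used in the analyses: post minus pre; positive values indicate
# a positive change in health status.
prom["eq5d_change"] = prom["eq5d_post"] - prom["eq5d_pre"]
print(prom["eq5d_change"].round(2).tolist())  # [0.29, 0.08, 0.04]
```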

presented in the appendix. An overview of these PROMs and their raw scores is presented in Table 4.2. All PROMs had positive scores on average. However, 25% of respondents reported having experienced at least one complication after treatment.

Clinical indicators
For hip and knee surgery, the clinical indicators included for analysis from each hospital are displayed in Table 4.3. These included, for instance, indicators on the perioperative use of antibiotics, the incidence of wound infections, staff numbers and treatment volume. According to the data, almost all hospitals adhered perfectly to the guidelines on antibiotics use, and the incidence of wound infections seemed to be low. On average in 2012, the hospitals in our study sample carried out 280 total hip replacements and 244 knee replacements. The number of specialists performing these varied from 2 to 8 per hospital.

Table 4.3 Clinical indicators for hip and knee surgery per hospital. Columns: mean, s.d., minimum and maximum, separately for hip surgery and knee surgery. Rows: % perioperative antibiotics, % antibiotics 60-15 min before surgery (c), % wound infections, number of specialists, number of surgical procedures, number of procedures per specialist. a N=44; b N=45; c N=43.

Analyses
First, associations between PREMs, PROMs and clinical indicator scores were assessed at hospital level using Pearson correlation coefficients. To this end, PREM and PROM scores were aggregated to the hospital level. Hierarchical linear regression modelling with fixed effects was used to calculate PREM and PROM scores for each hospital (Snijders and Bosker, 1999). Because both measures may have been influenced by individual patient characteristics, a number of these characteristics were included in the analyses to adjust for case mix. For instance, PROM improvement scores depend partly on pre-treatment scores and may be influenced by the sex and age of the patient (Neuburger et al., 2011; Rijckborst, 2013).
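As an illustration of this first step, the sketch below aggregates case-mix-adjusted patient scores to hospital level and correlates them with a clinical indicator. It is a simplified ordinary-least-squares version with hospital dummies, not the exact hierarchical model reported in the chapter, and the file names and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Hypothetical patient-level file: one row per respondent, with a PROM change
# score, case-mix variables and a hospital identifier (column names assumed).
patients = pd.read_csv("patients.csv")

# Case-mix adjustment sketch: regress the PROM change score on patient
# characteristics plus hospital dummies; the hospital coefficients then act
# as adjusted hospital-level scores (the reference hospital is fixed at 0).
model = smf.ols(
    "prom_change ~ age + C(sex) + C(education) + bmi + C(smoking)"
    " + C(surgery_type) + pre_score + C(hospital)",
    data=patients,
).fit()
hospital_scores = {
    name.split("[T.")[1].rstrip("]"): coef
    for name, coef in model.params.items()
    if name.startswith("C(hospital)")
}

# Correlate the adjusted hospital scores with a clinical indicator, e.g.
# annual treatment volume, taken from a separate hospital-level file.
volumes = pd.read_csv("hospital_volume.csv", index_col="hospital")["volume"]
common = sorted(set(hospital_scores) & set(volumes.index))
r, p = pearsonr([hospital_scores[h] for h in common], volumes.loc[common])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```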

Second, the associations between PREMs and PROMs were assessed for individual patients using partial Pearson correlation coefficients, including patient characteristics as case mix adjusters. Analyses were performed using the STATA 13.0 statistical package (StataCorp., 2013).

Results

Associations between PREMs and PROMs at hospital (and patient) level
With regard to our first research question, associations between PREMs and PROMs at hospital level are presented in the top half of Table 4.4. Only a few of these associations proved to be significant. There seemed to be a relationship between PREMs and the PROM for knee surgery (KOOS-PS), but not for hip surgery (HOOS-PS). Hospital scores for improvement on the KOOS-PS were positively and moderately related to scores on the PREM overall mean score, and specifically to the PREMs on clinical information and privacy during consultations. A higher hospital score on the KOOS-PS was also associated with a higher average score on the willingness to recommend the hospital for knee surgery. Hospital scores for health status improvement (GPE) were positively correlated with hospital scores on the PREMs regarding communication with nursing staff, pain control and privacy, and with willingness to recommend.

Turning briefly to our second research question, the lower half of Table 4.4 shows the associations between PREMs and PROMs for individual patients. Most PREMs and PROMs were moderately but significantly associated: higher scores on PREMs were associated with greater improvement in physical functioning and with fewer reported complications. The four constructed PREM indicators showed similar correlations with each of the PROMs, whereas the items on cleanliness of the room and privacy during consultations showed weaker associations. Willingness to recommend the hospital to other patients in need of hip or knee surgery proved moderately associated with the GPE of the treatment, and with improvement in both hip or knee functioning (HOOS-PS/KOOS-PS) and EQ-5D scores.

With regard to our second research question, it is important to note that the significant relationships between PREMs and PROMs found at both patient and hospital level did not differ in their directions. However, each of these specific associations proved stronger at hospital level than at patient level.
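The patient-level partial Pearson correlations described in the Analyses section can be sketched as follows: residualise both the PREM and the PROM on the case-mix variables and correlate the residuals. The chapter's analyses were run in Stata; this residualization approach is one standard way to obtain a partial correlation, and the column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

def partial_pearson(df: pd.DataFrame, x: str, y: str, covariates: list[str]):
    """Partial Pearson correlation between x and y given the covariates:
    residualise both variables on the covariates with OLS, then correlate
    the residuals (complete cases assumed)."""
    rhs = " + ".join(covariates)
    res_x = smf.ols(f"{x} ~ {rhs}", data=df).fit().resid
    res_y = smf.ols(f"{y} ~ {rhs}", data=df).fit().resid
    return pearsonr(res_x, res_y)

# Hypothetical data set and column names.
patients = pd.read_csv("patients.csv")
r, p = partial_pearson(
    patients,
    x="communication_doctors",   # a PREM, on the 0-100% scale
    y="koos_ps_change",          # a PROM change score (post minus pre)
    covariates=["age", "C(sex)", "C(education)", "C(smoking)", "bmi",
                "C(surgery_type)", "pre_score"],
)
print(f"partial r = {r:.2f} (p = {p:.3f})")
```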

Table 4.4 Correlation coefficients for PREMs and PROMs at patient level and at hospital level. Columns (PREMs): communication with doctors, communication with nursing staff, pain control, clinical information, cleanliness of room, privacy, overall mean score, recommendation (a). Rows: HOOS-PS, KOOS-PS, EQ-5D, self-reported complications (a), Global Perceived Effect (GPE) (a).

Hospital level (b): only a few coefficients reached significance; the significant associations (for the KOOS-PS and the GPE) are described in the text.

Patient level (c), partial correlation coefficients (all significant at p<0.01), in the column order given above:
- HOOS-PS: 0.25, 0.21, 0.19, 0.18, 0.08, 0.15, 0.25, 0.31
- KOOS-PS: 0.21, 0.18, 0.20, 0.22, 0.10, 0.11, 0.26, 0.28
- EQ-5D: 0.22, 0.19, 0.20, 0.18, 0.07, 0.13, 0.24, 0.28
- Self-reported complications (a): -0.14, -0.14, -0.12, -0.10, -0.07, -0.10, -0.15, -0.12
- Global Perceived Effect (GPE) (a): 0.25, 0.22, 0.19, 0.19, 0.08, 0.15, 0.26, 0.32

a Not adjusted for case mix. b Pearson correlation coefficients at hospital level, adjusted for case mix (age, sex, education [PREMs/PROMs], smoking status, BMI, surgery type, pre-treatment score [PROMs]). c N=1,310-4,704; partial correlation coefficients, adjusted for case mix (age, sex, education, smoking status, BMI, surgery type, pre-treatment score). *** p<0.01; ** p<0.05; * p<0.1.

Clinical indicators, PREMs and PROMs
Moving back to our first research question, the associations between scores on clinical indicators and PREMs and PROMs were assessed at hospital level, and are shown in Table 4.5. The results regarding these relationships proved rather inconsistent.

Table 4.5 Pearson correlation coefficients for PREMs/PROMs and clinical indicators on hip and knee surgery at hospital level. Columns: PREMs (communication doctors, communication nursing staff, pain control, clinical information, cleanliness of room, privacy, overall mean score, recommendation (a)) and PROMs (HOOS-PS, KOOS-PS, EQ-5D, self-reported complications (a), Global Perceived Effect (GPE) (a)). Rows, separately for hip and for knee surgery: % perioperative antibiotics, % antibiotics 60-15 min before surgery, % wound infections, number of specialists, number of surgical procedures, number of procedures per specialist. Only a small number of coefficients reached significance; these are described in the text. Adjusted for case mix (age, sex, education [PREMs/PROMs], smoking status, BMI, surgery type, pre-treatment score [PROMs]). ** p<0.05; * p<0.1. a Not adjusted for case mix.

Hospital scores on the process indicators concerning prophylactic antibiotics showed no significant relationships with any of the PREM or PROM scores. However, a positive correlation was found between the percentage of wound infections in hip surgery per hospital and the GPE. In other words, a higher incidence of wound infections in hip surgery per hospital was associated with more patient-reported improvement in functioning. Regarding the number of specialists, the more specialists there were in a hospital, the lower the scores on the PREM of communication with doctors in knee surgery. A higher overall number of hip surgeries seemed to relate negatively to the hip-related HOOS-PS PROM, whereas a higher number of knee surgeries seemed to relate to higher scores on the knee-related KOOS-PS PROM. With regard to the average treatment volume per specialist, positive modest to moderate relationships were found with a number of PREMs in both hip and knee surgery. For knee surgery, a positive association was also found with scores on the KOOS-PS PROM.

Conclusions and discussion
In this study, we have found patient experiences (PREMs) and patient-reported outcomes (PROMs) to be moderately but positively related for hip and knee surgery at the level of individual patients. Patients with a higher global perceived effect (GPE) were more likely to recommend the hospital to other patients in need of hip or knee surgery. This relationship was one of the strongest found in our analyses. At the aggregated level of hospitals, the associations found at the individual patient level were much less notable; some proved significant for the specific knee PROM (KOOS-PS), but none did for the specific hip PROM (HOOS-PS). Also, improvements on the generic PROM EQ-5D were not associated with any of the PREMs. Regarding clinical indicators, only a few significant relationships were observed with PREMs and PROMs. We observed a counterintuitive positive association between the incidence of wound infections in a hospital and the GPE. In other words, patients in orthopedic wards with a higher prevalence of wound infections in hip surgery reported, on average, more improvement in their health status. This finding may relate to some issues regarding our data on clinical indicators, which will be discussed later. We found that the total volume of treatments was related negatively to PROM scores in hip surgery, but positively in knee surgery, although the

relationship was modest for hip surgery. Last, the average number of hip or knee arthroplasties carried out per specialist was positively related to a number of PREMs. Especially for knee surgeries, these relationships were among the strongest in our analyses. These findings illustrate the relevance of the ongoing debate regarding treatment volume and quality of care.

PREMs and PROMs: complementary
Our results show positive and significant relationships between PREMs and PROMs. This suggests that they are in fact related, but the strength of the correlations was modest to moderate, which is in line with other research on this topic (Black et al., 2014). From a methodological point of view on construct validity, PREMs and PROMs can be considered to measure different dimensions of healthcare (Streiner and Norman, 1999a). This is also in line with the distinction commonly made between types of quality indicators: structure, process (e.g. PREMs) and outcome indicators (e.g. PROMs) (Donabedian, 1980), and with the widely accepted view that the structure and process of care influence healthcare outcomes. For this study, this seems plausible, although our data do not allow us to draw conclusions regarding causality. Last, we found that the significant relationships between PREMs and PROMs for individual patients and for hospitals were in the same direction, which suggests that ecological fallacy does not apply to these data. In short, PREMs and PROMs seem complementary in assessing quality of care. Even though expectations regarding the use of PROMs in quality of care research are high, research has shown that patients' global rating of quality of care is linked to PREMs, such as communication and interactions with healthcare providers, rather than to outcomes (Rademakers et al., 2011; Siegrist, 2013). Nonetheless, we found the KOOS-PS PROM and the GPE to be positively associated with willingness to recommend the hospital. Clearly, patient-reported outcomes do matter in patients' assessment of quality of care.

PREMs/PROMs and clinical indicators: complementary and confusing
The relationships of PREMs and PROMs with clinical indicators were more diverse. An important issue in these analyses was that differences between hospitals in the perioperative use of antibiotics (process indicators) and in rates of wound infections (outcome indicators) were limited. Almost all hospitals had a 100% score on the use of antibiotics and the clinical incidence of wound infections was very low, as can be observed from Table 4.3. Although these findings suggest a high overall quality of care, they limit the usefulness of these clinical indicators for comparative research on provider performance. This

might partially account for the fact that only a few of the relationships of PREMs and PROMs with clinical indicators proved significant, and that some were counterintuitive in their direction. Guidelines for orthopedic surgery recommend the use of perioperative antibiotics to decrease the incidence of infections (Gillespie and Walenkamp, 2001; De Vos et al., 2007). However, the proportion of surgical procedures in which these were administered showed no relationship with any PROM, or with wound infections as reported by either hospitals or patients. Although there are indications that cleanliness as perceived by patients is associated with lower hospital infection rates (Isaac et al., 2010), this relationship was not observed in our study for any of the (self-reported) complications.

With regard to treatment volume (structure indicators), there is evidence that health outcomes are positively associated with high treatment volumes. However, this has been studied mainly for negative clinical outcomes, such as adverse events, morbidity and mortality rates (Halm et al., 2002; Shervin et al., 2007; Critchley et al., 2012). Because mortality is a rare outcome for most treatments, including orthopedic surgery, functional status is often a more appropriate and useful indicator of healthcare quality (Mant, 2001; Shervin et al., 2007; Zuiderent-Jerak et al., 2012). In our study, analyzing this relationship using PROMs as outcomes has added to the knowledge on this topic. Furthermore, even less is known about the association between volume of care and PREMs. A higher volume of care can be associated with both better PREM and better PROM scores, because high-volume hospitals (or departments) may have organized their care processes better and may have more experienced staff members. The same may apply to the average treatment volume per orthopedic specialist, which gives an indication of their experience. We did find higher PREM scores in hospitals with a higher number of treatments carried out per specialist. We also found significant associations between joint arthroplasty outcomes and both hospital and surgeon volume. However, in line with other research, these relationships proved somewhat inconsistent (Shervin et al., 2007; Critchley et al., 2012). Although we found a modest negative association between hospital volume and the HOOS-PS PROM, both hospital and surgeon volume related positively to the KOOS-PS PROM. Overall, combining healthcare provider-reported clinical indicators with patient evaluations proved rather confusing. However, we did find surgeon volume, rather than hospital volume, to be positively related to patient experiences. These findings illustrate that the relation between treatment volume and outcomes is far from straightforward (Zuiderent-Jerak et al., 2012). Future research linking clinical indicators to PROMs or PREMs at the individual level may provide more clarification.

Strengths and limitations
The standardized methods made it possible to observe clearly how patient experiences with care processes were associated with health outcomes (Mant, 2001). Because the performance of all providers is measured and analyzed using the same methodology (including case mix adjustment), differences in results should only occur due to chance or, preferably, due to actual differences in performance.

Regarding the limitations of our research, the variation among hospitals in clinical indicator scores on antibiotics use and wound infections was limited, as mentioned earlier. This restricted our analyses and prevented us from drawing conclusions on associations between patient evaluations and these clinical indicators. Another limitation concerns the number of hospitals included. Because only hospitals for which both patient evaluations and clinical indicators were available could be included, 45 hospitals were involved in the analyses (34% of Dutch hospitals). Because of this relatively small sample size, we were permissive in establishing the significance of correlations, accepting p-values as high as 0.1 as significant. The power of our analyses would benefit from increasing the number of hospitals.

In this study, the pre-treatment score for the PROMs was measured retrospectively using a then-test, administered after treatment at the same time as the measurement of the post-treatment score. The most commonly used approach to measuring PROMs, however, is to have two separate (repeated) measures. This involves surveying patients at specific moments before and after treatment, and may arguably provide more valid responses than the use of a then-test. Even though there are indications that the method of PRO measurement used in our study may lead to recall bias (Widnall et al., 2014), it is not necessarily invalid (Guyatt et al., 2002; Middel et al., 2006; Meyer et al., 2013). For the validity of the PROMs, it is important to check for these potential influences and to see whether they should be controlled for when analyzing PROM scores.

Although we selected commonly used patient characteristics to adjust the performance scores of healthcare providers, it remains advisable to be critical regarding case mix corrections. The adjustment of PROM scores for pre-treatment scores in particular remains a challenge, because of potential floor or ceiling effects. Also, this set of characteristics might not cover all patient traits relevant to this specific healthcare setting. This may have been inadequate for the PROMs in particular, because we had little information on patients' medical condition other than their pre-treatment scores, which were self-reported in retrospect (Halm et al., 2002; Coles, 2010; NHS, 2013).

The scores of the clinical indicators used in this study were collected in the year prior to the patient experience survey. Because patients were invited to participate in the survey up to 12 months after their surgery, it is highly likely that many of them are also included in the data on the clinical indicators. Nonetheless, we should be careful in making strong inferences regarding causality. In future research, including individual clinical indicator scores of patients could provide clarity, by showing how clinical processes and outcomes actually relate to patients' experiences and outcomes. Unfortunately, these individual-level clinical data were not available and could therefore not be linked to individual PREMs and PROMs in our study. Also, in order to interpret the generalizability of our findings, it would be important to see whether research in other healthcare settings or on different treatments would lead to different results.

General conclusions
This study presented a comprehensive assessment of the relationships between patient experiences, patient-reported outcomes and clinical indicators of quality of care in Dutch orthopedic care. This type of research may not be relevant for all healthcare settings or disease areas, but it is for clearly organized processes such as orthopedic hip or knee arthroplasty. We found PREMs and PROMs to be complementary in describing quality of care from the patients' perspective. Also, surgeon volume in particular seemed positively related to PREMs. Due to data limitations of the other clinical indicators, however, we were unable to examine many of the associations between patient evaluations and clinical indicators. As a result, in this study, combining patient evaluations with clinical indicators proved rather more confusing than complementary in assessing quality of care.

Appendix: PREMs and PROMs from the CQ-index survey on total hip/knee arthroplasty

PREM: Communication nursing staff (T1; scale 1-4, converted to 0-100)
- V4 Did the nurse(s) take you seriously? Never (1); Sometimes (2); Usually (3); Always (4)
- V5 Did the nurse(s) listen carefully to you? Idem
- V6 Did the nurse(s) explain things to you in an understandable way? Idem

PREM: Communication doctors (T2; scale 1-4, converted to 0-100)
- v8 Did the doctor(s) take you seriously? Never (1); Sometimes (2); Usually (3); Always (4)
- v9 Did the doctor(s) listen carefully to you? Idem
- v10 Did the doctor(s) explain things to you in an understandable way? Idem

PREM: Pain control (T3; scale 1-4, converted to 0-100)
- v24 Did the hospital staff respond quickly when you indicated that you were in pain? Never (1); Sometimes (2); Usually (3); Always (4)
- v25 Was your pain kept under control properly? Idem

PREM: Clinical information (T4; scale 1-4, converted to 0-100)
- v17 Were you well informed about any treatment after the operation, such as physiotherapy? Yes (4); No (1)
- v19 Were you well informed about what you should or should not do after the operation? Idem

- appendix continues -

- appendix continued -

PREM: Clinical information (continued)
- v22 When you left the hospital, did you receive information about symptoms or health problems that you had to be attentive of after your discharge? Idem
- v26 Were you informed in an understandable way on possible side effects of new medication? Never (1); Sometimes (2); Usually (3); Always (4)

PREM: Cleanliness of room (converted to 0-100)
- v11 Were your room and bathroom kept clean? Idem

PREM: Privacy (converted to 0-100)
- v13 Did the staff make sure you had enough privacy when they took care of you or talked to you? Idem

PREM: Recommendation
- v29 How likely is it that you would recommend this hospital or clinic for hip/knee surgery to a friend or colleague? Not at all likely (0) - Extremely likely (10)

PROM: HOOS-PS (v31; degree of difficulty, 0-20, converted to 0-100)
- a Descending stairs: None (0); Mild (1); Moderate (2); Severe (3); Extreme (4)
- b Getting in/out of bath or shower: Idem
- c Sitting: Idem
- d Running: Idem
- e Twisting/pivoting on your loaded leg: Idem

PROM: KOOS-PS (v31; degree of difficulty, 0-28, converted to 0-100)
- a Rising from bed: None (0); Mild (1); Moderate (2); Severe (3); Extreme (4)
- b Putting on socks/stockings: Idem
- c Rising from sitting: Idem
- d Bending to floor: Idem
- e Twisting/pivoting on your injured knee: Idem
- f Kneeling: Idem
- g Squatting: Idem

- appendix continues -

- appendix continued -

PROM: EQ-5D (v45a-e; each item scored 1-3)
- v45a Mobility: I have/had no problems in walking about (1); I have/had some problems in walking about (2); I am/was confined to bed (3)
- v45b Self-Care: I have/had no problems with self-care (1); I have/had some problems washing or dressing myself (2); I am/was unable to wash or dress myself (3)
- v45c Usual activities (e.g. work, study, housework, family or leisure activities): I have/had no problems with performing my usual activities (1); I have/had some problems with performing my usual activities (2); I am/was unable to perform my usual activities (3)
- v45d Pain/Discomfort: I have/had no pain or discomfort (1); I have/had moderate pain or discomfort (2); I have/had extreme pain or discomfort (3)
- v45e Anxiety/Depression: I am/was not anxious or depressed (1); I am/was moderately anxious or depressed (2); I am/was extremely anxious or depressed (3)

PROM: Self-reported complications (v41)
- Have you experienced any of the following complications after surgery? a Allergy or a reaction to medication (0/1); b Urinary problems (0/1); c Bleeding (0/1); d Wound problems (0/1)

PROM: Global Perceived Effect (GPE) (v39)
- To what extent has your overall daily functioning changed since your hip/knee operation? Very much worse (1); Much worse (2); A bit worse (3); Unchanged (4); A bit improved (5); Much improved (6); Very much improved (7)


5 Overall scores as an alternative to global ratings in patient experience surveys: a comparison of four methods

This article was published as: Krol MW, De Boer D, Rademakers J, Delnoij D. Overall scores as an alternative to global ratings in patient experience surveys; a comparison of four methods. BMC Health Services Research 2013, 13:479.

Background
For the past two decades, the use of patient experience surveys as measurements of healthcare quality has increased substantially (Williamson, 2008; Delnoij, 2009). The results of these measurements may be used for various purposes by different stakeholders. For instance, patient experiences may enable healthcare providers to identify care elements or processes that their patients find unsatisfactory (Maarse and Ter Meulen, 2006; Zuidgeest et al., 2012). If patient surveys are standardized, the responses can be used to compare the quality of care delivered by different providers (Fung et al., 2008). Patients can use this information to decide which healthcare provider they will use (Williamson, 2008; Damman et al., 2012). This information can also be used by healthcare regulators or inspectorates to assess the overall quality of healthcare, by researchers for studying healthcare systems, or for rewarding good quality of care (Delnoij et al., 2010).

Patient experience surveys usually include questions about a wide variety of healthcare characteristics, such as the accessibility of healthcare, contact with healthcare providers and treatment information. Using commonly accepted methods of data reduction, such as factor analysis and reliability analysis, the survey items are grouped to represent quality indicators (also known as composites), resulting in a quality rating for each indicator (Zaslavsky et al., 2000; Chang et al., 2006). Examples of quality indicators are the attitude of providers, the perceived competence of providers, or the information received about treatments or medication. However, stakeholders often still feel that they are presented with a wide variety of quality ratings, without a clear overall view of the results (Hibbard et al., 2002; Hibbard and Peters, 2003; Ranganathan et al., 2009; Damman et al., 2009b).

In many surveys, patients are asked to rate the overall quality of the healthcare provider, usually called a global rating. Although there are examples of global ratings in other settings, the most commonly used global rating in patient surveys consists of a single question, 'How would you rate the healthcare provider?', on a scale from 0 to 10. Global ratings are often used as a summary measure (Chang et al., 2006; De Boer et al., 2010). However, it is questionable whether a single rating is a valid representation of the entire range of experiences reported in a patient survey. Research has shown that the global rating largely represents patients' experiences with the process of care (e.g. communication), even though patients also consider many other aspects of care to be highly relevant (Chang et al., 2006; De Boer et al., 2010; Rademakers et al., 2011). Thus, there is a substantial risk that a global rating represents only some of the patient experience indicators.

As an alternative, overall scores may be considered as summary scores of quality of care. Overall scores can be constructed retrospectively from all quality indicators of a patient survey that are considered relevant. This should ensure that all indicators are represented by the overall score; accordingly, such an overall score may constitute a more valid summary score than the global rating. The possibility of constructing overall scores has been explored for quality scores based on patient or hospital records (Jacobs et al., 2005; AHRQ, 2011). Although we have heard of overall scores being used in patient experience research, there is, as far as we are aware, limited peer-reviewed evidence on their statistical properties. It is therefore useful to study to what extent such overall scores are indeed a better representation of the various aspects of patient experiences in healthcare than global ratings. In doing so, however, some methodological challenges arise. For instance, should all quality indicators be considered equal, or should weighting factors be used? And if so, what are the consequences of using different weighting factors?

The present study explores the possibility of constructing overall scores from a variety of quality indicators based on patient experiences, and addresses the following research questions:
1. Are individual indicator scores better reflected by overall scores than by global ratings? (Validity)
2. Do the overall scores vary between providers? (Discriminatory power)
3. Are overall scores to be preferred over global ratings and, if so, which method is most suitable?

Methods

Data collection
Data were used from the Consumer Quality (CQ) index for nursing home care (Triemstra et al., 2010). The CQ-index is a family of surveys, each specific to one disease or provider type, that are used in the Netherlands to measure and report patient experiences with healthcare (Delnoij et al., 2006; Delnoij, 2009). The data for the CQ-index for nursing home care were gathered through structured interviews with residents of nursing homes (or homes for the elderly), conducted by qualified interviewers. This survey was constructed from topics deemed relevant by all stakeholders involved (e.g. clients, branch

representatives, health insurance companies). After initial psychometric testing, quality indicators were identified that each consisted of one or more survey questions. Data from this survey were selected for the present study because it is a very rich dataset, both in sample size and in the number of validated quality indicators (15 in total), each covering a specific element of the healthcare process (Triemstra et al., 2010; Zuidgeest et al., 2012). Eleven of these indicators are constructed from two or more items (Cronbach's alpha) and four consist of a single item. Where quality indicators consisted of more than one survey item, indicator scores were constructed by calculating the average over the items for each respondent, provided that the respondent had answered half or more of the items for that indicator.

The original dataset used in this article consisted of 12,281 patient surveys, constituting 7.5% of all Dutch nursing home residents at the time. The surveys came from 464 nursing homes, about 25% of the Dutch nursing homes (Van der Velden et al., 2011). Since all Dutch nursing homes are legally required to participate in CQI research once every two years, bias in the selection of nursing homes in the present study is highly unlikely. Survey data were gathered through interviews with nursing home residents, conducted in the first half of the year. Unfortunately, no information was available about the non-respondents. However, in the current setting, non-response on the CQ-index nursing home care has never been a problem (Triemstra et al., 2010).

Data selection
Indicator scores ranged from 1 to 4. Respondents were only included in the calculation of the overall scores if they had given scores for at least 12 of the 15 indicators. 11,451 of the 12,281 respondents met this condition and were eligible for our analyses (93%). The respondents' characteristics are presented in Table 5.1. The number of respondents per nursing home varied between 8 and 82, with an average of 25 respondents (SD 6). The age of respondents ranged from 18 to 108; however, 98% of respondents were 60 years of age or older, with an average age of 84 years.

Table 5.1 Respondent characteristics: age in years (mean, s.d. 8.5), and N and % per category for education (no education or primary education only; lower secondary education (reference); higher secondary education or higher), self-reported health (good; moderate (reference); poor), years of residence (less than 1 year; between 1 and 2 years; between 2 and 5 years (reference); more than 5 years) and gender (a: male; female). a Not used as case mix adjuster.

Overall score construction
We examined four possible strategies for constructing overall scores. Each of these strategies is presented in detail in this section. For the Average Overall Score, the indicator scores for each respondent were averaged (arithmetic mean) to obtain individual overall scores. The average of these individual overall scores over all residents provided the overall score for each nursing home. This is the most straightforward way to construct an overall score for a provider.
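The Average Overall Score strategy and the inclusion rule described under Data selection can be sketched as follows. The file name and column names are hypothetical, and the sketch leaves out the case mix adjustment and multilevel modelling applied in the actual analyses.

```python
import pandas as pd

# Hypothetical respondent-level file: 15 indicator scores (1-4) per respondent
# in columns ind_1 ... ind_15, plus a nursing home identifier.
df = pd.read_csv("cq_nursing_home.csv")
indicator_cols = [f"ind_{i}" for i in range(1, 16)]

# Inclusion rule: a respondent only contributes with valid scores
# on at least 12 of the 15 indicators.
eligible = df[df[indicator_cols].notna().sum(axis=1) >= 12].copy()

# Average Overall Score: arithmetic mean of a respondent's indicator scores,
# then averaged over all respondents of each nursing home.
eligible["overall"] = eligible[indicator_cols].mean(axis=1, skipna=True)
home_scores = eligible.groupby("nursing_home")["overall"].mean()
print(home_scores.sort_values(ascending=False).head())
```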

The Patient Perspective Overall Score was calculated by adjusting each indicator score for the importance that patients attribute to the specific quality indicator. These importance scores were measured during the development of the survey, by asking respondents to rate the importance of each survey item on a scale from 1 (not at all important) to 4 (very important) (Triemstra et al., 2010). The importance of each indicator was calculated as the mean importance of the underlying items. For instance, the three items of indicator 1.1 (bodily care) had an average importance of 2.97, whereas the mean importance over all 15 indicators was 3.10. This means that bodily care is of less than average importance to nursing home residents. For each respondent, indicator scores were adjusted for their relative importance. So for indicator 1.1, indicator scores were given a weighting of 0.96 (=2.97/3.10), thereby decreasing their contribution to the overall score. Conversely, scores on indicators with higher than average importance were given a higher weighting. In this way, the indicators that are important to respondents are emphasized. After these adjustments, the indicator scores were averaged for each respondent. Subsequently, the average of the residents' overall scores provided the overall score for each nursing home.

The third strategy, the Differences Overall Score, took account of differences between providers in indicator scores. By adjusting quality indicators for their variance, differences between providers in indicator scores may be expanded. One way of doing this is to calculate the intraclass correlations (ICC), which show the variation in indicator scores that can be attributed to differences between providers (Singer, 1998; Reeves et al., 2010). To obtain the ICC, multilevel analyses were performed for each of the indicators (empty 2-level models). Returning to the example of indicator 1.1 (bodily care), the analysis showed that its ICC was 0.11, meaning that 11% of the variation in scores on this indicator could be attributed to differences between nursing homes. However, the mean ICC over all 15 indicators proved to be 0.15. In other words, scores on indicator 1.1 showed less differentiation between nursing homes than the average across all indicators. Indicator scores were then adjusted according to their relative ICC. In the case of indicator 1.1, individual scores were given a weighting of 0.73 (=0.11/0.15), thus decreasing their contribution to the overall score. Conversely, scores on indicators with a higher than average ICC were given a higher weighting. Differences between providers are thus emphasized: indicators on which there is relatively more differentiation are weighted more heavily in the overall score than indicators with little differentiation. After this adjustment, the indicator scores were averaged for each respondent. Subsequently, the average of the residents' overall scores provided the overall score for each nursing home.
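Both weighting strategies amount to multiplying each indicator score by that indicator's value (importance or ICC) relative to the mean over all indicators, and then averaging per respondent. A minimal sketch with hypothetical column names and example weights is given below; the ICCs themselves would come from empty two-level models as described above.

```python
import numpy as np
import pandas as pd

def weighted_overall(df: pd.DataFrame, indicator_cols: list[str],
                     values: dict[str, float]) -> pd.Series:
    """Overall score per respondent after weighting each indicator score by
    its value (importance or ICC) relative to the mean over all indicators."""
    mean_value = np.mean([values[c] for c in indicator_cols])
    weighted = df[indicator_cols] * pd.Series(
        {c: values[c] / mean_value for c in indicator_cols})
    return weighted.mean(axis=1, skipna=True)

# Hypothetical example with three indicators and illustrative importance
# ratings (the same function would be used with ICCs instead).
importance = {"ind_1": 2.97, "ind_2": 3.20, "ind_3": 3.15}
df = pd.DataFrame({"ind_1": [3.0, 4.0], "ind_2": [3.5, 2.0], "ind_3": [4.0, 3.0]})
df["patient_perspective"] = weighted_overall(df, ["ind_1", "ind_2", "ind_3"], importance)
print(df)
```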

Finally, the fourth strategy (Average Rating Overall Score) involved a star rating for each of the individual indicator scores. These stars are awarded based on the dispersion of scores on each indicator and subsequently on the statistical differences between the providers: two stars for an average performance, one for the worst performers and three for the best performers. Providers with three stars perform significantly better on an indicator than providers with one star (Damman et al., 2009a). These stars are a standard part of provider feedback reports on CQ-index survey results, enabling providers to compare their performance against that of others. The overall score was constructed by averaging the number of stars per provider over all quality indicators. This overall score can only be constructed using aggregated data, as each individual indicator score depends on the scores of all other providers, as described under Data analyses.

The Global Rating of quality consisted of a single question: 'How would you rate the nursing home?'. It involved 11 response categories, ranging from 0 to 10, in which 0 was labelled 'the worst possible nursing home' and 10 'the best possible nursing home'. The residents' ratings were averaged for each nursing home.

Data analyses
The individual indicator scores and the individual overall scores were both used in multilevel analyses (Snijders and Bosker, 1999). Scores per nursing home were adjusted for differences in case mix between homes, using the commonly accepted case mix variables of age, educational level and self-reported health of the respondent (Snijders and Bosker, 1999; Zaslavsky et al., 2001; Damman et al., 2009a). In addition, an adjustment was made for the length of stay (Triemstra et al., 2010). Empirical Bayes Estimation (EBE) was used to estimate case mix-adjusted means per nursing home for each of the quality indicators and overall scores (Effron and Morris, 1977; Casella, 1985; Snijders and Bosker, 1999; Greenland, 2000; Diez Roux, 2002).

The Average Rating Overall Score can only be calculated after the multilevel analyses. Based on confidence intervals, organizations receive one, two or three stars for each quality indicator. Therefore, the average number of stars over all quality indicators can already be seen as an overall score in itself. The Average Rating Overall Score, however, is difficult to compare with the other three overall scores. Its approach is entirely different and so is its scale (1 to 3 versus 1 to 4). Also, a number of statistical properties of this composite cannot be analysed: it is not possible to calculate an intraclass correlation or its reliability.

To answer our first research question, Pearson correlation coefficients were calculated between individual indicators and the overall scores (and the global rating) to assess the validity of the latter. The greater the association between individual indicators and a composite, the better that overall score reflects individual indicator scores. Fisher's z-transformation was used for averaging correlation coefficients (Hays, 1994). Interpreting a correlation coefficient is highly dependent on the context in which it is calculated. In the case of patient experience research, correlation coefficients between survey items are considered high when 0.7 or above, while 0.4 and lower is considered a weak relationship (Carey and Seibert, 1993).
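As a sketch of the multilevel, Empirical-Bayes-style estimation described under Data analyses: a random-intercept model with case-mix covariates yields shrunken home-level deviations that can be used as adjusted scores. The library choice (statsmodels), the file and the column names below are assumptions for illustration; the analyses in this chapter were carried out in STATA.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data with an overall score, case-mix
# variables and a nursing home identifier (column names assumed).
df = pd.read_csv("cq_nursing_home.csv")

# Random-intercept (2-level) model: overall score explained by case-mix
# variables, with nursing homes as the grouping level.
model = smf.mixedlm(
    "overall ~ age + C(education) + C(health) + C(length_of_stay)",
    data=df,
    groups=df["nursing_home"],
).fit()

# The estimated random intercepts are empirical-Bayes (shrunken) deviations of
# each nursing home from the case-mix-adjusted grand mean; homes can be
# compared or ranked on these deviations.
home_deviation = {home: eff["Group"] for home, eff in model.random_effects.items()}
print(sorted(home_deviation.items(), key=lambda kv: kv[1], reverse=True)[:5])
```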

With regard to our second research question (assessing discriminatory power), intraclass correlations (ICC) were calculated from the multilevel analyses. As with the Pearson correlations, there is no gold standard with regard to cut-off points for the ICC. The higher the ICC, the more of the variance in scores can be attributed to the nursing home a respondent is living in. Thus, a higher ICC is preferable with a view to discerning between provider performances. Differences in rankings of providers were also calculated, in order to assess the influence of each of the overall score constructs and the global rating on the position of providers. In this regard, the influence of sample size was also considered. Our third research question is answered by assessing the results of the two other research questions, combined with the practical applicability of the four strategies. Analyses were performed using STATA 11.0 (StataCorp, 2009).

Results

Overall score characteristics
The first three overall scores proved to be equally reliable scales at the level of individual respondents (Cronbach's alpha, data not shown). Also, the Average, Patient Perspective and Differences Overall Scores are quite similar in terms of the results at the provider level, as can be seen from Table 5.2; the ranges of their means and standard deviations are small. From additional analyses (data not shown), it is clear that for the Patient Perspective the effect of weighting indicator scores by their importance is limited: the largest adjustment was made on indicator 6.1 (Care plan), and the other indicator adjustments were all 0.90 or higher. As a result, this strategy yields results similar to the Average Overall Score. For the Differences Overall Score, however, the adjustments are more substantial: the largest adjustment was made on indicator 2.3 (Housing and privacy), and the other indicator adjustments were all 0.35 or higher. Also, the adjustments for the Patient Perspective and Differences Overall Scores go in opposite directions for a number of indicators, but in the same direction for others. Another important aspect is the sample size needed per nursing home if reliable discrimination between homes is to be possible based on their performance ratings. The required sample sizes for the overall scores prove to be quite small, as shown in the last column of Table 5.2. This is due to the relatively large differences in overall scores between organizations.
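The 'required N' for a provider-level reliability of 0.80 in Table 5.2 can, under one common approach, be derived directly from the ICC. The Spearman-Brown-type formula below is an assumption for illustration, not a calculation spelled out in this chapter, and the ICC values used are illustrative (only the global rating's ICC of 0.08 is reported later in the text).

```python
import math

def required_n(icc: float, target_reliability: float = 0.80) -> int:
    """Respondents per provider needed for the provider-level mean to reach
    the target reliability, assuming reliability = n*ICC / (1 + (n-1)*ICC)
    (a Spearman-Brown-type formula; an assumption, not taken from the thesis)."""
    n = (target_reliability / (1.0 - target_reliability)) * ((1.0 - icc) / icc)
    return math.ceil(n)

# Illustrative: a composite with an ICC of 0.25 versus a global rating
# with an ICC of 0.08.
print(required_n(0.25))  # 12
print(required_n(0.08))  # 46
```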

The required sample sizes for the overall scores also proved to be smaller than for the global rating.

Table 5.2 Characteristics of the composite scores at the provider level. Columns: mean, SD, minimum, maximum, ICC, reliability, and required N for a reliability of 0.80. Rows: Average, Patient Perspective, Differences, Average Rating (NA for the last three columns), Global rating. N(organizations)=464.

Reflection of the quality indicators (Validity)
Validity was tested by examining how the individual quality indicator scores were reflected in the overall scores. For this purpose, correlations of the individual quality indicators with the overall scores were calculated. The results are shown in Table 5.3. The individual indicators differ in the extent to which they are reflected in the overall scores: some indicators are more strongly related to the overall scores than others. Seven of the indicators have a strong relationship with all individual overall scores (correlation >0.7). There are limited relationships (correlation <0.4) for two indicators: arrangements between the resident and the nursing home (6.1) and the quality of cleaning (2.1). On average, however, the overall scores are substantially correlated with the individual indicator scores: 0.67 to 0.69 (using Fisher's z-transformation) (Hays, 1994). The strengths of the correlations are broadly similar across the different overall scores. Individual indicators are more strongly associated with each of the overall scores than they are with the global rating. All of the correlations between each of the four overall scores and the global rating are close to 0.7.
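Averaging correlation coefficients via Fisher's z-transformation, as referred to above, means transforming each coefficient to z = arctanh(r), averaging the z values, and transforming the average back with tanh. A minimal sketch with illustrative values (not the values from this study):

```python
import math

def average_correlation(rs: list[float]) -> float:
    """Average correlation coefficients via Fisher's z-transformation:
    z = arctanh(r), average the z values, then back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

# Illustrative coefficients only.
print(round(average_correlation([0.75, 0.62, 0.55, 0.81]), 2))
```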

Table 5.3 Correlations between indicator scores, composite scores and the global rating. Columns: Average, Patient Perspective, Differences, Average Rating, Global rating. Rows (quality indicators): Bodily care (1.1), Meals, Comfort, Atmosphere, Housing and privacy, Safety of living environment, Activities (3.1), Autonomy, Mental well-being, Competence and safety of care, Attitude and courtesy of care providers (5.2), Care planning and evaluation (6.1), Shared decision making (6.2), Information, Availability of personnel; plus the average correlation (Fisher's z) and the global rating. N=464. All correlations are significant at p<0.001. Strong correlations (>0.7) in bold.

Differentiation between providers (Discriminatory power)
The discriminatory power of the overall scores was tested by calculating the proportion of variance attributable to the provider, in this case the nursing home. This proportion is expressed in the intraclass correlation (ICC). For the individual indicators, intraclass correlations ranged from approximately 0.03 (Safety) to 0.40 (Housing and privacy) (data not shown). These ICC values are substantial compared with analyses of other CQ-index data, which gave values up to 0.05 (Stubbe et al., 2007a; 2007b; Damman et al., 2009c; De Boer et al., 2011). In other words, a large part of the variance in overall scores can be attributed to the nursing home. Moving back to Table 5.2, the ICCs for the overall scores were all at least 0.22. Importantly, the ICC of each overall score is far higher than the ICC of the global rating (0.08). As expected, the Differences Overall Score shows the largest ICC, as we expanded the differences in indicator scores between organizations.

Because overall scores are used for comparing healthcare providers, merely inspecting the differences in their score distributions is not enough. It is also essential to know what each strategy does to the ranking of the providers, as some stakeholders use performance data for this purpose. Ranking correlations (Kendall's tau) and differences in ranking were therefore calculated for each of the four overall scores and for the global rating. Table 5.4 shows the associations between the rankings of providers for each of the overall scores and for the global rating.

Table 5.4 Associations between provider rankings (Kendall's tau) for the global rating and the composite scores. Rows and columns: Average, Patient Perspective, Differences, Average Rating, Global rating (diagonal = 1.00). N(organizations)=464. All correlations significant at p<0.001.

From this analysis, it is clear that the global rating yields quite a different provider ranking from each of the overall scores; the associations between this rating and the overall scores are low. The associations between each of the four overall scores, however, are considerable. To assess the actual differences in ranking, these were calculated for each of the overall scores, using the global rating as a standard. Differences were expressed as the number of providers whose rank changed by more than 116 places (25% of the dataset) or even by more than 232 places (50% of the dataset). It turns out that for each of the overall scores, the rankings of an average of 145 providers (31%) would shift by more than 116 places compared to the global rating. On average, 20 providers (4%, range 17-23) would even move by more than 232 places. Differences between the global rating ranking and the overall score rankings are therefore considerable, whereas differences in rankings between each of the overall scores are limited.
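The ranking comparison can be sketched as follows, using scipy's Kendall's tau and simple rank differences; the provider-level file and column names are hypothetical, and the thresholds of 116 and 232 places correspond to a quarter and half of the 464 organizations as in the text.

```python
import pandas as pd
from scipy.stats import kendalltau

# Hypothetical provider-level table: one row per nursing home, with columns
# for each overall score strategy and the global rating (names assumed).
scores = pd.read_csv("provider_scores.csv", index_col="nursing_home")

# Ranking correlation between an overall score and the global rating.
tau, p = kendalltau(scores["average_overall"], scores["global_rating"])
print(f"Kendall's tau = {tau:.2f} (p = {p:.3g})")

# Rank shifts: how many providers move more than a quarter (116) or
# half (232) of the ranking when switching from the global rating to
# an overall score.
rank_global = scores["global_rating"].rank(ascending=False)
rank_overall = scores["average_overall"].rank(ascending=False)
shift = (rank_overall - rank_global).abs()
print("providers shifting > 116 places:", int((shift > 116).sum()))
print("providers shifting > 232 places:", int((shift > 232).sum()))
```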

It should be noted, though, that a large change in rankings does not necessarily reflect a large absolute difference in either the overall scores or the global rating. Due to the clustering of the scores, a difference in ranking of 116 places can be caused by an absolute difference as small as 0.09 on the Average Overall Score, for instance. For the global rating, the same applies to comparably small absolute differences. To illustrate this, Figure 5.1 shows the relationship between the Average Overall Score and the global ratings of all providers, which is comparable for the three other overall score strategies. As can be seen from this figure, the scores of many providers are somewhat clustered. Nonetheless, the choice of the specific overall score strategy does have a severe impact on the rankings of several providers, especially those further removed from the reference line.

Figure 5.1 Scatterplot of the Average Overall Score and global ratings (N=464)

Discussion and conclusion
In this study, four different strategies for constructing overall scores were assessed, and their characteristics compared to a global rating of quality of care. With regard to our first research question, the correlations between individual quality indicators and each of the overall scores proved to be considerable, in contrast to their rather weak associations with the global rating. This means that specific patient experiences are better reflected by the overall scores than by a global rating. Overall scores therefore turn out to be a more valid way of summarizing the survey data than a global rating. It should however be noted that overall scores consist only of the scores actually reported by


More information

Allied Health Review Background Paper 19 June 2014

Allied Health Review Background Paper 19 June 2014 Allied Health Review Background Paper 19 June 2014 Background Mater Health Services (Mater) is experiencing significant change with the move of publicly funded paediatric services from Mater Children s

More information

Accountable Care Organizations. What the Nurse Executive Needs to Know. Rebecca F. Cady, Esq., RNC, BSN, JD, CPHRM

Accountable Care Organizations. What the Nurse Executive Needs to Know. Rebecca F. Cady, Esq., RNC, BSN, JD, CPHRM JONA S Healthcare Law, Ethics, and Regulation / Volume 13, Number 2 / Copyright B 2011 Wolters Kluwer Health Lippincott Williams & Wilkins Accountable Care Organizations What the Nurse Executive Needs

More information

Health Technology Assessment (HTA) Good Practices & Principles FIFARMA, I. Government s cost containment measures: current status & issues

Health Technology Assessment (HTA) Good Practices & Principles FIFARMA, I. Government s cost containment measures: current status & issues KeyPointsforDecisionMakers HealthTechnologyAssessment(HTA) refers to the scientific multidisciplinary field that addresses inatransparentandsystematicway theclinical,economic,organizational, social,legal,andethicalimpactsofa

More information

Cardiovascular Disease Prevention and Control: Interventions Engaging Community Health Workers

Cardiovascular Disease Prevention and Control: Interventions Engaging Community Health Workers Cardiovascular Disease Prevention and Control: Interventions Engaging Community Health Workers Community Preventive Services Task Force Finding and Rationale Statement Ratified March 2015 Table of Contents

More information

O1 Readiness. O2 Implementation. O3 Success A FRAMEWORK TO EVALUATE MUSCULOSKELETAL MODELS OF CARE

O1 Readiness. O2 Implementation. O3 Success A FRAMEWORK TO EVALUATE MUSCULOSKELETAL MODELS OF CARE FOR MUSCULOSKELETAL HEALTH O1 Readiness O2 Implementation O3 Success A FRAMEWORK TO EVALUATE MUSCULOSKELETAL MODELS OF CARE GLOBAL ALLIANCE SUPPORTING ORGANISATIONS The following organisations publicly

More information

Supporting information for appraisal and revalidation: guidance for Supporting information for appraisal and revalidation: guidance for ophthalmology

Supporting information for appraisal and revalidation: guidance for Supporting information for appraisal and revalidation: guidance for ophthalmology FOREWORD As part of revalidation, doctors will need to collect and bring to their appraisal six types of supporting information to show how they are keeping up to date and fit to practise. The GMC has

More information

A Primer on Activity-Based Funding

A Primer on Activity-Based Funding A Primer on Activity-Based Funding Introduction and Background Canada is ranked sixth among the richest countries in the world in terms of the proportion of gross domestic product (GDP) spent on health

More information

ONTARIO PATIENT ORIENTED RESEARCH STRATEGY: Patient Reported Outcome-informed Innovation

ONTARIO PATIENT ORIENTED RESEARCH STRATEGY: Patient Reported Outcome-informed Innovation BRIEFING DOCUMENT SUMMARY: The following represents an initiative that has linked and implemented all of the tools, organizations, research strategies, and participatory research Knowledge User (KU)-End

More information

Guidance for Developing Payment Models for COMPASS Collaborative Care Management for Depression and Diabetes and/or Cardiovascular Disease

Guidance for Developing Payment Models for COMPASS Collaborative Care Management for Depression and Diabetes and/or Cardiovascular Disease Guidance for Developing Payment Models for COMPASS Collaborative Care Management for Depression and Diabetes and/or Cardiovascular Disease Introduction Within the COMPASS (Care Of Mental, Physical, And

More information

HOME TREATMENT SERVICE OPERATIONAL PROTOCOL

HOME TREATMENT SERVICE OPERATIONAL PROTOCOL HOME TREATMENT SERVICE OPERATIONAL PROTOCOL Document Type Unique Identifier To be set by Web and Systems Development Team Document Purpose This protocol sets out how Home Treatment is provided by Worcestershire

More information

Needs-based population segmentation

Needs-based population segmentation Needs-based population segmentation David Matchar, MD, FACP, FAMS Duke Medicine (General Internal Medicine) Duke-NUS Medical School (Health Services and Systems Research) Service mismatch: Many beds filled

More information

Models of Support in the Teacher Induction Scheme in Scotland: The Views of Head Teachers and Supporters

Models of Support in the Teacher Induction Scheme in Scotland: The Views of Head Teachers and Supporters Models of Support in the Teacher Induction Scheme in Scotland: The Views of Head Teachers and Supporters Ron Clarke, Ian Matheson and Patricia Morris The General Teaching Council for Scotland, U.K. Dean

More information

TRAINING NEEDS OF EUROPEAN PSYCHIATRIC MENTAL HEALTH NURSES TO COMPLY WITH TURKU DECLARATION. by Stephen Demicoli

TRAINING NEEDS OF EUROPEAN PSYCHIATRIC MENTAL HEALTH NURSES TO COMPLY WITH TURKU DECLARATION. by Stephen Demicoli TRAINING NEEDS OF EUROPEAN PSYCHIATRIC MENTAL HEALTH NURSES TO COMPLY WITH TURKU DECLARATION by Stephen Demicoli BACKGROUND / AIM Substantial changes to the roles and responsibilities of psychiatric mental

More information

Short Report How to do a Scoping Exercise: Continuity of Care Kathryn Ehrich, Senior Researcher/Consultant, Tavistock Institute of Human Relations.

Short Report How to do a Scoping Exercise: Continuity of Care Kathryn Ehrich, Senior Researcher/Consultant, Tavistock Institute of Human Relations. Short Report How to do a Scoping Exercise: Continuity of Care Kathryn Ehrich, Senior Researcher/Consultant, Tavistock Institute of Human Relations. short report George K Freeman, Professor of General Practice,

More information

Maximizing the Community Health Impact of Community Health Needs Assessments Conducted by Tax-exempt Hospitals

Maximizing the Community Health Impact of Community Health Needs Assessments Conducted by Tax-exempt Hospitals Maximizing the Community Health Impact of Community Health Needs Assessments Conducted by Tax-exempt Hospitals Consensus Statement from American Public Health Association (APHA), Association of Schools

More information

The right of Dr Dennis Green to be identified as author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

The right of Dr Dennis Green to be identified as author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988. The right of Dr Dennis Green to be identified as author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988. British Standards Institution 2005 Copyright subsists

More information

Note: This is an outcome measure and will be calculated solely using registry data.

Note: This is an outcome measure and will be calculated solely using registry data. Quality ID #304: Cataracts: Patient Satisfaction within 90 Days Following Cataract Surgery National Quality Strategy Domain: Person and Caregiver-Centered Experience and Outcomes 2018 OPTIONS FOR INDIVIDUAL

More information

Quality Framework Supplemental

Quality Framework Supplemental Quality Framework 2013-2018 Supplemental Staffordshire and Stoke on Trent Partnership Trust Quality Framework 2013-2018 Supplemental Robin Sasaru, Quality Team Manager Simon Kent, Quality Team Manager

More information

Six Key Principles for the Efficient and Sustainable Funding & Reimbursement of Medical Technologies

Six Key Principles for the Efficient and Sustainable Funding & Reimbursement of Medical Technologies Six Key Principles for the Efficient and Sustainable Funding & Reimbursement of Medical Technologies Contents Executive Summary... 2 1. Transparency... 4 2. Predictability & Consistency... 4 3. Stakeholder

More information

Pediatric Residents. A Guide to Evaluating Your Clinical Competence. THE AMERICAN BOARD of PEDIATRICS

Pediatric Residents. A Guide to Evaluating Your Clinical Competence. THE AMERICAN BOARD of PEDIATRICS 2017 Pediatric Residents A Guide to Evaluating Your Clinical Competence THE AMERICAN BOARD of PEDIATRICS Published and distributed by The American Board of Pediatrics 111 Silver Cedar Court Chapel Hill,

More information

Effectively implementing multidisciplinary. population segments. A rapid review of existing evidence

Effectively implementing multidisciplinary. population segments. A rapid review of existing evidence Effectively implementing multidisciplinary teams focused on population segments A rapid review of existing evidence October 2016 Francesca White, Daniel Heller, Cait Kielty-Adey Overview This review was

More information

Using PROMs in clinical practice: rational, evidence and implementation framework

Using PROMs in clinical practice: rational, evidence and implementation framework Using PROMs in clinical practice: rational, evidence and implementation framework Jose M Valderas Prof. Health Services & Policy, University of Exeter Disclosure Professor of Health Services & Policy (University

More information

The Scope of Practice of Assistant Practitioners in Ultrasound

The Scope of Practice of Assistant Practitioners in Ultrasound The Scope of Practice of Assistant Practitioners in Ultrasound Responsible person: Susan Johnson Published: Wednesday, April 30, 2008 ISBN: 9781-871101-52-2 Summary This document has been produced to provide

More information

Cairo University, Faculty of Medicine Strategic Plan

Cairo University, Faculty of Medicine Strategic Plan Cairo University, Faculty of Medicine Strategic Plan I would first like to introduce to you the steps carried to develop this plan. 1- The faculty council decided to perform the 5 year strategic plan and

More information

The History of the development of the Prometheus Payment model defined Potentially Avoidable Complications.

The History of the development of the Prometheus Payment model defined Potentially Avoidable Complications. The History of the development of the Prometheus Payment model defined Potentially Avoidable Complications. In 2006 the Prometheus Payment Design Team convened a series of meetings with physicians that

More information

NURSING (MN) Nursing (MN) 1

NURSING (MN) Nursing (MN) 1 Nursing (MN) 1 NURSING (MN) MN501: Advanced Nursing Roles This course explores skills and strategies essential to successful advanced nursing role implementation. Analysis of existing and emerging roles

More information

TRAINEE CLINICAL PSYCHOLOGIST GENERIC JOB DESCRIPTION

TRAINEE CLINICAL PSYCHOLOGIST GENERIC JOB DESCRIPTION TRAINEE CLINICAL PSYCHOLOGIST GENERIC JOB DESCRIPTION This is a generic job description provided as a guide to applicants for clinical psychology training. Actual Trainee Clinical Psychologist job descriptions

More information

Integrated Health and Care in Ipswich and East Suffolk and West Suffolk. Service Model Version 1.0

Integrated Health and Care in Ipswich and East Suffolk and West Suffolk. Service Model Version 1.0 Integrated Health and Care in Ipswich and East Suffolk and West Suffolk Service Model Version 1.0 This document describes an integrated health and care service model and system for Ipswich and East and

More information

Barriers & Incentives to Obtaining a Bachelor of Science Degree in Nursing

Barriers & Incentives to Obtaining a Bachelor of Science Degree in Nursing Southern Adventist Univeristy KnowledgeExchange@Southern Graduate Research Projects Nursing 4-2011 Barriers & Incentives to Obtaining a Bachelor of Science Degree in Nursing Tiffany Boring Brianna Burnette

More information

QUALITY MEASURES WHAT S ON THE HORIZON

QUALITY MEASURES WHAT S ON THE HORIZON QUALITY MEASURES WHAT S ON THE HORIZON The Hospice Quality Reporting Program (HQRP) November 2013 Plan for the Day Discuss the implementation of the Hospice Item Set (HIS) Discuss the implementation of

More information

PG snapshot Nursing Special Report. The Role of Workplace Safety and Surveillance Capacity in Driving Nurse and Patient Outcomes

PG snapshot Nursing Special Report. The Role of Workplace Safety and Surveillance Capacity in Driving Nurse and Patient Outcomes PG snapshot news, views & ideas from the leader in healthcare experience & satisfaction measurement The Press Ganey snapshot is a monthly electronic bulletin freely available to all those involved or interested

More information

Summary For someone else. Decisional responsibilities in nursing home medicine.

Summary For someone else. Decisional responsibilities in nursing home medicine. summary 311 Summary For someone else. Decisional responsibilities in nursing home medicine. The central question in this study is how to promote the interests of an elderly nursing home patient who is

More information

Patient Advocate Certification Board. Competencies and Best Practices required for a Board Certified Patient Advocate (BCPA)

Patient Advocate Certification Board. Competencies and Best Practices required for a Board Certified Patient Advocate (BCPA) Patient Advocate Certification Board Competencies and Best Practices required for a Board Certified Patient Advocate (BCPA) Attribution The Patient Advocate Certification Board (PACB) recognizes the importance

More information

Disposable, Non-Sterile Gloves for Minor Surgical Procedures: A Review of Clinical Evidence

Disposable, Non-Sterile Gloves for Minor Surgical Procedures: A Review of Clinical Evidence CADTH RAPID RESPONSE REPORT: SUMMARY WITH CRITICAL APPRAISAL Disposable, Non-Sterile Gloves for Minor Surgical Procedures: A Review of Clinical Evidence Service Line: Rapid Response Service Version: 1.0

More information

Understanding and promoting good outcomes

Understanding and promoting good outcomes Understanding and promoting good outcomes PROMs in the Best Practice Tariff for hip and knee replacement Jake Gommon (Pricing Team, NHS England) & Rafael Goriwoda (Patient & Information analytical team,

More information

Patient survey report Outpatient Department Survey 2009 Airedale NHS Trust

Patient survey report Outpatient Department Survey 2009 Airedale NHS Trust Patient survey report 2009 Outpatient Department Survey 2009 The national Outpatient Department Survey 2009 was designed, developed and co-ordinated by the Acute Surveys Co-ordination Centre for the NHS

More information

Patient survey report Survey of people who use community mental health services 2011 Pennine Care NHS Foundation Trust

Patient survey report Survey of people who use community mental health services 2011 Pennine Care NHS Foundation Trust Patient survey report 2011 Survey of people who use community mental health services 2011 The national Survey of people who use community mental health services 2011 was designed, developed and co-ordinated

More information

BCBSM Physician Group Incentive Program

BCBSM Physician Group Incentive Program BCBSM Physician Group Incentive Program Organized Systems of Care Initiatives Interpretive Guidelines 2012-2013 V. 4.0 Blue Cross Blue Shield of Michigan is a nonprofit corporation and independent licensee

More information

GUIDANCE ON SUPPORTING INFORMATION FOR REVALIDATION FOR SURGERY

GUIDANCE ON SUPPORTING INFORMATION FOR REVALIDATION FOR SURGERY ON SUPPORTING INFORMATION FOR REVALIDATION FOR SURGERY Based on the Academy of Medical Royal Colleges and Faculties Core Guidance for all doctors GENERAL INTRODUCTION JUNE 2012 The purpose of revalidation

More information

September 6, RE: CY 2017 Hospital Outpatient Prospective Payment and Ambulatory Surgical Center Payment Systems Proposed Rule

September 6, RE: CY 2017 Hospital Outpatient Prospective Payment and Ambulatory Surgical Center Payment Systems Proposed Rule September 6, 2016 VIA E-MAIL FILING Centers for Medicare & Medicaid Services Department of Health and Human Services Attention: CMS-1656-P P.O. Box 8013 Baltimore, MD 21244-1850 RE: CY 2017 Hospital Outpatient

More information

Introduction Patient-Centered Outcomes Research Institute (PCORI)

Introduction Patient-Centered Outcomes Research Institute (PCORI) 2 Introduction The Patient-Centered Outcomes Research Institute (PCORI) is an independent, nonprofit health research organization authorized by the Patient Protection and Affordable Care Act of 2010. Its

More information

NHS Somerset CCG OFFICIAL. Overview of site and work

NHS Somerset CCG OFFICIAL. Overview of site and work NHS Somerset CCG Overview of site and work NHS Somerset CCG comprises 400 GPs (310 whole time equivalents) based in 72 practices and has responsibility for commissioning services for a dispersed rural

More information

BOLTON NHS FOUNDATION TRUST. expansion and upgrade of women s and children s units was completed in 2011.

BOLTON NHS FOUNDATION TRUST. expansion and upgrade of women s and children s units was completed in 2011. September 2013 BOLTON NHS FOUNDATION TRUST Strategic Direction 2013/14 2018/19 A SUMMARY Introduction Bolton NHS Foundation Trust was formed in 2011 when hospital services merged with the community services

More information

This is the consultation responses analysis put together by the Hearing Aid Council and considered at their Council meeting on 12 November 2008

This is the consultation responses analysis put together by the Hearing Aid Council and considered at their Council meeting on 12 November 2008 Analysis of responses - Hearing Aid Council and Health Professions Council consultation on standards of proficiency and the threshold level of qualification for entry to the Hearing Aid Audiologists/Dispensers

More information

Prepared for North Gunther Hospital Medicare ID August 06, 2012

Prepared for North Gunther Hospital Medicare ID August 06, 2012 Prepared for North Gunther Hospital Medicare ID 000001 August 06, 2012 TABLE OF CONTENTS Introduction: Benchmarking Your Hospital 3 Section 1: Hospital Operating Costs 5 Section 2: Margins 10 Section 3:

More information

American Board of Dental Examiners (ADEX) Clinical Licensure Examinations in Dental Hygiene. Technical Report Summary

American Board of Dental Examiners (ADEX) Clinical Licensure Examinations in Dental Hygiene. Technical Report Summary American Board of Dental Examiners (ADEX) Clinical Licensure Examinations in Dental Hygiene Technical Report Summary October 16, 2017 Introduction Clinical examination programs serve a critical role in

More information

Evaluating the HRQOL model 1. Analyzing the health related quality of life model by instituting Fawcett s evaluation. criteria.

Evaluating the HRQOL model 1. Analyzing the health related quality of life model by instituting Fawcett s evaluation. criteria. Evaluating the HRQOL model 1 Analyzing the health related quality of life model by instituting Fawcett s evaluation criteria. Colleen Dudley, Jenny Mathew, Jessica Savage & Vannesia Morgan-Smith. Wiki

More information

How to measure patient empowerment

How to measure patient empowerment How to measure patient empowerment Jaime Correia de Sousa Horizonte Family Health Unit Matosinhos Health Centre - Portugal Health Sciences School (ECS) University of Minho, Braga Portugal Aims At the

More information

PBGH Response to CMMI Request for Information on Advanced Primary Care Model Concepts

PBGH Response to CMMI Request for Information on Advanced Primary Care Model Concepts PBGH Response to CMMI Request for Information on Advanced Primary Care Model Concepts 575 Market St. Ste. 600 SAN FRANCISCO, CA 94105 PBGH.ORG OFFICE 415.281.8660 FACSIMILE 415.520.0927 1. Please comment

More information

Accountable Care Atlas

Accountable Care Atlas Accountable Care Atlas MEDICAL PRODUCT MANUFACTURERS SERVICE CONTRACRS Accountable Care Atlas Overview Map Competency List by Phase Detailed Map Example Checklist What is the Accountable Care Atlas? The

More information

Risk Adjustment Methods in Value-Based Reimbursement Strategies

Risk Adjustment Methods in Value-Based Reimbursement Strategies Paper 10621-2016 Risk Adjustment Methods in Value-Based Reimbursement Strategies ABSTRACT Daryl Wansink, PhD, Conifer Health Solutions, Inc. With the move to value-based benefit and reimbursement models,

More information

Drivers of HCAHPS Performance from the Front Lines of Healthcare

Drivers of HCAHPS Performance from the Front Lines of Healthcare Drivers of HCAHPS Performance from the Front Lines of Healthcare White Paper by Baptist Leadership Group 2011 Organizations that are successful with the HCAHPS survey are highly focused on engaging their

More information

STUDY PLAN Master Degree In Clinical Nursing/Critical Care (Thesis )

STUDY PLAN Master Degree In Clinical Nursing/Critical Care (Thesis ) STUDY PLAN Master Degree In Clinical Nursing/Critical Care (Thesis ) I. GENERAL RULES AND CONDITIONS:- 1. This plan conforms to the valid regulations of the programs of graduate studies. 2. Areas of specialty

More information

Definitions/Glossary of Terms

Definitions/Glossary of Terms Definitions/Glossary of Terms Submitted by: Evelyn Gallego, MBA EgH Consulting Owner, Health IT Consultant Bethesda, MD Date Posted: 8/30/2010 The following glossary is based on the Health Care Quality

More information

NCLEX PROGRAM REPORTS

NCLEX PROGRAM REPORTS for the period of OCT 2014 - MAR 2015 NCLEX-RN REPORTS US48500300 000001 NRN001 04/30/15 TABLE OF CONTENTS Introduction Using and Interpreting the NCLEX Program Reports Glossary Summary Overview NCLEX-RN

More information

EVALUATION OF THE SMALL AND MEDIUM-SIZED ENTERPRISES (SMEs) ACCIDENT PREVENTION FUNDING SCHEME

EVALUATION OF THE SMALL AND MEDIUM-SIZED ENTERPRISES (SMEs) ACCIDENT PREVENTION FUNDING SCHEME EVALUATION OF THE SMALL AND MEDIUM-SIZED ENTERPRISES (SMEs) ACCIDENT PREVENTION FUNDING SCHEME 2001-2002 EUROPEAN AGENCY FOR SAFETY AND HEALTH AT WORK EXECUTIVE SUMMARY IDOM Ingeniería y Consultoría S.A.

More information

Issue date: October Guide to the multiple technology appraisal process

Issue date: October Guide to the multiple technology appraisal process Issue date: October 2009 Guide to the multiple technology appraisal process Guide to the multiple technology appraisal process Issued: October 2009 This document is one of a series describing the processes

More information

Zukunftsperspektiven der Qualitatssicherung in Deutschland

Zukunftsperspektiven der Qualitatssicherung in Deutschland Zukunftsperspektiven der Qualitatssicherung in Deutschland Future of Quality Improvement in Germany Prof. Richard Grol Fragmentation in quality assessment and improvement Integration of initiatives and

More information

Developing a framework for the secondary use of My Health record data WA Primary Health Alliance Submission

Developing a framework for the secondary use of My Health record data WA Primary Health Alliance Submission Developing a framework for the secondary use of My Health record data WA Primary Health Alliance Submission November 2017 1 Introduction WAPHA is the organisation that oversights the commissioning activities

More information

Paper no. 23 E-Business Providing a High-Tech Home-Based Employment Solution to Women in Kuwait with the Assist of e-government Incubators

Paper no. 23 E-Business Providing a High-Tech Home-Based Employment Solution to Women in Kuwait with the Assist of e-government Incubators Paper no. 23 E-Business Providing a High-Tech Home-Based Employment Solution to Women in Kuwait with the Assist of e-government Incubators Abstract The educated women of Kuwait have been faced with sociological

More information

Statistical presentation and analysis of ordinal data in nursing research.

Statistical presentation and analysis of ordinal data in nursing research. Statistical presentation and analysis of ordinal data in nursing research. Jakobsson, Ulf Published in: Scandinavian Journal of Caring Sciences DOI: 10.1111/j.1471-6712.2004.00305.x Published: 2004-01-01

More information

University of Groningen. Functional ability, social support and quality of life Doeglas, Dirk Maarten

University of Groningen. Functional ability, social support and quality of life Doeglas, Dirk Maarten University of Groningen Functional ability, social support and quality of life Doeglas, Dirk Maarten IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to

More information

CMS-0044-P; Proposed Rule: Medicare and Medicaid Programs; Electronic Health Record Incentive Program Stage 2

CMS-0044-P; Proposed Rule: Medicare and Medicaid Programs; Electronic Health Record Incentive Program Stage 2 May 7, 2012 Submitted Electronically Ms. Marilyn Tavenner Acting Administrator Centers for Medicare and Medicaid Services Department of Health and Human Services Room 445-G, Hubert H. Humphrey Building

More information

NATIONAL TOOLKIT for NURSES IN GENERAL PRACTICE. Australian Nursing and Midwifery Federation

NATIONAL TOOLKIT for NURSES IN GENERAL PRACTICE. Australian Nursing and Midwifery Federation NATIONAL TOOLKIT for NURSES IN GENERAL PRACTICE Australian Nursing and Midwifery Federation Acknowledgements This tool kit was prepared by the Project Team: Julianne Bryce, Elizabeth Foley and Julie Reeves.

More information

Competencies for the Registered Nurse Scope of Practice Approved by the Council: June 2005

Competencies for the Registered Nurse Scope of Practice Approved by the Council: June 2005 Competencies for the Registered Nurse Scope of Practice Approved by the Council: June 2005 Domains of competence for the registered nurse scope of practice There are four domains of competence for the

More information

Quality Standards. Process and Methods Guide. October Quality Standards: Process and Methods Guide 0

Quality Standards. Process and Methods Guide. October Quality Standards: Process and Methods Guide 0 Quality Standards Process and Methods Guide October 2016 Quality Standards: Process and Methods Guide 0 About This Guide This guide describes the principles, process, methods, and roles involved in selecting,

More information

FRIENDS OF EVIDENCE CASE STUDY

FRIENDS OF EVIDENCE CASE STUDY Asthma Improvement Collaborative FRIENDS OF EVIDENCE CASE STUDY This is one of a series of illustrative case studies, under the auspices of the Friends of Evidence, describing powerful approaches to evidence

More information

Test Content Outline Effective Date: December 23, 2015

Test Content Outline Effective Date: December 23, 2015 Board Certification Examination There are 200 questions on this examination. Of these, 175 are scored questions and 25 are pretest questions that are not scored. Pretest questions are used to determine

More information