
Evaluation of a Decision Support System for Pressure Ulcer Prevention and Management: Preliminary Findings

Rita D. Zielstorff, RN MS*, Greg Estey, M Ed*, Amanda Vickery, RN MS+, Glenys Hamilton, RN DNSc+, Joan B. Fitzmaurice, RN PhD+, G. Octo Barnett, MD*

*Laboratory of Computer Science, +Department of Nursing, Massachusetts General Hospital, Boston, MA 02114

ABSTRACT

A decision support system for prevention and management of pressure ulcers was developed based on AHCPR guidelines and other sources. The system was implemented for 21 weeks on a 20-bed clinical care unit. Fifteen nurses on that unit volunteered as subjects of the intervention to see whether use of the system would have a positive effect on their knowledge about pressure ulcers and on their decision-making skills related to this topic. A similar care unit was used as a control. In addition, the system was evaluated by experts for its instructional adequacy, and by end users for their satisfaction with the system. Preliminary results show no effect on knowledge about pressure ulcers and no effect on clinical decision-making skills. The system was rated positively for instructional adequacy and for user satisfaction. User interviews related to satisfaction supplemented the quantitative findings. A discussion of the issues of conducting experiments like this in today's clinical environment is included.

INTRODUCTION

As part of a research project intended to provide problem-based knowledge to clinicians at the point of care, we developed a system that supports the nurse's development of patient-specific, guideline-based treatment plans for patients who have pressure ulcers or are at risk for developing them. The design and implementation of the system have been described in previous publications [1,2]. The system was implemented experimentally in December 1995 on a 20-bed inpatient orthopedic/neurosurgery unit at the Massachusetts General Hospital.
Fifteen of 22 nurses enrolled in the study, and 12 remained enrolled through completion of post-testing. Those who didn't enroll were part-time personnel, or permanent off-shift nurses who felt that the system was mostly intended to help primary nurses with assessment and treatment planning. All of the 15 enrollees on the experimental care unit entered real patient data at least once during the 21-week experimental period. The control unit was a 28-bed acute orthopedic unit specializing in trauma. Seventeen of 25 permanent RN staff voluntarily enrolled in the study, and 9 of these remained enrolled through completion of post-testing. As an incentive, volunteers on each unit received nominal gift certificates at local stores for each portion of the study protocol they completed.

1091-8280/97/$5.00 © 1997 AMIA, Inc.

The evaluation was designed to answer several questions, among them: a) Will the system be acceptable to instructional and content matter experts? b) Will the system improve knowledge about pressure ulcer prevention and treatment? c) Will the system improve clinical decision-making skills related to pressure ulcer prevention and treatment? and d) Will the system be acceptable to clinicians who use it? We developed protocols to answer each of these questions. The methods, preliminary results and conclusions for each protocol are presented below.

PROTOCOL 1. INSTRUCTIONAL ADEQUACY

A one-time survey was completed by three registered nurses who have expertise in the clinical area and/or in instructional technology. The survey instrument used is a slight modification of the Underwood Software Evaluation Tool [3], a 30-item instrument using Thurstone's equal-appearing interval scaling technique with bipolar descriptors. The items produce scores along four major dimensions: ten items evaluate Content, six items are concerned with Pedagogy, seven items assess Technical Quality, and eight items are concerned with Policy Issues.
In addition, each of the experts wrote comments which further explained their ratings. The mean scores for each of the four dimensions are presented in Table 1. There was consistently positive scoring among the raters for content and for policy issues, but a marked diversity in scoring for pedagogy and for technical quality of the program. The text comments revealed that the person who gave the low ratings on technical quality of the program had never used a Windows application, and experienced frustration in being expected to know the conventions for interacting with graphical interfaces (such as having to press the tab key to move from field to field in an input form, and having to close a pop-up window before trying to interact with its parent). The same person produced the lowest mean score on the pedagogy dimension. Three of the thirty individual items on the survey received a unanimous score of +3. These were: "well balanced and representative information is

presented"; "the software package is compatible with the goals of the clinical training program"; and "the software package fits in well with other instructional materials already being used in clinical areas". The lowest-scoring individual item, which had a mean of -1 among the three raters, described the ability of the program to be used by two or more users interacting with each other (the application was not designed to do this). The reviewers appended a total of 31 comments to their survey forms. Sixteen comments were specific suggestions for the clinical content or the decision rules. Nine comments were criticisms of the system behavior or interface design, or questions regarding the intent of some part of the system. Six comments praised the system or some specific aspect of it. Although there was some diversity in the ratings of the three evaluators, the program received an overall positive evaluation, as reflected both in the survey form items and in the text comments. Lack of familiarity with graphical interfaces influenced the ratings of one of the evaluators with regard to technical quality of the program.

PROTOCOL 2. IMPACT ON KNOWLEDGE

Knowledge was measured with a 30-item, computer-based multiple choice test developed for this project following classic methods of test construction. An initial set of 92 items was culled through expert review and pretesting to thirty. The items ranged from easy to difficult, with adequate discrimination power and consideration of the representation of all content areas. The test was administered to the volunteer subjects from the experimental unit and the control unit. Both groups completed the test prior to implementation of the system on the experimental unit; both groups completed the test again (with the same questions asked in a different order) after the system had been implemented on the experimental unit for 21 weeks.
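A paired pre/post comparison of this kind can be sketched as follows. All scores below are hypothetical illustrations (the study reports only that no appreciable difference was found); the computation simply forms per-nurse change scores and the paired t statistic.

```python
# Sketch of a paired-samples pre/post comparison like the one used in
# this study. All scores are HYPOTHETICAL (possible range 0-30); the
# study itself found no appreciable pre/post difference.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Per-nurse change scores and the paired t statistic."""
    diffs = [b - a for a, b in zip(pre, post)]
    t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
    return diffs, t

# Hypothetical knowledge-test scores for 12 nurses who completed
# both the pre-test and the post-test
pre  = [18, 21, 19, 22, 17, 20, 23, 18, 19, 21, 20, 22]
post = [19, 20, 19, 23, 18, 20, 22, 18, 20, 21, 21, 22]

diffs, t = paired_t(pre, post)
print(f"mean change = {mean(diffs):+.2f} points, paired t = {t:.2f}")
```

With a t statistic this far below the two-tailed critical value for 11 degrees of freedom (about 2.20 at the .05 level), the null hypothesis of no change would be retained, which is the pattern the study reports.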
Because of attrition, pre- and post-test paired scores are available for 12 of the 15 original enrollees on the experimental unit, and 9 of the 17 original enrollees on the control unit. Table 2 summarizes the results. There was no appreciable difference between the scores of nurses who used the system on the experimental unit and those who did not on the control unit; using a paired-samples comparison, nurses on the experimental unit showed no appreciable difference in knowledge scores after using the system. A 21-week exposure to the system had no effect on nurses' knowledge of pressure ulcer prevention and treatment, as reflected in the knowledge test.

PROTOCOL 3. IMPACT ON CLINICAL DECISION MAKING

To test clinical decision making, a computer-based simulation program pertaining to pressure ulcer prevention and treatment was developed. Case simulations describing a patient scenario were constructed and reviewed by expert clinicians in skin care management. Scoring is based on: a) information collection (completeness and efficiency); b) identification of the presence of a pressure ulcer; c) selection of pressure ulcer etiology(ies); d) identification of risk factor presence; e) selection of risk factor etiology(ies); and f) selection of treatments. The program has three cases: a sample case and two test cases. The sample case gives the participant an opportunity to become familiar with the format and to receive a case analysis and a comprehensive analysis of performance. This analysis is not given after the two test cases. The sample case may be repeated. The sample case and the two test cases take approximately 45 minutes to complete. All enrolled subjects from the experimental and control units completed the three computer-based simulations prior to implementation of the program on the experimental unit.
The cases were completed again by 13 of the 15 original enrollees on the experimental unit (87%), and 9 of the original 17 enrollees on the control unit (53%), after the 21-week experimental period. Table 3 summarizes the results of the preliminary analysis. These results are similar to those on the knowledge test. The control group's performance declined or stayed the same by most measures; the experimental group's performance varied slightly in both directions, but generally stayed the same. A 21-week exposure to the system had no effect on nurses' clinical decision making related to pressure ulcer prevention and treatment, as reflected in the case simulations. The small number of participants, and the attrition of 47% of the control group, make definitive conclusions impossible.

PROTOCOL 4. USER SATISFACTION

End-user satisfaction was assessed at the end of the experimental period from the 15 volunteer subjects, all of whom worked on the experimental unit and used the system at least once. Both quantitative and qualitative approaches were used. First, the clinicians were given a survey form to complete.

Table 1. Ratings of Instructional Adequacy (possible range: -3 = lowest, +3 = highest)
  Content: Rater 1 = +2.25, Rater 2 = +2.40, Rater 3 = +2.60, mean = +2.42
  [The Pedagogy, Technical Quality, and Policy Issues rows are too garbled in the source to reconstruct.]

Table 2. Knowledge Scores (possible range: 0-30)
  [The median, low, and high rows are too garbled in the source to reconstruct.]

Table 3. Clinical Simulation Scores (cells marked ? are garbled in the source)

                                            Experimental (n=13)   Control (n=9)
                                            Pre      Post         Pre      Post
  Case 1
    Diagnosis correct                       7        8            4        5
    Proportion correct etiology (mean)      .64      .58          .59      .4
    Risk assessment correct                 ?        12           9        9
    Proportion correct risk factors (mean)  .78      .89          .70      .67
    Proportion correct therapies (mean)     .44      .44          .54      .47
  Case 2
    Diagnosis correct                       12       11           9        9
    Proportion correct etiology (mean)      N/A      N/A          N/A      N/A
    Risk assessment correct                 7        8            4        4
    Proportion correct risk factors (mean)  .58      .42          ?        ?
    Proportion correct therapies (mean)     .58      .56          .63      .56

Table 4. End-User Satisfaction: Mean ratings (s.d.) for individual items
(Scoring: 1 = Almost never, 2 = Some of the time, 3 = About half of the time, 4 = Most of the time, 5 = Almost always)
  1. Does the system provide the precise information you need?  3.93 (0.96)
  2. Does the information content meet your needs?  3.66 (1.04)
  3. Does the system provide displays that seem to be exactly what you need?  3.3 (1.18)
  4. Does the system provide sufficient information?  4.20 (0.67)
  5. Is the system accurate?  4.13 (0.74)
  6. Are you satisfied with the accuracy of the system?  4.26 (0.59)
  7. Do you think the output is presented in a useful format?  4.33 (0.89)
  8. Is the information clear?  4.33 (0.81)
  9. Is the system user friendly?  4.28 (0.91)
  10. Is the system easy to use?  3.93 (1.16)
  11. Do you get the information you need in time?  3.93 (0.79)
  12. Does the system provide up-to-date information?  4.26 (0.59)

Table 5. Mean scores of Components of End-User Satisfaction (1 = lowest, 5 = highest)
  Content: 3.83
  Accuracy: 4.2
  Format: 4.33
  Ease of Use: 4.06
  Timeliness: ? (garbled in the source)

A 12-item questionnaire developed by Doll and Torkzadeh [4] was distributed. The instrument utilizes a five-point Likert-type scale to quantify users' perceptions of five system components: Content, Accuracy, Format, Ease of Use, and Timeliness. There are four questions assessing the dimension of Content; two questions assessing Accuracy; two questions assessing Format; two questions assessing Ease of Use; and two questions assessing Timeliness. We selected this instrument because of its documented reliability and validity, because it assessed parameters that were appropriate for our application, and because it was brief enough to be practical in our environment. Of several tools we reviewed, this was the only one that met all of these criteria. In addition to the written survey, face-to-face interviews were conducted with the same group of clinicians.

The developers of the End-User Computing Satisfaction questionnaire used the form to evaluate a range of applications in 44 firms representing various industries. The industries included health care, the range of applications included decision support systems, and the categories of workers included Professional Employees Without Supervisory Responsibilities (typical of our users). Over 600 users responded to those surveys. Out of this experience, the following statistics were reported: the mean score among applications was 49.09; median, 51; minimum, 16; maximum, 60; standard deviation, 8.302. In addition, percentile scores were reported: a score of 48 was in the 40th percentile, and 51 was in the 50th percentile [5]. We will refer to these statistics in reviewing the results of our own survey.

All fifteen users of the experimental system returned the survey. The mean ratings for each of the twelve questions in the survey are itemized in Table 4. The mean score for each of the dimensions is shown in Table 5. The mean total score for the survey was 48.67 out of a possible 60, with a standard deviation of 6.27.
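The scoring just described can be sketched as below. The item-to-dimension grouping (items 1-4 Content, 5-6 Accuracy, 7-8 Format, 9-10 Ease of Use, 11-12 Timeliness) is an assumption inferred from the counts in the text, and the respondent's ratings are hypothetical; only the norm figures (mean 49.09, s.d. 8.302) are quoted from Doll & Torkzadeh.

```python
# Sketch of End-User Computing Satisfaction scoring (Doll & Torkzadeh).
# The item-to-dimension mapping is an ASSUMPTION inferred from the item
# counts in the text; the respondent's ratings are hypothetical.
DIMENSIONS = {
    "Content":     [1, 2, 3, 4],
    "Accuracy":    [5, 6],
    "Format":      [7, 8],
    "Ease of Use": [9, 10],
    "Timeliness":  [11, 12],
}

def score(ratings):
    """Total score (12-60) and per-dimension means for one respondent.
    `ratings` maps item number (1-12) to a 1-5 rating."""
    total = sum(ratings.values())
    dims = {name: sum(ratings[i] for i in items) / len(items)
            for name, items in DIMENSIONS.items()}
    return total, dims

def z_vs_norms(total, norm_mean=49.09, norm_sd=8.302):
    """Standardized distance from the multi-industry norms cited above."""
    return (total - norm_mean) / norm_sd

# One hypothetical respondent rating every item "Most of the time" (4)
ratings = {i: 4 for i in range(1, 13)}
total, dims = score(ratings)
print(f"total = {total}/60, z vs. norms = {z_vs_norms(total):+.2f}")
```

A total in the high 40s sits just below the normative mean, which is where the study's observed mean of 48.67 falls as well.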
This is similar to the population statistics reported by Doll & Torkzadeh. The average total score of 48.67 for the Pressure Ulcer Prevention and Management System falls in approximately the 40th percentile of total scores for all applications surveyed in the Doll & Torkzadeh study [5].

The face-to-face interviews were consistent with the survey results. The lowest-scoring dimension was Content. There were several comments pertaining to content, such as "I would rather have put in basic information about my patient and then have it give me a simple recommendation," "It's too specific to pressure ulcers... it would be better if it had general wound care also," and "I didn't feel I got the treatment knowledge that I needed." The highest-scoring dimension was Format, which is reflected in such comments as "Easy to read," "Nice to have a printed form [treatment plan]," and "Very orderly and logical." There were no comments on the accuracy of the content. Related to Ease of Use, there were comments such as "Just as easy as writing," and "It was easy when I did it." The system received an overall positive rating from end users, both on the written structured survey and in face-to-face interviews.

DISCUSSION

A set of protocols for evaluating the impact of the Pressure Ulcer Prevention and Management System has shown mixed results thus far. Written ratings, textual comments, and structured interviews with users and with content experts yielded generally positive results, but there is no evidence that the system has influenced more distant outcomes such as knowledge and clinical decision making. There are many possible reasons for this, some having to do with the experiment itself, and some having to do with the environment in which it was conducted. It is possible, for example, that the length of time that the subjects were exposed to the intervention was not long enough to have measurable effects.
Or it may be that some of the outcomes are too distant from the intervention to have been influenced, or that the instruments were not sensitive enough to measure the changes. We know from direct observation that usage among the subjects was uneven and in some cases infrequent, due to unexpectedly low census during the experimental period. We also know that nurses on both units were subjected to many environmental stresses during the experimental period, with beds closing and staff often in danger of being transferred or even laid off. Obtaining their cooperation and stimulating their continued commitment was extremely difficult under these conditions. With regard to the experiment itself, the small number of participants makes it difficult to make clear judgments about the results of the quantitative measures. The high attrition rate of subjects on the control unit, despite persistent efforts on the part of our research assistants, further limits our understanding of the results.

We were able to draw some conclusions from our observations of the nurses' behavior on the experimental unit, and from the comments they made during the interviews. A common theme, for instance, was that there was not enough gain for the effort required to enter the data into the system. In today's world, clinicians need rapid access to very specific knowledge related to a particular problem at hand. In our system design, we required the nurses to use a comprehensive assessment module, provided

advice on diagnosis only when it was corrective, and encouraged a comprehensive plan of care. We are redesigning the system to convert the procedural rules embedded in the system to a set of problem-specific algorithms that we will make available over the World Wide Web (see the related poster by Hulse et al. [6]). In this way, clinicians can seek answers to their specific questions on an as-needed basis, with data entry focused only on the information needed to supply an answer to the specified problem. Knowledge will thus be provided "just in time" to influence the decision at hand.

Acknowledgments

This work was supported by Grant 5 R18 HS06575, Agency for Health Care Policy and Research; by Grant 5 R01 LM05200 and Grant 1 T15 LM07092, National Library of Medicine; and by an educational grant from Hewlett-Packard Corporation.

References

1. Zielstorff RD, Barnett GO, Fitzmaurice JB, et al. A Decision Support System for Prevention and Treatment of Pressure Ulcers Based on AHCPR Guidelines. In: Cimino J (ed). Proceedings of the 1996 AMIA Annual Fall Symposium. Philadelphia: Hanley & Belfus, 1996, pp. 562-566.
2. Estey G, Shahzad C, Zielstorff R, et al. A Demonstration of Integrated Access to Pressure Ulcer Guidelines. In: Cimino J (ed). Proceedings of the 1996 AMIA Annual Fall Symposium. Philadelphia: Hanley & Belfus, 1996, p. 922.
3. Underwood SM. Measuring the validity of computer-assisted instructional media. In: Strickland OL, Waltz CF (eds). Measurement of Nursing Outcomes. Vol 2. New York: Springer Publishing, 1988:295-313.
4. Doll WJ, Torkzadeh G. The Measurement of End-User Computing Satisfaction. MIS Quarterly 1988, 12:259-274.
5. Ibid, p. 270.
6. Hulse M, Zielstorff RD, Estey G, et al. Design Considerations for User Interface to an Expert System on the World Wide Web (these proceedings).