
    Theodore Speroff

    Hospital-acquired acute kidney injury (HA-AKI) is a potentially preventable cause of morbidity and mortality. Identifying high-risk patients prior to the onset of kidney injury is a key step towards AKI prevention. A national retrospective cohort of 1,620,898 patient hospitalizations from 116 Veterans Affairs hospitals was assembled from electronic health record (EHR) data collected from 2003 to 2012. HA-AKI was defined at stage 1+, stage 2+, and dialysis. EHR-based predictors were identified through logistic regression, least absolute shrinkage and selection operator (lasso) regression, and random forests, and pair-wise comparisons between each were made. Calibration and discrimination metrics were calculated using 50 bootstrap iterations. In the final models, we report odds ratios, 95% confidence intervals, and importance rankings for predictor variables to evaluate their significance. The area under the receiver operating characteristic curve (AUC) for the different model outcome...
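    As an illustration of the modeling comparison described above, the sketch below fits a logistic regression, a lasso-penalized logistic regression, and a random forest and summarizes AUC over bootstrap resamples. The DataFrame `df`, the outcome column `ha_aki`, and all settings are hypothetical placeholders, not the study's actual pipeline.
```python
# Minimal sketch: compare candidate classifiers with bootstrapped AUC.
# `df` with an `ha_aki` outcome column is a hypothetical placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def bootstrap_auc(model, X, y, n_boot=50, seed=0):
    """Fit the model, then summarize AUC over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    model.fit(X, y)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))          # resample with replacement
        probs = model.predict_proba(X.iloc[idx])[:, 1]
        aucs.append(roc_auc_score(y.iloc[idx], probs))
    return float(np.mean(aucs)), np.percentile(aucs, [2.5, 97.5])

X, y = df.drop(columns="ha_aki"), df["ha_aki"]
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, m in models.items():
    mean_auc, ci = bootstrap_auc(m, X, y)
    print(f"{name}: AUC {mean_auc:.3f} ({ci[0]:.3f}-{ci[1]:.3f})")
```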
    Bureaucratic organisational culture is less favourable to quality improvement, whereas organisations with group (teamwork) culture are better aligned for quality improvement. To determine if an organisational group culture shows better alignment with patient safety climate. Cross-sectional administration of questionnaires in 40 Hospital Corporation of America hospitals to 1406 nurses, ancillary staff, allied staff and physicians. Measures were the Competing Values Measure of Organisational Culture, Safety Attitudes Questionnaire (SAQ), Safety Climate Survey (SCSc) and Information and Analysis (IA). The Cronbach alpha was 0.81 for the group culture scale and 0.72 for the hierarchical culture scale. Group culture was positively correlated with SAQ and its subscales (from correlation coefficient r = 0.44 to 0.55, except situational recognition), SCSc (r = 0.47) and IA (r = 0.33). Hierarchical culture was negatively correlated with the SAQ scales, SCSc and IA. Among the 40 hospitals, 37.5% had a hier...
    Introduction: In the evaluation of the cervical spine, helical CT scan has higher sensitivity and specificity than plain radiographs in the high-risk trauma population but is more costly. We hypothesized that institutional indemnity payments associated with missed injuries make helical CT scan the least-costly approach. Methods: A decision analytic model was created for helical CT scan vs. radiographic evaluation of the
    Most US hospitals lack primary percutaneous coronary intervention (PCI) capabilities to treat patients with ST-elevation myocardial infarction (STEMI) necessitating transfer to PCI-capable centers. Transferred patients rarely meet the 120-minute benchmark for timely reperfusion, and referring emergency departments (EDs) are a major source of preventable delays. We sought to use more granular data at transferring EDs to describe the variability in length of stay at referring EDs. We retrospectively analyzed a secondary data set used for quality improvement for patients with STEMI transferred to a single PCI center between 2008 and 2012. We conducted a descriptive analysis of the total time spent at each referring ED (door-in-door-out [DIDO] interval), periods that comprised DIDO (door to electrocardiogram [EKG], EKG-to-PCI activation, and PCI activation to exit), and the relationship of each period with overall time to reperfusion (medical contact-to-balloon [MCTB] interval). We iden...
    To examine the association of patient- and medication-related factors with postdischarge medication errors. The Vanderbilt Inpatient Cohort Study includes adults hospitalized with acute coronary syndromes and/or acute decompensated heart failure. We measured health literacy, subjective numeracy, marital status, cognition, social support, educational attainment, income, depression, global health status, and medication adherence in patients enrolled from October 1, 2011, through August 31, 2012. We used binomial logistic regression to determine predictors of discordance between the discharge medication list and the patient-reported list during postdischarge medication review. Among 471 patients (mean age, 59 years), the mean total number of medications reported was 12, and 79 patients (16.8%) had inadequate or marginal health literacy. A total of 242 patients (51.4%) were taking 1 or more discordant medication (ie, appeared on either the discharge list or patient-reported list but not both), 129 (27.4%) failed to report a medication on their discharge list, and 168 (35.7%) reported a medication not on their discharge list. In addition, 279 participants (59.2%) had a misunderstanding in indication, dose, or frequency in a cardiac medication. In multivariable analyses, higher subjective numeracy (odds ratio [OR], 0.81; 95% CI, 0.67-0.98) was associated with lower odds of having discordant medications. For cardiac medications, participants with higher health literacy (OR, 0.84; 95% CI, 0.74-0.95), with higher subjective numeracy (OR, 0.77; 95% CI, 0.63-0.95), and who were female (OR, 0.60; 95% CI, 0.46-0.78) had lower odds of misunderstandings in indication, dose, or frequency. Medication errors are present in approximately half of patients after hospital discharge and are more common among patients with lower numeracy or health literacy.
    To determine if using dense data capture to measure heart rate volatility (standard deviation) measured in 5-minute intervals predicts death. Fundamental approaches to assessing vital signs in the critically ill have changed little since the early 1900s. Our prior work in this area has demonstrated the utility of densely sampled data and, in particular, heart rate volatility over the entire patient stay, for predicting death and prolonged ventilation. Approximately 120 million heart rate data points were prospectively collected and archived from 1316 trauma ICU patients over 30 months. Data were sampled every 1 to 4 seconds, stored in a relational database, linked to outcome data, and de-identified. HR standard deviation was continuously computed over 5-minute intervals (CVRD, cardiac volatility-related dysfunction). Logistic regression models incorporating age and injury severity score were developed on a test set of patients (N = 923), and prospectively analyzed in a distinct vali...
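    A minimal sketch of the volatility computation described above follows, assuming a pandas DataFrame `hr` indexed by timestamp with a `heart_rate` column sampled every 1 to 4 seconds; the names and the volatility threshold are hypothetical.
```python
# Minimal sketch: heart rate standard deviation over 5-minute windows from
# irregularly sampled (1-4 s) data. `hr` and its columns are hypothetical.
import pandas as pd

hr = hr.sort_index()                                  # ensure chronological order

# SD within each non-overlapping 5-minute interval
sd_5min = hr["heart_rate"].resample("5min").std()

# A continuously updated (rolling) 5-minute SD is an alternative formulation
rolling_sd = hr["heart_rate"].rolling("5min").std()

# Simple per-stay summary features (the 10 bpm threshold is illustrative only)
volatility_features = {
    "mean_sd_5min": sd_5min.mean(),
    "pct_windows_high_sd": (sd_5min > 10).mean(),
}
```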
    Living wills, a type of advance directive, are promoted as a way for patients to document preferences for life-sustaining treatments should they become incompetent. Previous research, however, has found that these documents do not guide decision making in the hospital. To test the hypothesis that people with living wills are less likely to die in a hospital than in their residence before death. Secondary analysis of data from a nationally representative longitudinal study. Publicly available data from the Asset and Health Dynamics Among the Oldest Old (AHEAD) study. People older than 70 years of age living in the community in 1993 who died between 1993 and 1995. Self-report and proxy informant interviews conducted in 1993 and 1995. Having a living will was associated with lower probability of dying in a hospital for nursing home residents and people living in the community. For people living in the community, the probability of in-hospital death decreased from 0.65 (95% CI, 0.58 to 0.71) to 0.52 (CI, 0.42 to 0.62). For people living in nursing homes, the probability of in-hospital death decreased from 0.35 (CI, 0.23 to 0.49) to 0.13 (CI, 0.07 to 0.22). Retrospective survey data do not contain detailed clinical information on whether the living will was consulted. Living wills are associated with dying in place rather than in a hospital. This implies that previous research examining only people who died in a hospital suffers from selection bias. During advance care planning, physicians should discuss patients' preferences for location of death.
    The current study was undertaken to identify factors specific to kidney transplantation that are associated with posttransplant functional performance (FP) and health-related quality of life (HRQOL). Karnofsky FP status was assessed longitudinally in 86 adult kidney transplant recipients. Patients reported HRQOL using the Short Form-36 (SF-36) health survey and the Psychosocial Adjustment to Illness Scale (PAIS). FP improved (P <0.001) after kidney transplantation (from 75 +/- 1 to 77 +/- 1, 81 +/- 1, and 82 +/- 1 at 0, 3, 6, and 12 months, respectively). Patients receiving organs from living donors showed continued improvement through posttransplant year 1 while those receiving cadaveric organs stabilized at month 6 (simple interaction contrast, year 1 versus pretransplant; P <0.05). Patients receiving dialysis therapy for 6 months or more prior to transplantation demonstrated lower SF-36 posttransplant physical component scores in comparison with patients who were transplanted preemptively (38 +/- 1 versus 45 +/- 2, P <0.05). Path analysis demonstrated the positive direct effect of time on FP with kidney transplantation (beta = 0.23, P <0.05), and the negative direct effects on FP of diabetes (beta = -0.22) and cadaveric organs (beta = -0.22, both P <0.05). In turn, FP had a positive direct effect on HRQOL (beta = 0.40, P <0.001). Overall improvement in FP is attenuated 1 year after kidney transplantation in recipients of organs from cadaveric donors. The positive effect of time after transplantation, and the negative effects of cadaveric organs and diabetes on posttransplant HRQOL, are indirect and are mediated by the direct effects of these variables on posttransplant FP.
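    The path analysis can be approximated by two sequential regressions on standardized variables, as sketched below; the DataFrame `d` and its column names are hypothetical stand-ins, not the study's dataset.
```python
# Minimal sketch of a two-stage path analysis: predictors -> FP, then FP -> HRQOL.
# Indirect effects are read as products of standardized coefficients.
import statsmodels.formula.api as smf

d_std = (d - d.mean()) / d.std()       # standardize so coefficients act as path betas

fp_model = smf.ols("fp ~ months_post_tx + diabetes + cadaveric_donor", data=d_std).fit()
hrqol_model = smf.ols("hrqol ~ fp", data=d_std).fit()

direct_fp_on_hrqol = hrqol_model.params["fp"]
indirect_diabetes_on_hrqol = fp_model.params["diabetes"] * direct_fp_on_hrqol
print(fp_model.params, indirect_diabetes_on_hrqol)
```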
    Some previous studies suggested that transplantation performed in Department of Veterans Affairs (VA) patients was associated with a higher rate of complications and poorer outcomes. We examined more than a decade of experience with solid organ transplantation at a single center and compared VA patients with nonveteran patients to assess long-term patient and graft survival and health-related quality of life (HRQOL). Demographic, clinical, and survival data were extracted from a database that included all transplants from January 1990 through December 2002 at Vanderbilt University Medical Center (non-VA) and the Nashville VA Medical Center (VA). The HRQOL was assessed in a subset of patients using the Karnofsky functional performance (FP) index and the Short-Form-36 self-report questionnaire. Data were analyzed by Kaplan-Meier survival and analysis of variance methods. One thousand eight hundred nine adult patients receiving solid organ transplants (1,896 grafts) between 1990 and 2002 were reviewed: 380 VA patients (141 liver, 54 heart, 183 kidney, 2 lung) and 1429 non-VA patients (280 liver, 246 heart, 749 kidney, 154 lung). Mean follow-up time was 46 +/- 1 months. Five-year graft survival for VA and non-VA patients, respectively, was liver 65% +/- 5% versus 69% +/- 3% (P = 0.97); heart 73% +/- 8% versus 73% +/- 3% (P = 0.67); and kidney 76% +/- 5% versus 77% +/- 2% (P = 0.84). Five-year patient survival was liver 75% +/- 5% versus 78% +/- 3% (P = 0.94); heart 73% +/- 8% versus 74% +/- 3% (P = 0.75); and kidney 84% +/- 4% versus 87% +/- 2% (P = 0.21) for VA and non-VA, respectively. In the first 3 years after transplant, the FP scores for VA versus non-VA patients were 85 +/- 2 versus 87 +/- 1 (P = 0.50). The SF-36 mental component scales were 47 +/- 3 versus 49 +/- 1 (P = 0.39); and the SF-36 physical component scales were 37 +/- 2 versus 38 +/- 1 (P = 0.59), respectively. Longer-term (through year 7) HRQOL scores for VA versus non-VA patients were FP 85 +/- 1 versus 88 +/- 1 (P = 0.17); mental component scales 47 +/- 2 versus 49 +/- 1 (P = 0.29); and physical component scales 35 +/- 2 versus 39 +/- 1 (P = 0.05), respectively. The veteran patients have similar graft and patient survival as the nonveteran patients. Overall quality of life is similar between veterans and nonveterans during the first three years after transplantation. A trend toward a later decline in the veterans' perception of their physical functioning may stem from the increased prevalence of hepatitis C virus among VA liver transplant recipients, a known factor reducing late HRQOL.
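    A minimal sketch of the Kaplan-Meier comparison follows, using the lifelines package and an assumed per-graft DataFrame `grafts` with follow-up months, a failure indicator, and a VA flag (all names hypothetical).
```python
# Minimal sketch: Kaplan-Meier graft survival by VA status with a log-rank test.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

va = grafts[grafts["va"] == 1]
non_va = grafts[grafts["va"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(va["months"], event_observed=va["graft_failed"], label="VA")
print(kmf.predict(60))                 # estimated graft survival at 60 months (5 years)

result = logrank_test(va["months"], non_va["months"],
                      va["graft_failed"], non_va["graft_failed"])
print(result.p_value)
```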
    Diabetes care in our inner-city primary care clinic was suboptimal, despite provider education and performance feedback targeting improved adherence to evidence-based clinical guidelines. A crew resource management (CRM) intervention (communication and teamwork, process and workflow organisation, and standardised information debriefings) was implemented to improve diabetes care and patient outcomes. To assess the effect of the CRM intervention on adherence to evidence-based diabetes care standards, work processes, standardised clinical communication and patient outcomes. Time-series analysis was used to assess the effect on the delivery of standard diabetes services and patient outcomes among medically indigent adults (n = 619). The CRM principles were translated into useful process redesign and standardised care approaches. Significant improvements in microalbumin testing and associated patient outcome measures were attributed to the intervention. The CRM approach provided tools for management that, in the short term, enabled reorganisation and prevention of service omissions and, in the long term, can produce change in the organisational culture for continuous improvement.
    To determine predictors of influenza virus vaccination status in children who are hospitalized during the influenza season. A cross-sectional study was conducted among children who were hospitalized with fever between 6 months and 3 years of age or with respiratory symptoms between 6 months and 18 years of age. The 1999 to 2000 influenza vaccination status of hospitalized children and potential factors that influence decisions to vaccinate were obtained from a questionnaire administered to parents/guardians. Influenza vaccination rates for hospitalized children with and without high-risk medical conditions were 31% and 14%, respectively. For both groups of children, the vaccination status was strongly influenced by recommendations from physicians. More than 70% of children were vaccinated if a physician had recommended the influenza vaccine, whereas only 3% were vaccinated if a physician had not. Lack of awareness that children can receive the influenza vaccine was a commonly cited reason for nonvaccination. A minority of hospitalized children with high-risk conditions had received the influenza vaccine. However, parents' recalling that a clinician had recommended the vaccine had a positive impact on the vaccination status of children.
    Diabetes education has largely been accepted in diabetes care. The effect of diabetes education on glycemic control and the components of education responsible for such an effect are uncertain. We performed a meta-analysis of randomized controlled trials of diabetes patient education published between 1990 and December 2000 to quantitatively assess and characterize the effect of patient education on glycated hemoglobin (HbA(1c)). Additionally, we used meta-regression to identify which variables within an education intervention best explained variance in glycemic control. Twenty-eight educational interventions (n=2439) were included in the analysis. The net glycemic change was 0.320% lower in the intervention group than in the control group. Meta-regression revealed that interventions that included face-to-face delivery, a cognitive reframing teaching method, and exercise content were more likely to improve glycemic control. Those three areas collectively explained 44% of the variance in glycemic control. Current patient education interventions modestly improve glycemic control in adults with diabetes. We highlight three potential components of educational interventions that may predict an increased likelihood of success in improving glycemic control.
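    The pooling and meta-regression steps can be illustrated with a small inverse-variance example; the per-study numbers and the single moderator below are invented for illustration and are not the trials analyzed in the review.
```python
# Minimal sketch: inverse-variance pooling of per-study HbA1c changes and a
# weighted meta-regression on one intervention characteristic (illustrative data).
import numpy as np
import statsmodels.api as sm

effect = np.array([-0.50, -0.20, -0.60, -0.10, -0.40])   # net HbA1c change (%)
se = np.array([0.20, 0.30, 0.25, 0.30, 0.20])             # standard errors
face_to_face = np.array([1, 0, 1, 0, 1])                  # moderator (0/1)

w = 1.0 / se**2
pooled = np.sum(w * effect) / np.sum(w)
print(f"Pooled net HbA1c change: {pooled:.2f}%")

# Weighted least squares meta-regression: the slope estimates how much the
# effect differs for face-to-face interventions.
meta_reg = sm.WLS(effect, sm.add_constant(face_to_face), weights=w).fit()
print(meta_reg.params)
```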
    Transitions to patient-centered health care, the increasing complexity of care, and growth in self-management have all increased the frequency and intensity of clinical services provided outside office settings and between visits. Understanding how electronic messaging, which is often used to coordinate care, affects care is crucial. A taxonomy for codifying clinical text messages into standardized categories could facilitate content analysis of work performed or enhanced via electronic messaging. To codify electronic messages exchanged among the primary care providers and the staff managing diabetes patients at an academic medical center. Retrospective analysis of 27,061 electronic messages exchanged among 578 providers and staff caring for a cohort of 639 adult primary care patients with diabetes between April 1, 2003 and October 31, 2003. Providers and staff using locally developed electronic messaging in an academic medical center's adult primary care clinic. Raw data included clinical text message content, message ID, thread ID, and user ID. Derived measures included user job classification, 35 flags codifying message content, and a taxonomy grouping the flags. Messages contained diverse content: communications with patients, families, and other providers (47.2%), diagnoses (25.4%), documentation (33%), logistics and support functions (29.6%), medications (32.9%), and treatments (28.9%). All messages could be classified; 59.5% of messages addressed 2 or more content areas. Systematic content analysis of provider and staff electronic messages yields specific insight regarding clinical and administrative work carried out via electronic messaging.
    Learning about the factors that influence safety climate and improving the methods for assessing relative performance among hospitals or units would improve decision-making for clinical improvement. To measure safety climate in intensive care units (ICUs) owned by a large for-profit integrated health delivery system; identify specific provider, ICU, and hospital factors that influence safety climate; and improve the reporting of safety climate data for comparison and benchmarking. We administered the Safety Attitudes Questionnaire (SAQ) to clinicians, staff, and administrators in 110 ICUs from 61 hospitals. A total of 1502 surveys (43% response) were returned from physicians, nurses, respiratory therapists, pharmacists, managers, and other ancillary providers. The survey measured safety climate across 6 domains: teamwork climate; safety climate; perceptions of management; job satisfaction; working conditions; and stress recognition. Percentage of positive scores, mean scores, unadjusted random effects, and covariate-adjusted random effects were used to rank ICU performance. The cohort was characterized by a positive safety climate. Respondents scored perceptions of management and working conditions significantly lower than the other domains of safety climate. Respondent job type was significantly associated with safety climate and domain scores. There was modest agreement between ranking methodologies using raw scores and random effects. The relative proportion of job types must be considered before comparing safety climate results across organizational units. Ranking methodologies based on raw scores and random effects are viable for feedback reports. The use of covariate-adjusted random effects is recommended for hospital decision-making.
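    A covariate-adjusted random-effects ranking of units can be sketched with a mixed model: adjust for respondent job type, then rank ICUs by their estimated random intercepts. The DataFrame `d` and its column names are hypothetical.
```python
# Minimal sketch: random intercept per ICU after adjusting for job type; units
# are ranked by the estimated random effects. `d` and its columns are hypothetical.
import statsmodels.formula.api as smf

model = smf.mixedlm("safety_score ~ C(job_type)", data=d, groups=d["icu"]).fit()

# One random intercept per ICU; larger values indicate a more favorable climate
icu_effects = {icu: float(re.iloc[0]) for icu, re in model.random_effects.items()}
ranking = sorted(icu_effects, key=icu_effects.get, reverse=True)
print(ranking[:5])
```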
    The aim of this study was to build electronic algorithms using a combination of structured data and natural language processing (NLP) of text notes for potential safety surveillance of 9 postoperative complications. Postoperative complications from 6 medical centers in the Southeastern United States were obtained from the Veterans Affairs Surgical Quality Improvement Program (VASQIP) registry. Development and test datasets were constructed using stratification by facility and date of procedure for patients with and without complications. Algorithms were developed from VASQIP outcome definitions using NLP-coded concepts, regular expressions, and structured data. The VASQIP nurse reviewer served as the reference standard for evaluating sensitivity and specificity. The algorithms were designed in the development dataset and evaluated in the test dataset. Sensitivity and specificity in the test set were 85% and 92% for acute renal failure, 80% and 93% for sepsis, 56% and 94% for deep vein thrombosis, 80% and 97% for pulmonary embolism, 88% and 89% for acute myocardial infarction, 88% and 92% for cardiac arrest, 80% and 90% for pneumonia, 95% and 80% for urinary tract infection, and 77% and 63% for wound infection, respectively. A third of the complications occurred outside of the hospital setting. Computer algorithms on data extracted from the electronic health record produced respectable sensitivity and specificity across a large sample of patients seen in 6 different medical centers. This study demonstrates the utility of combining NLP with structured data for mining the information contained within the electronic health record.
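    One way to combine NLP output with structured data is a simple per-complication rule, sketched below for acute renal failure; the regular expression and laboratory cutoff are illustrative only and are not the validated VASQIP-derived algorithm.
```python
# Minimal sketch: a surveillance rule combining note text (regex) with structured
# data, evaluated against a chart-review reference standard. Illustrative only.
import re

ARF_PATTERN = re.compile(r"acute (renal failure|kidney injury)|\bARF\b", re.IGNORECASE)

def flags_acute_renal_failure(note_text, postop_cr, baseline_cr, new_dialysis):
    text_hit = bool(ARF_PATTERN.search(note_text))
    lab_hit = postop_cr >= baseline_cr + 2.0        # illustrative cutoff only
    return text_hit or lab_hit or new_dialysis

def sensitivity_specificity(predicted, reference):
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum(not p and not r for p, r in zip(predicted, reference))
    fp = sum(p and not r for p, r in zip(predicted, reference))
    fn = sum(not p and r for p, r in zip(predicted, reference))
    return tp / (tp + fn), tn / (tn + fp)
```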
    To evaluate an electronic quality (eQuality) assessment tool for dictated disability examination records. We applied automated concept-based indexing techniques to automate quality screening of Department of Veterans Affairs spine disability examinations that had previously undergone gold standard quality review by human experts using established quality indicators. We developed automated quality screening rules and refined them iteratively on a training set of disability examination reports. We applied the resulting rules to a novel test set of spine disability examination reports. The initial data set was composed of all electronically available examination reports (N=125,576) finalized by the Veterans Health Administration between July and September 2001. Sensitivity was 91% for the training set and 87% for the test set (P=.02). Specificity was 74% for the training set and 71% for the test set (P=.44). Human performance ranged from 4% to 6% higher (P<.001) than the eQuality tool in sensitivity and 13% to 16% higher in specificity (P<.001). In addition, the eQuality tool was equivalent or higher in sensitivity for 5 of 9 individual quality indicators. The results demonstrate that a properly authored computer-based expert systems approach can perform quality measurement as well as human reviewers for many quality indicators. Although automation will likely always rely on expert guidance to be accurate and meaningful, eQuality is an important new method to assist clinicians in their efforts to practice safe and effective medicine.
    SIMON (Signal Interpretation and Monitoring) monitors and archives continuous physiologic data in the ICU (HR, BP, CPP, ICP, CI, EDVI, SVO2, SPO2, SVRI, PAP, and CVP). We hypothesized: heart rate (HR) volatility predicts outcome better than measures of central tendency (mean and median). More than 600 million physiologic data points were archived from 923 patients over 2 years in a level one trauma center. Data were collected every 1 to 4 seconds, stored in a MS-SQL 7.0 relational database, linked to TRACS, and de-identified. Age, gender, race, Injury Severity Score (ISS), and HR statistics were analyzed with respect to outcome (death and ventilator days) using logistic and Poisson regression. We analyzed 85 million HR data points, which represent more than 71,000 hours of continuous data capture. Mean HR varied by age, gender and ISS, but did not correlate with death or ventilator days. Measures of volatility (SD, % HR >120) correlated with death and prolonged ventilation. 1) Volatility predicts death better than measures of central tendency. 2) Volatility is a new vital sign that we will apply to other physiologic parameters, and that can only be fully explored using techniques of dense data capture like SIMON. 3) Densely sampled aggregated physiologic data may identify sub-groups of patients requiring new treatment strategies.
    To determine whether assisted annotation using interactive training can reduce the time required to annotate a clinical document corpus without introducing bias. A tool, RapTAT, was designed to assist annotation by iteratively pre-annotating probable phrases of interest within a document, presenting the annotations to a reviewer for correction, and then using the corrected annotations for further machine learning-based training before pre-annotating subsequent documents. Annotators reviewed 404 clinical notes either manually or using RapTAT assistance for concepts related to quality of care during heart failure treatment. Notes were divided into 20 batches of 19-21 documents for iterative annotation and training. The number of correct RapTAT pre-annotations increased significantly and annotation time per batch decreased by ~50% over the course of annotation. Annotation rate increased from batch to batch for assisted but not manual reviewers. Pre-annotation F-measure increased from 0.5 to 0.6 to >0.80 (relative to both assisted reviewer and reference annotations) over the first three batches and more slowly thereafter. Overall inter-annotator agreement was significantly higher between RapTAT-assisted reviewers (0.89) than between manual reviewers (0.85). The tool reduced workload by decreasing the number of annotations needing to be added and helping reviewers to annotate at an increased rate. Agreement between the pre-annotations and reference standard, and agreement between the pre-annotations and assisted annotations, were similar throughout the annotation process, which suggests that pre-annotation did not introduce bias. Pre-annotations generated by a tool capable of interactive training can reduce the time required to create an annotated document corpus by up to 50%.
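    The interactive-training workflow can be summarized as the loop below; `tagger` and `reviewer_correct` are hypothetical stand-ins for the pre-annotation model and the human correction step, not RapTAT's actual interfaces.
```python
# Minimal sketch of assisted annotation with interactive training: pre-annotate a
# batch, let the reviewer correct it, retrain, and move to the next batch.
def assisted_annotation(batches, tagger, reviewer_correct):
    corpus = []                                   # accumulating corrected annotations
    for batch in batches:                         # e.g., 20 batches of ~20 notes
        for note in batch:
            suggested = tagger.pre_annotate(note)            # machine suggestions
            corrected = reviewer_correct(note, suggested)    # human review/correction
            corpus.append((note, corrected))
        tagger.train(corpus)                      # learn from all corrections so far
    return corpus
```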
    Collaborative and toolkit approaches have gained traction for improving quality in health care. To determine if a quality improvement virtual collaborative intervention would perform better than a toolkit-only approach at preventing central line-associated bloodstream infections (CLABSIs) and ventilator-associated pneumonias (VAPs), we conducted a cluster randomized trial in which the intensive care units (ICUs) of 60 hospitals were assigned to the Toolkit (n=29) or Virtual Collaborative (n=31) group from January 2006 through September 2007. The main outcomes were CLABSI and VAP rates, with a follow-up survey on improvement interventions, toolkit utilization, and strategies for implementing improvement. A total of 83% of the Collaborative ICUs implemented all CLABSI interventions compared to 64% of those in the Toolkit group (P = 0.13), implemented daily catheter reviews more often (P = 0.04), and began this intervention sooner (P < 0.01). Eighty-six percent of the Collaborative group implemented the VAP bundle compared to 64% of the Toolkit group (P = 0.06). The CLABSI rate was 2.42 infections per 1000 catheter days at baseline and 2.73 at 18 months (P = 0.59). The VAP rate was 3.97 per 1000 ventilator days at baseline and 4.61 at 18 months (P = 0.50). Neither group improved outcomes over time; there was no differential performance between the 2 groups for either CLABSI rates (P = 0.71) or VAP rates (P = 0.80). The intensive collaborative approach outpaced the simpler toolkit approach in changing processes of care, but neither approach improved outcomes. Incorporating quality improvement methods, such as ICU checklists, into routine care processes is complex, highly context-dependent, and may take longer than 18 months to achieve.
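    Outcome rates of this kind are conventionally expressed per 1000 device-days and can be compared with a Poisson model using an exposure offset; the counts below are invented for illustration and are not the trial's data.
```python
# Minimal sketch: CLABSI rate per 1000 catheter-days and a Poisson comparison of
# two groups using a log-exposure offset. Counts are illustrative, not trial data.
import numpy as np
import statsmodels.api as sm

infections = np.array([120, 131])            # toolkit, collaborative (hypothetical)
catheter_days = np.array([49500, 48000])

print(infections / catheter_days * 1000)     # infections per 1000 catheter-days

group = np.array([0, 1])
model = sm.GLM(infections, sm.add_constant(group),
               family=sm.families.Poisson(),
               offset=np.log(catheter_days)).fit()
print(np.exp(model.params[1]))               # rate ratio, collaborative vs toolkit
```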
    Health-related quality of life and functional performance are important outcome measures following heart transplantation. This study investigates the impact of pre-transplant functional performance and post-transplant rejection episodes, obesity and osteopenia on post-transplant health-related quality of life and functional performance. Functional performance and health-related quality of life were measured in 70 adult heart transplant recipients. A composite health-related quality of life outcome measure was computed via principal component analysis. Iterative, multiple regression-based path analysis was used to develop an integrated model of variables that affect post-transplant functional performance and health-related quality of life. Functional performance, as measured by the Karnofsky scale, improved markedly during the first 6 months post-transplant and was then sustained for up to 3 years. Rejection Grade > or =2 was negatively associated with health-related quality of life, measured by Short Form-36 and reversed Psychosocial Adjustment to Illness Scale scores. Patients with osteopenia had lower Short Form-36 physical scores and obese patients had lower functional performance. Path analysis demonstrated a negative direct effect of obesity (beta = - 0.28, p < 0.05) on post-transplant functional performance. Post-transplant functional performance had a positive direct effect on the health-related quality of life composite score (beta = 0.48, p < 0.001), and prior rejection episodes grade > or =2 had a negative direct effect on this measure (beta = -0.29, p < 0.05). Either directly or through effects mediated by functional performance, moderate-to-severe rejection, obesity and osteopenia negatively impact health-related quality of life. These findings indicate that efforts should be made to devise immunosuppressive regimens that reduce the incidence of acute rejection, weight gain and osteopenia after heart transplantation.
    Our aim was to examine the effects of hepatitis C virus (HCV) infection, a leading cause of end-stage liver disease, and its recurrence after liver transplantation on functional performance and health-related quality of life. Functional performance, liver function, and HCV recurrence were assessed longitudinally in 75 adult transplant recipients (28 with HCV). Quality of life was reported once after transplantation. Functional performance improved through year 2 (P < 0.001) and then declined in those with HCV, whereas the others remained stable (P = 0.05). Time had a positive effect (beta = 0.22, P = 0.05) and HCV infection had a negative effect (beta = -0.28, P = 0.01) on post-transplant functional performance. Educational level (beta = 0.24, P < 0.05) and recent functional performance (beta = 0.31, P = 0.01) had positive effects on quality of life. HCV recurrence was associated with relatively poorer pretransplant functional performance, a greater rate of improvement through month 3 (P < 0.05), and abnormal transaminase values between years 1 and 2 (P < 0.001). Rehospitalization for recurrent HCV was associated with reduced functional performance (P < 0.05). Functional performance improves with time following liver transplantation, but HCV infection exerts an opposing and comparably strong effect. Post-transplant functional performance, in turn, directly affects post-transplant quality of life. Severe, recurrent HCV illness is associated with reduced functional performance.
    In the intensive care unit (ICU), delirium is a common yet underdiagnosed form of organ dysfunction, and its contribution to patient outcomes is unclear. To determine if delirium is an independent predictor of clinical outcomes, including 6-month mortality and length of stay among ICU patients receiving mechanical ventilation. Prospective cohort study enrolling 275 consecutive mechanically ventilated patients admitted to adult medical and coronary ICUs of a US university-based medical center between February 2000 and May 2001. Patients were followed up for development of delirium over 2158 ICU days using the Confusion Assessment Method for the ICU and the Richmond Agitation-Sedation Scale. Primary outcomes included 6-month mortality, overall hospital length of stay, and length of stay in the post-ICU period. Secondary outcomes were ventilator-free days and cognitive impairment at hospital discharge. Of 275 patients, 51 (18.5%) had persistent coma and died in the hospital. Among the remaining 224 patients, 183 (81.7%) developed delirium at some point during the ICU stay. Baseline demographics including age, comorbidity scores, dementia scores, activities of daily living, severity of illness, and admission diagnoses were similar between those with and without delirium (P>.05 for all). Patients who developed delirium had higher 6-month mortality rates (34% vs 15%, P =.03) and spent 10 days longer in the hospital than those who never developed delirium (P<.001). After adjusting for covariates (including age, severity of illness, comorbid conditions, coma, and use of sedatives or analgesic medications), delirium was independently associated with higher 6-month mortality (adjusted hazard ratio [HR], 3.2; 95% confidence interval [CI], 1.4-7.7; P =.008), and longer hospital stay (adjusted HR, 2.0; 95% CI, 1.4-3.0; P<.001). Delirium in the ICU was also independently associated with a longer post-ICU stay (adjusted HR, 1.6; 95% CI, 1.2-2.3; P =.009), fewer median days alive and without mechanical ventilation (19 [interquartile range, 4-23] vs 24 [19-26]; adjusted P =.03), and a higher incidence of cognitive impairment at hospital discharge (adjusted HR, 9.1; 95% CI, 2.3-35.3; P =.002). Delirium was an independent predictor of higher 6-month mortality and longer hospital stay even after adjusting for relevant covariates including coma, sedatives, and analgesics in patients receiving mechanical ventilation.
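    Adjusted hazard ratios of this kind come from a proportional hazards model; a minimal sketch with the lifelines package is shown below, assuming a per-patient DataFrame `d` with follow-up time, a death indicator, and covariates (all column names hypothetical).
```python
# Minimal sketch: Cox proportional hazards model for 6-month mortality with
# delirium as the exposure, adjusted for covariates. `d` and columns are hypothetical.
from lifelines import CoxPHFitter

cols = ["followup_days", "died", "delirium", "age", "severity_score", "coma", "sedatives"]
cph = CoxPHFitter()
cph.fit(d[cols], duration_col="followup_days", event_col="died")

print(cph.hazard_ratios_["delirium"])        # adjusted HR for delirium
print(cph.confidence_intervals_)             # 95% CIs on the log-hazard scale
```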
    In this manuscript we report an evaluation of the reliability of clinical research rule creation by multiple clinicians using the Health Archetype Language (HAL-42) and user interface. HAL-42 is a language that allows real-time epidemiological inquiry using automatically derived clinical encodings with any health ontology. This evaluation used SNOMED CT as the underlying ontology. The inquiries were performed on a population of 17,731 patients whose 50,000 clinical records have all been fully encoded in SNOMED CT. Four subject matter experts (SMEs) were asked independently to encode and run 10 rules/studies. The inter-rater agreement was 74.8% (p=0.6526) with a Kappa statistic of 0.49217 (p=0.5722). The ten rules were divided into three easy rules, four moderate and three complex rules. There was no significant difference in the SMEs' agreement when representing easy and complex rules (p=0.6243). We conclude that although the HAL-42 language is usable enough to achieve reasonable inter-rater reliability, some training will be necessary to reach high levels of reliability for ad hoc queries. We also conclude that SMEs are just as competent to perform complex queries as easy queries of ontologically indexed clinical data.
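    Agreement statistics of this kind can be computed as below: percent agreement and Cohen's kappa for one rater pair on yes/no patient selections (an overall figure would average across all rater pairs). The data here are illustrative only.
```python
# Minimal sketch: percent agreement and Cohen's kappa for two raters' binary
# patient selections. Data are illustrative only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])

print(np.mean(rater_a == rater_b))           # percent agreement
print(cohen_kappa_score(rater_a, rater_b))   # chance-corrected agreement
```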
    To develop and validate an instrument for use in the intensive care unit to accurately diagnose delirium in critically ill patients who are often nonverbal because of mechanical ventilation. Prospective cohort study. The adult medical and coronary intensive care units of a tertiary care, university-based medical center. Thirty-eight patients admitted to the intensive care units. We designed and tested a modified version of the Confusion Assessment Method for use in intensive care unit patients and called it the CAM-ICU. Daily ratings from intensive care unit admission to hospital discharge by two study nurses and an intensivist who used the CAM-ICU were compared against the reference standard, a delirium expert who used delirium criteria from the Diagnostic and Statistical Manual of Mental Disorders (fourth edition). A total of 293 daily, paired evaluations were completed, with reference standard diagnoses of delirium in 42% and coma in 27% of all observations. To include only interactive patient evaluations and avoid repeat-observer bias for patients studied on multiple days, we used only the first-alert or lethargic comparison evaluation in each patient. Thirty-three of 38 patients (87%) developed delirium during their intensive care unit stay, mean duration of 4.2 +/- 1.7 days. Excluding evaluations of comatose patients because of lack of characteristic delirium features, the two critical care study nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with kappa statistics of 0.84, 0.79, and 0.95, respectively (p <.001). The two nurses' and intensivist's sensitivities when using the CAM-ICU compared with the reference standard were 95%, 96%, and 100%, respectively, whereas their specificities were 93%, 93%, and 89%, respectively. The CAM-ICU demonstrated excellent reliability and validity when used by nurses and physicians to identify delirium in intensive care unit patients. The CAM-ICU may be a useful instrument for both clinical and research purposes to monitor delirium in this challenging patient population.
    Pregnancy and contraceptive methods both have important health effects that include risks and benefits. The net impact of contraception on women's health has not been reported previously. This is a cost-utility analysis using a Markov model evaluated by Monte Carlo simulation using the societal perspective for costs. The analysis compared 13 methods of contraception to nonuse of contraception with respect to healthcare costs and quality-adjusted life years (QALYs). Discounting was applied for future costs and health effects. The base-case analysis applies to women of average health and fertility, ranging from 15 to 50 years of age, who are sexually active in a mutually monogamous relationship; smoking rates observed in women of reproductive age were used. Sensitivity analysis extended the analysis to nonmonogamous status and smoking status. Compared with use of no contraception, contraceptive methods of all types result in substantial cost savings over 2 years, ranging from US$5907 per woman for tubal sterilization to US$9936 for vasectomy and health gains ranging from 0.088 QALYs for diaphragm to 0.147 QALYs for depot medroxyprogesterone acetate. Compared with nonuse, even with a time horizon as short as 1 year, use of any method other than sterilization results in financial savings and health gains. Most of the financial savings and health gains were due to contraceptive effects. In a population of patients, even modest increases in the use of the most effective methods result in financial savings and health gains. Every method of contraception dominates nonuse in most clinical settings. Increasing the use of more effective methods even modestly at the expense of less effective methods will improve health and reduce costs. Methods that require action by the user less frequently than daily are both less costly and more effective than methods requiring action on a daily basis.
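    The structure of such a cost-utility comparison can be sketched as a small Monte Carlo simulation with discounting; every probability, cost, and utility below is an illustrative placeholder and not an input used in the published model.
```python
# Minimal sketch: Monte Carlo cost-utility comparison of a contraceptive method
# versus nonuse over a 2-year horizon with discounting. All inputs are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate(p_pregnancy, method_cost, n=100_000, years=2, discount=0.03,
             pregnancy_cost=10_000.0, pregnancy_qaly_loss=0.10):
    costs = np.zeros(n)
    qalys = np.zeros(n)
    for t in range(years):
        disc = 1.0 / (1.0 + discount) ** t
        pregnant = rng.random(n) < p_pregnancy
        costs += disc * (method_cost + pregnant * pregnancy_cost)
        qalys += disc * (1.0 - pregnant * pregnancy_qaly_loss)
    return costs.mean(), qalys.mean()

cost_none, qaly_none = simulate(p_pregnancy=0.85, method_cost=0.0)
cost_method, qaly_method = simulate(p_pregnancy=0.09, method_cost=600.0)

print("Incremental cost:", cost_method - cost_none)    # negative => cost saving
print("Incremental QALYs:", qaly_method - qaly_none)
```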
    The psychometric properties of generic health-related quality of life (HRQOL) assessment instruments were evaluated to identify a reliable, valid, and non-redundant battery to measure longitudinal outcomes in organ transplant patients. Objective functional performance and subjective HRQOL were assessed in 371 solid organ (liver, heart, kidney, lung) transplant patients using the Karnofsky scale, the SF-36 Health Survey (SF-36), and Psychosocial Adjustment to Illness Scale (PAIS). The surveys' internal-consistency reliability, criterion-related validity, and redundancy were tested. The SF-36 mental (MCS) and physical components (PCS), and PAIS summary scales were internally consistent (all alpha > or = 0.83). Four out of seven PAIS scales (vocational, domestic, sexual, social) were collectively associated with the PCS (R = 0.65, P < 0.001), as was functional performance (r = 0.52, P < 0.001). Three PAIS scales (family, social, psychological distress) were associated with the MCS (R = 0.72, P < 0.001). Only the PAIS healthcare orientation (satisfaction) scale was not associated with the SF-36. The relationship between functional performance and the PCS is stronger (r = 0.52, P < 0.001) than with the MCS (r = 0.25, P < 0.001) and the PAIS global score (r = 0.37, P < 0.001). The SF-36 and PAIS are internally consistent and exhibit divergent criterion-related validity but, with the exception of the PAIS healthcare orientation scale, are statistically redundant. The advantages of the SF-36 include wider use, more norms, and a lesser response burden. A transplant-specific patient satisfaction inventory was indicated and was developed.
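    The internal-consistency statistic reported here is Cronbach's alpha; a minimal sketch of the computation follows, using a small invented respondents-by-items matrix.
```python
# Minimal sketch: Cronbach's alpha for a multi-item scale (illustrative data).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)        # respondents x items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = np.array([[4, 5, 4, 4],
                  [2, 2, 3, 2],
                  [5, 4, 5, 5],
                  [3, 3, 3, 4],
                  [4, 4, 5, 4]])
print(round(cronbach_alpha(items), 2))
```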
    The purpose of this study is to describe the involvement of nurses in the decision-making process of seriously ill hospitalized adults. Nurses (696) completed interviews with 1,427 patients. Patient, surrogate, and physician interviews were also completed. Patients and surrogates perceive the nurse as more influential in decision making than does the nurse or physician. Many nurses reported having no (31%) or little (36%) knowledge of their patients' preferences, and 53% of the nurses did not advocate for their patients' preferences. Only 50% of the nurses reported educating their patients about the treatment plan chosen or discussing treatment options with their patients, and few (17%) discuss prognosis. This study indicates nurses are not actively involved in the decision-making process of their patients, especially older or more experienced nurses and those working in intensive care units.
    In response to residency work hour restrictions, programs restructured call schedules, increasing the use of short call (daytime admitting teams). Few data exist on the effect of short call on quality of patient care. Our objective was to examine the effect of short call admission on length of stay and quality of care for patients with acute decompensated heart failure. We conducted a retrospective cohort study of 218 patients admitted with acute decompensated heart failure to the Nashville VA Medical Center between July 1, 2003, and June 30, 2005. The primary exposure was short call, and the primary outcome was length of stay. The secondary outcomes--diuretic dosing, weight monitoring, and hospital complications--were determined through a combination of administrative data and chart review. Patients admitted to short call had a longer median length of stay than patients admitted to long call (5.2 days [interquartile range, 3.2 to 8.0 days] versus 3.9 days [interquartile range, 2.7 to 6.5 days]; P=0.0004). After adjustment for covariates, short call was associated with a 44% increase in length of stay (95% CI, 15% to 80%) compared with long call. Short call patients received fewer diuretic doses in the first 24 hours of hospitalization (1.80 versus 2.12; P=0.014) and had a longer median time to the second dose of loop diuretics compared with long call patients (17.9 hours versus 16.2 hours; P=0.044). Admission to short call is predictive of increased length of stay, a decreased number of diuretic doses, and delays in the timing of diuretics among patients with acute decompensated heart failure. Additional studies are needed to clarify the impact of short call admission on inpatient quality of care.
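    A percentage increase in length of stay of this form typically comes from modeling log-transformed LOS and exponentiating the coefficient; a minimal sketch follows, with a hypothetical DataFrame `d` and covariate names.
```python
# Minimal sketch: adjusted percent increase in length of stay for short-call
# admission via a log-linear model. `d` and its columns are hypothetical.
import numpy as np
import statsmodels.formula.api as smf

model = smf.ols("np.log(los_days) ~ short_call + age + comorbidity_score", data=d).fit()

pct_increase = (np.exp(model.params["short_call"]) - 1) * 100
print(f"Adjusted LOS increase for short call: {pct_increase:.0f}%")
```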
    ... zu Köln, Klinik III für Innere Medizin, Köln, Germany; Oliver Vonend, MD, Lars C. Rump, MD, Klinik für Nephrologie, Universitätskliniken Düsseldorf, Düsseldorf, Germany; Paul A. Sobotka, MD, The Ohio State University, Columbus, OH, and Ardian, Inc, Palo Alto, CA; Henry Krum, MBBS ...
