The Research on Medical Education Outcomes (ROMEO) Registry: Addressing Ethical and Practical Challenges of Using "Bigger," Longitudinal Educational Data
PROBLEM: Efforts to evaluate and optimize the effectiveness of medical education have been limited by the difficulty of designing medical education research. Longitudinal, epidemiological views of educational outcomes can help overcome these limitations, but such approaches require "bigger data": more learners, sources, and time points. The rich data institutions collect on students and residents can be mined; however, ethical and practical barriers to using these data must first be overcome. APPROACH: In 2008, the authors established the Research on Medical Education Outcomes (ROMEO) Registry, an educational data registry modeled after patient registries. New York University School of Medicine students, residents, and fellows provide consent for routinely collected educational, performance, quality improvement, and clinical practice data to be compiled into a deidentified, longitudinal database. As of January 2015, the registry included 1,225 residents and fellows across 12 programs (71% consent rate) and 841 medical students (86% consent rate). Procedures ensuring voluntary informed consent are essential to ethical enrollment and data use. Substantial resources are required to provide access to and manage the data. OUTCOMES: The registry supports educational scholarship. Seventy-two studies using registry data have been presented or published. These focus on evaluating the curriculum, quality of care, and measurement quality, and on assessing needs, competencies, skills development, transfer of skills to practice, remediation patterns, and links between education and patient outcomes. NEXT STEPS: The authors are working to integrate assessment of relevant outcomes into the curriculum, maximize both the quantity and quality of the data, and expand the registry across institutions.
Charting a Key Competency Domain: Understanding Resident Physician Interprofessional Collaboration (IPC) Skills
BACKGROUND: Interprofessional collaboration (IPC) is essential for quality care. Understanding residents' level of competence is a critical first step to designing targeted curricula and workplace learning activities. In this needs assessment, we measured residents' IPC competence using specifically designed Objective Structured Clinical Exam (OSCE) cases and surveyed residents regarding training needs. METHODS: We developed three cases to capture IPC competence in the context of physician-nurse collaboration. A trained actor played the role of the nurse (Standardized Nurse, SN). The Interprofessional Education Collaborative (IPEC) framework was used to create a ten-item, behaviorally anchored IPC performance checklist (scored on a three-point scale: not done, partially done, well done) measuring four generic domains: values/ethics; roles/responsibilities; interprofessional communication; and teamwork. Specific skills required for each scenario were also assessed, including teamwork communication (SBAR and CUS) and patient-care-focused tasks. In addition to evaluating IPC skills, the SN assessed communication, history-taking, and physical exam skills. IPC scores were computed as the percent of items rated well done in each domain (Cronbach's alpha > 0.77). Analyses include item frequencies, comparison of mean domain scores, correlation between IPC and other skills, and content analysis of SN comments and resident training needs. RESULTS: One hundred and seventy-eight residents (of 199 total) completed an IPC case, and results are reported for the 162 who participated in our medical education research registry. IPC domain scores were: roles/responsibilities mean = 37% well done (SD 37%); values/ethics mean = 49% (SD 40%); interprofessional communication mean = 27% (SD 36%); teamwork mean = 47% (SD 29%). IPC was not significantly correlated with other core clinical skills. SNs' comments focused on respect and IPC as a distinct skill set.
Residents described needs for greater clarification of roles and more workplace-based opportunities structured to support interprofessional education/learning. CONCLUSIONS: The IPC cases and competence checklist are a practical method for conducting needs assessments and evaluating IPC training/curriculum that provides rich and actionable data at both the individual and program levels.
Cultivating a New Generation of Biomedical Entrepreneurs
In recent years, scientific and technological advances have brought great innovation within the life sciences industry, introducing the need for entrepreneurship training for medical and engineering graduates. With this in mind, Michal Gilon-Yanai, Dr Robert Schneider and their collaborators developed an academic program designed to provide students and faculty members with the skills they need to become successful entrepreneurs. The team of collaborators includes Dr Gabrielle Gold-von Simson, an expert in implementing academic programs, and Dr Colleen Gillespie, who specializes in education, evaluation and dissemination science. Their pioneering program trains students on how to bring new biomedical technologies to market.
Supporting a learning healthcare system-using an ongoing unannounced standardized patient program to continuously improve primary care resident education, team training, and healthcare quality [Meeting Abstract]
STATEMENT OF PROBLEM OR QUESTION (ONE SENTENCE): In order to describe quality improvement (QI) methods for health systems, we report on 10 years of using Unannounced Standardized Patient (USP) visits as the core of a program of education, training, and improvement in a system serving vulnerable patients in partnership with an academic medical center. LEARNING OBJECTIVES 1: Consider methods for supporting learning healthcare systems LEARNING OBJECTIVES 2: Identify performance data to improve care DESCRIPTION OF PROGRAM/INTERVENTION, INCLUDING ORGANIZATIONAL CONTEXT (E.G. INPATIENT VS. OUTPATIENT, PRACTICE OR COMMUNITY CHARACTERISTICS): The IOM defines a Learning Healthcare System (LHCS) as one in which science, informatics, incentives, and culture are aligned for continuous improvement and innovation, where best practices are seamlessly embedded in the delivery process, and where new knowledge is captured as an integral by-product of the delivery experience. As essential as electronic health records are to an LHCS, such data fail to capture all the actionable information needed to sustain learning within complex systems. USPs are trained actors who present to clinics, incognito, to portray standardized chief complaints, histories, and characteristics. We designed and delivered USP visits to two urban, safety-net clinics, focusing on assessing physician, team, and clinical microsystem functioning. MEASURES OF SUCCESS (DISCUSS QUALITATIVE AND/OR QUANTITATIVE METRICS WHICH WILL BE USED TO EVALUATE PROGRAM/INTERVENTION): Behaviorally anchored assessments are used to assess core clinical skills (e.g., communication, information gathering, patient education, adherence to guidelines, patient centeredness, and patient activation). Team functioning assessments include professionalism and coordination. Microsystem assessment focuses on safety issues like identity confirmation, hand washing, and navigation.
Data from these visits have been provided to the residency, primary care teams, and leadership, and have been used to drive education, team training, and QI. FINDINGS TO DATE (IT IS NOT SUFFICIENT TO STATE FINDINGS WILL BE DISCUSSED): 1,111 USP visits have been conducted with internal medicine and primary care residents and their teams/clinics. At the resident level, needs for additional education and training in depression management, opioid prescribing, smoking cessation, and patient activation were identified and informed education. Chart reviews found substantial variation in the ordering of labs and tests. At the team level, USPs uncovered needs for staff training, enhanced communication, and better processes for eliciting and documenting Social Determinants of Health (SDoH). Audit/feedback reports on provider responses to embedded SDoH, combined with targeted education/resources, were associated with increased rates of eliciting and effectively responding to SDoH. In the early COVID wave, USPs tested the clinic response to a potentially infectious patient. Currently, USPs are being deployed to understand variability in patients' experience of telemedicine given the rapid transformation to this modality. Finally, generalizable questions about underlying principles of medical education and quality improvement are being asked and answered using USP data to foster deeper understanding of levers for change. KEY LESSONS FOR DISSEMINATION (WHAT CAN OTHERS TAKE AWAY FOR IMPLEMENTATION TO THEIR PRACTICE OR COMMUNITY): A comprehensive USP program can provide unique insights for driving QI and innovation and help sustain an LHCS.
Communication skills over time for eight medical school cohorts: Exploration of selection, curriculum, and measurement effects [Meeting Abstract]
BACKGROUND: NYU uses the same 14-item checklist for assessing medical student communication skills across our curriculum, which includes high-quality Objective Structured Clinical Skills Exams throughout the first three years of medical school: a 3-station Introductory Clinical Experience OSCE (ICE), a 3-station end-of-clinical-skills OSCE (Practice of Medicine; POM), and an 8-station, high-stakes OSCE (Comprehensive Clinical Skills Exam; CCSE) after the core clerkships. We describe how skills change throughout medical school and explore how patterns vary by cohort (class) in ways that could be explained by admissions criteria, measurement quality, and/or curriculum changes.
METHOD(S): Three domains are assessed: information gathering (6 items), relationship development (5 items), and patient education & counseling (3 items). Checklist items use a 3-point scale (not done, partly done, well done) with behavioral anchors. Internal consistency (Cronbach's alpha) exceeds .75 for all subdomains and across all years. The domains are supported by confirmatory factor analysis. Mean % well done was calculated across cases and individuals for each subdomain in an OSCE and compared over the OSCEs and between 8 classes of medical school students entering from 2009 to 2016 (graduating 2013 to 2020) (n=1569).
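For illustration only (not part of the study), the subdomain scoring and internal consistency checks described above can be sketched in Python; the ratings matrix below is hypothetical:

```python
import numpy as np

def percent_well_done(scores, well_done=2):
    """Mean % of checklist items rated 'well done' (coded 2 on a 0/1/2 scale)."""
    scores = np.asarray(scores)
    return 100.0 * np.mean(scores == well_done)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_examinees x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings for 4 students on a 3-item subdomain
# (0 = not done, 1 = partly done, 2 = well done)
ratings = np.array([[2, 2, 1],
                    [1, 1, 0],
                    [2, 2, 2],
                    [0, 1, 1]])
pct = percent_well_done(ratings)   # share of all ratings that are "well done"
alpha = cronbach_alpha(ratings)    # internal consistency of the subdomain
```

The same two statistics (mean % well done per subdomain, alpha per subdomain) would then be computed within each OSCE and compared across exams and cohorts.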
RESULT(S): Cohorts showed similar communication skills trajectories: improvement over time. Despite changes in admissions criteria and processes, cohorts did not differ in terms of demographics, undergraduate GPA, or MCAT scores. Variability in scores decreased in all cohorts over time while communication improved. Patient education & counseling was significantly and substantially lower than the other domains. In terms of cohort effects, communication scores for the entering class of 2013 at the start of medical school (ICE OSCE) were significantly higher than those of the previous four classes. At the end of MS2, scores were similar across cohorts for the information gathering and relationship development domains (and high, mean range = 77-87% well done), but patient education & counseling varied: improvement from the first to the third cohort, then decline across the last five cohorts. Within the CCSE (8-station pass/fail, MS3), communication scores increased steadily across entering classes, especially from cohort 4 on. These changes over time and between cohorts were mapped onto a priori descriptions of curricular, measurement, and admissions changes.
CONCLUSION(S): Our cohort data showed interesting and complex patterns. This study reinforces some limitations of linking curriculum to performance (e.g., no direct measures of the curriculum in terms of content, process, and intensity over time; limited data on what makes cohorts different; variable measurement over time; and the inability to control for broader trends likely to influence both cohort and time effects) while also demonstrating the promise of longitudinal perspectives on the development of core competencies. LEARNING OBJECTIVE #1: Understand cohort performance in relation to curricular trends. LEARNING OBJECTIVE #2: Describe variation in performance
Changing hats: Lessons learned integrating coaching into UME and GME [Meeting Abstract]
BACKGROUND: The transition from medical school to residency is characterized by an abrupt shift in learning needs and goals. Coaching is a promising intervention to support the individual learning and growth trajectories of learners. It is uncommon, however, for medical school faculty to have undergone training as coaches. We explored our faculty's perceptions and skills after instituting a new coaching program.
METHOD(S): UME faculty advisors (N=12) and GME faculty (N=16) participated in a coaching development program and in community of practice meetings where challenging coaching scenarios were shared. GME faculty also participated in a Group Objective Structured Clinical Exam (GOSCE) to practice and receive feedback on their skills. Peer-faculty observers and resident raters used behaviorally grounded checklists to assess faculty performance. We conducted two focus groups: 1) UME advisors engaged in longitudinal coaching (n=9) and 2) GME faculty participating in the coaching development program (n=8) to better understand how faculty make sense of and put into practice these new coaching roles and skills.
RESULT(S): Simple thematic coding showed that both groups emphasized the blurring of the many roles they serve when interacting with trainees and struggled with recognizing both which hat to wear (role to adopt) and which skills to call upon in specific situations. UME advisors, who have dedicated advising/coaching roles, reported assuming multiple roles at different times with the same students. Many of the GME coaches serve as Associate Program Directors and described adopting a coaching frame of reference (mentality) and requiring external reinforcement for coaching skills. Some reported realizing after the fact that coaching would have been a valuable approach. Faculty newer to their role felt more successful in adopting a coaching mindset and engaging in coaching. Faculty were curious about how trainees would feel about this approach and anticipated that some would appreciate it more than others. Twelve faculty participated in a three-station coaching GOSCE. Both resident raters and faculty peer raters suggested faculty coaches were able to establish trust and engage in authentic listening. Coaches negotiated the tension between empathetic listening and supporting goal-setting. Residents provided slightly lower ratings than peer observers on coaches' ability to ask questions and assume a coachee-focused agenda.
CONCLUSION(S): Medical educators may benefit from obtaining coaching skills, but deliberate training in how these skills complement, and differ from, existing skills requires both didactic and experiential learning. Cultivating a community of practice and offering opportunities for deliberate practice, observation, and feedback are essential for medical educators to achieve mastery as coaches. LEARNING OBJECTIVE #1: Identify and perform appropriate learning activities to guide personal and professional development (PBL) LEARNING OBJECTIVE #2: Understand and apply core longitudinal coaching skills (Professionalism)
Internal medicine tele-takeover: Lessons learned from the emerging pandemic [Meeting Abstract]
BACKGROUND: Healthcare systems rose to the challenges of COVID-19 by creating or expanding telehealth programs to ensure that patients could access care from home. Traditionally, though, physicians receive limited formal telemedicine training, which made preparedness for this transition uneven. We designed a survey for General Internal Medicine (GIM) physicians within our diverse health system to describe their experiences with providing virtual patient care, with the ultimate goal of identifying actionable recommendations for health system leaders and medical educators.
METHOD(S): Surveys were sent to all faculty outpatient GIM physicians working at NYU Langone Health, NYC Health + Hospitals/Bellevue and Gouverneur, and the VA NY Harbor Health System (n=378) in May & June of 2020. Survey items consisted of Likert and open-ended questions on experience with televisits (13 items) and attitudes toward care (24 items). Specific questions covered barriers to communication over remote modalities.
RESULT(S): 195/378 (52%) responded to the survey. 96% of providers reported having problems establishing a connection from the patient's end, while 84% reported difficulty establishing a connection on the provider's end. Using interpreter services over the phone was also problematic, with 38% of providers reporting difficulties. Regarding teamness, 35% of physicians found it difficult to share information with healthcare team members during virtual visits and 42% found it difficult to work collaboratively with team members, both compared to in-person visits. When subdivided, 24% of private and 40% of public providers found information sharing more difficult (p<0.04); 31% of private providers and 45% of public providers found team collaboration more difficult (ns). Physicians also identified challenges in several domains, including physical exams (97%), establishing relationships with new patients (74%), taking a good history (48%), and educating patients (35%). In thematic analysis of open-ended comments, themes emerged related to technological challenges, new systems issues, and new patient/provider communication experiences. Positives noted by physicians included easier communication with patients who often struggle to keep in-person appointments, easier remote monitoring, and a more thorough understanding of patients' home lives.
CONCLUSION(S): Provider experience differences were rooted in the type of technology employed. Safety-net physicians conducted mostly telephonic visits while private outpatient physicians utilized video visits, despite both using the same brand of electronic medical record system. As we consider a new normal and prolonged community transmission of COVID-19, it is essential to establish telemedicine training, tools, and protocols that meet the needs of both patients and physicians across diverse settings. LEARNING OBJECTIVE #1: Describe challenges and barriers to effective communication and clinical skill utilization during televisits LEARNING OBJECTIVE #2: Conceptualize recommendations for educational curricula and health service improvement areas
Fighting the COVID-19 pandemic from the clinic-impact of the primary care provider [Meeting Abstract]
BACKGROUND: COVID-19 has overwhelmed hospitals at various stages of the pandemic, leading to intense focus on availability of inpatient resources and less attention to primary care contributions. There is clear evidence that medical comorbidities, social determinants of health, and individual behaviors such as mask-wearing affect COVID-19 outcomes. By managing medical comorbidities and modifying social behaviors, it is plausible that primary care physicians (PCPs) improve COVID-19 outcomes. Socioeconomic status (SES) and environment likely affect the number of PCPs and their effectiveness in a community. Notwithstanding these factors, we hypothesize that PCPs contribute to healthier communities and that this will correlate with decreased COVID-19 cases and mortality.
METHOD(S): We used three surrogate measures of PCP effectiveness: PCP rate (#PCPs/population), flu vaccination rate, and number of preventable hospital stays. We merged county-level data from USA Facts, the New York Times masking survey, the Robert Wood Johnson Foundation County Health Data, and the Health Resources & Services Administration. We ran multiple linear regression models to measure the variance in COVID-19 cases or deaths contributed by each measure of PCP effectiveness after adjusting for age, race, economic, and environmental factors. A second model measured the effect of PCP rates on mask adherence adjusted for the same confounders. Data were merged and analyzed using SPSS v.25.
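The adjusted models reported below can be sketched as follows (illustrative only: the data are synthetic, the covariate set is reduced to a single stand-in for the study's adjustments, and the coefficients are seeded to roughly mirror the reported directions). Standardized betas fall out of ordinary least squares run on z-scored variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic county-level data (NOT the study's data): outcome is a COVID-19
# case rate; predictors are the three PCP-effectiveness measures plus one
# hypothetical adjustment covariate (median age) standing in for confounders.
n = 3000
pcp_rate   = rng.normal(55, 15, n)     # PCPs per 100,000 population
flu_vax    = rng.normal(45, 10, n)     # flu vaccination rate (%)
prev_stays = rng.normal(4500, 900, n)  # preventable hospital stays
median_age = rng.normal(40, 5, n)
cases = (-0.07  * (pcp_rate   - 55)   / 15
         - 0.067 * (flu_vax    - 45)   / 10
         + 0.136 * (prev_stays - 4500) / 900
         + 0.05  * (median_age - 40)   / 5
         + rng.normal(0, 1, n))        # noise

def standardized_betas(y, predictors):
    """OLS on z-scored outcome and predictors; coefficients are standardized betas."""
    z = lambda a: (a - a.mean()) / a.std()
    X = np.column_stack([z(col) for col in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

betas = standardized_betas(cases, [pcp_rate, flu_vax, prev_stays, median_age])
# Expect negative betas for PCP rate and flu vaccination and a positive beta
# for preventable stays, mirroring the direction of the reported effects.
```

The z-scoring is what makes the coefficients comparable across predictors measured in different units, which is why the abstract can rank the three effectiveness measures against one another.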
RESULT(S): Data were available from 2957 of 3143 county equivalents. There were an average of 55 PCPs per 100,000 population. By December 27, 2020, there were 18,750,038 COVID-19 cases and 325,507 deaths nationally. In multiple linear regression models, PCP rate (beta=-0.07), flu vaccination rate (beta=-0.067), and preventable hospital stays (beta=0.136) were all significant (p<=0.001) contributors to the variance seen in COVID-19 cases after adjusting for confounding variables. Similarly, PCP rate (beta=-0.056, p=0.003), flu vaccination rate (beta=-0.006, p=0.001), and preventable hospital stays (beta=0.166, p<0.001) were significant contributors to the variance seen in COVID-19 deaths. PCP rate was also found to be a significant contributor to variance in mask adherence (beta=0.078, p<0.001).
CONCLUSION(S): All measures of PCP effectiveness were significantly correlated with lower COVID-19 cases and deaths and higher self-reported mask adherence even after accounting for SES and environmental factors. The pandemic has exposed an American healthcare system that is detrimentally more reactive than preventative. Our study demonstrates the modest-but significant- success of prevention efforts by PCPs. We hope it will serve to increase resource allocation and attention toward the primary care sector of the healthcare workforce. LEARNING OBJECTIVE #1: Identify how increasing resource allocation to primary care may improve systems-based practice. LEARNING OBJECTIVE #2: Recognize the role that primary care physicians may play in improving COVID-19 outcomes
Describing trends from a decade of resident performance on core clinical skills as measured by unannounced standardized patients [Meeting Abstract]
BACKGROUND: Primary care (PC) residency training is a period that provides opportunity to develop skills required for independent practice. Unannounced Standardized Patients (USPs), or secret shoppers, are a controlled measure of clinical skills in actual practice. We sought to describe differences in core clinical communication skills over the last decade for PC residents.
METHOD(S): USPs presented as new patients for a comprehensive visit while portraying one of six unique outpatient cases (with either chronic or acute symptomatology). Actors received extensive training to ensure accurate case portrayal. Each USP completed a post-visit, behaviorally anchored checklist (not, partly, or well done) in order to provide extensive, actionable feedback. A standardized checklist was used, consisting of individual items across domains including information gathering, relationship development, patient education, activation, and satisfaction. Cronbach's alpha for the domains ranged from 0.62-0.89. Summary scores (mean % well done) were calculated by domain and compared by year for all learners and by PGY within year for the primary care (PC) residency. Differences were assessed using ANOVA. Case portrayal accuracy was ensured using audiotape review.
RESULT(S): 396 visits were conducted with PC residents in our urban, safety-net hospital system between 2013 and 2020. Across the 8 years, there was variation in mean scores per domain, though the Kruskal-Wallis H test did not show any statistically significant differences. Relationship development and information gathering were the highest rated skills, at 75% and 76% well done, respectively, on average. Patient satisfaction and activation remained uniformly low across years, with scores averaging 36% and 39% well done, respectively. Multivariate analysis showed no significant changes across domains by cohort (graduation year) and PGY level. Further, there were no significant differences by PGY year or cohort using a two-way ANOVA, though there was a slight upward trend in relationship development skills since 2017 for all PGY levels. Trends were similar in most domains, with 2020 scores higher than in previous years. There were no significant differences across domains among PGY1 learners only.
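The year-over-year comparison behind the Kruskal-Wallis H test mentioned above can be sketched as follows (illustrative Python with hypothetical per-visit scores, not the study's data):

```python
from scipy.stats import kruskal

# Hypothetical per-visit "relationship development" summary scores
# (% well done), grouped by visit year; values are made up for illustration.
scores_by_year = {
    2018: [75, 60, 80, 70, 90, 65],
    2019: [70, 85, 75, 60, 80, 95],
    2020: [80, 90, 75, 85, 70, 95],
}

# Kruskal-Wallis compares the rank distributions of the yearly groups
# without assuming normality of the % well done scores.
h_stat, p_value = kruskal(*scores_by_year.values())
```

A nonparametric test is a reasonable choice here because checklist-derived percentage scores are bounded and often skewed, so group means alone can mislead.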
CONCLUSION(S): While there were no significant differences in scores, we can postulate that PC residents enter residency with consistent foundational communication skills, possibly attributable to training. We elected to use the visit itself as the unit of analysis, which does not allow us to tease out differences among individual learners. We also have small sample sizes for the earlier years of the USP visit program, which may limit the results. Regardless, the results warrant further research to gain a more thorough understanding, possibly in relation to curricular trends. Further study will look at individual resident differences and ideally provide insight into areas for curricular improvement. LEARNING OBJECTIVE #1: Describe assessment measures LEARNING OBJECTIVE #2: Explore clinical competency
Validation of the comprehensive clinical skills exam (CCSE) measurement model [Meeting Abstract]
BACKGROUND: Performance-based assessment and feedback during medical training are essential for a successful transition to residency and independent clinical practice. Learners at New York University's School of Medicine (NYUSOM) participate in a routine comprehensive clinical skills examination (CCSE) that takes place near the end of medical school. During this exam, learners interact with standardized patients (SPs) and are rated on specific skills using a standardized checklist measuring important clinical skills domains. NYUSOM has utilized the same assessment tool since 2005. To date, there is limited evidence on the tool's validity and ability to differentiate among students. We sought to provide evidence for its reliability, validity, and generalizability.
METHOD(S): 1157 learners participated in the CCSE from 2011-2019 and were included in the analysis. Communication domain items assessed included patient education (3 items), relationship development (4 items), information gathering (6 items), and organization/time management (3 items). Items were scored using a 3-point behaviorally anchored scale (not, partly, or well done). To determine the degree to which the data mapped onto our theoretically informed communication domains, we conducted a four-factor confirmatory factor analysis (CFA), allowing factors to correlate (oblique rotation) and using mean- and variance-adjusted weighted least squares estimation (WLSMV) to account for the ordered categorical nature of the communication items. Model fit was assessed using root mean square error of approximation (RMSEA) < 0.08, comparative fit index (CFI) > 0.95, and standardized root mean square residual (SRMR) < 0.08.
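For reference, the RMSEA and CFI cutoffs cited above come from standard formulas over the model and baseline chi-square statistics; a minimal sketch (the chi-square values in the example are hypothetical, not the study's):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square,
    its degrees of freedom, and the sample size n."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: 1 minus the ratio of model non-centrality
    (chi-square in excess of df) to baseline-model non-centrality."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Hypothetical chi-square values for a sample of n = 1157:
model_rmsea = rmsea(100, 50, 1157)          # about 0.029, below the 0.08 cutoff
model_cfi = cfi(100, 50, 2000, 66)          # about 0.974, above the 0.95 cutoff
```

Both indices penalize model misfit relative to degrees of freedom, which is why an absolute chi-square test alone is rarely reported for samples this large.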
RESULT(S): The model fit the data well: RMSEA = 0.04, CFI = 0.98, and SRMR = 0.05. All factors were significantly correlated with one another (p < 0.05), with the largest correlations between patient education and organization/time management (0.86) and between patient education and information gathering (0.77). The smallest correlation was between organization/time management and information gathering (0.66). All items loaded significantly on the factors they measured. Only one item had a nonsignificant threshold loading between partly and well done, suggesting this part of the response scale may be hard for SPs to use to differentiate between students of varying ability on this item. Each factor had at least one item with a factor loading less than 0.7.
CONCLUSION(S): The analysis suggests that each item on the communication checklist significantly measures the domain it was designed to measure and that items can be summated to compute overall scores. Each domain had one item with a lower loading than the rest, suggesting these items may be measuring something different. Follow-up measurement modeling and profile analysis are the next logical steps in determining whether there is an important sub-domain that identifies a group of students performing differentially. LEARNING OBJECTIVE #1: Understand clinical communication LEARNING OBJECTIVE #2: Describe communication measures