Searched for: person:pusicm01; in-biosketch:true
Total Results: 140


The critical role of infrastructure and organizational culture in implementing competency-based education and individualized pathways in undergraduate medical education

Lomis, Kimberly D; Mejicano, George C; Caverzagie, Kelly J; Monrad, Seetha U; Pusic, Martin; Hauer, Karen E
In 2010, several key works in medical education predicted the changes necessary to train modern physicians to meet current and future challenges in health care, including the standardization of learning outcomes paired with individualized learning processes. The reframing of a medical expert as a flexible, adaptive team member and change agent, effective within a larger system and responsive to the community's needs, requires a new approach to education: competency-based medical education (CBME). CBME is an outcomes-based developmental approach to ensuring each trainee's readiness to advance through stages of training and continue to grow in unsupervised practice. Implementation of CBME with fidelity is a complex and challenging endeavor, demanding a fundamental shift in organizational culture and investment in appropriate infrastructure. This paper outlines how member schools of the American Medical Association Accelerating Change in Medical Education Consortium developed and implemented CBME, including common challenges and successes. Critical supporting factors include adoption of the master adaptive learner construct, longitudinal views of learner development, coaching, and a supportive learning environment.
PMID: 34291715
ISSN: 1466-187X
CID: 4950502

A Target Population Derived Method for Developing a Competency Standard in Radiograph Interpretation

Lee, Michelle S; Pusic, Martin V; Camp, Mark; Stimec, Jennifer; Dixon, Andrew; Carrière, Benoit; Herman, Joshua E; Boutis, Kathy
CONSTRUCT: For assessing a skill of visual diagnosis such as radiograph interpretation, competency standards are often developed in an ad hoc manner, with a poorly delineated connection to the target clinical population. BACKGROUND: Commonly used methods for assessing competency in radiograph interpretation are potentially biased: they rely on small samples of cases, subjective evaluations, or an expert-generated case mix rather than a representative sample from the clinical field. Further, while digital platforms are available to assess radiograph interpretation skill against an objective standard, they have not adopted a data-driven competency standard that assures educators and the public that a physician has achieved adequate mastery to enter practice, where they will make high-stakes clinical decisions. APPROACH: Operating on a purposeful sample of radiographs drawn from the clinical domain, we adapted the Ebel method, an established standard-setting method, to ascertain a defensible, clinically relevant mastery learning competency standard for the skill of radiograph interpretation, as a model for deriving competency thresholds in visual diagnosis. Using a previously established digital platform, emergency physicians interpreted pediatric musculoskeletal extremity radiographs. These data were then used, via one-parameter item response theory, to categorize radiographs into interpretation difficulty terciles (easy, intermediate, hard). A panel of emergency physicians, orthopedic surgeons, and plastic surgeons rated each radiograph with respect to clinical significance (low, medium, high). These data were then used to create a three-by-three matrix in which radiographic diagnoses were categorized by interpretation difficulty and clinical significance. Subsequently, a multidisciplinary panel that included medical and parent stakeholders determined the acceptable accuracy for each of the nine cells, and an overall competency standard was derived from the weighted sum. Finally, to examine the consequences of implementing this standard, we report the types of diagnostic errors that could occur under the derived competency standard. FINDINGS: To determine radiograph interpretation difficulty scores, 244 emergency physicians interpreted 1,835 pediatric musculoskeletal extremity radiographs. The median interpretation difficulty rating of the radiographs was -1.8 logits (IQR -4.1, 3.2), with a significant difference in difficulty across body regions (p < 0.0001). Physician review classified 1,055 (57.8%) radiographs as low, 424 (23.1%) as medium, and 356 (19.1%) as high clinical significance. The multidisciplinary panel suggested acceptable scores ranging from 76% to 95% across the cells of the three-by-three table, and the sum of equal-weighted scores yielded an overall performance-based competency standard of 85.5% accuracy. Of the 14.5% of diagnostic interpretation errors that could occur at the bedside if this competency standard were implemented, 9.8% would be in radiographs of low clinical significance, while 2.5% and 2.3% would be in radiographs of medium or high clinical significance, respectively. CONCLUSIONS: This study's novel integration of radiograph selection and a standard-setting method can be used to empirically derive an evidence-based competency standard for radiograph interpretation and can serve as a model for deriving competency thresholds for clinical tasks emphasizing visual diagnosis.
PMID: 34000944
ISSN: 1532-8015
CID: 4876792
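
For readers unfamiliar with the adapted Ebel computation described above, the following is a minimal sketch of the equal-weighted standard derivation. The nine per-cell acceptable-accuracy values are illustrative placeholders within the reported 76%-95% range, not the panel's actual ratings, so the resulting figure will not match the published 85.5%.

```python
# Sketch: adapted Ebel method -- overall competency standard as the
# (equal-)weighted sum of per-cell acceptable-accuracy thresholds.
# Rows: interpretation difficulty terciles; columns: clinical significance.
# Cell values are ILLUSTRATIVE placeholders within the reported 76-95% range.
import numpy as np

acceptable_accuracy = np.array([
    # low    medium  high    (clinical significance)
    [0.95,   0.95,   0.95],  # easy radiographs
    [0.85,   0.88,   0.90],  # intermediate
    [0.76,   0.80,   0.85],  # hard
])

weights = np.full((3, 3), 1 / 9)  # equal weights, as the abstract describes
standard = float((weights * acceptable_accuracy).sum())
print(f"Overall competency standard: {standard:.1%}")
```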

Physicians' Electrocardiogram Interpretations-Reply

Cook, David A; Pusic, Martin V
PMID: 33523118
ISSN: 2168-6114
CID: 4775932

Learning Curves in Health Professions Education Simulation Research: A Systematic Review

Howard, Neva M; Cook, David A; Hatala, Rose; Pusic, Martin V
STATEMENT: Learning curves are used in health professions education to graphically represent paths to competence and expertise. However, research using learning curves often omits important information. The authors conducted a systematic review of the reporting quality of learning curves in simulation-based education research to identify specific areas for improvement. Reviewers extracted information on graphical, statistical, and conceptual elements. The authors identified 230 eligible articles. Most learning curve elements were reported infrequently, including use of an optimal linking function, detailed description of the feedback or learning intervention, use of advanced visualization techniques such as overlaying and stacking, and depiction of competency thresholds. Reporting did not improve over time for most elements. Reporting of learning curves in health professions education research is incomplete and often underutilizes their desirable properties. Recommendations for improving the statistical, graphical, and conceptual reporting of learning curves, as well as applications to simulation research and education, are presented.
PMID: 32675731
ISSN: 1559-713X
CID: 4528522
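
As context for the "linking function" element discussed above, here is a minimal sketch fitting one commonly used functional form, a negative exponential, to hypothetical trainee accuracy data. This illustrates the general technique only; it is not the review's own analysis, and the parameter values are invented.

```python
# Sketch: fitting a negative-exponential learning curve
#   accuracy(trial) = asymptote - (asymptote - start) * exp(-rate * trial)
# to hypothetical per-trial accuracy data. One common functional form;
# the review surveys how such curves are reported, not this model.
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, start, asymptote, rate):
    return asymptote - (asymptote - start) * np.exp(-rate * trial)

rng = np.random.default_rng(0)
trials = np.arange(1, 51)
true_curve = neg_exp(trials, start=0.4, asymptote=0.9, rate=0.1)
observed = np.clip(true_curve + rng.normal(0, 0.05, trials.size), 0, 1)

params, _ = curve_fit(neg_exp, trials, observed, p0=(0.5, 0.8, 0.05))
start, asymptote, rate = params
print(f"start={start:.2f}, asymptote={asymptote:.2f}, rate={rate:.3f}")
```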

Prepubescent Female Genital Examination Images: Evidence Informed Learning Opportunities

Campos, S; Smith, T; Davis, A L; Pusic, M V; Shouldice, M; Brown, J; Legano, L; Pecaric, M; Boutis, K
OBJECTIVES: To determine the diagnoses and image features that are associated with difficult prepubescent female genital image interpretations. DESIGN AND SETTING: This was a mixed methods study conducted at a tertiary care pediatric centre using images from a previously developed education platform. PARTICIPANTS: These included 107 medical students, residents, fellows, and attendings who interpreted 158 cases to derive case difficulty estimates. INTERVENTIONS: This was a planned secondary analysis of participant performance data obtained from a prospective multi-center cross-sectional study. An expert panel also performed a descriptive review of the images with the highest frequency of diagnostic error. MAIN OUTCOME MEASURES: We derived the proportion of participants who interpreted an image correctly and the features that were common in images with the most frequent diagnostic errors. RESULTS: We obtained 16,906 image interpretations. The mean proportion correct scores for each diagnosis were as follows: normal/normal variants 0.84 (95% CI 0.82, 0.87); infectious/dermatology pathology 0.59 (95% CI 0.45, 0.73); anatomic pathology 0.61 (95% CI 0.41, 0.81); and traumatic pathology 0.64 (95% CI 0.49, 0.79). The mean proportion correct scores varied by diagnosis, p < 0.001. The descriptive review demonstrated that poor image quality, infant genitalia, normal variant anatomy, external material (cream) in the genital area, and nonspecific erythema were common features in images with lower accuracy scores. CONCLUSIONS: A quantitative and qualitative examination of prepubescent female genital examination image interpretations provided insight into the diagnostic challenges of this complex examination. These data can be used to inform the design of teaching interventions to improve skill in this area.
PMID: 33189899
ISSN: 1873-4332
CID: 4695772
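
A minimal sketch of the headline statistic above, mean proportion correct with a 95% CI per diagnosis group, computed here on hypothetical pooled interpretation data with a normal-approximation interval. The column names and the pooling approach are assumptions; the study's exact procedure is not specified in the abstract.

```python
# Sketch: proportion correct with a normal-approximation 95% CI per
# diagnosis group, on hypothetical interpretation records. Column names
# ("diagnosis_group", "correct") are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
records = pd.DataFrame({
    "diagnosis_group": ["normal"] * 200 + ["infectious"] * 100,
    "correct": np.concatenate([
        rng.random(200) < 0.84,   # ~84% correct for normal variants
        rng.random(100) < 0.59,   # ~59% correct for infectious pathology
    ]).astype(int),
})

for group, g in records.groupby("diagnosis_group"):
    p, n = g["correct"].mean(), len(g)
    half = 1.96 * (p * (1 - p) / n) ** 0.5   # normal approximation
    print(f"{group}: {p:.2f} (95% CI {p - half:.2f}, {p + half:.2f})")
```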

Image interpretation: Learning analytics-informed education opportunities

Thau, Elana; Perez, Manuela; Pusic, Martin V; Pecaric, Martin; Rizzuti, David; Boutis, Kathy
Objectives: Using a sample of pediatric chest radiographs (pCXR) taken to rule out pneumonia, we obtained diagnostic interpretations from physicians and used learning analytics to determine the radiographic variables and participant review processes that predicted an incorrect diagnostic interpretation. Methods: This was a prospective cross-sectional study. A convenience sample of frontline physicians with a range of experience levels interpreted 200 pCXR presented on a customized online radiograph presentation platform. Participants were asked to determine the absence or presence (with respective location) of pneumonia. The pCXR were categorized for specific image-based variables potentially associated with interpretation difficulty. We also generated heat maps displaying the locations of diagnostic error among normal pCXR. Finally, we compared image review processes in participants with higher versus lower levels of clinical experience. Results: We enrolled 83 participants (20 medical students, 40 postgraduate trainees, and 23 faculty) and obtained 12,178 case interpretations. Variables that predicted increased pCXR interpretation difficulty were pneumonia versus no pneumonia (β = 8.7, 95% confidence interval [CI] = 7.4 to 10.0), low versus higher visibility of pneumonia (β = -2.2, 95% CI = -2.7 to -1.7), nonspecific lung pathology (β = 0.9, 95% CI = 0.40 to 1.5), localized versus multifocal pneumonia (β = -0.5, 95% CI = -0.8 to -0.1), and one versus two views (β = 0.9, 95% CI = 0.01 to 1.9). A review of diagnostic errors identified that bony structures, vessels in the perihilar region, peribronchial thickening, and the thymus were often mistaken for pneumonia. Participants with lower experience were less accurate when they reviewed one of two available views (p < 0.0001), and the accuracy of those with higher experience increased with increased confidence in their response (p < 0.0001). Conclusions: Using learning analytics, we identified actionable learning opportunities for pCXR interpretation, which can be used to customize the weighting of which cases learners practice. Furthermore, experienced-novice comparisons revealed image review processes associated with greater diagnostic accuracy, providing additional insight into the development of image interpretation skill.
PMCID:8062270
PMID: 33898916
ISSN: 2472-5390
CID: 4852972
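
A minimal sketch of an item-level regression in the spirit of the coefficients reported above. The variable names, the simulated data, and the OLS model family are assumptions rather than the study's exact analysis.

```python
# Sketch: regressing a per-image difficulty score on image features,
# echoing the kind of coefficients reported above. Column names and the
# OLS model family are assumptions, not the study's exact method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
images = pd.DataFrame({
    "pneumonia": rng.integers(0, 2, n),
    "low_visibility": rng.integers(0, 2, n),
    "two_views": rng.integers(0, 2, n),
})
# Hypothetical per-image difficulty score (e.g., logit of the error rate).
images["difficulty"] = (8.7 * images["pneumonia"]
                        + 2.2 * images["low_visibility"]
                        - 0.9 * images["two_views"]
                        + rng.normal(0, 1, n))

model = smf.ols("difficulty ~ pneumonia + low_visibility + two_views",
                data=images).fit()
print(model.summary().tables[1])  # coefficients with 95% CIs
```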

Importance Ranking of Electrocardiogram Rhythms: A Primer for Curriculum Development

Penalo, Laura; Pusic, Martin; Friedman, Julie Lynn; Rosenzweig, Barry P; Lorin, Jeffrey D
INTRODUCTION: Electrocardiogram interpretation is an essential skill for emergency and critical care nurses and physicians. There remains a gap in standardized curricula and evaluation strategies used to achieve and assess competence in electrocardiogram interpretation. The purpose of this study was to develop an importance ranking of the 120 American Heart Association electrocardiogram diagnostic labels with interdisciplinary perspectives to inform curriculum development. METHODS: Data for this mixed methods study were collected through focus groups and individual semi-structured interviews. A card sort was used to assign relative importance scores to all 120 American Heart Association electrocardiogram diagnostic labels. Thematic analysis was used for qualitative data on participants' rationale for the rankings. RESULTS: The 18 participants included 6 emergency and critical care registered nurses, 5 cardiologists, and 7 emergency medicine physicians. The 5 diagnoses chosen as most important by all disciplines were ventricular tachycardia, ventricular fibrillation, atrial fibrillation, complete heart block, and normal electrocardiogram. The "top 20" diagnoses by each discipline were also reported. Qualitative thematic content analysis revealed that participants from all 3 disciplines identified skill in electrocardiogram interpretation as clinically imperative and acknowledged the importance of recognizing normal, life-threatening, and time-sensitive electrocardiogram rhythms. Additional qualitative themes, identified by individual disciplines, were reported. DISCUSSION: This mixed methods approach provided valuable interdisciplinary perspectives concerning electrocardiogram curriculum case selection and prioritization. Study findings can provide a foundation for emergency and critical care educators to create local electrocardiogram educational programs. Further work is recommended to validate the list among a larger population of emergency and critical care frontline nurses and physicians.
PMID: 33546884
ISSN: 1527-2966
CID: 4779162
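
A minimal sketch of aggregating card-sort importance scores into overall and per-discipline rankings. The scoring scale and the simple-mean aggregation rule are assumptions, since the abstract does not specify how scores were combined.

```python
# Sketch: aggregating card-sort importance ratings into a ranked list.
# Each participant assigns a relative importance score to each ECG
# diagnostic label; we average within discipline and overall.
# The simple-mean aggregation rule is an assumption.
import pandas as pd

ratings = pd.DataFrame([
    # participant, discipline,     diagnosis,               score (high = important)
    ("p1", "nurse",        "ventricular tachycardia", 10),
    ("p1", "nurse",        "normal ECG",               8),
    ("p2", "cardiologist", "ventricular tachycardia",  9),
    ("p2", "cardiologist", "normal ECG",               7),
    ("p3", "EM physician", "ventricular tachycardia", 10),
    ("p3", "EM physician", "normal ECG",               9),
], columns=["participant", "discipline", "diagnosis", "score"])

overall = (ratings.groupby("diagnosis")["score"].mean()
                  .sort_values(ascending=False))
by_discipline = ratings.pivot_table(index="diagnosis", columns="discipline",
                                    values="score", aggfunc="mean")
print(overall)
print(by_discipline)
```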

Implicit bias in residency interview allocation? When surveys are silent

Pusic, Martin V; Wyatt, Tasha R
In this issue of Medical Education, Kremer et al. [1] present the results of the Texas STAR (Seeking Transparency in Residency Application) survey, which details medical students' experience with the residency match process in the U.S. It is a considerable undertaking, collecting data representing 115 medical schools across the nation, with over 7000 student respondents in 2020 alone. Their goal in reporting this survey is ultimately to "create a healthier match climate nationwide" by making transparent which factors contribute to success in obtaining an interview, the first step toward matriculating into a competitive residency program [2].
PMID: 33128248
ISSN: 1365-2923
CID: 4647222

Accuracy of Physicians' Electrocardiogram Interpretations: A Systematic Review and Meta-analysis

Cook, David A; Oh, So-Young; Pusic, Martin V
Importance: The electrocardiogram (ECG) is the most common cardiovascular diagnostic test. Physicians' skill in ECG interpretation is incompletely understood. Objectives: To identify and summarize published research on the accuracy of physicians' ECG interpretations. Data Sources: A search of PubMed/MEDLINE, Embase, Cochrane CENTRAL (Central Register of Controlled Trials), PsycINFO, CINAHL (Cumulative Index to Nursing and Allied Health), ERIC (Education Resources Information Center), and Web of Science was conducted for articles published from database inception to February 21, 2020. Study Selection: Of 1138 articles initially identified, 78 studies that assessed the accuracy of physicians' or medical students' ECG interpretations in a test setting were selected. Data Extraction and Synthesis: Data on study purpose, participants, assessment features, and outcomes were abstracted, and methodological quality was appraised with the Medical Education Research Study Quality Instrument. Results were pooled using random-effects meta-analysis. Main Outcomes and Measures: Accuracy of ECG interpretation. Results: Across all training levels, the median accuracy was 54% (interquartile range [IQR], 40%-66%; n = 62 studies) on pretraining assessments and 67% (IQR, 55%-77%; n = 47 studies) on posttraining assessments. Accuracy varied widely across studies. The pooled accuracy for pretraining assessments was 42.0% (95% CI, 34.3%-49.6%; n = 24 studies; I² = 99%) for medical students, 55.8% (95% CI, 48.1%-63.6%; n = 37 studies; I² = 96%) for residents, 68.5% (95% CI, 57.6%-79.5%; n = 10 studies; I² = 86%) for practicing physicians, and 74.9% (95% CI, 63.2%-86.7%; n = 8 studies; I² = 22%) for cardiologists. Conclusions and Relevance: Physicians at all training levels had deficiencies in ECG interpretation, even after educational interventions. Improved education across the practice continuum appears warranted. The wide variation in outcomes could reflect real differences in training or skill, or differences in assessment design.
PMID: 32986084
ISSN: 2168-6114
CID: 4616522
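
A minimal sketch of random-effects pooling of accuracy proportions using the standard DerSimonian-Laird estimator, the generic technique behind pooled-accuracy and I² figures like those above. The input studies are hypothetical; this is not the authors' analysis code.

```python
# Sketch: DerSimonian-Laird random-effects pooling of accuracy
# proportions. Input (accuracy, n_examinees) pairs are hypothetical.
import numpy as np

studies = [(0.42, 120), (0.55, 200), (0.48, 90), (0.61, 150)]
y = np.array([p for p, _ in studies])
v = np.array([p * (1 - p) / n for p, n in studies])  # binomial variance

w = 1 / v                                   # fixed-effect weights
y_fe = (w * y).sum() / w.sum()
Q = (w * (y - y_fe) ** 2).sum()             # Cochran's Q
df = len(y) - 1
C = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - df) / C)               # between-study variance

w_re = 1 / (v + tau2)                       # random-effects weights
pooled = (w_re * y).sum() / w_re.sum()
se = np.sqrt(1 / w_re.sum())
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled accuracy = {pooled:.1%} "
      f"(95% CI {pooled - 1.96*se:.1%} to {pooled + 1.96*se:.1%}), "
      f"I^2 = {i2:.0f}%")
```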

Child Abuse Recognition Training for Prehospital Providers using Deliberate Practice

Adelgais, Kathleen; Pusic, Martin; Abdoo, Denise; Caffrey, Sean; Snyder, Katherine; Alletag, Michelle; Balakas, Ashley; Givens, Timothy; Kane, Ian; Mandt, Maria; Roswell, Kelley; Saunders, Mary; Boutis, Kathy
BACKGROUND: In most states, prehospital professionals (PHPs) are mandated reporters of suspected abuse but cite a lack of training as a challenge to recognizing and reporting physical abuse. We developed a learning platform for the visual diagnosis of pediatric abusive versus non-abusive burn and bruise injuries and examined the amount and rate of skill acquisition. METHODS: This was a prospective cross-sectional study of PHPs participating in an online educational intervention containing 114 case vignettes. PHPs indicated whether they believed a case was concerning for abuse and whether they would report it to child protection services. Participants received feedback after submitting a response, permitting deliberate practice of the cases. We describe learning curves, overall accuracy, sensitivity (diagnosis of abusive injuries), and specificity (diagnosis of non-abusive injuries) to determine the amount of learning. We performed multivariable regression analysis to identify demographic and case variables associated with a correct case interpretation. After completing the educational intervention, PHPs completed a self-efficacy survey on perceived gains in their ability to recognize cutaneous signs of abuse and report to social services. RESULTS: We enrolled 253 PHPs who completed all the cases: 158 (63.6%) emergency medical technicians (EMTs) and 95 (36.4%) advanced EMTs and paramedics. Learning curves demonstrated that, with one exception, participants' learning increased throughout the educational intervention. Mean diagnostic accuracy increased by 4.9% (95% CI 3.2, 6.7), and the mean final diagnostic accuracy, sensitivity, and specificity were 82.1%, 75.4%, and 85.2%, respectively. The odds of a correct interpretation were higher for bruise versus burn cases (OR = 1.4; 95% CI 1.3, 1.5), if the PHP was an advanced EMT/paramedic (OR = 1.3; 95% CI 1.1, 1.4), and if the learner indicated prior training in child abuse (OR = 1.2; 95% CI 1.0, 1.3). Learners indicated increased comfort in knowing which cases should be reported and in interpreting exams in children with cutaneous injuries, with a median Likert score of 5 out of 6 (IQR 5, 6). CONCLUSION: An online module utilizing deliberate practice led to measurable skill improvement among PHPs in differentiating abusive from non-abusive burn and bruise injuries.
PMID: 33054522
ISSN: 1545-0066
CID: 4660832
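
A minimal sketch of the accuracy, sensitivity, and specificity summaries reported above, plus a simple cumulative-accuracy learning curve, computed on hypothetical case-response data. The simulated learner and the field layout are assumptions.

```python
# Sketch: accuracy, sensitivity (abusive cases called abusive),
# specificity (non-abusive cases called non-abusive), and a cumulative
# accuracy learning curve, on hypothetical response data.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 114
truth = rng.integers(0, 2, n_cases)          # 1 = abusive, 0 = non-abusive
# Hypothetical learner whose probability of a correct call improves:
p_correct = np.linspace(0.70, 0.85, n_cases)
response = np.where(rng.random(n_cases) < p_correct, truth, 1 - truth)

correct = response == truth
accuracy = correct.mean()
sensitivity = correct[truth == 1].mean()
specificity = correct[truth == 0].mean()
cumulative = np.cumsum(correct) / np.arange(1, n_cases + 1)  # learning curve

print(f"accuracy={accuracy:.1%}, sensitivity={sensitivity:.1%}, "
      f"specificity={specificity:.1%}")
print("cumulative accuracy at cases 10/50/114:",
      cumulative[[9, 49, -1]].round(2))
```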