Comparing Users to Non-Users of Remote Patient Monitoring for Postpartum Hypertension [Letter]
Kidd, Jennifer M J; Alku, Dajana; Vertichio, Rosanne; Akerman, Meredith; Prasannan, Lakha; Mann, Devin M; Testa, Paul A; Chavez, Martin; Heo, Hye J
PMID: 39396754
ISSN: 2589-9333
CID: 5718282
Effect of a behavioral nudge on adoption of an electronic health record-agnostic pulmonary embolism risk prediction tool: a pilot cluster nonrandomized controlled trial
Richardson, Safiya; Dauber-Decker, Katherine L; Solomon, Jeffrey; Seelamneni, Pradeep; Khan, Sundas; Barnaby, Douglas P; Chelico, John; Qiu, Michael; Liu, Yan; Sanghani, Shreya; Izard, Stephanie M; Chiuzan, Codruta; Mann, Devin; Pekmezaris, Renee; McGinn, Thomas; Diefenbach, Michael A
OBJECTIVE/UNASSIGNED:Our objective was to determine the feasibility and preliminary efficacy of a behavioral nudge on adoption of a clinical decision support (CDS) tool. MATERIALS AND METHODS/UNASSIGNED:We conducted a pilot cluster nonrandomized controlled trial in 2 Emergency Departments (EDs) at a large academic healthcare system in the New York metropolitan area. We tested 2 versions of a CDS tool for pulmonary embolism (PE) risk assessment developed on a web-based electronic health record-agnostic platform. One version included behavioral nudges incorporated into the user interface. RESULTS/UNASSIGNED: < .001). DISCUSSION/UNASSIGNED:We demonstrated feasibility and preliminary efficacy of a PE risk prediction CDS tool developed using insights from behavioral science. The tool is well-positioned to be tested in a large randomized clinical trial. TRIAL REGISTRATION/UNASSIGNED:Clinicaltrials.gov (NCT05203185).
PMCID:11293639
PMID: 39091509
ISSN: 2574-2531
CID: 5731572
Uptake of Cancer Genetic Services for Chatbot vs Standard-of-Care Delivery Models: The BRIDGE Randomized Clinical Trial
Kaphingst, Kimberly A; Kohlmann, Wendy K; Lorenz Chambers, Rachelle; Bather, Jemar R; Goodman, Melody S; Bradshaw, Richard L; Chavez-Yenter, Daniel; Colonna, Sarah V; Espinel, Whitney F; Everett, Jessica N; Flynn, Michael; Gammon, Amanda; Harris, Adrian; Hess, Rachel; Kaiser-Jackson, Lauren; Lee, Sang; Monahan, Rachel; Schiffman, Joshua D; Volkmar, Molly; Wetter, David W; Zhong, Lingzi; Mann, Devin M; Ginsburg, Ophira; Sigireddi, Meenakshi; Kawamoto, Kensaku; Del Fiol, Guilherme; Buys, Saundra S
IMPORTANCE/UNASSIGNED:Increasing numbers of unaffected individuals could benefit from genetic evaluation for inherited cancer susceptibility. Automated conversational agents (ie, chatbots) are being developed for cancer genetics contexts; however, randomized comparisons with standard of care (SOC) are needed. OBJECTIVE/UNASSIGNED:To examine whether chatbot and SOC approaches are equivalent in completion of pretest cancer genetic services and genetic testing. DESIGN, SETTING, AND PARTICIPANTS/UNASSIGNED:This equivalence trial (Broadening the Reach, Impact, and Delivery of Genetic Services [BRIDGE] randomized clinical trial) was conducted between August 15, 2020, and August 31, 2023, at 2 US health care systems (University of Utah Health and NYU Langone Health). Participants were aged 25 to 60 years, had had a primary care visit in the previous 3 years, were eligible for cancer genetic evaluation, were English or Spanish speaking, had no prior cancer diagnosis other than nonmelanoma skin cancer, had no prior cancer genetic counseling or testing, and had an electronic patient portal account. INTERVENTION/UNASSIGNED:Participants were randomized 1:1 at the patient level to the study groups at each site. In the chatbot intervention group, patients were invited in a patient portal outreach message to complete a pretest genetics education chat. In the enhanced SOC control group, patients were invited to complete an SOC pretest appointment with a certified genetic counselor. MAIN OUTCOMES AND MEASURES/UNASSIGNED:Primary outcomes were completion of pretest cancer genetic services (ie, pretest genetics education chat or pretest genetic counseling appointment) and completion of genetic testing. Equivalence hypothesis testing was used to compare the study groups. RESULTS/UNASSIGNED:This study included 3073 patients (1554 in the chatbot group and 1519 in the enhanced SOC control group). 
Their mean (SD) age at outreach was 43.8 (9.9) years, and most (2233 of 3063 [72.9%]) were women. A total of 204 patients (7.3%) were Black, 317 (11.4%) were Latinx, and 2094 (75.0%) were White. The estimated percentage point difference for completion of pretest cancer genetic services between groups was 2.0 (95% CI, -1.1 to 5.0). The estimated percentage point difference for completion of genetic testing was -1.3 (95% CI, -3.7 to 1.1). Analyses suggested equivalence in the primary outcomes. CONCLUSIONS AND RELEVANCE/UNASSIGNED:The findings of the BRIDGE equivalence trial support the use of chatbot approaches to offer cancer genetic services. Chatbot tools can be a key component of sustainable and scalable population health management strategies to enhance access to cancer genetic services. TRIAL REGISTRATION/UNASSIGNED:ClinicalTrials.gov Identifier: NCT03985852.
PMCID:11385050
PMID: 39250153
ISSN: 2574-3805
CID: 5690012
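The BRIDGE trial above declares equivalence because each reported confidence interval falls inside a prespecified margin. A minimal sketch of that check (the ±10 percentage-point margin is an illustrative assumption; the trial's actual margin is not stated in the abstract):

```python
# Equivalence via confidence-interval inclusion: two groups are declared
# equivalent when the CI for their difference lies entirely within a
# prespecified margin. The margin below is illustrative, not the trial's.

def equivalent(ci_low: float, ci_high: float, margin: float = 10.0) -> bool:
    """True if the CI for a percentage-point difference sits inside (-margin, margin)."""
    return -margin < ci_low and ci_high < margin

# Differences reported in the BRIDGE abstract (percentage points):
print(equivalent(-1.1, 5.0))   # pretest genetic services completion -> True
print(equivalent(-3.7, 1.1))   # genetic testing completion -> True
```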
Mixed methods assessment of the influence of demographics on medical advice of ChatGPT
Andreadis, Katerina; Newman, Devon R; Twan, Chelsea; Shunk, Amelia; Mann, Devin M; Stevens, Elizabeth R
OBJECTIVES/OBJECTIVE:To evaluate demographic biases in diagnostic accuracy and health advice between generative artificial intelligence (AI) (ChatGPT GPT-4) and traditional symptom checkers like WebMD. MATERIALS AND METHODS/METHODS:Combination symptom and demographic vignettes were developed for the 27 most common symptom complaints. Standardized prompts, written from a patient perspective, with varying demographic permutations of age, sex, and race/ethnicity were entered into ChatGPT (GPT-4) between July and August 2023. In total, 3 runs of 540 ChatGPT prompts were compared to the corresponding WebMD Symptom Checker output using a mixed-methods approach. In addition to diagnostic correctness, the associated text generated by ChatGPT was analyzed for readability (using Flesch-Kincaid Grade Level) and qualitative aspects like disclaimers and demographic tailoring. RESULTS:ChatGPT matched WebMD in 91% of diagnoses, with a 24% top diagnosis match rate. Diagnostic accuracy was not significantly different across demographic groups, including age, race/ethnicity, and sex. ChatGPT's urgent care recommendations and demographic tailoring were presented significantly more often to 75-year-olds versus 25-year-olds (P < .01) but were not statistically different among race/ethnicity and sex groups. The GPT text was suitable for college students, with no significant demographic variability. DISCUSSION/CONCLUSIONS:The use of non-health-tailored generative AI, like ChatGPT, for simple symptom-checking functions provides comparable diagnostic accuracy to commercially available symptom checkers and does not demonstrate significant demographic bias in this setting. The text accompanying differential diagnoses, however, suggests demographic tailoring that could potentially introduce bias. CONCLUSION/CONCLUSIONS:These results highlight the need for continued rigorous evaluation of AI-driven medical platforms, focusing on demographic biases to ensure equitable care.
PMID: 38679900
ISSN: 1527-974x
CID: 5651762
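The readability analysis above used the Flesch-Kincaid Grade Level, which maps word, sentence, and syllable counts to a US school grade. A minimal sketch of the standard formula (counts are supplied directly; the example numbers are hypothetical, not the study's data):

```python
def fk_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from raw counts (standard published formula)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage: 100 words, 5 sentences, 150 syllables
print(round(fk_grade(100, 5, 150), 2))  # -> 9.91, roughly a 10th-grade level
```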
Bridging Gaps with Generative AI: Enhancing Hypertension Monitoring Through Patient and Provider Insights
Andreadis, Katerina; Rodriguez, Danissa V; Zakreuskaya, Anastasiya; Chen, Ji; Gonzalez, Javier; Mann, Devin
This study introduces a Generative Artificial Intelligence (GenAI) assistant designed to address key challenges in Remote Patient Monitoring (RPM) for hypertension. After a comprehensive needs assessment from clinicians and patients, we identified pivotal issues in RPM data management and patient engagement. The GenAI RPM assistant integrates a patient-facing chatbot, clinician-facing smart summaries, and automated draft portal messages to enhance communication and streamline data review. Validated through six rounds of testing and evaluations by ten participants, the initial prototype was positively received, highlighting the importance of personalized interactions. Our findings demonstrate GenAI's potential to improve RPM by optimizing data management and enhancing patient-provider communication.
PMID: 39176946
ISSN: 1879-8365
CID: 5681122
The Impact of an Electronic Best Practice Advisory on Patients' Physical Activity and Cardiovascular Risk Profile
McCarthy, Margaret M; Szerencsy, Adam; Fletcher, Jason; Taza-Rocano, Leslie; Weintraub, Howard; Hopkins, Stephanie; Applebaum, Robert; Schwartzbard, Arthur; Mann, Devin; D'Eramo Melkus, Gail; Vorderstrasse, Allison; Katz, Stuart D
BACKGROUND:Regular physical activity (PA) is a component of cardiovascular health and is associated with a lower risk of cardiovascular disease (CVD). However, only about half of US adults achieve the current PA recommendations. OBJECTIVE:The study purpose was to implement PA counseling using a clinical decision support tool in a preventive cardiology clinic and to assess changes in CVD risk factors in a sample of patients enrolled over 12 weeks of PA monitoring. METHODS:This intervention, piloted for 1 year, had 3 components embedded in the electronic health record: assessment of patients' PA, an electronic prompt for providers to counsel patients reporting low PA, and patient monitoring using a Fitbit. Cardiovascular disease risk factors included PA (self-report and Fitbit), body mass index, blood pressure, lipids, and cardiorespiratory fitness assessed with the 6-minute walk test. Depression and quality of life were also assessed. Paired t tests assessed changes in CVD risk. RESULTS:The sample enrolled in remote patient monitoring (n = 59) were primarily female (51%) and White (76%) adults with a mean age of 61.13 ± 11.6 years. Self-reported PA significantly improved over 12 weeks ( P = .005), but Fitbit steps did not ( P = .07). There was a significant improvement in cardiorespiratory fitness (469 ± 108 vs 494 ± 132 m, P = .0034), and 23 participants (42%) improved at least 25 m, signifying a clinically meaningful improvement. Only 4 participants were lost to follow-up over 12 weeks of monitoring. CONCLUSIONS:Patients may need more frequent reminders to be active after an initial counseling session, perhaps via automated messages triggered by step counts syncing to their electronic health record.
PMCID:10787798
PMID: 37467192
ISSN: 1550-5049
CID: 5738192
Virtual-first care: Opportunities and challenges for the future of diagnostic reasoning
Lawrence, Katharine; Mann, Devin
PMID: 38221668
ISSN: 1743-498x
CID: 5732542
From silos to synergy: integrating academic health informatics with operational IT for healthcare transformation
Mann, Devin M; Stevens, Elizabeth R; Testa, Paul; Mherabi, Nader
We have entered a new age of health informatics—applied health informatics—where digital health innovation cannot be pursued without considering operational needs. In this new digital health era, creating an integrated applied health informatics system will be essential for health systems to achieve informatics healthcare goals. Integration of information technology (IT) and health informatics does not naturally occur without a deliberate and intentional shift towards unification. Recognizing this, NYU Langone Health's (NYULH) Medical Center IT (MCIT) has taken proactive measures to vertically integrate academic informatics and operational IT through the establishment of the MCIT Department of Health Informatics (DHI). The creation of the NYULH DHI showcases the drivers, challenges, and ultimate successes of our enterprise effort to align academic health informatics with IT, providing a model for the creation of the applied health informatics programs required for academic health systems to thrive in the increasingly digitized healthcare landscape.
PMCID:11233608
PMID: 38982211
ISSN: 2398-6352
CID: 5732312
Large Language Model-Based Responses to Patients' In-Basket Messages
Small, William R; Wiesenfeld, Batia; Brandfield-Harvey, Beatrix; Jonassen, Zoe; Mandal, Soumik; Stevens, Elizabeth R; Major, Vincent J; Lostraglio, Erin; Szerencsy, Adam; Jones, Simon; Aphinyanaphongs, Yindalon; Johnson, Stephen B; Nov, Oded; Mann, Devin
IMPORTANCE/UNASSIGNED:Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful. OBJECTIVES/UNASSIGNED:To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy. DESIGN, SETTING, AND PARTICIPANTS/UNASSIGNED:This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI. EXPOSURES/UNASSIGNED:Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response. MAIN OUTCOMES AND MEASURES/UNASSIGNED:PCPs rated responses' information content quality (eg, relevance), using a Likert scale, communication quality (eg, verbosity), using a Likert scale, and whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy. RESULTS/UNASSIGNED:A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. 
GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01, U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47], P = .49, t = -0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), although not significantly so (P = .07), and more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%). CONCLUSIONS/UNASSIGNED:In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a significant concern for patients with low health or English literacy.
PMCID:11252893
PMID: 39012633
ISSN: 2574-3805
CID: 5686582
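The rating comparisons above report Mann-Whitney U statistics, the usual nonparametric choice for Likert-scale data. A minimal pure-Python sketch of the U statistic (the rating lists are hypothetical, not the study's data):

```python
def mann_whitney_u(x: list, y: list) -> float:
    """U statistic for sample x: count the pairs where x_i beats y_j (ties score 0.5)."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

# Hypothetical 1-5 Likert ratings for two sets of drafted responses:
genai = [4, 5, 3, 4]
hcp = [3, 3, 4, 2]
print(mann_whitney_u(genai, hcp))  # -> 13.0
```

In practice the same statistic (plus the P value) comes from `scipy.stats.mannwhitneyu`; the hand-rolled version here just makes the pairwise-comparison definition explicit.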
Navigating Remote Blood Pressure Monitoring-The Devil Is in the Details
Schoenthaler, Antoinette M; Richardson, Safiya; Mann, Devin
PMID: 38829621
ISSN: 2574-3805
CID: 5665042