Searched for: person:rizzoj01 or hudsot01; active:yes; exclude-minors:true
Total Results: 107

Using Virtual Reality to Enhance Mobility, Safety, and Equity for Persons with Vision Loss in Urban Environments

Ricci, Fabiana Sofia; Ukegbu, Charles K; Krassner, Anne; Hazarika, Sanjukta; White, Jade; Porfiri, Maurizio; Rizzo, John-Ross
This study explores the use of virtual reality (VR) as an innovative tool to enhance awareness, understanding, and acceptance of accessibility needs for persons with vision loss (VL). Through a VR-based workshop developed in collaboration with New York City's Department of Transportation (DOT), participants experienced immersive simulations of VL and related mobility challenges. The methodology included the development of a VR environment, simulations of vision loss, testing with the DOT team during the workshop, and an assessment of changes in participants' knowledge, confidence in addressing accessibility challenges, and overall perception through pre- and post-intervention questionnaires. Participants included urban planners, designers, and architects. Results showed a significant increase in awareness of VL-related challenges that affect design guidelines, as well as improved confidence in addressing such challenges. Participants also expressed strong support for VR as a pedagogical tool, noting its potential for reshaping professional practices, improving capacity building, and enhancing inclusive design. The study demonstrates the effectiveness of VR as an experiential learning platform, fostering empathy and a long-term commitment to integrating VL considerations into urban design. These findings highlight the transformative potential of VR in advancing equity and accessibility in urban environments.
PMID: 40014220
ISSN: 1468-2869
CID: 5801222

Haptics-based, higher-order sensory substitution designed for object negotiation in blindness and low vision: Virtual Whiskers

Feng, Junchi; Hamilton-Fletcher, Giles; Hudson, Todd E; Beheshti, Mahya; Porfiri, Maurizio; Rizzo, John-Ross
PURPOSE/UNASSIGNED:People with blindness and low vision (pBLV) face challenges in navigating their environments, and mobility aids are crucial for enhancing their independence and safety. This paper presents an electronic travel aid that leverages a haptic-based, higher-order sensory substitution approach called Virtual Whiskers, designed to help pBLV negotiate obstacles effectively, efficiently, and safely. MATERIALS AND METHODS/UNASSIGNED:Virtual Whiskers is equipped with an array of modular vibration units that operate independently to deliver haptic feedback to users. Virtual Whiskers features two navigation modes, open path mode and depth mode, each addressing obstacle negotiation from a different perspective. The open path mode detects and delineates a traversable area within an analyzed field of view and then guides the user in the most traversable direction with adaptive vibratory feedback. Depth mode assists users in negotiating obstacles by highlighting spatial areas with prominent obstacles; haptic feedback is generated by re-mapping proximity to vibration intensity. We recruited 10 participants with blindness or low vision for user testing of Virtual Whiskers. RESULTS/UNASSIGNED:Both approaches reduce hesitation time (idle periods) and decrease the number of cane contacts with objects and walls. CONCLUSIONS/UNASSIGNED:Virtual Whiskers is a promising obstacle negotiation strategy that demonstrates great potential to assist with pBLV navigation.
PMID: 39982810
ISSN: 1748-3115
CID: 5801602

Multi-faceted sensory substitution using wearable technology for curb alerting: a pilot investigation with persons with blindness and low vision

Ruan, Ligao; Hamilton-Fletcher, Giles; Beheshti, Mahya; Hudson, Todd E; Porfiri, Maurizio; Rizzo, John-Ross
Curbs separate the edge of raised sidewalks from the street and are crucial to locate in urban environments, as they help delineate safe pedestrian zones from dangerous vehicular lanes. However, the curbs themselves are also significant navigation hazards, particularly for people who are blind or have low vision (pBLV). The challenges faced by pBLV in detecting and properly orienting themselves for these abrupt elevation changes can lead to falls and serious injuries. Despite recent advancements in assistive technologies, the detection and early warning of curbs remains a largely unsolved challenge. This paper aims to tackle this gap by introducing a novel, multi-faceted sensory substitution approach hosted on a smart wearable; the platform leverages an RGB camera and an embedded system to capture and segment curbs in real time and provide early warning and orientation information. The system utilizes a YOLOv8 segmentation model, trained on our custom curb dataset, to interpret camera input. The system output consists of adaptive auditory beeps, abstract sonifications, and speech, which convey curb distance and orientation. Through human-subjects experimentation, we demonstrate the effectiveness of the system as compared to the white cane. Results show that our system can provide advanced warning through a larger safety window than the cane, while offering nearly identical curb orientation information. Future enhancements will focus on expanding our curb segmentation dataset, improving distance estimations through advanced 3D sensors and AI models, refining system calibration and stability, and developing user-centric sonification methods to cater to a diverse range of visual impairments.
PMID: 39954234
ISSN: 1748-3115
CID: 5794092

Reducing barriers through education: A scoping review calling for structured disability curricula in surgical training programs

Keegan, Grace; Rizzo, John-Ross; Gonzalez, Cristina M; Joseph, Kathie-Ann
BACKGROUND:Patients with disabilities face widespread barriers to accessing surgical care given inaccessible health systems, resulting in poor clinical outcomes and perpetuation of health inequities. One barrier is the lack of education, and therefore awareness, among trainees and providers of the need for reasonable accommodations for surgical patients with disabilities. METHODS:We conducted a scoping review of the literature on the current state of disability curricula in medical education and graduate residency training. RESULTS:While the literature does demonstrate a causal link between reasonable accommodation training and positive patient-provider relationships and improved clinical outcomes, in practice, disability-focused curricula are rare, limited in time, and often restricted to awareness-based didactic courses in medical education and surgical training. CONCLUSIONS:The absence of structured curricula to educate on anti-ableism and care for patients with disabilities promotes a system of structural ableism. Expanding disability curricula for medical students and trainees may be an opportunity to intervene and promote better surgical care for all patients.
PMID: 39504925
ISSN: 1879-1883
CID: 5763982

Navigation Training for Persons With Visual Disability Through Multisensory Assistive Technology: Mixed Methods Experimental Study

Ricci, Fabiana Sofia; Liguori, Lorenzo; Palermo, Eduardo; Rizzo, John-Ross; Porfiri, Maurizio
BACKGROUND:Visual disability is a growing problem for many middle-aged and older adults. Conventional mobility aids, such as white canes and guide dogs, have notable limitations that have led to increasing interest in electronic travel aids (ETAs). Despite remarkable progress, current ETAs lack empirical evidence and realistic testing environments and often focus on the substitution or augmentation of a single sense. OBJECTIVE:This study aims to (1) establish a novel virtual reality (VR) environment to test the efficacy of ETAs in complex urban environments for a simulated visual impairment (VI) and (2) evaluate the impact of haptic and audio feedback, individually and combined, on navigation performance, movement behavior, and perception. Through this study, we aim to address gaps to advance the pragmatic development of assistive technologies (ATs) for persons with VI. METHODS:The VR platform was designed to resemble a subway station environment with the most common challenges faced by persons with VI during navigation. This environment was used to test our multisensory, AT-integrated VR platform among 72 healthy participants performing an obstacle avoidance task while experiencing symptoms of VI. Each participant performed the task 4 times: once with haptic feedback, once with audio feedback, once with both feedback types, and once without any feedback. Data analysis encompassed metrics such as completion time, head and body orientation, and trajectory length and smoothness. To evaluate the effectiveness and interaction of the 2 feedback modalities, we conducted a 2-way repeated measures ANOVA on continuous metrics and a Scheirer-Ray-Hare test on discrete ones. We also conducted a descriptive statistical analysis of participants' answers to a questionnaire, assessing their experience and preference for feedback modalities. 
RESULTS:Results from our study showed that haptic feedback significantly reduced collisions (P=.05) and the variability of the pitch angle of the head (P=.02). Audio feedback improved trajectory smoothness (P=.006) and mitigated the increase in trajectory length from haptic feedback alone (P=.04). Participants reported a high level of engagement during the experiment (52/72, 72%) and found it interesting (42/72, 58%). However, when it came to feedback preferences, less than half of the participants (29/72, 40%) favored combined feedback modalities, indicating that a majority preferred dedicated single modalities over combined ones. CONCLUSIONS:AT is crucial for individuals with VI; however, it often lacks user-centered design principles. Research should prioritize consumer-oriented methodologies, testing devices in a staged manner with progression toward more realistic, ecologically valid settings to ensure safety. Our multisensory, AT-integrated VR system takes a holistic approach, offering a first step toward enhancing users' spatial awareness, promoting safer mobility, and holding potential for applications in medical treatment, training, and rehabilitation. Technological advancements can further refine such devices, significantly improving independence and quality of life for those with VI.
PMID: 39556804
ISSN: 2369-2529
CID: 5758162

The criticality of reasonable accommodations: A scoping review revealing gaps in care for patients with blindness and low vision

Keegan, Grace; Rizzo, John-Ross; Morris, Megan A; Joseph, Kathie-Ann
BACKGROUND:Health and healthcare disparities for surgical patients with blindness and low vision (pBLV) stem from inaccessible healthcare systems that lack universal design principles or, at a minimum, reasonable accommodations (RAs). OBJECTIVES/OBJECTIVE:We aimed to identify barriers to developing and implementing RAs in the surgical setting and to provide a review of best practices for providing RAs. METHODS:We conducted a search of PubMed for evidence of reasonable accommodations, or the lack thereof, in the surgical setting. Articles related to gaps and barriers in providing RAs for pBLV, or to best practices for supporting RAs, were reviewed for the study. RESULTS:Barriers to the implementation of reasonable accommodations, and, accordingly, best practices for achieving equity for pBLV, relate to policies and systems, staff knowledge and attitudes, and materials and technology. CONCLUSIONS:These inequities for pBLV require comprehensive frameworks that offer, maintain, and support education about disability disparities and RAs in the surgical field. Providing RAs for surgical pBLV, and all patients with disabilities, is an important and impactful step towards creating a more equitable and anti-ableist health system.
PMID: 39550827
ISSN: 1879-1883
CID: 5757912

Evaluating the efficacy of UNav: A computer vision-based navigation aid for persons with blindness or low vision

Yang, Anbang; Tamkittikhun, Nattachart; Hamilton-Fletcher, Giles; Ramdhanie, Vinay; Vu, Thu; Beheshti, Mahya; Hudson, Todd; Vedanthan, Rajesh; Riewpaiboon, Wachara; Mongkolwat, Pattanasak; Feng, Chen; Rizzo, John-Ross
UNav is a computer-vision-based localization and navigation aid that provides step-by-step route instructions to reach selected destinations, without any infrastructure, in both indoor and outdoor environments. Despite initial literature highlighting UNav's potential, its clinical efficacy has not yet been rigorously evaluated. Herein, we assess UNav against standard in-person travel directions (SIPTD) for persons with blindness or low vision (PBLV) in an ecologically valid environment using a non-inferiority design. Twenty PBLV subjects (age = 38 ± 8.4; nine females) were recruited and asked to navigate to a variety of destinations, over short-range distances (<200 m), in unfamiliar spaces, using either UNav or SIPTD. Navigation performance was assessed with nine dependent variables capturing travel confidence as well as spatial and temporal performance, including path efficiency, total time, and wrong turns. The results suggest that UNav is not only non-inferior to the standard of care in wayfinding (SIPTD) but also superior on 8 out of 9 metrics. This study highlights the range of benefits computer-vision-based aids provide to PBLV in short-range navigation and provides key insights into how users benefit from this systematic form of computer-aided guidance, demonstrating transformative promise for educational attainment, gainful employment, and recreational participation.
PMID: 39137956
ISSN: 1949-3614
CID: 5726822

A Multi-Modal Foundation Model to Assist People with Blindness and Low Vision in Environmental Interaction

Hao, Yu; Yang, Fan; Huang, Hao; Yuan, Shuaihang; Rangan, Sundeep; Rizzo, John-Ross; Wang, Yao; Fang, Yi
People with blindness and low vision (pBLV) encounter substantial challenges when it comes to comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to vision loss, pBLV have difficulty accessing and identifying potential tripping hazards independently. Previous assistive technologies for the visually impaired often struggle in real-world scenarios due to the need for constant training and a lack of robustness, which limits their effectiveness, especially in dynamic and unfamiliar environments where accurate and efficient perception is crucial. Therefore, we frame our research question in this paper as: How can we assist pBLV in recognizing scenes, identifying objects, and detecting potential tripping hazards in unfamiliar environments, where existing assistive technologies often falter due to their lack of robustness? We hypothesize that by leveraging large pretrained foundation models and prompt engineering, we can create a system that effectively addresses the challenges faced by pBLV in unfamiliar environments. Motivated by the prevalence of large pretrained foundation models, particularly in assistive robotics applications, owing to the accurate perception and robust contextual understanding in real-world scenarios induced by their extensive pretraining, we present a pioneering approach that leverages foundation models to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environment and providing warnings about potential risks. Specifically, our method begins by leveraging a large image-tagging model (i.e., the Recognize Anything Model (RAM)) to identify all common objects present in the captured images. The recognition results and user query are then integrated into a prompt, tailored specifically for pBLV, using prompt engineering.
By combining the prompt and input image, a vision-language foundation model (i.e., InstructBLIP) generates detailed and comprehensive descriptions of the environment and identifies potential risks by analyzing environmental objects and scene landmarks relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method can recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.
PMCID:11122237
PMID: 38786557
ISSN: 2313-433x
CID: 5655102

Disparities in Care for Surgical Patients with Blindness and Low Vision: A Call for Inclusive Wound Care Strategies in the Post-Operative Period

Keegan, Grace; Rizzo, John-Ross; Morris, Megan A; Panarelli, Joseph; Joseph, Kathie-Ann
PMID: 38660799
ISSN: 1528-1140
CID: 5755932

Feasibility and Clinician Perspectives of the Visual Symptoms and Signs Screen: A Multisite Pilot Study

Roberts, Pamela S.; Wertheimer, Jeffrey; Ouellette, Debra; Hreha, Kimberly; Watters, Kelsey; Fielder, Jaimee; Graf, Min Jeong P.; Weden, Kathleen M.; Rizzo, John-Ross
Background: The Visual Symptoms and Signs Screen (V-SASS) is a tool to identify vision deficits and facilitate referrals to vision specialists. The study objectives were to determine the feasibility of the V-SASS and to gather clinician perspectives on it. Methods: Prospective, multisite study with 141 new-onset stroke participants. After V-SASS administration, feasibility and predictive success were assessed. Results: The V-SASS identified vision symptoms and signs with high feasibility (>75%). Of those who screened positive, 93.1% had deficits in visual function or functional vision. Conclusions: The V-SASS was found to be feasible in multiple settings, to accurately identify vision deficits, and to appropriately trigger vision referrals.
SCOPUS:85182920425
ISSN: 0882-7524
CID: 5629402