Searched for: in-biosketch:yes person:hudsot01
Anti-RGS8 Paraneoplastic Neurologic Syndrome Presenting with Skew Deviation and Mild Cerebellar Dysfunction [Case Report]
Jauregui, Ruben; Evens, Andrew M; Zekeridou, Anastasia; Steriade, Claude; Hudson, Todd; Voelbel, Gerald T; Galetta, Steven L; Rucker, Janet C
RGS8-associated paraneoplastic neurologic syndrome (PNS) is a recently described disorder associated with lymphomas that typically presents with severe, rapidly progressing cerebellar dysfunction. We describe a patient who presented with mild signs of cerebellar dysfunction, including ocular motor abnormalities and impaired tandem gait. CSF showed elevated protein and a neural-restricted antibody pattern. Mesenteric lymphadenopathy on abdominal CT was biopsied and diagnosed as follicular B-cell lymphoma. After four years, the previously detected antibody pattern was identified as RGS8 antibodies. This case describes the first RGS8-PNS patient presenting with a subtle, ocular motor predominant cerebellar syndrome in the setting of low-grade lymphoma.
PMID: 40146373
ISSN: 1473-4230
CID: 5816762
Haptics-based, higher-order sensory substitution designed for object negotiation in blindness and low vision: Virtual Whiskers
Feng, Junchi; Hamilton-Fletcher, Giles; Hudson, Todd E; Beheshti, Mahya; Porfiri, Maurizio; Rizzo, John-Ross
PURPOSE: People with blindness and low vision (pBLV) face challenges in navigation. Mobility aids are crucial for enhancing independence and safety. This paper presents an electronic travel aid that leverages a haptic-based, higher-order sensory substitution approach called Virtual Whiskers, designed to help pBLV navigate obstacles effectively, efficiently, and safely. MATERIALS AND METHODS: Virtual Whiskers is equipped with a plurality of modular vibration units that operate independently to deliver haptic feedback to users. Virtual Whiskers features two navigation modes: open path mode and depth mode, each addressing obstacle negotiation from a different perspective. The open path mode detects and delineates a traversable area within an analyzed field of view and then guides the user in the most traversable direction with adaptive vibratory feedback. Depth mode assists users in negotiating obstacles by highlighting spatial areas with prominent obstacles; haptic feedback is generated by re-mapping proximity to vibration intensity. We recruited 10 participants with blindness or low vision for user testing of Virtual Whiskers. RESULTS: Both approaches reduce hesitation time (idle periods) and decrease the number of cane contacts with objects and walls. CONCLUSIONS: Virtual Whiskers is a promising obstacle negotiation strategy that demonstrates great potential to assist with pBLV navigation.
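The depth mode above re-maps obstacle proximity to vibration intensity. A minimal sketch of one such mapping is below; the range limits and the linear inverse law are illustrative assumptions, not the authors' published parameters.

```python
def vibration_intensity(distance_m: float,
                        min_range: float = 0.3,
                        max_range: float = 3.0) -> float:
    """Map obstacle distance (meters) to a vibration intensity in [0, 1].

    Closer obstacles vibrate harder: anything beyond max_range is silent,
    anything at or inside min_range saturates at full intensity.
    """
    if distance_m >= max_range:
        return 0.0
    if distance_m <= min_range:
        return 1.0
    # Linear inverse mapping between the two range limits (assumed form).
    return (max_range - distance_m) / (max_range - min_range)
```

Each modular vibration unit could evaluate this independently for the obstacle nearest its assigned direction, consistent with the units operating independently.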
PMID: 39982810
ISSN: 1748-3115
CID: 5801602
Multi-faceted sensory substitution using wearable technology for curb alerting: a pilot investigation with persons with blindness and low vision
Ruan, Ligao; Hamilton-Fletcher, Giles; Beheshti, Mahya; Hudson, Todd E; Porfiri, Maurizio; Rizzo, John-Ross
Curbs separate the edge of raised sidewalks from the street and are crucial to locate in urban environments, as they help delineate safe pedestrian zones from dangerous vehicular lanes. However, the curbs themselves are also significant navigation hazards, particularly for people who are blind or have low vision (pBLV). The challenges faced by pBLV in detecting and properly orienting themselves for these abrupt elevation changes can lead to falls and serious injuries. Despite recent advancements in assistive technologies, the detection and early warning of curbs remain a largely unsolved challenge. This paper aims to tackle this gap by introducing a novel, multi-faceted sensory substitution approach hosted on a smart wearable; the platform leverages an RGB camera and an embedded system to capture and segment curbs in real time and provide early warning and orientation information. The system utilizes a YOLOv8 segmentation model, trained on our custom curb dataset, to interpret camera input. The system output consists of adaptive auditory beeps, abstract sonifications, and speech, which convey curb distance and orientation. Through human-subjects experimentation, we demonstrate the effectiveness of the system as compared to the white cane. Results show that our system can provide advanced warning through a larger safety window than the cane, while offering nearly identical curb orientation information. Future enhancements will focus on expanding our curb segmentation dataset, improving distance estimation through advanced 3D sensors and AI models, refining system calibration and stability, and developing user-centric sonification methods to cater to a diverse range of visual impairments.
PMID: 39954234
ISSN: 1748-3115
CID: 5794092
Evaluating the efficacy of UNav: A computer vision-based navigation aid for persons with blindness or low vision
Yang, Anbang; Tamkittikhun, Nattachart; Hamilton-Fletcher, Giles; Ramdhanie, Vinay; Vu, Thu; Beheshti, Mahya; Hudson, Todd; Vedanthan, Rajesh; Riewpaiboon, Wachara; Mongkolwat, Pattanasak; Feng, Chen; Rizzo, John-Ross
UNav is a computer-vision-based localization and navigation aid that provides step-by-step route instructions to reach selected destinations without any infrastructure, in both indoor and outdoor environments. Despite initial literature highlighting UNav's potential, its clinical efficacy has not yet been rigorously evaluated. Herein, we assess UNav against standard in-person travel directions (SIPTD) for persons with blindness or low vision (PBLV) in an ecologically valid environment using a non-inferiority design. Twenty BLV subjects (age = 38 ± 8.4; nine females) were recruited and asked to navigate to a variety of destinations, over short-range distances (<200 m), in unfamiliar spaces, using either UNav or SIPTD. Navigation performance was assessed with nine dependent variables covering travel confidence as well as spatial and temporal performance, including path efficiency, total time, and wrong turns. The results suggest that UNav is not only non-inferior to the standard of care in wayfinding (SIPTD) but also superior on 8 of the 9 metrics. This study highlights the range of benefits computer vision-based aids provide to PBLV in short-range navigation and provides key insights into how users benefit from this systematic form of computer-aided guidance, demonstrating transformative promise for educational attainment, gainful employment, and recreational participation.
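Among the nine dependent variables above, path efficiency is a standard navigation metric. A minimal sketch is below, using the common definition of optimal route length over route length actually walked; the abstract does not give the study's exact formula, so this is an assumption.

```python
def path_efficiency(shortest_path_m: float, walked_path_m: float) -> float:
    """Path efficiency in (0, 1]: ratio of the optimal route length to
    the distance the subject actually walked.

    1.0 means the subject took the optimal route; detours and wrong
    turns push the value toward 0.
    """
    return min(1.0, shortest_path_m / walked_path_m)
```

For example, a subject who walks 125 m when the shortest route is 100 m scores 0.8.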
PMID: 39137956
ISSN: 1949-3614
CID: 5726822
Accuracy and Usability of Smartphone-Based Distance Estimation Approaches for Visual Assistive Technology Development
Hamilton-Fletcher, Giles; Liu, Mingxin; Sheng, Diwei; Feng, Chen; Hudson, Todd E; Rizzo, John-Ross; Chan, Kevin C
PMCID:10939328
PMID: 38487094
ISSN: 2644-1276
CID: 5737842
Wearables for Persons with Blindness and Low Vision: Form Factor Matters
Han, Yangha Hank; Beheshti, Mahya; Jones, Blake; Hudson, Todd E; Seiple, William H; Rizzo, John-Ross Jr
Based on statistics from the WHO and the International Agency for the Prevention of Blindness, an estimated 43.3 million people have blindness and 295 million have moderate or severe vision impairment globally as of 2020; these staggering numbers are expected to increase to 61 million and 474 million, respectively, by 2050. Blindness and low vision (BLV) hinder many activities of daily living, as sight is beneficial to most functional tasks. Assistive technologies for persons with blindness and low vision (pBLV) consist of a wide range of aids that work in some way to enhance one's functioning and support independence. Although handheld and head-mounted approaches have been primary foci when building new platforms or devices to support function and mobility, this perspective reviews potential shortcomings of these form factors, or embodiments, and posits that a body-centered approach may overcome many of these limitations.
PMID: 37115821
ISSN: 1949-3614
CID: 5465582
A Smart Service System for Spatial Intelligence and Onboard Navigation for Individuals with Visual Impairment (VIS4ION Thailand): study protocol of a randomized controlled trial of visually impaired students at the Ratchasuda College, Thailand
Beheshti, Mahya; Naeimi, Tahereh; Hudson, Todd E; Feng, Chen; Mongkolwat, Pattanasak; Riewpaiboon, Wachara; Seiple, William; Vedanthan, Rajesh; Rizzo, John-Ross
BACKGROUND: VIS4ION (Visually Impaired Smart Service System for Spatial Intelligence and Onboard Navigation) is an advanced wearable technology that enables real-time access to microservices, providing a potential solution to close this gap and deliver consistent, reliable access to the critical spatial information needed for mobility and orientation during navigation. METHODS: We will conduct a randomized controlled trial of VIS4ION with visually impaired students at the Ratchasuda College. In addition, we will test another cohort of students for navigational, health, and well-being improvements, comparing weeks 1 to 4. We will also conduct a process evaluation according to the Saunders Framework. Finally, we will extend our computer vision and digital twinning technique to a 12-block spatial grid in Bangkok, providing aid in a more complex environment. DISCUSSION: Although electronic navigation aids seem like an attractive solution, there are several barriers to their use; chief among them is their dependence on environmental (sensor-based) infrastructure, Wi-Fi/cell connectivity infrastructure, or both. These barriers limit their widespread adoption, particularly in low- and middle-income countries. Here we propose a navigation solution that operates independently of both environmental and Wi-Fi/cell infrastructure. We predict that the proposed platform will support spatial cognition in BLV populations, augmenting personal freedom and agency, and promoting health and well-being. TRIAL REGISTRATION: ClinicalTrials.gov identifier NCT03174314, registered 2017-06-02.
PMCID:9990238
PMID: 36879333
ISSN: 1745-6215
CID: 5432642
UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision
Yang, Anbang; Beheshti, Mahya; Hudson, Todd E; Vedanthan, Rajesh; Riewpaiboon, Wachara; Mongkolwat, Pattanasak; Feng, Chen; Rizzo, John-Ross
Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task utilizes the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D-3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, this system implements Dijkstra's algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged through public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.
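The weighted-average location estimate described above can be sketched as follows; weighting each retrieved reference image's geolocation by its raw VPR similarity score is an assumption about the exact scheme, which the abstract does not specify.

```python
def estimate_location(matches: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Estimate the user's floor-plan position from VPR retrievals.

    matches: (x_m, y_m, similarity) for each retrieved reference image,
    where (x_m, y_m) is that image's geolocation on the topometric map.
    Returns the similarity-weighted mean (x, y).
    """
    total = sum(s for _, _, s in matches)
    x = sum(xm * s for xm, _, s in matches) / total
    y = sum(ym * s for _, ym, s in matches) / total
    return x, y
```

Heading would then come from the separate PnP task, and Dijkstra's algorithm would route from this estimated position to the destination node on the navigable map.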
PMCID:9696753
PMID: 36433501
ISSN: 1424-8220
CID: 5382902
Accuracy of clinical versus oculographic detection of pathological saccadic slowing
Grossman, Scott N; Calix, Rachel; Hudson, Todd; Rizzo, John Ross; Selesnick, Ivan; Frucht, Steven; Galetta, Steven L; Balcer, Laura J; Rucker, Janet C
Saccadic slowing as a component of supranuclear saccadic gaze palsy is an important diagnostic sign in multiple neurologic conditions, including degenerative, inflammatory, genetic, or ischemic lesions affecting brainstem structures responsible for saccadic generation. Little attention has been given to the accuracy with which clinicians correctly identify saccadic slowing. We compared clinician (n = 19) judgments of horizontal and vertical saccade speed on video recordings of saccades (from 9 patients with slow saccades, 3 healthy controls) to objective saccade peak velocity measurements from infrared oculographic recordings. Clinician groups included neurology residents, general neurologists, and fellowship-trained neuro-ophthalmologists. Saccades with normal peak velocities on infrared recordings were correctly identified as normal in 57% (91/171; 171 = 9 videos × 19 clinicians) of clinician decisions; saccades determined to be slow on infrared recordings were correctly identified as slow in 84% (224/266; 266 = 14 videos × 19 clinicians) of clinician decisions. Vertical saccades were correctly identified as slow more often than horizontal saccades (94% versus 74% of decisions). No significant differences were identified between clinician training levels. Reliable differentiation between normal and slow saccades is clinically challenging; clinical performance is most accurate for detection of vertical saccade slowing. Quantitative analysis of saccade peak velocities enhances accurate detection and is likely to be especially useful for detection of mild saccadic slowing.
PMID: 36183516
ISSN: 1878-5883
CID: 5359142
MICK (Mobile Integrated Cognitive Kit) app: Feasibility of an accessible tablet-based rapid picture and number naming task for concussion assessment in a division 1 college football cohort
Bell, Carter A; Rice, Lionel; Balcer, Marc J; Pearson, Randolph; Penning, Brett; Alexander, Aubrey; Roskelly, Jensyn; Nogle, Sally; Tomczyk, Chris P; Tracey, Allie J; Loftin, Megan C; Pollard-McGrandy, Alyssa M; Zynda, Aaron J; Covassin, Tracey; Park, George; Rizzo, John-Ross; Hudson, Todd; Rucker, Janet C; Galetta, Steven L; Balcer, Laura; Kaufman, David I; Grossman, Scott N
Although visual symptoms are common following concussion, quantitative measures of visual function are missing from concussion evaluation protocols on the athletic sideline. For the past half century, rapid automatized naming (RAN) tasks have demonstrated promise as quantitative neuro-visual assessment tools in the setting of head trauma and other disorders but have previously been limited in accessibility and scalability. The Mobile Interactive Cognitive Kit (MICK) app is a digital RAN test that can be downloaded on most mobile devices and can therefore provide a quantitative measure of visual function anywhere, including the athletic sideline. This investigation examined the feasibility of MICK app administration in a cohort of Division I college football players. Participants (n = 82) from a National Collegiate Athletic Association (NCAA) Division I football team underwent baseline testing on the MICK app. Total completion times of RAN tests on the MICK app were recorded; magnitudes of best time scores and between-trial learning effects were determined by paired t-test. Consistent with most timed performance measures, there were significant learning effects between the two baseline trials for both RAN tasks on the MICK app: the Mobile Universal Lexicon Evaluation System (MULES) (p < 0.001, paired t-test, mean improvement 13.3 s) and the Staggered Uneven Number (SUN) (p < 0.001, mean improvement 3.3 s). This study demonstrated that the MICK app can be feasibly administered in the setting of pre-season baseline testing in a Division I environment. These data provide a foundation for post-injury sideline testing that will include comparison to baseline in the setting of concussion.
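The between-trial learning-effect analysis above is a paired t-test on trial-1 versus trial-2 completion times. A minimal sketch is below; the sample times in the usage note are made up for illustration and are not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(trial1: list[float], trial2: list[float]) -> tuple[float, float]:
    """Paired t-test on two baseline trials of a timed RAN task.

    Returns (t statistic, mean improvement in seconds); a positive mean
    improvement means trial 2 was completed faster than trial 1.
    """
    diffs = [a - b for a, b in zip(trial1, trial2)]
    n = len(diffs)
    # One-sample t statistic on the paired differences.
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, mean(diffs)
```

For example, `paired_t([50, 60, 70, 80], [45, 54, 63, 72])` (hypothetical completion times in seconds) yields a mean improvement of 6.5 s; a p-value would then come from the t distribution with n - 1 degrees of freedom.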
PMID: 36208585
ISSN: 1878-5883
CID: 5351822