Searched for: person:on272 in-biosketch:yes

Total Results: 119


Matching individual attributes with task types in collaborative citizen science

Nakayama, Shinnosuke; Torre, Marina; Nov, Oded; Porfiri, Maurizio
In citizen science, participants' productivity is imperative to project success. We investigate the feasibility of a collaborative approach to citizen science, within which productivity is enhanced by capitalizing on the diversity of individual attributes among participants. Specifically, we explore the possibility of enhancing productivity by integrating multiple individual attributes to inform the choice of which task should be assigned to which individual. To that end, we collect data in an online citizen science project composed of two task types: (i) filtering images of interest from an image repository in a limited time, and (ii) allocating tags on the object in the filtered images over unlimited time. The first task is assigned to those who have more experience in playing action video games, and the second task to those who have higher intrinsic motivation to participate. While each attribute has weak predictive power for task performance, we demonstrate a greater increase in productivity when assigning participants to tasks based on a combination of these attributes. We acknowledge that such an increase over random assignment is modest and could be offset by the effort of implementing our attribute-based task assignment scheme. This study constitutes a first step toward understanding and capitalizing on individual differences in attributes to enhance productivity in collaborative citizen science.
PMCID: 7924433
PMID: 33816862
ISSN: 2376-5992
CID: 4968892

Rational inattention, competitive supply, and psychometrics

Caplin, Andrew; Csaba, Daniel; Leahy, John; Nov, Oded
Cambridge, MA : National Bureau of Economic Research, November 2018
Extent: 44 p. ; 22 cm
ISBN: n/a
CID: 4347172

Exploring Genetic Data Across Individuals: Design and Evaluation of a Novel Comparative Report Tool

Westendorf, Lauren; Shaer, Orit; Pollalis, Christina; Verish, Clarissa; Nov, Oded; Ball, Mad Price
BACKGROUND: The growth in the availability of personal genomic data to nonexperts poses multiple challenges to human-computer interaction research; data are highly sensitive, complex, and have health implications for individuals and families. However, there has been little research on how nonexpert users explore their genomic data. OBJECTIVE: We focus on how to support nonexperts in exploring and comparing their own personal genomic report with those of other people. We designed and evaluated CrossGenomics, a novel tool for comparing personal genetic reports, which enables exploration of shared and unshared genetic variants. Focusing on communicating comparative impact, rarity, and certainty, we evaluated alternative novel interactive prototypes. METHODS: We conducted 3 user studies. The first focused on assessing the usability and understandability of a prototype that facilitates the comparison of reports from 2 family members. Following a design iteration, we studied how various prototypes support the comparison of genetic reports of a 4-person family. Finally, we evaluated the needs of early adopters (people who share their genetic reports publicly) for comparing their genetic reports with those of others. RESULTS: In the first study, sunburst- and Venn-based comparisons of two genomes led to significantly higher domain comprehension, compared with the linear comparison and with the commonly used tabular format. However, results show gaps between objective and subjective comprehension, as sunburst users reported significantly lower perceived understanding and higher levels of confusion than the users of the tabular report. In the second study, users who were allowed to switch between the different comparison views showed higher comprehension levels, as well as more complex reasoning, than users who were limited to a single comparison view. In the third study, 35% (17/49) of participants reported learning something new from comparing their own data with another person's data. Users indicated that filtering and toggling between comparison views were the most useful features. CONCLUSIONS: Our findings (1) highlight features and visualizations that show strengths in facilitating user comprehension of genomic data, (2) demonstrate the value of affording users the flexibility to examine the same report using multiple views, and (3) emphasize users' needs in comparison of genomic data. We conclude with design implications for engaging nonexperts with complex multidimensional genomic data.
PMCID: 6231826
PMID: 30249582
ISSN: 1438-8871
CID: 4345732

Social signals as design interventions for enhancing citizen science contributions

Diner, David; Nakayama, Shinnosuke; Nov, Oded; Porfiri, Maurizio
ISI:000427207200008
ISSN: 1369-118X
CID: 4346202

Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations [Meeting Abstract]

Cartwright, Mark; Salamon, Justin; Seals, Ayanna; Nov, Oded; Bello, Juan Pablo
ISI:000446384600068
ISSN: 1520-6149
CID: 4346222

AI-assisted game debugging with Cicero [Meeting Abstract]

Machado, Tiago; Gopstein, Daniel; Nealen, Andy; Nov, Oded; Togelius, Julian
ISI:000451175500002
ISSN: n/a
CID: 4346232

The Influence of Social Information and Self-expertise on Emergent Task Allocation in Virtual Groups

Nakayama, Shinnosuke; Diner, David; Holland, Jacob G.; Bloch, Guy; Porfiri, Maurizio; Nov, Oded
ISI:000451622700001
ISSN: 2296-701X
CID: 4346242

The Persuasive Power of Algorithmic and Crowdsourced Advice

Gunaratne, Junius; Zalmanson, Lior; Nov, Oded
ISI:000453555300005
ISSN: 0742-1222
CID: 4346252

Eliciting Users' Demand for Interface Features [Meeting Abstract]

Nov, Oded; Su, Han
ISI:000509673103067
ISSN: 2159-6368
CID: 4346352

Seeing sound: Investigating the effects of visualizations and complexity on crowdsourced audio annotations

Cartwright, Mark; Seals, Ayanna; Salamon, Justin; Williams, Alex; Mikloska, Stefanie; MacConnell, Duncan; Law, Edith; Bello, Juan P.; Nov, Oded
Audio annotation is key to developing machine-listening systems; yet, effective ways to accurately and rapidly obtain crowdsourced audio annotations are understudied. In this work, we seek to quantify the reliability/redundancy trade-off in crowdsourced soundscape annotation, investigate how visualizations affect accuracy and efficiency, and characterize how performance varies as a function of audio characteristics. Using a controlled experiment, we varied sound visualizations and the complexity of soundscapes presented to human annotators. Results show that more complex audio scenes result in lower annotator agreement, and that spectrogram visualizations are superior in producing higher-quality annotations at a lower cost in time and human labor. We also found that recall is more affected than precision by soundscape complexity, and that mistakes can often be attributed to certain sound event characteristics. These findings have implications not only for how we should design annotation tasks and interfaces for audio data, but also for how we train and evaluate machine-listening systems.
SCOPUS: 85061277980
ISSN: 2573-0142
CID: 4347112