

Crowdsourcing user studies with Mechanical Turk

 

Collecting user input is important for many aspects of the design process, and includes techniques ranging from informal user surveys to rigorous laboratory studies. However, the costs involved in engaging users for evaluation often require practitioners to trade off among sample size, time requirements, and monetary cost. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users at low time and monetary cost. Here we investigate the utility of a micro-task market for collecting user input and measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
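As a minimal sketch of what posting such a micro user-evaluation task looks like in practice, the snippet below creates a HIT through the modern AWS Mechanical Turk requester API via boto3 (which post-dates the 2008-era API used in the paper). The task title, reward, worker count, and form file are illustrative placeholders, not the paper's actual materials.

```python
# Minimal sketch (not from the paper): posting a micro user-evaluation task
# as a Mechanical Turk HIT via the AWS "mturk" API in boto3.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint lets you pilot a task design without paying workers.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An HTMLQuestion wraps the survey/rating form shown to each worker.
with open("rating_task.html") as f:  # hypothetical task form
    html = f.read()

question_xml = f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[{html}]]></HTMLContent>
  <FrameHeight>600</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Rate the quality of a short article",  # example task
    Description="Read a short article and answer a few rating questions.",
    Keywords="survey, rating, user study",
    Reward="0.10",                     # small payment per completed task
    MaxAssignments=50,                 # number of independent workers
    LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT remains available
    AssignmentDurationInSeconds=15 * 60,
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```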

citation

Kittur, A.; Chi, E. H.; Suh, B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the 26th Annual ACM Conference on Human Factors in Computing Systems (CHI '08); 2008 April 5-10; Florence, Italy. New York: ACM; 2008; 453-456.

copyright

Copyright © ACM, 2008. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI '08 http://doi.acm.org/10.1145/1357054.1357127