The Effect of Risk Preference on Two Crowdsourcing Mechanisms

Kunxiang Dong, Zongxiao Xie, Jie Zhen, Xiukun Zhao

School of Management Science & Engineering, Shandong University of Finance and Economics, Jinan 250014, China

China Financial Certification Authority, Beijing 100054, China

School of Business Planning, Chongqing Technology and Business University, Chongqing 400067, China

Business School, Tianjin University of Finance and Economics, Tianjin 300222, China

Corresponding Author Email: 
dkxgood@163.com, xiezongxiao@vip.163.com, zhenjie886@163.com, zhaoxiukun1986@163.com
Page: 346-359 | DOI: https://doi.org/10.18280/ama_a.540216

Received: 12 June 2017 | Accepted: 28 June 2017

OPEN ACCESS

Abstract: 

The ubiquity of the Internet has promoted the importance and prevalence of crowdsourcing, an online distributed problem-solving and production model. Crowdsourcing harnesses the collective intelligence of a crowd of web users through an open-call format, and holds great potential for government and non-profit applications. However, it is impossible to design an efficient crowdsourcing mechanism without a deep understanding of the optimal participation decisions made by sponsors and solvers. Previous studies on optimal participation decisions in crowdsourcing have mainly focused on the impact of task factors, contest forms and individual factors under the assumption of risk-neutral solvers. In reality, however, solvers are far from risk neutral: their decisions are directly shaped by their risk preferences. In light of this problem, this paper explores the impact of rewards, the number of solvers and different risk preferences on decision-making under two crowdsourcing mechanisms: maximizing the total quality (TQ) and maximizing the best quality (BQ) of the task. An all-pay auction model and a Stackelberg competition model were built to derive the optimal solutions for sponsors and solvers, and the model was then validated against data extracted from taskcn.com. The results show that: (1) the solvers' expected utilities increase with rewards and risk preference, but decrease as the number of solvers grows; (2) the task quality obtained by sponsors, whether measured by the TQ or the BQ, increases with rewards, the number of solvers and risk preference. The taskcn.com data significantly or partly support the corollaries of the proposed model.
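
To make these comparative statics concrete, the short Monte Carlo sketch below simulates a single-prize, all-pay contest in the spirit of the model. The uniform ability distribution, the heuristic bidding rule, the power utility u(x) = x^r and the parameter values are illustrative assumptions rather than the paper's equilibrium strategies; the sketch is only meant to show the direction of the reported effects.

import numpy as np

rng = np.random.default_rng(0)

def simulate_contest(reward, n_solvers, risk_pref, n_rounds=200_000):
    """Toy Monte Carlo of a single-prize, all-pay crowdsourcing contest.

    Illustrative assumptions (not the paper's equilibrium):
      * ability a_i ~ U(0, 1), i.i.d. across solvers;
      * submitted quality q_i = risk_pref * (n-1)/n * reward * a_i**n,
        a heuristic rule in which higher ability, a larger reward and
        stronger risk seeking all raise the submission;
      * the highest quality wins the full reward, and every solver
        sinks the cost of its own quality (all-pay);
      * money x >= 0 is valued as x**risk_pref, so risk_pref < 1 is
        risk averse and risk_pref > 1 is risk seeking.
    """
    ability = rng.uniform(size=(n_rounds, n_solvers))
    quality = risk_pref * (n_solvers - 1) / n_solvers * reward * ability**n_solvers
    won = quality == quality.max(axis=1, keepdims=True)     # winner flag per round
    utility = won * reward**risk_pref - quality**risk_pref  # prize utility minus sunk cost
    return {
        "solver_EU": utility[:, 0].mean(),            # one solver's expected utility
        "total_quality": quality.sum(axis=1).mean(),  # objective of the TQ mechanism
        "best_quality": quality.max(axis=1).mean(),   # objective of the BQ mechanism
    }

if __name__ == "__main__":
    base = dict(reward=100.0, n_solvers=5, risk_pref=1.0)
    print("baseline      ", simulate_contest(**base))
    print("higher reward ", simulate_contest(**{**base, "reward": 200.0}))
    print("more solvers  ", simulate_contest(**{**base, "n_solvers": 10}))
    print("risk seeking  ", simulate_contest(**{**base, "risk_pref": 1.2}))

Under these assumptions, varying one parameter at a time from the baseline reproduces the reported signs qualitatively: the solver's expected utility rises with the reward and the risk-preference exponent and falls as more solvers enter, while both the total quality and the best quality rise with all three.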

Keywords: 

Crowdsourcing, All-pay auction, Risk preference, Optimal decisions.

1. Introduction
2. Literature Review
3. Basic All-Pay Auction Model
4. The Optimal Solutions of Two Mechanisms
5. Empirical Analysis
6. Conclusion
Acknowledgements

We gratefully acknowledge the support of grants from the National Social Science Fund of China (Grant No. 17CGL019), the Natural Science Foundation of Shandong Province of China (Grant No. ZR2017BG010) and the Humanities and Social Sciences Program of Shandong Universities (Grant No. J17RB094).
