Scaling Human Effort in Idea Screening and Content Evaluation
- Authors
Cathy L. Yang, Artem Timoshenko, and Pavel Kireyev
- Series
HEC Paris Research Paper Series
- Subjects
Information Retrieval, Product Design, Information Value, Computer Science, Content Evaluation, Stakeholder, Crowdsourcing, Machine Learning, Ranking, Business Administration, Innovation, Wisdom of Crowds
- Abstract
Brands and advertisers often tap into the crowd to generate ideas for new products and ad creatives by hosting ideation contests. Content evaluators then winnow thousands of submitted ideas before a separate stakeholder, such as a manager or client, decides on a small subset to pursue. We demonstrate the information value of data generated by content evaluators in past contests and propose a proof-of-concept machine learning approach to efficiently surface the best submissions in new contests with less human effort. The approach combines ratings by different evaluators based on their correlation with the past stakeholder choices, controlling for submission characteristics and textual content features. Using field data from a crowdsourcing platform, we demonstrate that the approach improves performance by identifying nonlinear transformations and efficiently reweighting evaluator ratings. Implementing the proposed approach can affect the optimal assignment of internal experts to ideation contests. Two evaluators whose votes were a priori equally correlated with sponsor choices may provide substantially different incremental information to improve the model-based idea ranking. We provide additional support for our findings using simulations based on a product design survey.
- Published
2020
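
As a concrete illustration of the approach described in the abstract, below is a minimal sketch of one way to combine evaluator ratings into a model-based ranking: train a supervised learner on past contests to predict stakeholder choices from evaluator ratings and content features, then rank new submissions by predicted choice probability. Everything in the sketch is an illustrative assumption; the synthetic data, the use of scikit-learn's GradientBoostingClassifier as the learner, and the top-decile hit-rate comparison stand in for whatever model, data, and evaluation the paper actually uses.

```python
# Hypothetical sketch: rank contest submissions by predicted stakeholder
# choice, learned from evaluator ratings plus content features.
# All data below is synthetic; nothing here is the authors' implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic past contests: each row is a submission with 1-5 ratings
# from three evaluators and a few numeric content features.
n = 2000
ratings = rng.integers(1, 6, size=(n, 3)).astype(float)
content = rng.normal(size=(n, 4))

# Simulated stakeholder preference: evaluator 1 is informative only at
# the top of the scale, evaluator 2 contributes linearly, evaluator 3
# is pure noise, so an equal-weight rating average is a weak signal.
signal = (
    0.8 * (ratings[:, 0] >= 4)
    + 0.3 * ratings[:, 1]
    + 0.1 * content[:, 0]
)
chosen = (signal + rng.normal(scale=0.5, size=n)) > np.quantile(signal, 0.9)

X = np.hstack([ratings, content])
X_tr, X_te, y_tr, y_te = train_test_split(X, chosen, random_state=0)

# Gradient boosting can learn both the nonlinear transformation of each
# evaluator's ratings and their relative weights at the same time.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Rank held-out submissions by predicted choice probability and compare
# top-decile hit rates against a naive average-rating baseline.
scores = model.predict_proba(X_te)[:, 1]
baseline = X_te[:, :3].mean(axis=1)
k = max(1, len(scores) // 10)
print("stakeholder picks in top 10%, model:   ", int(y_te[np.argsort(-scores)[:k]].sum()))
print("stakeholder picks in top 10%, baseline:", int(y_te[np.argsort(-baseline)[:k]].sum()))
```

In this toy setup the learned ranking typically beats the equal-weight average because it recovers the threshold effect in the first evaluator's ratings and downweights the uninformative third evaluator, which mirrors the abstract's claim that nonlinear transformations and reweighting of evaluator ratings, rather than simple aggregation, drive the improvement.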