Development of a general crowdsourcing maturity model
Desarrollo de un modelo general de madurez del crowdsourcing
Développement d'un modèle général de maturité de crowdsourcing
Carlos Mario Durango Yepes
Profesor, Investigador Asociado, Facultad de Ciencias Económicas, Administrativas y Contables, Fundación Universitaria Luis Amigó, Medellín, Colombia.
Ingeniero Químico, Magister en Gestión Tecnológica, UPB, Medellín. Grupo de Investigación Goras, categoría B, Funlam, Medellín, Colombia.
Víctor Daniel Gil Vera
Profesor Investigador, Facultad de Ingenierías y Arquitectura, Fundación Universitaria Luis Amigó, Medellín, Colombia.
Ingeniero Administrador, Magíster en Ingeniería de Sistemas, UNAL, Medellín. Grupo de Investigación SISCO, categoría B, Funlam, Medellín, Colombia.
Research article, PUBLINDEX-COLCIENCIAS classification
Thematic area: administration and organizations
The article presents a general crowdsourcing maturity model (G-CrMM), focused on measuring the maturity of the managerial, behavioral and technological aspects that support crowdsourcing activities in organizations. The methodology used was a systematic literature review, chosen in view of the low number of research publications and literature reviews prescribing practices for crowdsourcing maturity models. An assessment instrument accompanying the model was developed to facilitate practical application. The results of this study indicate that the maturity model can serve as a useful tool to describe and guide implementation efforts, providing a clear description of the current situation and guidelines to follow. To assess its validity and improve generalizability, future research can apply the proposed Crowdsourcing Maturity Model to different contexts.
Keywords: capability maturity models, crowdsourcing, crowdsourcing measurement
JEL classification: M19, O31, O39
El artículo presenta un modelo general de madurez de crowdsourcing (MGMC), enfocado en la medición de la madurez de los aspectos gerenciales, comportamentales y tecnológicos que apoyan las actividades del crowdsourcing en organizaciones. La metodología utilizada fue la revisión sistemática de literatura, teniendo en cuenta la baja cantidad de publicaciones de investigación y el bajo número de revisiones de la literatura que prescriben las prácticas de los Modelos de Madurez Crowdsourcing. Se ha desarrollado una herramienta de evaluación que acompaña este modelo para facilitar la aplicación práctica. Los resultados de este trabajo indican que el modelo de madurez desarrollado puede servir como una herramienta útil que describe y orienta los esfuerzos de implementación de dicho concepto, proporcionando una clara descripción de la situación actual y las indicaciones a seguir. Para evaluar su validez y mejorar la generalización, la investigación futura puede aplicar el Modelo de Madurez de Crowdsourcing propuesto a diferentes contextos.
Palabras clave: crowdsourcing, medición del crowdsourcing, modelos de madurez de las capacidades.
L'article présente un modèle général de maturité de crowdsourcing (MGMC), axé sur la mesure de la maturité des aspects managériaux, comportementaux et technologiques qui soutiennent les activités de crowdsourcing dans les organisations. La méthodologie appliquée repose sur la révision systématique de littérature, tenant compte de la faible quantité de recherches et le petit nombre de révisions de littérature dans le domaine des Modèles de Maturité de Crowdsourcing. On a développé un outil d'évaluation complémentaire à ce modèle pour faciliter l'application pratique. Les résultats de ce travail indiquent que le modèle de maturité développé peut servir comme un outil pratique qui décrit et oriente les efforts de mise en œuvre du concept, en fournissant une description claire de la situation actuelle et les instructions à suivre. Pour évaluer sa validité et améliorer la généralisation, la recherche future peut appliquer le Modèle de Maturité de Crowdsourcing dans différents contextes.
Mots clef: crowdsourcing, mesure du Crowdsourcing, modèles de maturité des compétences.
Innovation processes driven by information technology have been the main enablers of the collaborative intelligence that allows large groups of people to connect. The term crowdsourcing was coined by Howe (2006); it can be viewed as a method of distributing work to a large number of workers both inside and outside an organization, for the purpose of improving decision making, completing cumbersome tasks, or co-creating designs and other projects (Chiu, Liang and Turban, 2014). Crowdsourcing is not merely a buzzword, but a strategic model for attracting an interested, motivated crowd of individuals capable of providing solutions of better quality and in greater quantity than even traditional forms of business can deliver (Verma and Ruj, 2014). The adaptability of crowdsourcing makes it an effective and powerful practice, but also makes it difficult to define and categorize (Estellés and González, 2012).
Crowdsourcing has established itself as a mature field and a resource that companies really should begin to use more strategically. For many tasks, the crowd will outperform design agencies in quantity, quality, time and cost. Companies should consider building crowd resources into their stage-gate models and linking them to their portfolio management strategies (Howard, Achiche, Özkil and McAloone, 2012).
Crowdsourcing can be used in industry, business and educational institutions. Bücheler and Sieg (2011) conducted a study analyzing the applicability of crowdsourcing, open innovation and related techniques to the scientific method and basic science. Such processes do not evolve only in business; they are also reflected in the sciences, as in Citizen Science 2.0 and research practices.
Maturity models are a simple but effective way to measure the quality of productive processes. Derived from software engineering, they have expanded into many fields of application, and research on them is increasingly important. During the last two decades the number of publications has steadily increased. Literature reviews, such as Wendler's (2012), which systematically mapped research on maturity models, do not consider any work on crowdsourcing; that review evaluated 237 articles and showed that research on maturity models spans more than 20 domains, strongly dominated by engineering and software development. To date, no study has summarized the activities and results of research and practice on crowdsourcing maturity models.
The expected contribution of this study is three-fold. First, as Crowdsourcing implementation involves significant organizational change in process, infrastructure and culture, it is unlikely to be achieved in one giant leap. The proposed General Crowdsourcing Maturity Model (G-CrMM) provides a general understanding and appreciation of gradual and holistic development of Crowdsourcing. It can serve as a roadmap that steers the implementation effort by providing a clear description and indications of the way forward. Second, for organizations that have implemented some form of Crowdsourcing, G-CrMM can support the ongoing development of crowdsourcing by systematically analyzing their current level of crowdsourcing maturity. The assessment instrument provided along with G-CrMM can also serve as a diagnostic instrument to pinpoint aspects that necessitate improvement. Third, by integrating the few existing maturity models of Crowdsourcing and clearly defining important concepts, G-CrMM can potentially serve as a common model to facilitate communication and to improve understanding among researchers and practitioners.
Hosseini, Shahri, Phalp, Taylor and Ali (2015) identified four main pillars of every crowdsourcing activity present in the current literature; they also identified the building blocks for these four pillars:
• The Crowd: The crowd of people who participate in a crowdsourcing activity has five distinct features: diversity, the state or quality of being different or varied; unknownness, the condition or fact of being anonymous; largeness, meaning consisting of big numbers; undefinedness, meaning not being determined and not having established borders; and suitability, meaning suiting a given purpose, occasion or condition.
• The Crowdsourcer: A crowdsourcer might be an individual, an institution, a non-profit organization, or a company that seeks completion of a task through the power of the crowd.
• The Crowdsourced Task: A crowdsourced task is an outsourced activity that is provided by the crowdsourcer and needs to be completed by the crowd. A crowdsourced task may take different forms. For example, it may be in the form of a problem, an innovation model, a data collection issue, or a fundraising scheme. The crowdsourced task usually needs the expertise, experience, ideas, knowledge, skills, technologies, or money of the crowd. After reviewing the current literature, eight aspects for the crowdsourced task were identified.
• The Crowdsourcing Platform: The crowdsourcing platform is where the actual crowdsourcing task happens. While there are examples of real (offline or in-person) crowdsourcing platforms, the crowdsourcing platform is usually a website, or an online venue. After reviewing the current literature, they identified four distinct features for the crowdsourcing platform: crowd-related interactions, crowdsourcer-related interactions, task-related facilities and platform-related facilities.
In summary, crowdsourcing is the act of outsourcing tasks, traditionally performed by an employee or contractor, to an undefined, large group of people or community, through an open call. The task can be done collectively by more than one person if necessary, but most of the time it is done by one person (Qu, Huang, Zhang and Zhang, 2011).
Howe (2006) classified crowdsourcing applications into four categories: collective intelligence (crowd wisdom), crowd creation, crowd voting and crowdfunding. An additional type is the micro task, in which organizations assign small tasks to many workers.
Regarding maturity models, Essmann (2009) mentioned that they have two main purposes. The first is to establish the capability maturity of an organization in terms of practice in a specific area or domain. The second builds on the results of the first, helping to define the orientation and direction of improvement best suited to the company and in accordance with the best practices prescribed in the area.
To establish capability maturity in terms of a specific domain of practice is an exercise that is critical in understanding the current positioning of an enterprise relative to both its competitors and to successful enterprises in other industries. Furthermore, it is unlikely that the best course for improvement will be established if the current positioning is unknown and not understood. It is therefore critical to benchmark oneself against the best (or as close as possible) or against what is known to be successful, in order to determine the answers to "how much" and "in what direction". Benchmarking is a well-known practice but often presents a problem in that enterprises are reluctant to expose their competitive secrets. Maturity models are, however, available from creators who have expended many resources in establishing best practices for a specific domain, and it is against these best practices that an enterprise should benchmark itself.
Maturity models have been developed for many applications, including software development, IT management, project management, data management, business management and knowledge management (Champlin, 2003), innovation management (Li, 2007), and technology management (Junwen and Xiaoyan, 2007), among others. The enterprise thus has a wide selection from which to choose, not only among applications, but also within each application. The software development environment, for instance, had a total of 34 maturity models at its disposal according to Champlin (2003). The majority of these models, however, are based on the SEI's initial SW-CMM®, a model the SEI has no longer maintained since 2000, when it was superseded by and integrated into the new CMMI. In the literature, problems have been identified related to the managerial, behavioral and technological aspects of crowdsourcing.
In the managerial dimension, problems include below-market wages, which raise business-ethics concerns; the administratively difficult integration of crowdsourcing into the corporate structure (e-magazine, 2013); absent or inadequate management of intellectual property; missing confidentiality agreements and written contracts; and difficulty in resolving retention time throughout the project, which reduces the number of competitors who put effort into a solution (Boudreau, Lacetera and Lakhani, 2011). Some authors claim that open mechanisms for R+D+I, such as crowdsourcing, are not suitable for small and medium enterprises, which requires combining open innovation techniques with collaboration in a local environment to overcome these barriers (Deutsch, 2013).
In the behavioral dimension, problems include the absence in many organizations of an organizational culture open to change; failure to overcome the not-invented-here syndrome, which generates resistance to ideas and knowledge from external sources; language barriers worldwide; lack of participant motivation, resulting in low-quality work; defective results produced by malicious participants; fraud; vote manipulation; and the exploitation of people whose solutions are not necessarily rewarded. Regarding the latter, Busarovs (2011) argues that, being a voluntary mechanism, crowdsourcing should not be categorized as slavery of the XXI century.
In the technological dimension, problems include limited access to the internet and to the software applications required for the process, as well as economic barriers to using intermediaries, such as the very high costs of publishing on recognized crowdsourcing platforms like InnoCentive or NineSigma.
Hillson (2003) evaluated the organizational capacity to manage projects through his Project Management Maturity Model (ProMMM), to see whether project management processes are adequate. It describes four levels of project management capability (naive, novice, standardized and natural), with each ProMMM level further defined in terms of four attributes, namely culture, process, experience and application.
The National Health Service (2011) developed the National Infrastructure Maturity Model (NIMM) to assess the Information Technology infrastructure of the National Health Service in the UK (Van Dyk, Schutte and Fortuin, 2012). The tool was used to evaluate the maturity of crowdsourcing in clinical research. Its levels are:
• Level 1: Initial, ad hoc process (Basic);
• Level 2: Managed, stable process (Controlled);
• Level 3: Defined, standard process (Standardized);
• Level 4: Measured process (Optimized); and
• Level 5: Optimizing (Innovative).
The NIMM of the National Health Service was used to evaluate the maturity of crowdsourcing (see Table 1), adapted from Essmann (2009).
Table 1. NIMM maturity level characteristics
Source: NHS, 2011.
Birch and Heffernan (2014) evaluated the maturity of crowdsourcing as a tool in clinical research, using two carefully selected assessment models together: the Project Management Maturity Model (ProMMM) and the National Infrastructure Maturity Model (NIMM). The first focuses on the ability of professionals to use crowdsourcing in clinical research; the second, on the maturity of clinical research itself.
Chiu et al. (2014) constructed a scheme for organizing crowdsourcing research, conceptually similar to that used by Aral, Dellarocas and Godes (2013), dividing the key elements of crowdsourcing into four basic components: the task, the crowd, the process and the evaluation.
The literature review can be synthesized in several ways. The most common forms of synthesis include a research agenda, a taxonomy (Doty and Glick, 1994), an alternative model or conceptual framework, and meta-theory (Ritzer, 1992). The form chosen for this work is the alternative model or conceptual framework.
3. Research Design
3.1. Research method and research questions
The aim of this study is to obtain an overview of the area of crowdsourcing maturity model research. Therefore, systematic literature reviews, as proposed by Webster and Watson (2002), are an appropriate approach for gaining comprehensive insights. To get a clear depiction of the concept of Crowdsourcing Maturity and the distribution of research on it, this study focuses on the following research questions:
(RQ1) What are the tasks, the crowd, processes and evaluation of crowdsourcing in the managerial area?
(RQ2) What are the tasks, the crowd, processes and evaluation of crowdsourcing in the behavioral area?
(RQ3) What are the tasks, the crowd, processes and evaluation of crowdsourcing in the technology area?
3.2 Definition of search criteria
3.2.1 Keyword search
A search was carried out in specialized databases, primarily Scopus, on two thematic axes: Crowdsourcing and Models. The search equation was:
TITLE-ABS-KEY (crowdsourcing) AND TITLE-ABS-KEY (models) AND DOCTYPE (OR) AND SUBJAREA (mult OR arts OR busi OR deci OR econ OR psyc OR soci) AND PUBYEAR > 2009 AND (LIMIT-TO (EXACTKEYWORD, "Crowdsourcing")) AND (EXCLUDE (SUBJAREA, "ARTS")) AND (EXCLUDE (SUBJAREA, "SOCI"))
An automatic search was carried out in Scopus. It helped that crowdsourcing is a multidisciplinary concept indexed by many search engines, covering studies in business, marketing, management, information technology and medicine. The range of publication dates considered in the review of the state of the art ran from 2010 to the present. The meta-analysis produced 51 documents, 22 of which have the word crowdsourcing in the title. Two relevant papers were found: those of Chiu et al. (2014) and Hosseini et al. (2015).
3.2.2. Search Process
To enhance the rigor of systematic literature reviews, the process of searching and analyzing the literature has to be made as transparent as possible. Hence, the following paragraphs describe the steps conducted in searching, selecting, and analyzing the literature in this study. The complete systematic process is shown in Figure 1.
3.2.3. Selection of data sources and search strategy
The study was based on electronic databases. An extensive selection of databases was the first step in fulfilling the research aim of a comprehensive overview of research in crowdsourcing maturity models. The selected database was Scopus, which ensured coverage of publications from the most important research domains, such as Information Systems, Software Development, and Business and Management. The popular search engine Google Scholar was also used; there, two relevant papers were found: those of Birch and Heffernan (2014) and Wendler (2012).
For all terms, the search strategy was to find the single words, for example (maturity AND model), in the title, abstract, or keywords. This strategy ensured the inclusion of other phrases, such as "model of maturity".
3.2.4 Exclusion and inclusion criteria
To ensure that only relevant articles entered the pool of papers to be analyzed, irrelevant articles were excluded. The exclusion criteria were twofold: content based and publication based. Furthermore, only articles in the English language were kept, and documents that did not have the word crowdsourcing in the title were excluded.
As for content, articles that did not deal with crowdsourcing as a main focus were excluded. The search term crowdsourcing maturity model had to be dropped because it produced zero documents, indicating that there are no research articles or reviews on the subject. Content-related exclusion of articles took place in steps 3 and 5 of Figure 1.
Figure 1. Search process
Source: Own elaboration.
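The publication-based and content-based criteria described above can be expressed as a small filter over exported records. This is a hypothetical sketch: the record fields (`title`, `year`, `language`) and the sample corpus are assumptions for illustration only, since the actual screening was performed manually in Scopus and Google Scholar.

```python
# Hypothetical sketch of the paper-selection step; not the authors' tooling.

def select_papers(records):
    """Apply the review's exclusion criteria to exported bibliographic records."""
    selected = []
    for rec in records:
        # Publication-based criteria: English only, published from 2010 onwards.
        if rec.get("language") != "English" or rec.get("year", 0) < 2010:
            continue
        # Content-based criterion: 'crowdsourcing' must appear in the title.
        if "crowdsourcing" not in rec.get("title", "").lower():
            continue
        selected.append(rec)
    return selected

corpus = [
    {"title": "Crowdsourcing as a solution for distant search", "year": 2012, "language": "English"},
    {"title": "Open innovation platforms", "year": 2012, "language": "English"},
    {"title": "Crowdsourcing und Qualität", "year": 2013, "language": "German"},
]
print([r["title"] for r in select_papers(corpus)])
# → ['Crowdsourcing as a solution for distant search']
```

A real pipeline would operate on a CSV or RIS export from the database, but the filtering logic would be the same.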
4. Proposed General-Crowdsourcing Maturity Model (G-CrMM)
Based on the relevant papers and the integrative review, the proposed model is descriptive, in that it describes the essential attributes that characterize an organization at a particular crowdsourcing maturity level. It is also normative, in that the key practices characterize the types of ideal behavior that would be expected.
Similar to the majority of existing CMM-based and non-CMM-based CrMMs, the G-CrMM follows a staged structure and has three main components, namely maturity levels, key process areas (KPAs) and common characteristics. The literature review reveals that, like the CMM, most existing CrMMs identify five levels of maturity. Accordingly, the proposed model adapts the five maturity levels from the CMM, naming them initial, aware, defined, optimizing and innovative, respectively. G-CrMM involves three key process areas: managerial, behavioral and technological:
• Managerial area: Managerial concerns refer to organizational considerations when crowdsourcing is to be used, such as which task is suitable for crowdsourcing, what kind of crowd needs to be recruited, what kind of crowdsourcing process is more effective, and how to evaluate the process and outcome of crowdsourcing.
• Behavioral area: Behavioral concerns refer to considerations related to the individuals involved in crowdsourcing, such as the impact of crowdsourcing on employees, how the crowd can be motivated, and so on.
• Technological area: Technological concerns refer to technical issues related to the information systems/platforms used for supporting the crowdsourcing process, such as what functions are important for a crowdsourcing platform, how to design useful crowdsourcing models, and how to improve system functionality for more effective communication in crowdsourcing. (See Table 2).
Table 2. Proposed G-CrMM
Source: Author development
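To make the staged structure concrete, the sketch below encodes the five maturity levels and three key process areas as a small data model. The level names and KPAs come from the text; the class, its name and the 1-5 rating scale are illustrative assumptions (the actual level characteristics are given in Table 2).

```python
# Minimal sketch of the G-CrMM staged structure, under assumed naming.

MATURITY_LEVELS = ["Initial", "Aware", "Defined", "Optimizing", "Innovative"]
KEY_PROCESS_AREAS = ["Managerial", "Behavioral", "Technological"]

class GCrMMAssessment:
    """Holds one maturity rating (1-5) per key process area."""

    def __init__(self, ratings):
        # Every KPA must be rated; no extra areas allowed.
        assert set(ratings) == set(KEY_PROCESS_AREAS)
        self.ratings = ratings

    def level_name(self, area):
        """Map a KPA's numeric rating to its maturity-level name."""
        return MATURITY_LEVELS[self.ratings[area] - 1]

a = GCrMMAssessment({"Managerial": 3, "Behavioral": 2, "Technological": 4})
print(a.level_name("Managerial"))  # → Defined
```

Keeping the ratings per area, rather than a single number, mirrors the model's insistence on reporting the rating distribution, not only an aggregate.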
The following describes the relationship between the four basic components of crowdsourcing, as described by Aral et al. (2013), and the three key process areas mentioned above. First, the four components in the managerial area are described, following Chiu et al. (2014); then the same components in the behavioral and technology areas.
4.1. Managerial area
4.1.1. The task component
Organizations may face management problems when choosing crowdsourcing for a task, such as the selection, design and management of the task to be presented to the crowd. Studies were found on task features (Zheng, Li and Hou, 2011), task design (Jain, 2010), and task selection, including task suitability and task feasibility (Afuah and Tucci, 2012).
4.1.2. The crowd component
A key aspect for the success of crowdsourcing is the involvement of a high-quality crowd. Hence, the first line of research is about how to recruit, manage and motivate the crowd. Several studies have examined issues related to crowd composition, such as determination of the proper crowd size (Boudreau et al., 2011; Erickson, Petrick and Trauth, 2012) and diversity of the crowd (Brabham, 2007, 2008; Rosen, 2011). Another important aspect of management is the recruitment of the crowd.
4.1.3. The process component
There are several concerns in crowdsourcing process management. Three major issues that have been studied are process governance, process design, and legal issues. For example, Dow, Kulkarni, Klemmer and Hartmann (2012) investigated the role of feedback in the crowdsourcing process, and Geiger, Seedorf, Schulze, Nickerson and Schader (2011) discussed the accessibility of peer contributions in crowdsourcing. Several studies have examined issues related to process design for crowdsourcing, such as infrastructure (Agafonovas and Alonderienė, 2013) and crowdsourcing mechanisms (Boudreau and Lakhani, 2009; Malone, Laubacher and Dellarocas, 2010). Legal issues include intellectual property (Lieberstein, Tucker and Yankovsky, 2012) and privacy protection (Geiger et al., 2011).
4.1.4. The evaluation component
What has been found in the literature on the management of idea evaluation includes the selection of evaluators, evaluation metrics and quality measurement (Bonabeau, 2009). The first issue is related to selecting proper experts to evaluate the quality of the outcome of the crowdsourcing process. The second focuses on developing evaluation metrics for various types of crowdsourcing tasks; for instance, Bonabeau identified several evaluation metrics and suggested that solution quality and output consistency are key metrics for R&D innovation. The third concerns the actual criteria for evaluating ideas. For example, Blohm, Riedl, Leimeister and Krcmar (2011) proposed using four distinct dimensions to measure idea quality: novelty, feasibility, relevance and elaboration. (See Table 3).
Table 3. Maturity of crowdsourcing in the managerial area
Source: Author development based on Chiu et al. (2014).
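As an illustration of the third issue above, the four idea-quality dimensions proposed by Blohm et al. (2011) could be combined into a single score as sketched below. The 1-5 scale and the equal default weighting are assumptions for illustration only; the original study defines its own measurement instruments.

```python
# Hedged illustration of combining the four idea-quality dimensions.
# Weights and scale are assumed, not taken from Blohm et al. (2011).

QUALITY_DIMENSIONS = ("novelty", "feasibility", "relevance", "elaboration")

def idea_quality(scores, weights=None):
    """Combine per-dimension ratings (1-5) into one weighted quality score."""
    weights = weights or {d: 1.0 for d in QUALITY_DIMENSIONS}
    total = sum(weights[d] * scores[d] for d in QUALITY_DIMENSIONS)
    return total / sum(weights.values())

print(idea_quality({"novelty": 5, "feasibility": 3, "relevance": 4, "elaboration": 2}))
# → 3.5
```

An evaluator could raise the weight of, say, feasibility when screening ideas for near-term implementation.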
4.2. Behavioral area
4.2.1. The task component
Applying crowdsourcing to problem solving is not without resistance. The behavioral area covers issues related to the impact of crowdsourcing on organizational personnel. Two major issues are the impact of crowdsourcing on employees (Jayanti, 2012) and employees' attitudes toward crowdsourcing.
4.2.2. The crowd component
Because of the importance of exploring the perceptions, motivations and behavior of crowdsourcing participants, several studies have examined issues related to the crowd's beliefs and attitudes, such as trust (Jain, 2010) and the crowd's attitude toward participation (Bakici, Almirall and Wareham, 2012). Sample research issues include the crowd's task selection behavior (Yang, Adamic and Ackerman, 2008) and participation intention and behavior (Zheng et al., 2011).
4.2.3. The process component
It is necessary to consider improper conduct by the crowd when designing and managing the crowdsourcing process. Two issues that have been investigated are groupthink (Rosen, 2011) and cheating in crowdsourcing (Eickhoff and De Vries, 2012).
4.2.4. The evaluation component
Another important dimension that has been studied is the role of the crowd and its response to the evaluation of results, which is useful for selecting proper evaluation mechanisms. User participation in evaluation (Roy, Lykourentzou, Thirumuruganathan, Amer-Yahia and Das, 2013) and the user's attitude toward the rating scale (Riedl, Blohm, Leimeister and Krcmar, 2013) are two major issues that have been extensively investigated. User participation is one way to carry out the evaluation. The second issue concerns the effect of rating scales on contributors' attitudes: Riedl et al. (2013) found that a multi-criteria rating scale is perceived more favorably than a single-criterion scale in the co-creation context (see Table 4).
Table 4. Maturity of crowdsourcing in the behavioral area
Source: Author development based on Chiu et al. (2014).
4.3. Technology area
4.3.1. The task component
The selection of a technological platform for crowdsourcing (Boudreau and Lakhani, 2013) and its system functionalities (Doan, Ramakrishnan and Halevy, 2011) are widely studied aspects. One issue is the decision on whether the platform should be developed in house, for better control and safety, or whether a third-party solution should be used. The other issue is identifying the proper system functionality needed for handling different tasks. For example, Boudreau and Lakhani (2013) suggested that, if a client firm wants to crowdsource a design task or creative project, a contest-oriented platform should be selected.
4.3.2. The crowd component
Two issues in the technological tools dimension are the use of collaboration tools (Antikainen, Mäkipää and Ahonen, 2010; Kittur, Nickerson, Bernstein, Gerber, Shaw, Zimmerman, Lease and Horton, 2013) and participants' reaction to system functions (Ipeirotis, 2010). The first issue is whether the use of collaboration tools can enhance the quality of the crowd's output. Crowdsourcing platforms can provide a wide array of communication channels between the client organization and contributors to support synchronous collaboration and real-time crowd work. The other issue is how the crowd's behavior may be affected by system functions; the quality of crowdsourcing improves with improved system functionality.
4.3.3. The process component
Three aspects of tools and information technologies for improving the process of idea generation have been found in the literature: support mechanisms, system functions, and use of tools. Support mechanisms are process-related functions such as facilitating collaboration among contributors, which can be done by using real-time visualizations of completed tasks (Dow and Klemmer, 2011) and by collecting process data from other participants to help contributors refine their ideas (Leimeister, Huber, Bretschneider and Krcmar, 2009).
Another technology issue is the system functionality useful for supporting the crowdsourcing process, which includes system architecture design (Hetmank, 2013) and platform usage profiling (Ipeirotis, 2010). Finally, there is the use of tools for crowdsourcing, such as collaboration tools (Blohm et al., 2011; Schweitzer, Buchinger, Gassmann and Obrist, 2012) and social networks.
4.3.4. The evaluation component
Effective evaluation includes methods for evaluating results and tools for assessing ideas. Yuen et al. (2011) suggested that the control and evaluation mechanisms embedded into the crowdsourcing platform, such as quality control procedures (e.g., peer or specialist review, commenting systems) and competition schemes (e.g., voting, rating or bidding), are useful for enhancing crowdsourcing (see Table 5).
Table 5. Maturity of crowdsourcing in the technology area
Source: Author development based on Chiu et al. (2014).
5. Discussion and Conclusion
The proposed CrMM can be a useful tool for assessing crowdsourcing development and indicating possible improvements. For the proposed G-CrMM to accurately reflect reality, it is important that management not use it as a tool for disciplining and penalizing units that underperform. Rather, it should serve as an indication of areas needing more resources and guidance.
The model evaluates the different stages of maturity for each of the key areas of an organization. While this could be considered a complication within the model, this highlights the model's usefulness as a diagnostic tool for performing Crowdsourcing self-assessment in that it identifies the aspects that require improvement for the organization to progress to the next level of Crowdsourcing maturity. It should also be noted that although a single maturity rating for the organization can be obtained by aggregating ratings for the Key Process Areas, the rating distribution should also be reported to avoid loss of constructive information.
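The aggregation caveat above can be sketched as follows. A single organization-wide rating is derived from the key process area ratings, but the full distribution is returned alongside it. Using the minimum as the aggregate is an assumption here (a common convention in staged models, where the weakest area caps the organization), not a rule prescribed by the G-CrMM itself.

```python
# Sketch: aggregate KPA ratings without losing the distribution.

def aggregate_maturity(kpa_ratings):
    """Return (overall rating, per-KPA distribution)."""
    overall = min(kpa_ratings.values())  # assumed convention: weakest area caps the org
    return overall, dict(kpa_ratings)

overall, distribution = aggregate_maturity(
    {"Managerial": 3, "Behavioral": 2, "Technological": 4}
)
print(overall)       # → 2 (the behavioral area needs attention first)
print(distribution)  # report this too, to avoid loss of constructive information
```

Reporting the distribution, not just the minimum, is what turns the rating into the diagnostic instrument the text describes.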
The proposed G-CrMM serves more as a descriptive model than as a prescriptive one. Hence, the conditions for attaining maturity may evolve and serve more as a moving target that encourages continuous learning and improvement than as a definite end in themselves.
To assess its validity and improve generalizability, future research can apply the proposed Crowdsourcing Maturity Model to different contexts. Another interesting avenue for future research will be to investigate the relative importance of practices in each Key Process Area at different stages of maturity.
Identifying and understanding these dynamics may help organizations better chart their future crowdsourcing development. Longitudinal studies may also be conducted in which the crowdsourcing development and maturity of organizations are tracked over time. This can provide both researchers and practitioners with a more in-depth understanding of the growth of an innovative organization.
Acknowledgments
To all the people who contributed to the development of this work.
References
Afuah, A., & Tucci, C. (2012). Crowdsourcing as a solution for distant search. Academy of Management Review, 37(3), 355-375.
Agafonovas, A., & Alonderien, R. (2013). Value creation in innovations crowdsourcing: example of creative agencies. Organizations and Markets in Emerging Economies, 4(1), 72-103.
Antikainen, M., Mäkipää, M., & Ahonen, M. (2010). Motivating and supporting collaboration in open innovation. European Journal of Innovation Management, 13(1), 100-119.
Aral, S., Dellarocas, C., & Godes, D. (2013). Social media and business transformation: a framework for research. Information Systems Research, 24(1), 3-13.
Bakici, T., Almirall, E., & Wareham, J. (2012). Motives for participation in on-line open innovation platforms. Danish Research Unit for Industrial Dynamics, 11-14.
Birch, K., & Heffernan, K. (2014). Crowdsourcing for Clinical Research - An Evaluation of Maturity. Australasian Workshop on Health Informatics and Knowledge Management, 153, 1-11.
Blohm, I., Riedl, C., Leimeister, J. M., & Krcmar, H. (2011, December). Idea Evaluation Mechanisms for Collective Intelligence in Open Innovation Communities: Do Traders outperform Raters? In 32nd International Conference on Information Systems (ICIS), Shanghai, China.
Bonabeau, E. (2009). Decisions 2.0: the power of collective intelligence. MIT Sloan Management Review, 50(2), 45-52.
Boudreau, K., & Lakhani, K. (2009). How to manage outside innovation. MIT Sloan Management Review, 50(4), 69-76.
Boudreau, K., Lacetera, N., & Lakhani, K. (2011). Incentives and problem uncertainty in innovation contests: An empirical analysis. Management Science, 57(5), 843-863.
Boudreau, K., & Lakhani, K. (2011). Field Experimental Evidence on Sorting, Incentives and Creative Worker Performance. Harvard Business School, Working paper, 11-107.
Brabham, D. (2007). Speakers' corner: diversity in the Crowd. Retrieved from http://crowdsourc-ing.typepad.com/cs/2007/04/speakers_corner.html
Brabham, D. (2008). Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application. First Monday, 13(6), 1-22.
Brabham, D. (2013). Crowdsourcing (pp. 281-304). Boston, USA: The MIT Press.
Bücheler, T., & Sieg, J. H. (2011). Understanding Science 2.0: Crowdsourcing and Open Innovation in the Scientific Method. Procedia Computer Science, 7, 327-329.
Busarovs, A. (2011). Crowdsourcing as user-driven innovation, new business philosophy's model. Journal of Business Management, 4, 53-60.
Champlin, B. (2003). Toward a Comprehensive Data Management Maturity Model (DM3). Retrieved from http://www.powershow.com/view/1f797ZDcxZ/Toward_a_Comprehensive_Data_Management_Maturity_Model_DM3_powerpoint_ppt_presentation
Chiu, C., Liang, T., & Turban, E. (2014). What can crowdsourcing do for decision support? Decision Support Systems, 65, 40-49.
Deutsch, C. (2013, March). The Seeking Solutions Approach: Solving Challenging Business Problems with Local Open Innovation. Retrieved from http://timreview.ca/sites/default/files/article_PDF/Deutsch_TIMReview_March2013.pdf
Doan, A., Ramakrishnan, R., & Halevy, A. (2011). Crowdsourcing systems on the World-Wide Web. Communications of the ACM, 54(4), 86-96.
Doty, D. H., & Glick, W. H. (1994). Typologies as a unique form of theory building: Toward improved understanding and modeling. Academy of Management Review, 19(2), 230-251.
Dow, S., & Klemmer, S. (2011). Shepherding the crowd: an approach to more creative crowd work. Retrieved from http://wwwcgi.cs.cmu.edu/afs/cs.cmu.edu/Web/People/spdow/files/Crowds-Shepherd-ws-CHI11.pdf
Dow, S., Kulkarni, A., Klemmer, S., & Hartmann, B. (2012). Shepherding the crowd yields better work (pp. 1013-1022). New York, USA: ACM.
E-magazine. (2013). Crowdsourcing Goes Mainstream. Trends Magazine, 121, 20-25.
Eickhoff, C., & De Vries, A. (2012). Increasing cheat robustness of crowdsourcing tasks. Information Retrieval, 16(2), 121-137.
Erickson, L., Petrick, I., & Trauth, E. (2012, August). Hanging with the right crowd: matching crowdsourcing need to crowd characteristic (pp.1-9). Proceedings of the Eighteenth Americas Conference on Information Systems, AMCIS, Seattle, USA.
Essmann, H. E. (2009). Toward Innovation Capability Maturity (Doctoral dissertation). Department of Industrial Engineering, Stellenbosch University, Stellenbosch, South Africa.
Estellés, E., & González, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200.
Geiger, D., Seedorf, S., Schulze, T., Nickerson, C., & Schader, M. (2011). Managing the crowd: towards a taxonomy of crowdsourcing processes. Thirty Third International Conference on Information Systems. Orlando, USA.
Hetmank, L. (2013). Components and functions of crowdsourcing systems: a systematic literature review. 11th International Conference on Wirtschaftsinformatik, Leipzig, Germany.
Hillson, D. (2003). Assessing organisational project management capability. Journal of Facilities Management, 2(3), 298- 311.
Hosseini, M., Shahri, A., Phalp, K., Taylor, J., & Ali, R. (2015). Crowdsourcing: A taxonomy and systematic mapping study. Computer Science Review, 17, 43-69.
Howe, J. (2006). The Rise of Crowdsourcing. Retrieved from http://www.wired.com/wired/archive/14.06/crowds.html?pg=1&topic=crowds&topic_set=
Howard, T., Achiche, S., Özkil, A., & McAloone, T. (2012, May). Open Design and Crowdsourcing: Maturity, Methodology and Business Models. International Design Conference, Dubrovnik, Croatia.
Ipeirotis, P. (2010). Analyzing the Amazon mechanical turk marketplace, XRDS: Crossroads. ACM Magazine for Students, 17(2), 16-21.
Jain, R. (2010). Investigation of Governance Mechanisms for Crowdsourcing Initiatives. AMCIS 2010 Proceedings, (557).
Jayanti, E. (2012). Open sourced organizational learning: implications and challenges of crowdsourcing for human resource development (HRD) practitioners. Human Resources Development in Institutions of Higher, 15(3), 375-384.
Junwen, F., & Xiaoyan, L. (2007, Aug). Enterprise Technology Management Maturity Model and Application. Conference on Management of Engineering and Technology, Portland International Center, Portland, USA.
Kaufmann, N., & Schulze, T. (2011, Aug). Worker motivation in crowdsourcing- a study on mechanical turk. Proceedings of the seventeenth Americas conference on information systems, Detroit, USA.
Kittur, A., Nickerson, J., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., & Horton, J. (2013). The future of crowd work. Proceedings of the ACM Conference on Computer Supported Cooperative Work, (pp. 1301-1317). New York, USA: ACM. doi: 10.1145/2441776.2441923.
Li, J. (2007, Sep). Application of CMMI in Innovation Management (pp. 4966-4969). International Conference on Wireless Communications, Networking and Mobile Computing (WiCom), Shanghai, China.
Leimeister, J., Huber, M., Bretschneider, U., & Krcmar, H. (2009). Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. Journal of Management Information Systems, 26(1), 197-224.
Malone, T., Laubacher, R., & Dellarocas, C. (2010). The collective intelligence genome. MIT Sloan Management Review, 51(3), 21-31.
NHS. (2011). National Infrastructure Maturity Model, Department of Health. Retrieved from http://systems.hscic.gov.uk/nimm/overview
Pautasso, M. (2013). Ten Simple Rules for Writing a Literature Review. PLoS Computational Biology, 9(7), e1003149. doi:10.1371/journal.pcbi.1003149
Pisano, G., & Verganti, R. (2008). Which kind of collaboration is right for you? Harvard Business Review, 86(12), 78-86.
Qu, Y., Huang, C., Zhang, P., & Zhang, J. (2011). Harnessing Social Media in Response to Major Disasters. CSCW 2011 Workshop: Designing Social and Collaborative Systems for China. Hangzhou, China.
Riedl, C., Blohm, I., Leimeister, J., & Krcmar, H. (2013). The effect of rating scales on decision quality and user attitudes in online innovation communities. International Journal of Electronic Commerce, 17(3), 7-36.
Ritzer, G. (1992). Metatheorizing. Newbury Park, USA: Sage.
Rosen, P. (2011). Crowdsourcing lessons for organizations. Journal of Decision Systems, 20(3), 309-324.
Roy, S., Lykourentzou, I., Thirumuruganathan, S., Amer-Yahia, S., & Das, G. (2013). Crowds, not drones: modeling human factors in interactive crowdsourcing. Proceedings of DB Crowd, 1025, 39-42.
Schweitzer, F., Buchinger, W., Gassmann, O., & Obrist, M. (2012). Crowdsourcing leveraging innovation through online idea competitions. Research-Technology Management, 55(3), 32-38.
Van Dyk, L., Schutte, C., & Fortuin, J. (2012). A Maturity Model for Telemedicine Implementation. E-Telemed 2012 The Fourth International Conference on eHealth, Telemedicine, and Social Medicine. doi: 10.5772/56116
Wendler, R. (2012). The maturity of maturity model research: A systematic mapping study. Information and Software Technology, 54, 1317-1339.
Yang, J., Adamic, L., & Ackerman, M. (2008, July). Crowdsourcing and knowledge sharing: strategic user behavior on taskcn. Proceedings of the 9th ACM conference on Electronic commerce, (pp. 246-255). New York, USA: ACM.
Zheng, H., Li, D., & Hou, W. (2011). Task design, motivation and participation in crowdsourcing contests. International Journal of Electronic Commerce, 15(4), 57-88.
¿Cómo citar este artículo? - How to quote this article?
Durango-Yepes, C. M., & Gil-Vera, V. D. (2016, July). Development of a general crowdsourcing maturity model. Cuadernos de Administración, 32(55), 72-86.
The Cuadernos de Administración journal by Universidad del Valle is under a Creative Commons Atribución-NoComercial-CompartirIgual 2.5 Colombia licence. Based on http://cuadernosdeadministracion.univalle.edu.co/