- 1Department of Developmental and Educational Psychology, University of Murcia, Murcia, Spain
- 2Department of Didactics of Language and Literature, University of Murcia, Murcia, Spain
Introduction: Creativity is a fundamental competence that manifests itself in various domains of knowledge, including verbal creativity. The main aim of this study was to identify the indicators of verbal creativity to be assessed in three writing tasks.
Methods: Sixteen multidisciplinary and international creativity experts participated in a two-stage Delphi panel. The questionnaire administered asked whether or not each of eight indicators of verbal creative thinking should be measured in three tasks: problem posing, creative idea generation, and idea improvement.
Results: Originality emerged as the most important indicator of creativity. The indicators identified for the first task were fluency, flexibility, originality, elaboration, and sensitivity to problems; for the second task, flexibility, originality, elaboration, opacity, and dynamic integration; and for the third task, fluency, flexibility, originality, elaboration, dynamic integration, and refinement of ideas. These results are key to progress in the field of measuring verbal creative thinking.
Discussion: Identifying the indicators of the construct of verbal creativity makes it possible to determine its components and thus to estimate creative potential in this specific domain.
1 Introduction
Creativity requires potential originality and effectiveness within a dynamic context (Corazza et al., 2022) and has become increasingly important in recent years. Creative thinking not only underpins some of society’s most important innovations but is also a universal and democratic phenomenon (OECD, 2020). The concern to assess creativity has led the OECD to include it in the Programme for International Student Assessment (PISA). Creative thinking has thus gained the status of a competency that helps individuals achieve better outcomes in more difficult and challenging contexts (OECD, 2020).
Traditionally, creativity measurement has been approached from a domain-general perspective (Guilford, 1950; Torrance, 1974). However, this trend has changed: most authors (Kaufman et al., 2017; Baer, 2019; Zyga et al., 2022) now understand creativity as domain-specific, so stating that someone is creative without specifying the domain of knowledge in which they are creative would be inaccurate.
Many authors (Amabile, 2018; Lorca Garrido et al., 2021; Krieger-Redwood et al., 2023) agree that verbal creativity is a specific domain of creativity, since it shows no correlation with the other specific domains. The measurement of verbal creativity is therefore the domain of interest of the present research; it is defined as the value of divergent thinking in solving problems with verbal content (Portnova et al., 2020). For their part, López-Martínez et al. (2018) understand verbal creativity as a linguistic act composed of the activation of creative thinking and a process of written reflection that allows the elaboration of a narrative with textual harmony, metaphors, originality, and imagination.
The domain in which creativity is manifested is an important moderator of the relationships between creativity and factors such as personality or communication. For this reason, more effort is needed to develop an instrument capable of assessing creativity from a domain-specific perspective, since this conceptualization and its explanatory theories differ from those underlying most validated instruments.
Furthermore, creativity, as a psychological construct, is not directly observable, so the existing tests for measuring creativity rely on a set of observable indicators that allow the level of creative thinking manifested by each individual to be determined accurately (Runco and Acar, 2019). In this sense, Guilford (1950) already established that the indicators that structure creative thinking are fluency, flexibility, originality, redefinition, penetration, and elaboration. Nevertheless, in tests of divergent thinking, only the scores for fluency, flexibility, originality, and elaboration are usually used to estimate the creative potential of individuals (Runco and Acar, 2019; Corbalán, 2022).
First, fluency is considered a characteristic of creative minds that consists of the spontaneous flow of ideas and images (Kasirer and Mashal, 2018); creative individuals therefore have a greater ability to generate multiple ideas when approaching a task. Second, flexibility is characterized by the distribution of ideas across different categories, a sign of the variety and diversity of the ideas contributed by an individual (Acar et al., 2019). Therefore, from a creative perspective, ideas contributed to different categories are valued more than those belonging to the same category, which reflect a merely operational approach (Guilford, 1956).
Third, originality is the most important indicator of creativity. In fact, creativity is understood as intentional originality (Pichot et al., 2022). This indicator refers to the uniqueness and unusualness of the ideas given compared with those of other participants (Forthmann et al., 2021). Currently, one of the methods used to measure originality objectively is latent semantic analysis in text mining, which provides a computerized originality score based on the cosine of the angle formed between the vector representations of the stimulus word and the word given by the individual (Beaty and Johnson, 2021; Dumas et al., 2021).
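As an illustration of that scoring logic, the sketch below computes an originality score as one minus the cosine similarity between two word vectors, so that responses semantically distant from the stimulus score higher. It is a minimal example with invented three-dimensional vectors; operational systems such as SemDis derive the vectors from large trained semantic spaces (Beaty and Johnson, 2021).

```python
import numpy as np

def semantic_originality(stimulus_vec: np.ndarray, response_vec: np.ndarray) -> float:
    """Originality as semantic distance: 1 minus the cosine of the angle
    between the stimulus and response word vectors (higher = more original)."""
    cosine = np.dot(stimulus_vec, response_vec) / (
        np.linalg.norm(stimulus_vec) * np.linalg.norm(response_vec)
    )
    return 1.0 - cosine

# Invented vectors for a stimulus word and two hypothetical responses
stimulus = np.array([0.9, 0.1, 0.0])
close_response = np.array([0.8, 0.2, 0.1])   # similar meaning -> low originality
remote_response = np.array([0.1, 0.2, 0.9])  # distant meaning -> high originality

print(semantic_originality(stimulus, close_response))   # small value
print(semantic_originality(stimulus, remote_response))  # larger value
```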
Fourth, elaboration refers to the details added when stating an idea, which embellish it (Acar et al., 2019). This ability also refers to the degree of difficulty achieved by the individual in defining conceptual structures (Guilford and Hoepfner, 1971). One of the most widely used methods for scoring elaboration is the unweighted word count, in which elaboration is understood as the number of words in each of the verbal responses provided by the subject in a given verbal divergent thinking task (Maio et al., 2020).
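Under that word-count interpretation, elaboration scoring is straightforward; the short sketch below assumes simple whitespace tokenization, which is an illustrative choice rather than the exact procedure used in any particular scoring system.

```python
def elaboration_score(response: str) -> int:
    """Elaboration as the unweighted number of words in a verbal response."""
    return len(response.split())

print(elaboration_score("A house"))                           # 2
print(elaboration_score("A small house by the quiet river"))  # 7
```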
Guilford (1968) significantly expanded the above indicators and introduced sensitivity to problems as a person’s ability to identify problems that not everyone perceives, an indicator also considered by Violant (2006) and Romo et al. (2016). More recently, the OECD (2020) introduced a further indicator for measuring creative thinking, the refinement of ideas, which consists of evaluating and improving an already completed task in order to produce a more appropriate and original response than the one initially posed.
In this sense, Desrosiers (1978) argues that capturing the divergent aspects of written communicative expression requires figurative language, which comprises imagination, originality, flexibility, dynamic integration, and a certain degree of opacity. The author links divergent thinking in this field to the use of embellished language, as is typical of poetry. Specifically, he proposes opacity as the use of language in which the relationship between signifier and signified is diluted, and dynamic integration as the factor referring to the unity of a text, focusing attention on linguistic aspects.
Currently, tests of divergent thinking approach creativity from a general point of view, as evidenced by the fact that the indicators they evaluate, fluency, flexibility, and originality, are general indicators of creativity. These dimensions are evaluated in the Creative Imagination Test for Children (PIC-N) by Artola et al. (2004) and in the Torrance Tests of Creative Thinking (TTCT) by Torrance (1974); even in the tasks of these tests that are considered verbal, elaboration is left aside, probably because of the additional difficulty involved in evaluating it. Other tests, such as the CREA (Creative Intelligence Test) by Corbalán et al. (2003), measure only fluency, based on the number of questions produced by the students. Likewise, all these tests are aimed at ideational productivity, a limitation given the absence of the critical and evaluative dimension of ideas (Runco, 2008), so they lack a task in which the evaluation and improvement of ideas is considered.
Later, the Verbal Creativity Test (PCV in Spanish, VCT in English) (López-Martínez et al., 2018) emerged as an attempt to address the lack of consideration of verbal creativity in research on creative thinking. However, its main limitations are the subjectivity of its indicators and the small sample with which it was validated.
In addition, with respect to the context and the ages for which creativity tests are intended, the present research requires a test designed for a psychoeducational context and for primary school students whose native language is Spanish. Moreover, in line with the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2018), the objectives of the creativity test to be designed are to identify talent in the linguistic field, to guide the educational attention given to students according to their characteristics, to assess the specific verbal domain of creativity, and to determine the starting point of an intervention and monitor students’ progress. Furthermore, in order to overcome the shortcomings of other creativity tests, the test to be designed must assess verbal creative thinking on the basis of the classic indicators of creativity oriented toward narrative and linguistic aspects, as well as other indicators related to the verbal component.
In this test, the tasks must consider the generation of ideas, the elaboration of creative ideas, and the improvement and evaluation of ideas (OECD, 2020; Matheson et al., 2023), as well as considering problem posing as an important starting point of creativity (Abdulla Alabbasi et al., 2021), in which questions are the backbone of creative thinking (Corbalán et al., 2003). Likewise, it should be taken into account that in a psychometric test to assess verbal creativity, texts must play a fundamental role (López-Martínez et al., 2018), since they can be both the stimulus for a creative activity and the expression of the creative performance in the form of a creative product.
Regarding the universal definition of the construct, the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2018) indicated the need to provide an operational, semantic, and syntactic definition of the construct. The definitions proposed in the present study are as follows. First, operationally, verbal creative thinking can be understood as the observable degree of presence of the following indicators of creativity in its verbal domain: fluency, sensitivity to problems, flexibility, originality, elaboration, opacity, dynamic integration, and refinement of ideas. Second, semantically, verbal creative thinking is characterized by the following facets: problem posing, creative idea generation, and the evaluation and improvement of ideas within the verbal domain of creativity, elicited by different stimuli in text form. Third, syntactically, verbal creative thinking is a construct that shows relationships with reading comprehension, intelligence, personality, and emotions.
Finally, following Muñiz and Fonseca-Pedrero (2019), the characteristics of the tasks that make up the test should be specified, that is, their number, length, content, order, and response format. The tasks correspond to the essay type because, given the nature of the construct of verbal creative thinking, this format allows individuals to express themselves freely and thus show their creative capacity (Muñiz, 2018). It should be noted that evaluating essay tasks is more difficult than evaluating other task types, since the subject is not limited by response alternatives, and measuring verbal creative thinking requires additional effort.
However, rather than items, it is more accurate to speak of stimuli, instructions, indicators of verbal creative thinking, and examples presented in the context of a task or game. In total, there are three tasks with the following order and content: (a) Task 1: problem posing; (b) Task 2: generating creative ideas; (c) Task 3: evaluating and improving ideas.
In addition, the following eight observable indicators of verbal creative thinking are used for evaluation: fluency, flexibility, originality, elaboration, sensitivity to problems, opacity, dynamic integration, and refinement of ideas. The identification of these indicators in each of the tasks can best be carried out by consulting experts on the subject, using the Delphi method as the appropriate methodology.
The Delphi method is based on a communicative process in which a group of experts responds to a research problem through an iterative process (López-Gómez, 2018). The process is anonymous and follows an iterative questionnaire-results-questionnaire cycle that provides feedback to the experts so that they can reach the highest possible consensus on the proposed objective (Plans and León, 2003).
The value of this technique for research on creativity is that it makes it possible to reconcile the different preferences, currents, trends, and lines of work contributed by experts in the field. For this reason, the Delphi method was considered the most appropriate way to achieve the objective of this study: to determine which indicators of verbal creativity could be measured in each of the tasks.
2 Materials and methods
A Delphi method was used to reach a consensus among creativity experts in order to determine the indicators of verbal creative thinking.
2.1 Participants
Participants were selected through non-probabilistic purposive sampling (Jorrín et al., 2021). The experts had to meet the following criteria: availability and interest in participating in this study, at least 5 years of experience in the study of creativity, and research activity accredited by high-impact publications (JCR or SJR).
Based on these criteria, 40 experts were invited to participate in a Delphi panel on indicators of verbal creative thinking. Twenty agreed to participate, but only 16 completed the two phases of the Delphi method, resulting in an attrition rate of 20% (n = 4). Despite this, the ideal number of experts to participate in the Delphi method was met (López-Gómez, 2018). Table 1 shows the profile of the experts who participated in the present study.
This Delphi panel included experts from nine different fields of knowledge, 25% of whom were international. Women made up 62.5% of the panel, and 43.75% of the experts had more than 25 years of experience.
To determine the quality of the expert panel, the Competence (K) index (Cabero and Barroso, 2013) was calculated, which is the sum of the knowledge coefficient (Kc) and the argumentation coefficient (Ka) divided by 2. Both the Kc (M = 0.84; SD = 0.09) and Ka (M = 0.81; SD = 0.18) scores were high, which means that K is high (M = 0.825; SD = 0.1). With an Expert Competence score above 0.8, the Delphi panel is considered to be of high quality for addressing the research problem (Table 2).
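To make the computation explicit, a short sketch of the competence index is given below; the coefficient values shown are illustrative, not those of any individual panelist.

```python
def competence_index(kc: float, ka: float) -> float:
    """Expert competence index K: the mean of the knowledge coefficient (Kc)
    and the argumentation coefficient (Ka) (Cabero and Barroso, 2013)."""
    return (kc + ka) / 2

# Illustrative values close to the panel means reported above
print(competence_index(0.84, 0.81))  # 0.825 -> K > 0.8, high expert competence
```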
2.2 Instruments
To collect information from the participants, we used an ad hoc questionnaire developed by our research group. The questionnaire was divided into four parts. The first part collected socio-demographic data to characterize the Delphi panel. The remaining parts comprised 24 items corresponding to the indicators to be measured in each of the three tasks. These items used a dichotomous scale on which the experts had to indicate whether or not they would measure each indicator in Game 1 (problem posing), Game 2 (generating creative ideas), and Game 3 (improving ideas). The indicators were: fluency, flexibility, originality, elaboration, sensitivity to problems, opacity, dynamic integration, and refinement of ideas.
2.3 Procedure
In the first phase of the Delphi method, the experts received an invitation by e-mail together with the digital version of the questionnaire and were given a set period to complete it. Once all the questionnaires had been returned, the results of this first phase were summarized as percentages and sent, together with the questionnaire, to the experts for the second phase of the Delphi method. In this phase, the experts considered the responses from phase 1 when answering the questionnaire again, with the aim of reaching a minimum consensus of 80% for each of the indicators. The experts could not interact with one another, and their anonymity was guaranteed in both phase 1 and phase 2.
2.4 Data analysis
Data analysis was performed using IBM SPSS Statistics version 28.0.1. Descriptive statistics and frequencies were used to describe the panel of experts. For each indicator, participants responded 1 = Yes or 2 = No. This dichotomous scale was chosen to reduce ambiguity, for its objectivity and clarity, and to allow the experts to focus on the relevance of each indicator. Frequencies were used to determine the percentage agreement among the experts, and it was then necessary to check whether the agreements reached were statistically significant. Fleiss’ kappa was therefore used to determine the coefficient of concordance between the experts’ responses in each of the games and phases of the Delphi method. The results of Fleiss’ kappa were interpreted with the Landis and Koch (1977) scale for the strength of agreement: poor (<0), slight (0–0.2), acceptable (0.21–0.4), moderate (0.41–0.6), substantial (0.61–0.8), and almost perfect (0.81–1). Participants’ responses were analyzed in both rounds, since they could keep or change their responses.
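Although the analyses were run in SPSS, the same percentage-agreement and Fleiss’ kappa computations can be reproduced, as a rough sketch only, with open-source tools. The response matrix below is invented for illustration; the verbal labels follow the Landis and Koch (1977) scale described above.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Invented data: rows = 24 items (8 indicators x 3 games), columns = 16 experts,
# values 1 = "measure this indicator", 2 = "do not measure it".
rng = np.random.default_rng(0)
responses = rng.choice([1, 2], size=(24, 16), p=[0.8, 0.2])

# Percentage of experts answering "Yes" for each item
percent_yes = (responses == 1).mean(axis=1) * 100

# Fleiss' kappa over all items (counts of each response category per item)
table, _ = aggregate_raters(responses)
kappa = fleiss_kappa(table, method="fleiss")

def landis_koch(k: float) -> str:
    """Verbal label for a kappa value (Landis and Koch, 1977)."""
    if k < 0:
        return "poor"
    if k <= 0.20:
        return "slight"
    if k <= 0.40:
        return "acceptable"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "substantial"
    return "almost perfect"

print(np.round(percent_yes, 2))
print(round(kappa, 3), landis_koch(kappa))
```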
3 Results
Table 3 shows the indicators that the experts considered relevant for measuring the construct of verbal creative thinking in each of the three games in phase 1 of the Delphi method. For the indicators of the first game in phase 1, the experts considered that indicators such as fluency (100%), flexibility (81.25%), and sensitivity to problems (93.75%) should be measured, while refinement of ideas (81.25%) should not be measured. The overall agreement achieved by the experts in Game 1 was acceptable and statistically significant (Kappa = 0.35, Z = 10.853, p < 0.01) with a 95% confidence interval (0.287–0.414).
For the second game, the experts determined that originality (100%), elaboration (81.25%), and dynamic integration (87.5%) should be evaluated. The overall agreement among the experts was lower, at a slight level, although statistically significant (Kappa = 0.178, Z = 5.517, p < 0.01) with a 95% confidence interval (0.115–0.241).
In the third game, it was agreed to measure the indicators of originality (87.5%), elaboration (81.25%), dynamic integration (87.5%), and refinement of ideas (100%). Fleiss’ kappa showed slight and statistically significant agreement among the experts in Game 3 (Kappa = 0.114, Z = 3.518, p < 0.01) with a 95% confidence interval (0.05–0.177), the lowest of the three games in this first phase.
Considering all the games simultaneously, that is, all the answers given by the experts in phase 1, an acceptable and statistically significant level of global agreement was obtained (Kappa = 0.232, Z = 12.444, p < 0.01) with a 95% confidence interval (0.195–0.268). Although the agreements identified by Fleiss’ kappa were statistically significant, they ranged only from slight to acceptable, which was not sufficient to conclude the Delphi method. For this reason, a second Delphi round was conducted to achieve a higher level of agreement.
In phase 2 of the Delphi method, the experts were able to consider the results of phase 1 in order to modify or maintain their answers. In this second phase, higher levels of agreement were expressed for each of the games, as shown in Table 4. 100% of the experts indicated that the indicators of fluency, flexibility and sensitivity to problems should be measured in Game 1, just as 100% of the experts indicated that the indicators of opacity and dynamic integration should not be measured. In Game 1, the overall agreement among the experts was almost perfect and statistically significant (Kappa = 0.821, Z = 25.425, p < 0.01) with a 95% confidence interval (0.757–0.884), meaning that it was the highest level of agreement among all the games.
In Game 2, the experts showed 100% agreement in judging that originality, elaboration, and dynamic integration should be measured. Fleiss’ kappa showed substantial and statistically significant agreement among the experts in Game 2 (Kappa = 0.688, Z = 21.311, p < 0.01) with a 95% confidence interval (0.625–0.751).
In addition, 100% inter-rater agreement was achieved in Game 3 for the measurement of originality, elaboration, dynamic integration, and refinement of ideas. The overall agreement reached by the experts in Game 3 was substantial and statistically significant (Kappa = 0.628, Z = 19.451, p < 0.01) with a 95% confidence interval (0.565–0.691); as in the first phase, this was the game with the lowest level of agreement of the three.
Across the whole of phase 2 of the Delphi method, a substantial and statistically significant level of global agreement was obtained (Kappa = 0.723, Z = 38.824, p < 0.01) with a 95% confidence interval (0.687–0.76). The 80% agreement threshold, considered key for deciding whether or not to measure each indicator in the different games, was reached for each of the indicators of verbal creative thinking. Likewise, Fleiss’ kappa showed substantial to almost perfect degrees of agreement in this second phase, a marked increase over the levels of agreement obtained in the first phase. The Delphi method was therefore stopped at this second phase.
4 Discussion
The results obtained show that the experts consulted consider that the main indicator of verbal creative thinking is originality (Forthmann et al., 2021; Pichot et al., 2022), understanding that the original ideas contributed by the subjects in the proposed tasks are distant in meaning from the task stimuli (Beaty and Johnson, 2021; Dumas et al., 2021).
To originality are added elaboration and flexibility, since their measurement is considered in all three games, revealing a clear preference of the experts for the classic indicators of creativity. Fluency, another of the classic indicators on which estimates of individuals’ creative thinking are based (Runco and Acar, 2019; Corbalán, 2022), is also considered, although to a lesser extent, since it is evaluated in only two of the games. In fact, most creativity tests usually measure fluency, flexibility, and originality in their verbal tasks (Torrance, 1974; Corbalán et al., 2003; Artola et al., 2004; López-Martínez et al., 2018). The indicator that is not usually measured in verbal tasks is elaboration, which is usually considered in figurative tasks.
The indicators used to assess verbal creative thinking in the problem-posing task were fluency, flexibility, originality, elaboration, and sensitivity to problems. The indicator added to those already described was sensitivity to problems (Guilford, 1968), which is considered the starting point of creative thinking, since problem posing and problem finding are responsible for the emergence of creativity.
In addition, flexibility, originality, elaboration, opacity, and dynamic integration are the key indicators to be measured in a creative idea generation task in the evaluation of the construct. This type of task is usually carried out in creativity tests, although in this case there is the specificity of including opacity, in which non-literal language is sought, and dynamic integration, which focuses on the unity of the elaborated text (Desrosiers, 1978; López-Martínez, et al., 2018). The incorporation of these indicators implies the transition from general creativity to verbal creative thinking.
In Game 3, concerning the improvement of ideas, the measurement of fluency, flexibility, originality, elaboration, dynamic integration, and refinement of ideas is considered crucial. In addition to the indicators described above, refinement of ideas is a key indicator in a task whose goal is to improve a given product so that it becomes suitable for solving the task at hand and original compared with the products of other individuals (OECD, 2020; Matheson et al., 2023). This task is the last step of verbal creative thinking. It should be noted that tests usually include only idea generation tasks, to the detriment of problem-posing and idea-improvement tasks.
In conclusion, it should be noted that the detection of verbal creative thinking is key to its development in the school context. The tools used to measure the verbal area of creativity are not sufficient because they do not consider all the indicators identified in this study that make it possible to measure this thinking (Guilford, 1968; Desrosiers, 1978; Kasirer and Mashal, 2018; Acar et al., 2019; Beaty and Johnson, 2021; Forthmann et al., 2021). For this reason, this information is valuable for the development of a tool that assesses verbal creative thinking in an appropriate manner, using texts that have key relevance in this verbal domain.
Measuring verbal creative thinking is a step forward in this respect, as creativity is one of the 21st-century competencies that allows society to adapt and solve problems (OECD, 2020; Corazza et al., 2022). Teachers play a fundamental role in its improvement, as they oversee the implementation of educational practices that foster its progress through creative intervention programs and the development of a creative curriculum. They are also key to identifying talent in the areas of knowledge most closely related to language, writing, and reading.
The main limitation of this study was the choice of creativity indicators, which did not allow the experts to consider indicators other than those previously established for measurement in each of the games. However, the selection of indicators was made after a thorough review of the scientific literature. The intention was for the experts to consider both traditional indicators of creativity, which are widely known, and specific indicators of verbal creativity, which have received less attention in the various tests of creativity. In fact, it was the combination of both types of indicators across the different tasks that made it possible to measure the construct of verbal creative thinking.
Once the indicators of verbal creative thinking have been identified in the present study, the main line of future research is to develop a test to estimate verbal creative potential, which includes establishing the content validity of the test and carrying out a pilot study with its intended participants, primary school students. This test will also serve not only to work on aspects of reading comprehension but also to help prevent language development disorders.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Comisión de Ética de Investigación – Universidad de Murcia. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
OL: Writing – original draft, Writing – review & editing. AL: Writing – original draft, Writing – review & editing. MV-Y: Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research was funded by the R+D+i project “Verbal creativity as a domain: design, validation and application of a test based on text comprehension” [“Creatividad verbal como dominio: diseño, validación y aplicación de una prueba fundamentada en la comprensión de textos”] PID2020-113731GB-I00 funded by MICIU/AEI/10.13039/501100011033.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abdulla Alabbasi, A. M., Reiter-Palmon, R., Sultan, Z. M., and Ayoub, A. E. A. (2021). Which divergent thinking index is more associated with problem finding ability? The role of flexibility and task nature. Front. Psychol. 12:671146. doi: 10.3389/fpsyg.2021.671146
Acar, S., Abdulla Alabbasi, A. M., Runco, M. A., and Beketayev, K. (2019). Latency as a predictor of originality in divergent thinking. Think. Skills Creat. 33:100574. doi: 10.1016/j.tsc.2019.100574
Amabile, T. M. (2018). Creativity in context: update to the social psychology of creativity. New York: Routledge.
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2018). Estándares para pruebas educativas y psicológicas. Washington: American Educational Research Association.
Artola, T., Ancillo, I., Mosteiro, P., and Barraca, J. (2004). PIC-N. Prueba de Imaginación Creativa para niños. Madrid: TEA Ediciones.
Baer, J. (2019). “Theory in creativity research: the pernicious impact of domain generality” in Creativity under duress in education? Creativity theory and action in education. ed. C. A. Mullen (New York: Springer), 119–135.
Beaty, R., and Johnson, D. R. (2021). Automating creativity assessment with SemDis: an open platform for computing semantic distance. Behav. Res. Methods 53, 757–780. doi: 10.3758/s13428-020-01453-w
Cabero, J., and Barroso, J. (2013). La utilización del juicio de experto para la evaluación de tic: el coeficiente de competencia experta. Bordón. Rev. Pedag. 65, 25–38. doi: 10.13042/brp.2013.65202
Corazza, G. E., Agnoli, S., and Mastria, S. (2022). The dynamic creativity framework: theoretical and empirical investigations. Eur. Psychol. 27, 191–206. doi: 10.1027/1016-9040/a000473
Corbalán, F. J., Martínez Zaragoza, F., Donolo, D., Alonso Monreal, C., Tejerina, M., and Limiñana, M. R. (2003). CREA. Inteligencia Creativa. Una medida Cognitiva de la Creatividad. Madrid: TEA Ediciones.
Dumas, D., Organisciak, P., and Doherty, M. (2021). Measuring divergent thinking originality with human raters and text-mining models: a psychometric comparison of methods. Psychol. Aesthet. Creat. Arts 15, 645–663. doi: 10.1037/aca0000319
Forthmann, B., Szardenings, C., and Dumas, D. (2021). Testing equal odds in creativity research. Psychol. Aesthet. Creat. Arts 15, 324–339. doi: 10.1037/aca0000294
Guilford, J. P. (1956). The structure of intellect. Psychol. Bull. 53, 267–293. doi: 10.1037/h0040755
Guilford, J. P. (1968). Intelligence, creativity, and their educational implications. San Diego: Robert R. Knapp.
Kasirer, A., and Mashal, N. (2018). Fluency or similarities? Cognitive abilities that contribute to creative metaphor generation. Creat. Res. J. 30, 205–211. doi: 10.1080/10400419.2018.1446747
Kaufman, J. C., Glăveanu, V. P., and Baer, J. (2017). “Creativity across different domains: an expansive approach” in The Cambridge handbook of creativity across domains. eds. J. C. Kaufman, V. P. Glăveanu, and J. Baer (Cambridge: Cambridge University Press), 3–7.
Krieger-Redwood, K., Steward, A., Gao, Z., Wang, X., Halai, A., Smallwood, J., et al. (2023). Creativity in verbal associations is linked to semantic control. Cerebr. Cortex 33, 5135–5147. doi: 10.1093/cercor/bhac405
Landis, J. R., and Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics 33, 159–174. doi: 10.2307/2529310
López-Gómez, E. (2018). El método Delphi en la investigación actual en educación: una revisión teórica y metodológica. Educ. XX1 21, 17–40. doi: 10.5944/educxx1.20169
López-Martínez, O., Cuesta Sáez de Tejada, J. D., and Sandoval Lentisco, C. (2018). Propiedades métricas y estructura dimensional de un instrumento para evaluar la creatividad verbal en alumnos de Educación Primaria. Rev. Investig. Educ. 16, 153–169. Available at: http://webs.uvigo.es/reined/.
Lorca Garrido, A. J., López-Martínez, O., and Vicente-Yagüe, M. I. (2021). Latent inhibition as a biological basis of creative capacity in individuals aged nine to 12. Front. Psychol. 12:650541. doi: 10.3389/fpsyg.2021.650541
Maio, S., Dumas, D., Organisciak, P., and Runco, M. A. (2020). Is the reliability of objective originality scores confounded by elaboration? Creat. Res. J. 32, 201–205. doi: 10.1080/10400419.2020.1818492
Matheson, H. E., Kenett, Y. N., Gerver, C., and Beaty, R. E. (2023). Representing creative thought: a representational similarity analysis of creative idea generation and evaluation. Neuropsychologia 187:108587. doi: 10.1016/j.neuropsychologia.2023.108587
Muñiz, J., and Fonseca-Pedrero, E. (2019). Diez pasos para la construcción de un test. Psicothema 31, 7–16. doi: 10.7334/psicothema2018.291
Pichot, N., Bonetto, E., Pavani, J.-B., Arciszewski, T., Bonnardel, N., and Weisberg, R. (2022). The construct validity of creativity: empirical arguments in favor of novelty as the basis for creativity. Creat. Res. J. 34, 2–13. doi: 10.1080/10400419.2021.1997176
Plans, B., and León, O. G. (2003). ¿Cómo debe ser el Doctorando ideal en Psicología? Contesta el Oráculo de Delfos. Psicothema. 15, 610–614. Available at: https://www.psicothema.com/pdf/1114.pdf.
Portnova, T., Ortega-Martín, J. L., Zurita-Ortega, F., and González-Valero, G. (2020). The educational interrelation of narrative creativity and written expression dimensions as an innovative and didactic process in learning a foreign language. Sustainability 12:7274. doi: 10.3390/su12187274
Romo, M., Alfonso-Benlliure, V., and Sánchez-Ruiz, M. J. (2016). El test de creatividad infantil (TCI): evaluando la creatividad mediante una tarea de encontrar problemas. Psicol. Educ. 22, 93–101. doi: 10.1016/j.pse.2016.01.005
Runco, M. A. (2008). Commentary: divergent thinking is not synonymous with creativity. Psychol. Aesthet. Creat. Arts 2, 93–96. doi: 10.1037/1931-3896.2.2.93
Runco, M. A., and Acar, S. (2019). “Divergent thinking” in The Cambridge handbook of creativity. eds. J. C. Kaufman and R. J. Sternberg (Cambridge: Cambridge University Press), 224–254.
Torrance, E. P. (1974). The Torrance Tests of Creative Thinking: norms-technical manual (research edition). Verbal tests, Forms A and B; Figural tests, Forms A and B. Princeton: Personnel Press.
Violant, V. (2006). “Indicadores clásicos en la evaluación de la creatividad” in Comprender y evaluar la creatividad. eds. S. de la Torre and V. Violant (Málaga: Aljibe), 169–179.
Keywords: verbal creativity, children, fluency, flexibility, originality
Citation: López Martínez O, Lorca Garrido AJ and de Vicente-Yagüe Jara MI (2024) Indicators of verbal creative thinking: results of a Delphi panel. Front. Psychol. 15:1397861. doi: 10.3389/fpsyg.2024.1397861
Edited by: Jesus de la Fuente, University of Navarra, Spain
Reviewed by: Xindong Ye, Wenzhou University, China; Raquel Gilar-Corbi, University of Alicante, Spain
Copyright © 2024 López Martínez, Lorca Garrido and de Vicente-Yagüe Jara. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Olivia López Martínez, olivia@um.es