ORIGINAL RESEARCH article

Front. Educ., 07 December 2021
Sec. Assessment, Testing and Applied Measurement

Social Well-Being at School: Development and Validation of a Scale for Primary Education Students

  • 1Department of Pedagogy, University of Jaume I, Castelló de la Plana, Spain
  • 2Department of Education, University of Jaume I, Castelló de la Plana, Spain

This study presents the development and validation of a scale that measures Primary Education students' social well-being. A seven-factor structure was defined, the factors being: achievement, cooperation, cohesion, coexistence, attitude towards school, attitude towards diversity and solidarity. Fourteen experts from independent European universities participated in the validation process of the scale. The 38-item scale showed considerable reliability (Cronbach's alpha = 0.91). The confirmatory factor analysis confirmed the original seven-factor structure with consistent goodness- and badness-of-fit indices. The promising results of this study suggest that the scale may be suitable for an international audience.

1 Background

Students’ social well-being may be defined as the extent to which they feel a sense of belonging and social inclusion in their academic environment (Pang, 2018). The importance of social well-being has been cited by different authors as one of the key factors in students’ school success (Chen et al., 2018; Wrigley, 2019). They claim that it is a variable that influences students’ physical and psychological health, academic performance and personal development, among others. Kokka (2019) and Alanko and Lund (2019) state that a connected student is a supported student at school. Hence, given its importance, it is essential to develop instruments that quantify and determine the degree of students’ social well-being (Ryff, 2018). Moreover, authors such as Wrigley (2012), Niclasen et al. (2018) and Mowat (2019) state that Primary Education (ages 6–12) is arguably the most important educational stage in terms of students’ social well-being because their experiences at this educational level may affect their development in future stages such as middle school, high school and college.

Several authors have examined the complex and multidimensional structure of students' social well-being across different educational levels and countries. In this vein, a comparison between Turkish and California adolescents by Telef and Furlong (2017) concluded that factors associated with students' well-being may be difficult to determine and may vary across cultures or countries. In this sense, Maor and Mitchem (2018) believe that determining and measuring social well-being among schoolchildren is a difficult and complex task, as many different kinds of factors must be considered. One of the first factors linked to well-being at school that arises in the literature is achievement. Achievement may be defined as a student's competence in a given content area as a result of both intellectual and non-intellectual variables (Centeio et al., 2019). However, some authors state that while the association between social well-being and achievement is important, it is often wrongly ignored (Beard, 2018). That is not the case for other variables such as cooperation, cohesion and coexistence, as many authors in the field mention them as strongly linked to social well-being (Orkibi and Tuaf, 2017). Although several definitions of these variables may be found in the literature, cooperation may be defined as the presence of deliberate relations among autonomous students to jointly accomplish individual goals (Lu and Hallinger, 2018). One definition of coexistence is the way in which students live together with a series of shared rules and environments (Mayorga and Picower, 2018). Likewise, Mikulyuk and Braddock (2018) define cohesion as students' tendency to stick together and remain united as they pursue instrumental objectives and meet their affective needs. Cultural, gender, economic and physical differences among students must also be addressed when analysing their social well-being (Longobardi et al., 2019).
Solidarity and attitudes towards diversity appear to be two of the most important variables (Dell’Anna et al., 2019; Piwoni, 2019). Solidarity can be defined as a concern for peers that implies making sacrifices based on a feeling of unity, while attitude towards diversity may be defined as the positive acceptance of other individuals regardless of their age, physical attributes, gender, culture, race, ethnicity, sexual orientation or economic or social class (Solhaug and Osler, 2018). Finally, institutional settings should also be considered when addressing students’ well-being, with attitude towards school being a relevant factor (Albuquerque et al., 2019; Nikula et al., 2020). It can be defined as a student’s feelings about their attendance at their school from an institutional perspective. Although many other variables linked to students’ social well-being may be found in the literature, the ones mentioned above are the most common.

As stated by Lamb and Land (2013) and Bethell et al. (2017), assessing children's well-being is often very complex because not only school but also family, health and other factors may influence this variable. More recently, research on the effects of COVID-19 on students' well-being has been documented (Cusinato et al., 2020; Morelli et al., 2020), and new directions have been taken in the conceptualisation of well-being and the assessment of the factors that may define it. In this vein, screen exposure, a factor that was not considered important some years ago, has gained capital importance (Bruggeman et al., 2019; Garcia-Hermoso et al., 2020). Hence, the dynamic nature of the concept of well-being makes it arguably one of the most difficult variables to analyse in educational contexts. In this sense, differences among countries and cultures, and even within the same country, must also be considered (Unicef, 2012; Rees and Dinisman, 2015; Migliorini et al., 2019; Robayo-Abril and Millan, 2019).

Taking all of the above into consideration, two main objectives were defined for this study. The first is to develop and validate a scale for Primary Education students that measures their social well-being at school. Researchers, practitioners and other professionals in education and the social sciences are thus provided with a simple tool that may be used to assess students' social well-being at their schools. Through this assessment, specific needs in students' social well-being may be detected, and interventions in teacher training, student behaviour or school policies may be arranged. The instrument is intended to be suitable for an international audience so that comparisons between different contexts and countries are possible; hence, differences in results by gender, school and age will be analysed. The second objective is to determine the possible relationships among the factors in the scale, in order to contribute to a better understanding of the dimensions of students' social well-being in Primary Education. Understanding the relationships between these dimensions may also facilitate professionals' interventions in schools aimed at increasing students' social well-being.

2 Methods

2.1 Participants

On the one hand, 14 volunteer experts from independent European universities participated in the validation process. All of them were full-time professors employed by their universities, and they participated on a voluntary basis with no conflicts of interest or monetary compensation. Nine came from public universities and five from private universities. Their ages ranged from 30 to 62, with an average age of 48.73 years and a standard deviation of 15.3 years. Eight were male and six were female. More information on their selection process can be found in the Design and Procedure section of this manuscript.

On the other hand, 486 Primary Education students from five Spanish schools participated in the phases following the validation. 51.85% were female and 48.15% were male. All of them were enrolled in grades 4 to 6 (ages 10–12), with an average age of 10.81 years and a standard deviation of 1.89 years. Of the total, 287 (59%) were Hispanic, 97 (20%) were Romanian, 83 (17%) were African, 15 (3%) were Asian and the remaining 1% were from other ethnic groups. The socio-economic background of their families was average. Legal consent to perform this research was obtained from both the educational institutions and the students' parents, and the ethical standards provided by the Spanish Ministry of Education were followed.

2.2 Representativity of the Sample

According to the Spanish Statistical Institute, in 2018 there were about 2,000,000 students in grades 4 to 6 (ages 10–12) enrolled in public and private schools in Spain. Using Cochran's formula for large populations with a confidence coefficient of 95% (p < 0.05), at least 386 responses were needed for the sample to be representative of the whole population; hence, the sample obtained was representative of Spanish Primary Education students in grades 4 to 6. The sampling procedure used in this study was stratified random sampling, with the Spanish schools taken as strata. Within each stratum, the sample size and the gender, ethnicity and age distributions were intended to be proportionate to those of the population.
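The sample-size computation above can be sketched as follows. This is an illustrative Python version of Cochran's formula with maximum variability (p = 0.5) and a 5% margin of error, not the authors' exact calculation; small differences from the reported minimum of 386 arise from rounding conventions.

```python
import math

def cochran_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula for estimating a proportion, followed by the
    finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite-population correction
    return math.ceil(n)

# For the roughly 2,000,000 Spanish students in grades 4-6:
print(cochran_sample_size(2_000_000))  # 385
```

For very large populations the correction barely matters, which is why the required sample stays near 385 regardless of whether the population is two million or twenty million.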

2.3 Design and Procedure

Guidelines provided by the International Test Commission (International Test Commission, 2019) were followed regarding the creation, testing, translation and validation of the scale.

2.3.1 Phase 1: Scale Design

Buntins et al. (2017) state that there is no universally accepted method to fully ensure the validation process of a scale. Nevertheless, a series of common guidelines accepted by different educational organizations may be followed (Long, 2017). Hence, in accordance with the SAGE model of social psychological research (Power et al., 2018), the validation process used in this research is described below.

First, a literature review was carried out on the existing scales which addressed the aforementioned variables. As indicated by Walliman (2017), broad knowledge of the area is needed before formulating the scale items. Then, an initial scale was designed following the recommendations of Wolf et al. (2016). Based on this initial design, researchers drafted the scale items. The Delphi method was used in order to bring validity to the scale (Belton et al., 2019). Dillman’s Total Design Method (Axford et al., 1997) was used to develop the questions.

2.3.2 Phase 2: Validation

After the initial design of the instrument, independent experts judged it. As indicated by Steedle and Ferrara (2016), expert judgment allows the appropriateness of the items to be assessed and determines whether they belong to the defined factors or constructs. The scale was developed to be used by an international audience. To this end, the guidelines suggested by Arafat et al. (2016) were followed. A search for experts from different European universities was conducted. Their academic and professional background, their relationship to the field of study and their international experience were considered the selection criteria. As Jaeger (1991) indicates, the selection must be careful and the experts varied. Once the selection was finished, the experts were contacted via email. As Jann and Hinz (2016) state, conducting a survey using means other than email does not ensure greater validity or reliability of results. The guidelines by Whittaker and Worthington (2016) were followed to present the scale and its instructions to the experts. The experts’ contributions were anonymous. According to Roberts and Allen (2015), anonymity gives the study rigor while also decreasing the possibilities that opinions may be affected by others. In order to carry out the validation, experts used Lawshe’s method (Baghestani et al., 2019) to quantify the need to include each scale item. This method means that the experts had to grade each item depending on its suitability for inclusion in the scale. A grade of 1 indicates that the item is not necessary, a grade of 2 indicates that the item is useful but not essential and a grade of 3 indicates that the item is essential. Their answers were collected using Google Forms. Only items showing a significance level of p < 0.05 were included in the final version of the scale. The CVR (content validity ratio) for each item was reported. Furthermore, the inter-expert degree of agreement was analysed. 
As Kane (2016) indicates, addressing the individual validity of each item is not sufficient; a good inter-expert degree of agreement is also needed to ensure validity. Hence, taking into account the type of quantitative answers provided by the experts, Fleiss' Kappa coefficient and the intra-class correlation coefficient were used to measure the degree of agreement between experts. According to Vanbelle (2019), these are the most appropriate measures when analysing ordinal responses. SPSS version 25 was used to calculate these coefficients.
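Lawshe's method described above can be illustrated with a short sketch. It assumes the standard formula CVR = (n_e − N/2)/(N/2), where n_e is the number of experts rating an item "essential" and N is the panel size; this is not the authors' code.

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: ranges from -1 (no expert rates the item
    'essential') to +1 (every expert does)."""
    half = n_experts / 2
    return (n_essential - half) / half

# With the study's 14-expert panel, an item rated "essential" by 10 experts:
print(round(content_validity_ratio(10, 14), 3))  # 0.429
```

Items rated "essential" by too few experts fall below the critical CVR for the panel size and are excluded, which is the decision rule applied in the Results.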

2.3.3 Phase 3: Reliability analysis

The reliability of the scale was tested after the validation process. To this end, Cronbach's alpha and the Spearman-Brown and Guttman split-half coefficients were used to assess the overall reliability of the scale, and Cronbach's alpha if item deleted was also computed (Raykov and Marcoulides, 2019). SPSS version 25 software was used to conduct these analyses.
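For reference, Cronbach's alpha can be computed from a respondents-by-items score matrix as below; this is a minimal standard-library sketch of the coefficient, not the SPSS routine used in the study.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """scores[i][j]: respondent i's answer to item j.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])
    item_vars = [pvariance([row[j] for row in scores]) for j in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Two perfectly consistent items give alpha = 1:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Deleting an item and recomputing alpha on the remaining columns gives the "alpha if item deleted" diagnostic mentioned above.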

2.3.4 Phase 4: Exploratory Factor Analysis

An EFA was conducted after the validation and reliability analyses (Zhang et al., 2019). Following the guidelines given by Watkins (2018), a test for normality of distribution using the Kolmogorov-Smirnov statistic was first carried out. Once normality of the distribution was confirmed, responses from 200 students were used to conduct the EFA. Its feasibility was first analysed using the Kaiser-Meyer-Olkin measure and Bartlett's sphericity test (Beavers et al., 2013). After ensuring the feasibility of carrying out a factor analysis, the Kaiser method with varimax rotation (Osborne, 2015) was used. According to this method, the final version of the scale should have the same number of factors as eigenvalues greater than 1, and confirmatory factor analysis trials should be based on these results. SPSS version 25 software was used to conduct the EFA and the aforementioned preliminary tests.
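The Kaiser retention rule can be illustrated as follows: compute the eigenvalues of the inter-item correlation matrix and count those greater than 1. This is a schematic NumPy sketch of the criterion only, not the SPSS extraction itself.

```python
import numpy as np

def kaiser_retained(corr_matrix):
    """Kaiser criterion: number of eigenvalues of the correlation
    matrix exceeding 1, i.e. components explaining more variance
    than a single standardised item."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(corr_matrix, dtype=float))
    return int(np.sum(eigenvalues > 1.0))

# Two strongly correlated items collapse onto one dominant component:
print(kaiser_retained([[1.0, 0.8], [0.8, 1.0]]))  # 1
```

In the study, this count over the 38 items yielded the seven components reported in Table 2.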

2.3.5 Phase 5: Confirmatory Factor Analysis

A CFA was carried out after considering the results from the EFA (Schreiber et al., 2006). According to Jorgensen et al. (2018), at least 250 subjects are necessary to conduct a CFA; hence, responses from another 286 students were used in this phase. Taking into account the number of eigenvalues greater than 1 shown by the EFA, several trials for different dimensional structures were performed until the goodness-of-fit indices showed suitable values (Pendergast et al., 2017). For this purpose, Bentler's comparative fit index (CFI) and Joreskog-Sorbom's goodness-of-fit index (GFI) were used. Furthermore, chi-squared divided by degrees of freedom (χ2/df) and the root mean square residual (RMR) were used as badness-of-fit indices. To achieve a suitable factor structure, values for CFI and GFI should be 0.9 or greater (the higher the better), RMR should be 0.06 or lower and χ2/df should be lower than 4 (the lower the better). EQS 6.2 software was used to perform the CFA.
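The cut-offs listed above can be bundled into a simple check; this sketches only the decision rule, with the fit statistics themselves assumed to come from the CFA software. The example uses the values reported in the Results for the seven-factor model.

```python
def fit_is_acceptable(cfi, gfi, chi2_df, rmr):
    """Cut-offs used in this study: CFI and GFI >= 0.90 (higher is
    better); chi-squared/df < 4 and RMR <= 0.06 (lower is better)."""
    return cfi >= 0.90 and gfi >= 0.90 and chi2_df < 4 and rmr <= 0.06

# Reported fit of the seven-factor structure:
print(fit_is_acceptable(cfi=0.92, gfi=0.90, chi2_df=2.13, rmr=0.05))  # True
```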

2.3.6 Phase 6: Descriptive and Inferential Statistical Analysis

After the CFA, quantitative descriptive and inferential statistical analyses were carried out. Descriptive results included the calculation of average scores, standard deviations and Cronbach's alpha for each factor and for the scale as a whole. Responses were analysed by gender, age and school, both overall and by factor. Student's t-tests were conducted for the gender and school variables (De Winter, 2013). Moreover, one-way ANOVA tests (Wetzels et al., 2012) were carried out for the age and factor variables. Finally, the Pearson correlation coefficient was used to measure the correlations among factors (De Winter et al., 2016), taking into account the cut-off value suggested by Dancey and Reidy (2007). According to these authors, in educational and psychological studies, Pearson correlation coefficients greater than 0.4 indicate strong correlation. SPSS version 25 software was used to conduct the calculations in this phase.
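The correlation step can be sketched as below: a plain-Python Pearson r with the 0.4 "strong correlation" cut-off from Dancey and Reidy (2007) applied. The factor scores here are hypothetical, and this is not the SPSS computation itself.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cooperation = [3, 4, 4, 5, 2]  # hypothetical per-student factor scores
solidarity = [2, 4, 5, 5, 2]
r = pearson_r(cooperation, solidarity)
print(r > 0.4)  # True: "strong" under the study's cut-off
```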

3 Results

3.1 Phase 1: Scale Design

After performing the literature review and following the procedures and guidelines indicated in the section above, the original version of the scale was designed. That original version had 56 items distributed across seven factors: achievement, cooperation, cohesion, coexistence, attitude towards school, attitude towards diversity and solidarity. Factors were determined after a thorough literature review (see Background section) and taking into account the latest studies in the field that addressed which factors moderate students’ social well-being (Bücker et al., 2018; Storli and Hansen Sandseter, 2019).

3.2 Phase 2: Validation

A total of 14 responses were obtained from 14 different experts. In Table 1, expert responses are summarised, and the number of assessments received for each item is shown. Furthermore, CVR and its significance level for the two-tailed test were also included.

TABLE 1. Expert scores and CVR for each item.

As shown in Table 1, items rated as "essential" by fewer than 10 of the 14 experts were excluded, as their CVR was not significant. The remaining items showed a high significance level and were kept in the scale.

Upon analysing the degree of agreement between the experts, an intra-class correlation coefficient of 0.83 was found. Furthermore, a Fleiss' kappa value of 0.74 was obtained, showing substantial agreement among the experts (Vanbelle, 2019). The end result of this phase was a 38-item scale.
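For reference, Fleiss' kappa for a raters-by-categories table like the experts' ratings can be computed as in the following standard-library sketch; the table values in the example are hypothetical, not the study's data.

```python
def fleiss_kappa(table):
    """table[i][j]: number of raters assigning item i to category j;
    every row must sum to the same number of raters."""
    n_items = len(table)
    n_raters = sum(table[0])
    total = n_items * n_raters
    n_cats = len(table[0])
    # Marginal proportion of ratings falling in each category
    p_j = [sum(row[j] for row in table) / total for j in range(n_cats)]
    # Observed agreement for each item
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in table]
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement on two items by three raters:
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
```

Values around 0.61–0.80 are conventionally read as "substantial" agreement, which is the band the reported 0.74 falls into.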

3.3 Phase 3: Reliability Analysis

For this phase, we relied on a sample of 221 students who responded to the scale. A Cronbach's alpha of 0.91, a Spearman-Brown coefficient of 0.70 and a Guttman split-half coefficient of 0.67 were obtained. According to Vaske et al. (2017), these values can be considered consistent.

Apart from the aforementioned reliability analyses, Cronbach's alpha if item deleted was also calculated for each item. Based on the results, it was observed that eliminating items did not improve the overall reliability of the scale but instead worsened it. Therefore, all 38 items were kept for use in the following phase. The final version of the scale that was used for the EFA and CFA is provided as Supplementary Material.

3.4 Phase 4: Exploratory Factor Analysis

The Kaiser-Meyer-Olkin value (0.76) and Bartlett's sphericity test (1,239.05, p < 0.001) revealed satisfactory results, indicating that it was appropriate to conduct an EFA. The results of the Kaiser method are shown in Table 2 (only components with eigenvalues greater than 1 are shown).

TABLE 2. Global information from the Kaiser method.

3.5 Phase 5: Confirmatory Factor Analysis

A CFA was carried out, taking the results of the EFA into consideration. Analysing the seven eigenvalues in Table 2 reveals that the distance between the highest (2.57) and the other eigenvalues is not considerable. Hence, according to Chatfield (2018), all eigenvalues in this study are equally important and none should be considered "small." Furthermore, as Auerswald and Moshagen (2019) state, considering only eigenvalues greater than one has a statistical basis, ensuring that at least a major part of the variance is explained by these eigenvalues. Hence, a CFA for the original seven-factor structure was conducted. The analysis revealed the following values: CFI = 0.92; GFI = 0.90; χ2/df = 2.13 and RMR = 0.05. As these consistent goodness- and badness-of-fit values belonged to the original seven-factor structure, no further confirmatory factor analyses were performed. Factor loadings, which also seemed consistent across factors and items, are provided in Table 3.

TABLE 3. Factor and item loading for the confirmatory factor analysis.

3.6 Phase 6: Descriptive and Inferential Statistical Analysis

The analysis by gender did not show statistically significant differences in either the overall scores on the scale (t = 1.19, p = 0.24) or the scores by factor. The analysis by grade did not show statistically significant differences in the overall scores on the scale (F = 0.91, p = 0.44) or when analysing the different factors separately. Likewise, no differences were detected among schools (F = 1.42, p = 0.23) using either the overall scores or the factors. The descriptive results are shown in Table 4, with N, X, SD and α representing the number of items, average score, standard deviation and Cronbach's alpha for each factor.

TABLE 4. Overall results by factors.

Table 5 shows the analysis of correlations between factors. In cases in which the correlation is statistically significant (p < 0.01) and the Pearson r-value is greater than 0.4, the box is shaded in grey and there is an asterisk (*) next to the value.

TABLE 5. Analysis of correlations.

4 Discussion

In this study, strong relationships between the factors were detected, as well as a single factor that could not be linked to any other. The high correlation between the factors of cooperation and solidarity is justified, given that they are strongly related from a theoretical standpoint: cooperation is an attitudinal manifestation of solidarity directed towards solving common needs. Cooperation and participation in the classroom foster the development of positive attitudes such as solidarity, as students help each other to learn and share ideas and resources. Thus, the best way to educate in solidarity is to practise it through cooperation (Koomen et al., 2020). The same holds true for the factors of solidarity and coexistence, with research by authors such as Markham (2019) and Valero et al. (2018) showing a strong interaction between the two. Solidarity is fundamental to achieving coexistence; it is an attitude that is necessary not only in emergencies caused by war, hunger or drought, but also when dealing with the problems that affect other people: those closest to us, those in our street, neighbourhood, city or school, and also many others whom we do not know but who need our help and whom we can assist according to our capacity to solve these problems (Migliorini et al., 2019). In the same way, the strong similarities and correlations between the factors of coexistence, cohesion, attitude towards school and academic achievement have previously been documented by authors such as Putwain et al. (2019) and Burman and Miles (2018). One of the cornerstones of peaceful coexistence is group cohesion (Twenge, 2019). For a group to function well, it must be cohesive, and its members must feel part of it and proud to belong to it.
This scenario creates the conditions for students to feel satisfied attending school, to feel integrated into the group and to behave properly thanks to the climate of coexistence that is generated.

By contrast, the fact that attitude towards diversity does not show statistical similarities or correlations with any other factor is due to the uniqueness of this factor. Recent studies carried out by Juvonen, Kogachi and Graham (2018) and Azorín and Ainscow (2020) indicate that it is difficult to relate attitude towards diversity to other attitudinal, emotional or academic factors. Like any attitude, attitude towards diversity is a concept based on three elements (Heintzelman and Diener, 2019): feelings or affects, what we like or dislike in terms of the feeling it provokes in us; beliefs and knowledge, the belief that something is good or bad according to the degree of knowledge, opinion or beliefs we have about it; and behaviours or actions, the idea that something is favourable or unfavourable for us based on the behaviour we manifest when confronted with it. Attitude is not behaviour itself; it is a precursor (Tilleczek et al., 2019), and given its enormous complexity, it is an entity unto itself that, as mentioned above, is difficult to relate to the other factors analysed here.

The fact that no differences were detected by gender, age or schools, as well as the participation of independent international experts, are promising results for this scale in that it is meant to be used by an international audience and different cultures (Volk et al., 2018; Wu et al., 2019).

4.1 Practical and Research Implications

The main practical implication of the scale developed in this research is the possibility of assessing Primary Education students' social well-being. Moreover, it gives researchers the chance to assess factors such as cohesion, coexistence or attitude towards school that have not been so broadly studied in the literature. Indeed, it is quite difficult to find validated scales that assess one or several of these variables, so the potential of this scale in this respect must also be considered. From a research perspective, researchers in the field may use it as a basis for creating and validating their own instruments, questionnaires or scales, adapting it to their specific contexts, that is, to the cultural, economic, academic, religious or contextual factors in which they want to carry out their research. In this regard, it can also serve as a guide for researchers working at other educational levels, such as middle school, high school or college, since many of the factors defined in this scale should also be considered at these levels when analysing students' well-being.

4.2 Limitations

Although the results and conclusions of this study are very promising, a number of limitations must be taken into account when analysing them, which necessitate a cautious interpretation. First, the number of experts involved in the validation, though not negligible, is far from ideal (Baghestani et al., 2019), which could compromise the universal validity of the scale (Sahin, 2017). The fact that all of them came from European countries must also be taken into account. Similarly, although the degree of agreement among the experts is adequate, it cannot be considered ideal (Penfield and Miller, 2004). Likewise, the reliability of the scale, though not negligible, would be considered questionable by various authors in the field of education (Vaske et al., 2017; Lisawadi et al., 2018). The goodness- and badness-of-fit indices should also be considered: although some authors (Reise et al., 1993) consider them suitable, they may seem insufficient to some of the more demanding authors in this field (Heene et al., 2011). The sample size and its limited representativeness must also be considered (Krejcie and Morgan, 1970).

Regarding future directions, the scale has not yet been tested with 1st, 2nd and 3rd grade students. Administering scales to students under the age of 9 poses many technical difficulties in terms of both their reading comprehension levels and their limited cognitive development (Simonds et al., 2007), and it is not clear that the scale is suitable for students in these grades. In fact, the 4th grade students already experienced certain difficulties and required more help and time than those in the higher grades. A study with students of these ages, or an adaptation of the scale for them, would be an interesting line of further research. Finally, although this study aspires to produce a scale that can be used by an international audience, it must be borne in mind that it has only been tested with a representative sample in Spain. Its use in different cultures and countries will determine whether it is indeed a valid scale for universal use (Wrigley, 2018; Sin et al., 2019). In this sense, studies using this scale in different countries and contexts may be considered worthwhile future research.

5 Conclusion

The results from the scale show strong relationships between different factors. Thus, it can be concluded that the factors of cooperation and solidarity, as well as the variables of solidarity and coexistence, are directly related. In the same way, these conclusions can be extended to the variables of cohesion, attitude towards school, coexistence and academic achievement, which are strongly related. By contrast, attitude towards diversity is unique in that it is not directly related to and does not depend on any of the aforementioned factors.

This study presents a validated scale that allows Primary Education students’ social well-being at school to be measured. Its considerable reliability and its confirmed factor structure also allow it to quantify the factors of cooperation, cohesion, coexistence, attitude towards school, attitude towards diversity, solidarity and academic achievement. It was designed and validated so it can be used by an international audience, and the promising results from this research suggest that it could be administered regardless of the students’ ages or gender or the school that they attend. It is an instrument that provides a quick diagnosis of students’ social well-being with a low number of items and a survey time of between 20 and 25 min. Based on the results, educational actions can be proposed by researchers or practitioners in the field to try to address those factors in which students show lower results.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials, further inquiries can be directed to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by the Conselleria de Educación de la Comunidad Valenciana. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author Contributions

LM: Literature review and discussion; FA: Methodology and statistical analyses; AC-M and OC: Discussion and Conclusion.

Funding

This research has been funded by the Ministry of Science and Innovation of Spain and by the State Agency of Research (AEI) through the project entitled ¡Mueve la Música! Análisis de un programa escolar sociocomunitario de Aprendizaje-Servicio a partir de la práctica musical y la expresión corporal (MOVEMUS). PID2020-116198GB-I00.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2021.800248/full#supplementary-material

References

Alanko, K., and Lund, H. (2019). Transgender Youth and Social Support: A Survey Study on the Effects of Good Relationships on Well-Being and Mental Health. Young 28, 199–216. doi:10.1177/1103308819850039

Arafat, S., Chowdhury, H., Qusar, M., and Hafez, M. (2016). Cross Cultural Adaptation and Psychometric Validation of Research Instruments: a Methodological Review. J. Behav. Health 5 (3), 129–136. doi:10.5455/jbh.20160615121755

Auerswald, M., and Moshagen, M. (2019). How to Determine the Number of Factors to Retain in Exploratory Factor Analysis: A Comparison of Extraction Methods under Realistic Conditions. Psychol. Methods 24 (4), 468–491. doi:10.1037/met0000200

Axford, R., Carter, B., and Grunwald, G. (1997). Enhancing Dillman's Total Design Method for Mailed/Telephone Surveys Using Current Technology to Maximise Cost-Benefit Ratios. Aust. New Zealand J. Sociol. 33 (3), 387–393. doi:10.1177/144078339703300307

Azorín, C., and Ainscow, M. (2020). Guiding Schools on Their Journey towards Inclusion. Int. J. Inclusive Edu. 24 (1), 58–76. doi:10.1080/13603116.2018.1450900

Baghestani, A. R., Ahmadi, F., Tanha, A., and Meshkat, M. (2019). Bayesian Critical Values for Lawshe's Content Validity Ratio. Meas. Eval. Couns. Dev. 52 (1), 69–73. doi:10.1080/07481756.2017.1308227

Beard, K. S. (2018). Standing in the gap: Theory and Practice Impacting Educational Opportunity and Achievement Gaps. Urban Edu. 53 (5), 668–696. doi:10.1177/0042085915613553

Beavers, A. S., Lounsbury, J. W., Richards, J. K., Huck, S. W., Skolits, G. J., and Esquivel, S. L. (2013). Practical Considerations for Using Exploratory Factor Analysis in Educational Research. Pract. Assess. Res. Eval. 18 (6), 1–13. doi:10.7275/qv2q-rk76

Belton, I., MacDonald, A., Wright, G., and Hamlin, I. (2019). Improving the Practical Application of the Delphi Method in Group-Based Judgment: a Six-step Prescription for a Well-Founded and Defensible Process. Technol. Forecast. Soc. Change 147 (1), 72–82. doi:10.1016/j.techfore.2019.07.002

Bethell, C. D., Carle, A., Hudziak, J., Gombojav, N., Powers, K., Wade, R., et al. (2017). Methods to Assess Adverse Childhood Experiences of Children and Families: toward Approaches to Promote Child Well-Being in Policy and Practice. Acad. Pediatr. 17 (7), S51–S69. doi:10.1016/j.acap.2017.04.161

Bruggeman, H., Van Hiel, A., Van Hal, G., and Van Dongen, S. (2019). Does the Use of Digital media Affect Psychological Well-Being? an Empirical Test Among Children Aged 9 to 12. Comput. Hum. Behav. 101, 104–113. doi:10.1016/j.chb.2019.07.015

Bücker, S., Nuraydin, S., Simonsmeier, B. A., Schneider, M., and Luhmann, M. (2018). Subjective Well-Being and Academic Achievement: A Meta-Analysis. J. Res. Personal. 74, 83–94. doi:10.1016/j.jrp.2018.02.007

Buntins, M., Buntins, K., and Eggert, F. (2017). Clarifying the Concept of Validity: From Measurement to Everyday Language. Theor. Psychol. 27 (5), 703–710. doi:10.1177/0959354317702256

Burman, E., and Miles, S. (2018). Deconstructing Supplementary Education: from the Pedagogy of the Supplement to the Unsettling of the Mainstream. Educ. Rev. 72, 3–22. doi:10.1080/00131911.2018.1480475

Centeio, E. E., Somers, C. L., Moore, E. W. G., Garn, A., Kulik, N., Martin, J., et al. (2019). Considering Physical Well-Being, Self-Perceptions, and Support Variables in Understanding Youth Academic Achievement. The J. Early Adolescence 40, 134–157. doi:10.1177/0272431619833493

Chatfield, C. (2018). Introduction to Multivariate Analysis. New York, NY: Routledge.

Chen, X., Fan, X., Cheung, H. Y., and Wu, J. (2018). The Subjective Well-Being of Academically Gifted Students in the Chinese Cultural Context. Sch. Psychol. Int. 39 (3), 291–311. doi:10.1177/0143034318773788

Cusinato, M., Iannattone, S., Spoto, A., Poli, M., Moretti, C., Gatta, M., et al. (2020). Stress, Resilience, and Well-Being in Italian Children and Their Parents during the COVID-19 Pandemic. Int. J. Environ. Res. Public Health 17 (22), 8297. doi:10.3390/ijerph17228297

Wolf, C., Joye, D., Smith, T. W., and Fu, Y. C. (Editors) (2016). The SAGE Handbook of Survey Methodology. New York, NY: Sage.

Dancey, C. P., and Reidy, J. (2007). Statistics without Maths for Psychology. London, UK: Pearson Education.

De Winter, J. C. (2013). Using the Student's T-Test with Extremely Small Sample Sizes. Pract. Assess. Res. Eval. 18 (10), 1–12. doi:10.7275/e4r6-dj05

De Winter, J. C., Gosling, S. D., and Potter, J. (2016). Comparing the Pearson and Spearman Correlation Coefficients across Distributions and Sample Sizes: A Tutorial Using Simulations and Empirical Data. Psychol. Methods 21 (3), 273–290. doi:10.1037/met0000079

Dell’Anna, S., Pellegrini, M., and Ianes, D. (2019). Experiences and Learning Outcomes of Students without Special Educational Needs in Inclusive Settings: a Systematic Review. Int. J. Inclusive Edu. 25 (1), 944–959. doi:10.1080/13603116.2019.1592248

García-Hermoso, A., Hormazábal-Aguayo, I., Fernández-Vergara, O., Olivares, P. R., and Oriol-Granado, X. (2020). Physical Activity, Screen Time and Subjective Well-Being Among Children. Int. J. Clin. Health Psychol. 20 (2), 126–134. doi:10.1016/j.ijchp.2020.03.001

Heene, M., Hilbert, S., Draxler, C., Ziegler, M., and Bühner, M. (2011). Masking Misfit in Confirmatory Factor Analysis by Increasing Unique Variances: A Cautionary Note on the Usefulness of Cutoff Values of Fit Indices. Psychol. Methods 16 (3), 319–336. doi:10.1037/a0024917

Heintzelman, S. J., and Diener, E. (2019). Subjective Well-Being, Social Interpretation, and Relationship Thriving. J. Res. Personal. 78, 93–105. doi:10.1016/j.jrp.2018.11.007

International Test Commission (ITC) (2019). ITC Guidelines for the Large-Scale Assessment of Linguistically and Culturally Diverse Populations. Int. J. Test. 19 (4), 301–336. doi:10.1080/15305058.2019.1631024

Jaeger, R. M. (1991). Selection of Judges for Standard-Setting. Educ. Measure: Issues Pract. 10 (2), 3–14. doi:10.1111/j.1745-3992.1991.tb00185.x

Jann, B., and Hinz, T. (2016). The SAGE Handbook of Survey Methodology. NY, USA: Sage.

Jorgensen, T. D., Kite, B. A., Chen, P. Y., and Short, S. D. (2018). Permutation Randomization Methods for Testing Measurement Equivalence and Detecting Differential Item Functioning in Multiple-Group Confirmatory Factor Analysis. Psychol. Methods 23 (4), 708–728. doi:10.1037/met0000152

Juvonen, J., Kogachi, K., and Graham, S. (2018). When and How Do Students Benefit from Ethnic Diversity in Middle School? Child. Dev. 89 (4), 1268–1282. doi:10.1111/cdev.12834

Kane, M. T. (2016). Explicating Validity. Assess. Educ. Principles, Pol. Pract. 23 (2), 198–211. doi:10.1080/0969594X.2015.1060192

Kokka, K. (2019). Healing-Informed Social Justice Mathematics: Promoting Students' Sociopolitical Consciousness and Well-Being in Mathematics Class. Urban Edu. 54 (9), 1179–1209. doi:10.1177/0042085918806947

Koomen, R., Grueneisen, S., and Herrmann, E. (2020). Children Delay Gratification for Cooperative Ends. Psychol. Sci. 31, 139–148. doi:10.1177/0956797619894205

Krejcie, R. V., and Morgan, D. W. (1970). Determining Sample Size for Research Activities. Educ. Psychol. Meas. 30 (3), 607–610. doi:10.1177/001316447003000308

Lamb, V. L., and Land, K. C. (2013). “Methodologies Used in the Construction of Composite Child Well-Being Indices,” in The Handbook of Child Well-Being—Theories, Methods and Policies in Global Perspective (Dordrecht: Springer), 2739–2755.

Lisawadi, S., Ahmed, S. E., Reangsephet, O., and Shah, M. K. A. (2018). Simultaneous Estimation of Cronbach's Alpha Coefficients. Commun. Stat. - Theor. Methods 48 (13), 3236–3257. doi:10.1080/03610926.2018.1473882

Long, H. (2017). Validity in Mixed Methods Research in Education: the Application of Habermas' Critical Theory. Int. J. Res. Method Edu. 40 (2), 201–213. doi:10.1080/1743727X.2015.1088518

Longobardi, C., Prino, L. E., Fabris, M. A., and Settanni, M. (2019). Violence in School: An Investigation of Physical, Psychological, and Sexual Victimization Reported by Italian Adolescents. J. Sch. violence 18 (1), 49–61. doi:10.1080/15388220.2017.1387128

Lu, J., and Hallinger, P. (2018). A Mirroring Process: From School Management Team Cooperation to Teacher Collaboration. Leadersh. Pol. Schools 17 (2), 238–263. doi:10.1080/15700763.2016.1278242

Maor, D., and Mitchem, K. (2018). Hospitalized Adolescents' Use of Mobile Technologies for Learning, Communication, and Well-Being. J. Adolesc. Res. 35, 225–247. doi:10.1177/0743558417753953

Markham, T. (2019). Affective Solidarity and Mediated Distant Suffering: In Defence of Mere Feltness. Int. J. Cult. Stud. 22 (4), 467–480. doi:10.1177/1367877918810616

Mayorga, E., and Picower, B. (2018). Active Solidarity: Centering the Demands and Vision of the Black Lives Matter Movement in Teacher Education. Urban Edu. 53 (2), 212–230. doi:10.1177/0042085917747117

Migliorini, L., Tassara, T., and Rania, N. (2019). A Study of Subjective Well-Being and Life Satisfaction in Italy: How Are Children Doing at 8 Years of Age? Child. Ind. Res. 12 (1), 49–69. doi:10.1007/s12187-017-9514-3

Mikulyuk, A. B., and Braddock, J. H. (2018). K-12 School Diversity and Social Cohesion: Evidence in Support of a Compelling State Interest. Edu. Urban Soc. 50 (1), 5–37. doi:10.1177/0013124516678045

Morelli, M., Cattelino, E., Baiocco, R., Trumello, C., Babore, A., Candelori, C., et al. (2020). Parents and Children during the COVID-19 Lockdown: The Influence of Parenting Distress and Parenting Self-Efficacy on Children's Emotional Well-Being. Front. Psychol. 11, 584645. doi:10.3389/fpsyg.2020.584645

Mowat, J. G. (2019). Supporting the Socio-Emotional Aspects of the Primary-Secondary Transition for Pupils with Social, Emotional and Behavioural Needs: Affordances and Constraints. Improving Schools 22 (1), 4–28. doi:10.1177/1542305018817850

Niclasen, J., Keilow, M., and Obel, C. (2018). Psychometric Properties of the Danish Student Well-Being Questionnaire Assessed in >250,000 Student Responders. Scand. J. Public Health 46 (8), 877–885. doi:10.1177/1403494818772645

Nikula, E., Järvinen, T., and Laiho, A. (2020). The Contradictory Role of Technology in Finnish Young People's Images of Future Schools. Young 28, 465–484. doi:10.1177/1103308819894806

Orkibi, H., and Tuaf, H. (2017). School Engagement Mediates Well-Being Differences in Students Attending Specialized versus Regular Classes. J. Educ. Res. 110 (6), 675–682. doi:10.1080/00220671.2016.1175408

Osborne, J. W. (2015). What Is Rotating in Exploratory Factor Analysis. Pract. Assess. Res. Eval. 20 (2), 1–7. doi:10.7275/hb2g-m060

Albuquerque, C. P., Pinto, I. G., and Ferrari, L. (2019). Attitudes of Parents of Typically Developing Children towards School Inclusion: the Role of Personality Variables and Positive Descriptions. Eur. J. Spec. Needs Edu. 34 (3), 369–382. doi:10.1080/08856257.2018.1520496

Pang, H. (2018). WeChat Use Is Significantly Correlated with College Students' Quality of Friendships but Not with Perceived Well-Being. Heliyon 4 (11), e00967. doi:10.1016/j.heliyon.2018.e00967

Pendergast, L. L., von der Embse, N., Kilgus, S. P., and Eklund, K. R. (2017). Measurement Equivalence: A Non-technical Primer on Categorical Multi-Group Confirmatory Factor Analysis in School Psychology. J. Sch. Psychol. 60, 65–82. doi:10.1016/j.jsp.2016.11.002

Penfield, R. D., and Miller, J. M. (2004). Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings. Appl. Meas. Edu. 17 (4), 359–370. doi:10.1207/s15324818ame1704_2

Piwoni, E. (2019). Giving Back to the World, the Nation and the Family: Cosmopolitan Meaning-Making and Notions of Solidarity Among Young Elite Students. Young 27 (5), 520–536. doi:10.1177/1103308818817633

Power, S. A., Velez, G., Qadafi, A., and Tennant, J. (2018). The SAGE Model of Social Psychological Research. Perspect. Psychol. Sci. 13 (3), 359–372. doi:10.1177/1745691617734863

Putwain, D. W., Gallard, D., and Beaumont, J. (2019). A Multi-Component Wellbeing Programme for Upper Secondary Students: Effects on Wellbeing, Buoyancy, and Adaptability. Sch. Psychol. Int. 40 (1), 49–65. doi:10.1177/0143034318806546

Raykov, T., and Marcoulides, G. A. (2019). Thanks Coefficient Alpha, We Still Need You! Educ. Psychol. Meas. 79 (1), 200–210. doi:10.1177/0013164417725127

Rees, G., and Dinisman, T. (2015). Comparing Children's Experiences and Evaluations of Their Lives in 11 Different Countries. Child. Ind. Res. 8 (1), 5–31. doi:10.1007/s12187-014-9291-1

Reise, S. P., Widaman, K. F., and Pugh, R. H. (1993). Confirmatory Factor Analysis and Item Response Theory: Two Approaches for Exploring Measurement Invariance. Psychol. Bull. 114 (3), 552–566. doi:10.1037/0033-2909.114.3.552

Robayo-Abril, M., and Millan, N. (2019). Breaking the Cycle of Roma Exclusion in the Western Balkans (No. 31393). Washington, D.C., USA: The World Bank.

Roberts, L. D., and Allen, P. J. (2015). Exploring Ethical Issues Associated with Using Online Surveys in Educational Research. Educ. Res. Eval. 21 (2), 95–108. doi:10.1080/13803611.2015.1024421

Ryff, C. D. (2018). Well-being with Soul: Science in Pursuit of Human Potential. Perspect. Psychol. Sci. 13 (2), 242–248. doi:10.1177/1745691617699836

Şahin, M. G. (2017). Comparison of Objective and Subjective Methods on Determination of Differential Item Functioning. Univers. J. Educ. Res. 5 (9), 1435–1446. doi:10.13189/ujer.2017.050901

Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., and King, J. (2006). Reporting Structural Equation Modeling and Confirmatory Factor Analysis Results: A Review. J. Educ. Res. 99 (6), 323–338. doi:10.3200/JOER.99.6.323-338

Simonds, J., Kieras, J. E., Rueda, M. R., and Rothbart, M. K. (2007). Effortful Control, Executive Attention, and Emotional Regulation in 7–10-Year-Old Children. Cogn. Dev. 22 (4), 474–488. doi:10.1016/j.cogdev.2007.08.009

Sin, I. L., Leung, M. W. H., and Waters, J. L. (2019). Degrees of Value: Comparing the Contextual Complexities of UK Transnational Education in Malaysia and Hong Kong. Compare: A J. Comp. Int. Edu. 49 (1), 132–148. doi:10.1080/03057925.2017.1390663

Solhaug, T., and Osler, A. (2018). Intercultural Empathy Among Norwegian Students: an Inclusive Citizenship Perspective. Int. J. Inclusive Edu. 22 (1), 89–110. doi:10.1080/13603116.2017.1357768

Steedle, J. T., and Ferrara, S. (2016). Evaluating Comparative Judgment as an Approach to Essay Scoring. Appl. Meas. Edu. 29 (3), 211–223. doi:10.1080/08957347.2016.1171769

Storli, R., and Hansen Sandseter, E. B. (2019). Children's Play, Well-Being and Involvement: How Children Play Indoors and Outdoors in Norwegian Early Childhood Education and Care Institutions. Int. J. Play 8 (1), 65–78. doi:10.1080/21594937.2019.1580338

Telef, B. B., and Furlong, M. J. (2017). Social and Emotional Psychological Factors Associated with Subjective Well-Being: A Comparison of Turkish and California Adolescents. Cross-Cultural Res. 51 (5), 491–520. doi:10.1177/1069397117694815

Tilleczek, K. C., Bell, B. L., and Munro, M. (2019). “Youth Well-Being and Digital media,” in Youth in the Digital Age (Oxfordshire, UK: Routledge), 39–59. doi:10.4324/9780429464751-3

Twenge, J. M. (2019). More Time on Technology, Less Happiness? Associations between Digital-media Use and Psychological Well-Being. Curr. Dir. Psychol. Sci. 28 (4), 372–379. doi:10.1177/0963721419838244

UNICEF (2012). The Structural Determinants of Child Well-Being. An Expert Consultation Hosted by the UNICEF Office of Research. Florence: Innocenti Publications, 22–23.

Valero, D., Redondo-Sama, G., and Elboj, C. (2018). Interactive Groups for Immigrant Students: a Factor for success in the Path of Immigrant Students. Int. J. Inclusive Edu. 22 (7), 787–802. doi:10.1080/13603116.2017.1408712

Vanbelle, S. (2019). Asymptotic Variability of (Multilevel) Multirater Kappa Coefficients. Stat. Methods Med. Res. 28 (10-11), 3012–3026. doi:10.1177/0962280218794733

Vaske, J. J., Beaman, J., and Sponarski, C. C. (2017). Rethinking Internal Consistency in Cronbach's Alpha. Leis. Sci. 39 (2), 163–173. doi:10.1080/01490400.2015.1127189

Volk, A. A., Schiralli, K., Xia, X., Zhao, J., and Dane, A. V. (2018). Adolescent Bullying and Personality: A Cross-Cultural Approach. Personal. Individual Differences 125, 126–132. doi:10.1016/j.paid.2018.01.012

Walliman, N. (2017). Research Methods: The Basics. London, UK: Routledge. doi:10.4324/9781315529011

Watkins, M. W. (2018). Exploratory Factor Analysis: A Guide to Best Practice. J. Black Psychol. 44 (3), 219–246. doi:10.1177/0095798418771807

Wetzels, R., Grasman, R. P. P. P., and Wagenmakers, E.-J. (2012). A Default Bayesian Hypothesis Test for ANOVA Designs. The Am. Statistician 66 (2), 104–111. doi:10.1080/00031305.2012.695956

Whittaker, T. A., and Worthington, R. L. (2016). Item Response Theory in Scale Development Research: A Critical Analysis. Couns. Psychol. 44 (2), 216–225. doi:10.1177/0011000015626273

Wrigley, T. (2012). School Policy in England and the USA: a Review Essay. Crit. Stud. Edu. 53 (1), 109–117. doi:10.1080/17508487.2012.635667

Wrigley, T. (2018). The Power of 'evidence': Reliable Science or a Set of blunt Tools? Br. Educ. Res. J. 44 (3), 359–376. doi:10.1002/berj.3338

Wrigley, T. (2019). Student Well-Being as a Key Strand of School Development. Improving Schools 22 (1), 3. doi:10.1177/1365480219829537

Wu, Z., Hu, B. Y., and Fan, X. (2019). Cross-cultural Validity of Preschool Learning Behavior Scale in Chinese Cultural Context. J. Psychoeducational Assess. 37 (1), 125–130. doi:10.1177/0734282916651538

Zhang, G., Preacher, K. J., Hattori, M., Jiang, G., and Trichtinger, L. A. (2019). A Sandwich Standard Error Estimator for Exploratory Factor Analysis with Nonnormal Data and Imperfect Models. Appl. Psychol. Meas. 43 (5), 360–373. doi:10.1177/0146621618798669

Keywords: social well-being, scale, primary education, students, validation

Citation: Moliner L, Alegre F, Cabedo-Mas A and Chiva-Bartoll O (2021) Social Well-Being at School: Development and Validation of a Scale for Primary Education Students. Front. Educ. 6:800248. doi: 10.3389/feduc.2021.800248

Received: 22 October 2021; Accepted: 18 November 2021;
Published: 07 December 2021.

Edited by:

Robbert Smit, St. Gallen University of Teacher Education, Switzerland

Reviewed by:

Nadia Rania, University of Genoa, Italy
Pamela Vaccari, University of Concepcion, Chile

Copyright © 2021 Moliner, Alegre, Cabedo-Mas and Chiva-Bartoll. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lidon Moliner, mmoliner@uji.es
