ORIGINAL RESEARCH article

Front. Sustain., 05 June 2023
Sec. Sustainable Organizations

Exploring sustainability literacy: developing and assessing a bottom-up measure of what students know about sustainability

Colin Kuehl1*, Aaron C. Sparks2, Heather Hodges3 and Eric R. A. N. Smith4
  • 1Department of Political Science and Institute for the Study of Environment, Sustainability and Energy, Northern Illinois University, DeKalb, IL, United States
  • 2Political Science and Policy Studies, Elon University, Elon, NC, United States
  • 3Bren School of Environmental Science and Management, University of California, Santa Barbara, Santa Barbara, CA, United States
  • 4Department of Political Science, University of California, Santa Barbara, Santa Barbara, CA, United States

With many organizations, particularly higher education institutions, placing a priority on sustainability education, it is important to have a measure of sustainability knowledge to assess growth over time. There have been several attempts, using differing approaches, to develop a valid assessment tool. However, given wide-ranging conceptual definitions of sustainability and diverse instructional techniques, we are skeptical that sustainability is a concept that can adequately be measured. The existing measures were developed using a top-down approach to question inclusion in the questionnaire. As an alternative, in this paper we develop a new measure using a bottom-up approach. In Study 1, we test the 46-item instrument with a large student sample from the University of California, Santa Barbara. In Study 2, we test a shortened 10-item instrument in a different student population, at Northern Illinois University. Across both studies, we find little evidence for a coherent structure to sustainability knowledge. Yet the 10-item measure correlates highly with the longer version and may be suitable for other research applications.

1. Introduction

The assumption that a sustainable future requires a populace with knowledge of sustainability has led organizations across a wide variety of domains to focus efforts on educating individuals to increase their understanding of sustainability. In recent years, international organizations, cities, private businesses, and, in particular, higher education institutions have continued to emphasize the importance of knowledge of sustainability-related topics. The priority placed on sustainability education has been accompanied by a renewed interest in building a reliable measure of sustainability knowledge and literacy.

The demand for measures of sustainability knowledge has been driven by significant increases in sustainability education at every level of the educational spectrum (Haigh, 2007; Arnaud et al., 2009; Lad and Akerlof, 2022). In the United States, for example, the Association for the Advancement of Sustainability in Higher Education (AASHE) Sustainability Tracking, Assessment and Rating System (STARS) was released in 2010 and had over a thousand participating institutions by 2022. In fact, rating systems that rank the “greenest” colleges and universities in the United States reward institutions that offer more courses addressing sustainability issues or that require students to take at least one sustainability course to satisfy a general education requirement (Bullock and Wilder, 2016; Findler et al., 2018; GreenMetric, 2022). The AASHE STARS rankings even explicitly reward institutions that administer a campus sustainability survey; however, other rankings do not measure what their students actually know about sustainability.1 Issues of sustainability are embedded across higher education institutions' systems, from their research and outreach to the impacts of campus operations (Lozano, 2006; Lozano et al., 2015). Intuitively, educating students is central to university sustainability efforts, as it is understood that the students of today will soon be the leaders making critical sustainability decisions tomorrow (Cortese, 2003).

Internationally, the increased emphasis placed on sustainability knowledge is reflected in the 2030 Sustainable Development Goals (SDGs). The SDGs are an internationally negotiated list of 17 goals and subsequent targets to “transform our world” (United Nations General Assembly, 2015). Included among these is target 4.7, which reads: “by 2030 ensure all learners acquire knowledge and skills needed to promote sustainable development, including among others through education for sustainable development and sustainable lifestyles”. In conjunction, a point of emphasis across all the SDGs is the importance of developing indicators for each of the targets. These educational initiatives lead to an obvious question: what do people know about sustainability? Thus, sustainability researchers have sought to develop valid and reliable instruments to measure the sustainability knowledge of individuals.

Recognition of the importance of a valid assessment tool has led to the development of a growing handful of measures of sustainability knowledge and literacy across domains, but particularly among higher education institutions (Horvath et al., 2013; Zwickle et al., 2014; Décamps et al., 2017; Zwickle and Jones, 2018; Leiva-Brondo et al., 2022). These measures all attempt to capture respondents' understanding of sustainability. Yet, in doing so, broader questions about the concept of sustainability, including how cohesive the concept is when broadly construed and which aspects under the umbrella of sustainability are most important, have come to the fore (Kuehl et al., 2021). Accordingly, differing conceptions of sustainability have led to a variety of approaches to building an appropriate measure.

Due to a lack of conceptual clarity about what sustainability is and is not, skepticism over the validity of any measure of sustainability knowledge is warranted. As discussed in detail below, the authors of two common instruments, the Sulitest and the Assessment of Sustainability Knowledge (ASK), employed a top-down method to construct their questionnaires (Zwickle et al., 2014; Décamps et al., 2017). To test an alternative method of instrument development, in this paper we develop a new measure of college students' knowledge of sustainability using a bottom-up approach. We then subject our measure to a number of statistical tests. In particular, exploratory factor analysis is used to identify otherwise unobservable underlying dimensions of sustainability knowledge based on student responses. Both the full measure and a shorter 10-item scale include questions on a range of topics related to sustainability and show some evidence of face validity. However, the results add to questions about the underlying structure of sustainability knowledge as a cohesive concept and about the presence of underlying dimensions that are often assumed. They also add to discussions about the assumptions behind different techniques used to construct measures of complex concepts like sustainability. We offer this new questionnaire for researchers to test further alongside other existing and new measures, as well as to further discussions on the most appropriate approach to the measurement of sustainability.

The paper proceeds as follows. First, we provide background on sustainability knowledge and existing attempts to measure it. Our analysis is divided into two distinct studies, one with a sample from the University of California, Santa Barbara, and a second with a sample from Northern Illinois University. In Study 1, we describe our process for developing a new measure of sustainability knowledge and analyze the results with difference-of-means tests and exploratory factor analysis. Next, in Study 2, we test a 10-item shortened instrument, which may be useful to the broader sustainability research community, in a separate student population. The paper concludes with a discussion of what the results mean for how we understand sustainability and how it can best be measured.

2. Literature review

2.1. The importance of sustainability knowledge

Scholars and political observers have long recognized the importance of the public's knowledge of complex social and environmental issues. Knowledge can affect people's attitudes, opinions, and behaviors (Kraus, 1995; Dahm et al., 2009). Moreover, people need to possess a certain level of understanding of a given topic in order to participate effectively in finding solutions (Gardner and Stern, 1996). Nonetheless, knowledge also has its limitations in promoting particular attitudes and behaviors. Additional knowledge may have little effect on attitudes and opinions when it meets stable, deeply held beliefs or world views (Kahan et al., 2012). The link between knowledge and behavior is potentially even more attenuated, with a number of studies suggesting that knowledge is just one of many determinants of behavior (Bamberg and Möser, 2007; Heeren et al., 2016; Ehret et al., 2019). Others argue that what influences behavior is less how much people know about sustainability than how the issue is framed (Nisbet, 2009; Maibach et al., 2010; Corner and Groves, 2014).

Yet, despite the ongoing debate about when and how knowledge may be important in influencing behavior, the reality, as suggested above, is that a significant number of institutions have placed considerable importance on promoting and fostering knowledge of sustainability as part of comprehensive plans toward improved sustainability. This alone suggests a demand for a measure of sustainability knowledge.

There has been a good deal of research on the civic scientific literacy of the general public (Miller, 1983, 2004; Shamos, 1995), on general environmental knowledge (Coyle, 2005; Leiserowitz et al., 2010), and on knowledge about specific environmental topics such as climate change, energy production, and nanotechnology (Bord et al., 2000; Smith, 2002; Klick and Smith, 2009; Lorenzoni and Hulme, 2009). Knowledge of sustainability as a measurable concept, however, was ignored by scholars until recently (Zwickle et al., 2014). This may seem a small niche, but it is one of growing importance. In addition, in conjunction with the United Nations, Sulitest—a sustainability literacy test—has been administered in 63 countries around the world (Décamps et al., 2017; Kuehl et al., 2021).

Although there is significant disagreement about how best to define sustainability (Vallance et al., 2011), the most oft-quoted understanding of sustainability is development that “meets the needs of the present, without compromising the ability of future generations to meet their own needs”. From this, sustainability is sometimes understood to have three core domains: the environment, the economy, and society (World Commission on Environment and Development, 1987; Purvis et al., 2019). However, as others have pointed out, understandings of the term are frequently contested and context dependent (Brown et al., 1987; Kidd, 1992; Portney, 2015; Boyer et al., 2016; Farley and Smith, 2020). This conceptual vagueness, and the often unjustified overlap with the term sustainable development (Mebratu, 1998), makes agreement on precisely what knowledge of sustainability should entail unlikely. Accordingly, we assume a broad understanding of what sustainability means.

2.2. Measuring sustainability knowledge

A measure of sustainability knowledge should therefore address key concepts across a variety of domains, from economics and business to environmental science and social justice. For example, we assume an individual with a high level of sustainability knowledge understands that supplies of some natural resources, such as petroleum, are finite, so at some future time production will diminish and finally end, while other natural resources, such as fisheries, can be managed to produce a sustainable yield indefinitely. They understand that over-fishing or over-hunting can destroy populations of animals, even driving some to extinction. They know that wealth is unequally distributed both in the U.S. and around the world, and that environmentally related risks such as droughts and storms can have very unequal impacts on people with different levels of wealth.

In building our measure, our starting assumption is that sustainability knowledge is an unobserved, or latent, attribute that individuals possess in varying degrees, and thus a characteristic that can be objectively measured. The challenge is to develop a series of questions that best constitutes a valid measure of what students know about this broad construct. By focusing on the knowledge component, we are not claiming to measure students' in-depth understanding, metacognition, or other higher-level thinking about sustainability. While these concepts are useful in themselves, particularly as they relate to actual behavior, as is the case for the connectedness to nature scale (Mayer and Frantz, 2004) and the nature relatedness scale (Nisbet, 2009), our focus is on what students know about sustainability, not on how they use that knowledge. In the same vein, the measure does not capture willingness to engage in sustainable behavior, nor attitudes like those measured by the New Ecological Paradigm (NEP) scale (Stern et al., 1995). Similarly, our focus is on sustainability rather than related concepts like ecological literacy (Orr, 1992) or environmental literacy (McBride et al., 2013). These alternative measures and scales are certainly related to sustainability, but we contend that sustainability knowledge, as commonly understood, is a distinct concept.

Given this expansive understanding of what makes up sustainability, a measure that accurately assesses sustainability knowledge must contain questions that reflect this extensive scope. In addition, individuals possess varying depths of sustainability knowledge; in other words, knowledge is a matter of degree, so questions of differing difficulty are necessary. To account for this breadth and depth, our approach, to the extent possible, used student responses to create the measure. Beginning with an expansive list of possible questions, we systematically reduced their number to arrive at a more manageable, but still representative, 46-question measure that is the core of our analysis. This was then further narrowed to a 10-question battery. This bottom-up approach emphasizing students' responses is, as far as we know, a novel method for building an accurate measure of sustainability knowledge.

The alternative approach taken by the Assessment of Sustainability Knowledge (Zwickle et al., 2014) and other measures of sustainability literacy (Décamps et al., 2017) can be described as more top-down. For example, using input from faculty and textbooks, Zwickle et al. developed their questions explicitly to map onto each of the three domains of sustainability—economics, environment, and justice—and then used faculty and graduate student experts to narrow down their list of questions. Similarly, Sulitest is explicitly divided into four themes (sustainable humanity and ecosystems on planet Earth; global and local human-constructed systems; transition toward sustainability; we each have roles to play to create and maintain individual and systemic changes) and 15 knowledge subjects (e.g., ecosystems, sustainable development, systems change) created by an international group of contributors (Sulitest, 2020).

This expert-driven technique relies on ex-ante expectations of the structure of sustainability knowledge. However, as Kuehl et al. (2021) point out, one disadvantage of this approach is that, at least in the case of Sulitest, factor analysis reveals that the actual underlying structure of student knowledge does not meaningfully coincide with the ex-ante categories determined by the researchers. In other words, students' actual understanding of sustainability did not fit the categories designed by the Sulitest team. In contrast, our approach in this paper uses student responses to determine which questions are included in the measure. In short, we used iterative pre-tests to derive existing latent categories of knowledge rather than using experts to pre-define categories and sets of questions.

This seemingly subtle difference reflects two different understandings of the underlying structure of sustainability knowledge. The approach we use assumes knowledge of sustainability is fairly general and that those with knowledge in one area are also more likely to have knowledge in others. The technique used elsewhere assumes sustainability knowledge is multi-dimensional. In other words, individuals may have knowledge in some areas, but not in others. This necessitates specialized questions for those areas, or in other words, separate measures (and questions) for each underlying component, such as environment and economics. The two understandings of how sustainability knowledge is structured require two different approaches for building a measure. As discussed in the conclusion, we do not claim that one method is better than the other. Rather, the more useful approach depends on the needs, and understanding of sustainability, of the researcher.

3. Study 1

3.1. Materials and methods

3.1.1. Survey design

Before collecting our full sample for Study 1, our questionnaire went through three rounds of pilot studies to narrow down the total number of questions. Initial question development focused on creating a large list of questions that would tap into a variety of understandings of sustainability. These were further divided into broad concepts such as natural resources, ecology, economics, social equity, climate change, public policies, college policies, and others deemed to be within the broader umbrella of sustainability. Again, the purpose was to develop a questionnaire that would assess what college students know about sustainability, not what they should know. Therefore, the questionnaire was made as broad as possible at the outset. Across multiple sessions in a working group setting, we solicited potential questions from a diverse group of faculty and sustainability professionals on campus. This process netted a total of 143 multiple-choice and true/false questions. We then removed questions that were repetitive, poorly worded, or otherwise not suitable.

This left a pool of 80 questions, which were then narrowed using a series of pre-tests with student samples. Three subsets of questions were administered to three courses (each generally related to sustainability) in different departments across the campus (n = 239 respondents). For each, instructors used class time for students to complete the questionnaire. After each round, questions that were too easy (all correct) or too difficult (none correct) were eliminated. In addition, factor analysis was used to identify questions capturing the same underlying concept and to remove replicates (Clark and Watson, 1995; Worthington and Whittaker, 2006; DeVellis, 2012). In each round we thus narrowed the number of questions while aiming to retain the depth and breadth of the concept of sustainability. The narrowing was not driven by expert analysis or by the researchers; instead, to the extent possible, it was determined by the responses from each round of pre-tests. This process resulted in the final 46-item questionnaire distributed throughout campus.

3.1.2. Participants

Requests to answer the questionnaire, plus 11 additional demographic and opinion questions, were sent by email to all first- and fourth-year (by credits completed) undergraduate students enrolled at UC Santa Barbara. No incentives to participate were provided. Two reminders were subsequently sent out. Ultimately, 605 students participated in the survey, a 6.4% response rate.

Respondents were evenly distributed in terms of class level and ethnicity relative to the sample population. Seniors made up 52% of the respondents and first-years 48%. The plurality of respondents identified as white (46%), with 17% Asian, 14% Hispanic, 4% African American, and 4% Filipino also included. Most respondents (64%) identified as female. Perhaps unsurprisingly, pre-biology and environmental studies made up 21% of the responses, with the rest mostly distributed among physics, political science, psychology, global studies, sociology, English, communication, and pre-chemistry. Very few responses came from mathematics, statistics, art, and the various languages. Additional demographic information is available in the Supplementary material.

3.2. Results

3.2.1. Evidence of face validity

The total possible raw score for our measure of sustainability knowledge is 46, found by summing the number of questions answered correctly. Raw scores are, admittedly, a crude indicator of sustainability knowledge. The campus sample scored 33, on average, with a normal distribution of scores. This works out to approximately 72% correct, good enough for a passing C grade in most American university-level classes. As an indicator of internal consistency, or reliability, the questionnaire has a Cronbach's alpha of 0.73, which is below the desired 0.90 but within the range that is acceptable in social science.
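As a minimal illustration of this scoring, both the raw score and Cronbach's alpha follow directly from a binary response matrix. The sketch below uses simulated data, not the survey responses, and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the survey data: 605 respondents x 46 items,
# coded 1 for a correct answer and 0 otherwise.
X = rng.integers(0, 2, size=(605, 46)).astype(float)

# Raw score: number of questions answered correctly (0-46).
raw_scores = X.sum(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / var(total))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

print(f"mean raw score: {raw_scores.mean():.1f}, alpha: {cronbach_alpha(X):.2f}")
```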

Comparing the means of subgroups is a basic approach for assessing the validity of our measure. Dividing the sample by class level allows for an initial test. Our expectation is that fourth-year students would score higher on the knowledge measure than first-year students, for three reasons. First, fourth-year students are more likely to have completed coursework that pertains, at least in some way, to sustainability. Second, fourth-year students may have learned strategies to maximize their success on multiple-choice exams. Third, they are more likely to live off campus and thus to be aware of the economic benefits of more sustainable behavior, such as water conservation. In our sample, first-year students scored 32.1 on average, compared with 33.8 for fourth-year students, a difference of 1.7 points or about 3.7% of the total. While this difference is not substantively large, it is statistically significant (p < 0.001): fourth-year students score significantly higher on the measure than first-year students.

A second reasonable expectation that the data allow us to test is that students who major in Environmental Studies (ES) should score higher than non-ES majors, primarily because ES majors have considerably more coursework that engages with issues related to sustainability. The data provide support for this hypothesis as well. ES majors scored an average of 37.1, while non-ES majors scored 4.6 points lower at 32.5 (p < 0.001).

These results provide evidence of the questionnaire's face validity as a measure of sustainability knowledge. Had the differences run in the reverse direction or not been significant, it would be difficult to argue that our questionnaire actually measures sustainability knowledge. Additional analyses below provide further evidence of the quality of our measure.
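The subgroup comparisons above are standard difference-of-means tests. The paper does not state which test variant was used; the sketch below shows Welch's t-test, a common choice that does not assume equal variances, applied to simulated group scores matched to the reported means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated stand-ins matching the reported group means (32.1 vs. 33.8);
# the group sizes and spread are hypothetical.
first_years = rng.normal(loc=32.1, scale=4.0, size=290)
fourth_years = rng.normal(loc=33.8, scale=4.0, size=315)

# Welch's t-test for a difference in mean knowledge scores.
t_stat, p_value = stats.ttest_ind(fourth_years, first_years, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```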

3.2.2. Factor analysis

Exploratory factor analysis provides an additional test of validity, assessing whether knowledge of sustainability can be grouped into distinct underlying concepts (Tabachnick et al., 2019). Factor analysis assumes that one or more latent variables cause the observed likelihood of respondents correctly answering questions. Accordingly, it can identify how strongly or weakly questions align with expected underlying concepts. A student who possesses greater knowledge of a given topic should be able to answer related questions in a similarly successful way and these questions will then load onto a single factor.

For the purposes of this research, similar to Kuehl et al. (2021), factor analysis allows us to test the extent to which sustainability knowledge as a construct is made up of clearly defined sub-dimensions (Tabachnick et al., 2019). For example, it allows us to test whether sustainability knowledge consists of knowledge pertaining to the three commonly referenced pillars of sustainability: economics, environment, and society. If these sub-dimensions of sustainability knowledge exist and are captured by our measure, factor analysis would identify them as three distinct factors.

The first step in our analysis was to determine the number of factors present in the data. One common way of doing this is parallel analysis. Before conducting it, because the data are binary (each answer is either correct or incorrect), we generated a tetrachoric correlation matrix. This matrix was then used in the parallel analysis and the subsequent exploratory factor analysis. Figure 1 shows the scree plot created in the parallel analysis, with eigenvalues on the y-axis and the number of factors (component number) on the x-axis. Under the criterion that factors are significant if their eigenvalues exceed 1 (Kaiser, 1960), the parallel analysis indicates a 16-factor structure, or 16 sub-dimensions of sustainability, in the data. The figure most clearly suggests four factors, but given the slope of the curve, as many as 16 may be present. Such a large number of factors, for only 46 questions, suggests a widely divergent set of underlying factors shaping an individual's knowledge of sustainability. However, only four factors sit above the inflection point of the plot, which is another common way to determine the number of factors, so we retained four factors for the analysis that follows.

Figure 1. Scree plot showing presence of up to 16 factors.
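A minimal sketch of Horn's parallel analysis follows. For simplicity it compares eigenvalues of the Pearson correlation matrix against those of column-permuted data; the analysis reported above instead factored a tetrachoric correlation matrix, which typically requires a dedicated routine (e.g., in R's psych package), so treat this as illustrative only:

```python
import numpy as np

def parallel_analysis(X: np.ndarray, n_iter: int = 100, seed: int = 0) -> int:
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the mean eigenvalues from data with the same item marginals but
    no inter-item structure (independent column permutations)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand_eigs = np.empty((n_iter, k))
    for i in range(n_iter):
        permuted = np.column_stack([rng.permutation(X[:, j]) for j in range(k)])
        rand_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(permuted, rowvar=False))
        )[::-1]
    # Number of factors whose observed eigenvalue beats the random benchmark.
    return int((obs_eigs > rand_eigs.mean(axis=0)).sum())
```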

Next, we estimated a four-factor exploratory factor analysis using maximum likelihood estimation. We used oblique rotation because the factors are likely correlated. Exploratory factor analysis is appropriate in this context because we want to see empirically which items can be grouped together. Factor loadings are presented in Table 1, with loadings above 0.3 in bold. There is no clear pattern to explain the factor loadings based on the content of the question. We might expect to see three main factors based on knowledge in the areas relating to environmental, economic, and social sustainability, but this is not clearly delineated. Simply put, exploratory factor analysis does not reveal the presence of four strong factors or evidence for three pillars of knowledge about sustainability.

Table 1. Question concept and factor loadings.
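The four-factor solution itself can be reproduced in outline with the third-party factor_analyzer package (pip install factor-analyzer); this is a sketch under the same caveats as above—it fits Pearson rather than tetrachoric correlations, the data are simulated, and it is not the authors' toolchain:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party package

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(605, 46)).astype(float)  # simulated stand-in

# Four-factor exploratory factor analysis, maximum likelihood estimation,
# with an oblique (oblimin) rotation because the factors are likely correlated.
fa = FactorAnalyzer(n_factors=4, method="ml", rotation="oblimin")
fa.fit(X)

loadings = fa.loadings_            # 46 items x 4 factors
salient = np.abs(loadings) > 0.3   # the cutoff bolded in Table 1
print(salient.sum(axis=0))         # count of salient loadings per factor
```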

3.3. Discussion

Overall, the factor analysis produced low factor loadings and no coherent pattern. The questions that do load on the same factor stem from what would be considered different sub-dimensions of sustainability knowledge. Conversely, seemingly similar questions, like those asking which resources are renewable (questions 24–31), load on different factors. This suggests that sustainability knowledge does not cleanly load onto the assumed three domains of sustainability. Indeed, in our data, a coherent structure of sustainability knowledge does not exist. Given the inherent complexity and the widely divergent areas of knowledge within a concept like sustainability, this is not particularly surprising. As has been suggested elsewhere (Kuehl et al., 2021), this result likely does not reflect a poor measure; rather, it may suggest a lack of coherence in the underlying conceptualization of what is understood to be sustainability.

It is worth noting that factor analysis can only identify latent variables present in the data itself (Fabrigar et al., 1999). While our approach to question inclusion was designed to be broadly encompassing, it is limited by the questions we asked. In addition, it is possible that the lack of coherence is a product of the obtained sample. These limitations point to the value of employing factor analysis across the differing measures of sustainability knowledge.

4. Study 2

4.1. Materials and methods

4.1.1. Survey design: 10-item sustainability knowledge measure

Given that researchers may want to use sustainability knowledge as part of broader studies of environmental behavior and attitudes, we offer a shortened version of the measure. A 46-item measure of sustainability knowledge may not be feasible in a number of research settings, especially when sustainability is just one of many variables of interest. In consideration of these practical demands, we developed a ten-item battery of questions. A problem with developing any shortened measure of knowledge about sustainability is that we do not know exactly what items should be included to cover the entire field of sustainability. This problem is potentially mitigated by the fact that the items included in the final measure do not have to be fully representative of the entirety of the field. Rather, the items should be regarded as proxies or indicators of knowledge of the field. Consider the premise behind Delli Carpini and Keeter's (1996) measure of political knowledge. Their recommended five-item scale narrowly focuses on simple facts about the federal government. Their scale works because it predicts knowledge in a wide range of other areas: U.S. foreign policy, social welfare policy, environmental policy, state and local politics, and more. If one can answer Delli Carpini and Keeter's five questions, one is likely to be quite knowledgeable about a wide range of political issues; if one cannot answer any of them, one is likely not to know much about any political issue. As a result, their measure is an excellent example of a knowledge measure and is used across a number of studies.

Likewise, using the results of the factor analysis and aiming to incorporate a range of sustainability topics as well as varying degrees of difficulty, we developed a 10-item measure of sustainability knowledge, included below in Table 2.

Table 2. 10-item measure of sustainability knowledge.

The questions included are based on the results of the factor analysis, on item difficulty (the share of respondents answering each item correctly or incorrectly), and on subjectively achieving a balance in the topics covered. The measure is necessarily somewhat subjective, as trade-offs were made among competing requirements; if desired, researchers may adapt it to their preferences.
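As a sketch of the difficulty screen only (the data and selection rule here are hypothetical, not the authors' procedure), item difficulty is simply the proportion of respondents answering an item correctly, and a short form can draw items spread across that range:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated 605 x 46 binary response matrix, as in the earlier sketches.
X = rng.integers(0, 2, size=(605, 46)).astype(float)

difficulty = X.mean(axis=0)  # proportion correct per item (higher = easier)

# One simple spread: rank items by difficulty and take 10 evenly spaced
# items, spanning easy through hard.
ranked = np.argsort(difficulty)
short_form = ranked[np.linspace(0, len(ranked) - 1, 10).round().astype(int)]
print(short_form, difficulty[short_form])
```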

4.1.2. Participants

As an additional test, the shortened measure was included as part of a campus-wide survey at Northern Illinois University. Northern Illinois, while also a public university, is located in a rural part of the American Midwest, has a larger non-traditional student population than Santa Barbara, and is designated an R2 research university (high research activity), whereas Santa Barbara is an R1 (very high research activity). In addition, Northern Illinois does not have the reputation for a focus on environmental studies that UC Santa Barbara does.

All students were invited to take part through two rounds of campus-wide invitations as well as signs around campus. Ultimately, 652 students participated in the survey, a 5.2% response rate.

4.2. Results

This provides an initial test of the generalizability of the shortened measure. Across the sample of 652 undergraduate students, the average score was 6 out of 10. Again, Environmental Studies majors scored higher, at 7.3, than other majors, at 5.9 (p < 0.001). In addition, seniors (6.1) outscored first-year students (5.7, p < 0.01). These results provide strong initial evidence of the value of the measure across differing student populations. The Cronbach's alpha in this sample is 0.55.

As shown in Table 3, we can compare these results to the UC Santa Barbara sample by looking at only those 10 items from that study. The 46- and 10-item measures are highly correlated (0.78) and perform similarly within the study population, with average scores of 71% vs. 72%. Seniors were again significantly more likely to score better than first-years (p < 0.01), and environmental studies majors scored significantly higher than other majors (p < 0.001). The Cronbach's alpha in this sample is 0.50.

Table 3. 10-item measure sample comparison.
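The comparison in Table 3 amounts to scoring the 10 embedded items separately and correlating that subscore with the full score. A sketch follows; the item indices are hypothetical placeholders, not the published item positions:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(605, 46)).astype(float)  # simulated responses

short_idx = [0, 4, 9, 13, 18, 24, 29, 33, 38, 44]     # hypothetical placement
short_scores = X[:, short_idx].sum(axis=1)            # 0-10
full_scores = X.sum(axis=1)                           # 0-46

r = np.corrcoef(short_scores, full_scores)[0, 1]      # paper reports r = 0.78
print(f"short-form/full-form correlation: {r:.2f}")
```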

4.3. Discussion

It is worth noting that the shorter measure has a Cronbach's alpha of 0.50 in the Santa Barbara sample and 0.55 in the Northern Illinois sample. Though this is lower than the original, it is not surprising given that we purposefully chose items from different factors and given the lack of cohesion displayed in the factor analysis. Because Cronbach's alpha is a measure of internal consistency, we do not expect these deliberately diverse items to be highly correlated with each other. In fact, given the wide-ranging theoretical understandings of sustainability and the lack of unity in the data, a higher Cronbach's alpha would indicate an artificially limited measure of sustainability knowledge. Despite the challenges of measuring a complex concept like sustainability with a shortened instrument, its rigorous development and statistical testing warrant its consideration among other similar measures of sustainability knowledge. Like the Delli Carpini and Keeter measure, additional applications of this measure will help determine its ultimate validity across differing populations and contexts.

5. Conclusion

This research developed and tested a measure of sustainability knowledge. Through multiple iterations, we developed a 46-item questionnaire that our analyses suggest is a face-valid measure of sustainability knowledge, based on clear differences by class level and major. The full questionnaire and the 10-question shortened scale are correlated with one another, and more advanced students and environmental studies majors perform better on both compared to first-year students and non-environmental studies majors.

5.1. Implications

However, factor analysis demonstrates that there is essentially no predictable underlying structure to sustainability knowledge among our universities' students. Contrary to theoretical expectations, the students' knowledge of sustainability does not neatly map onto sub-dimensions of sustainability. Indeed, no coherent factor structure exists. This suggests that efforts to measure sustainability knowledge by grouping questions into discrete domains may inadvertently undermine the validity of the measure: students' knowledge does not map cleanly onto these domains as other measures, like the Sulitest and the Assessment of Sustainability Knowledge, assume it should. This likely reflects ambiguity in what is understood as sustainability as much as it does insufficient curriculum, insufficient instruction, or issues of measurement. This finding is in line with prior observations by Herremans and Reid (2002) that the three dimensions of sustainability may at times actually conflict with one another, and it feeds into broader discussions of competing understandings of sustainability. Building measures based on discrete domains may still be the best way to measure sustainability knowledge, but researchers should be cognizant that in doing so they are creating valid measures of those domains rather than of sustainability knowledge as a whole, and should say as much.

Our final reduced 10-item measure, which varies in question difficulty and covers the range of sustainability topics, is built from how students' responses group together rather than from how experts think they should cluster. If researchers or professionals wish to describe a broad understanding of sustainability knowledge, our measure seems an appropriate one. Alternatively, for those who want to know how knowledgeable students are with regard to specific standards chosen by experts in the field, the most appropriate approach is to develop measures top-down, based on expertise, like the Sulitest or the Assessment of Sustainability Knowledge. Consequently, each institution needs to define what sustainability knowledge means to it, for example whether or not it is appropriate to divide it into distinct categories, and measure it accordingly.

5.2. Future directions

Moving forward, as different measures continue to emerge from various institutions, it is necessary to evaluate their performance across a range of populations. In this case, our measure was developed using one university's students, but measuring sustainability knowledge is important far beyond a single campus, or indeed beyond college and university students. Additionally, our results suggest that any attempt to use or assess a measure of sustainability knowledge requires rigorous statistical analysis.

With a range of higher education institutions, and global society in general, placing ever more weight on individuals' understanding of sustainability, valid measures of sustainability knowledge will become increasingly important. Our measure adds to this growing conversation by employing an alternative approach to questionnaire creation and additional statistical analysis. We hope that better measures of sustainability knowledge will contribute to better sustainability education and, in the longer term, to a more sustainability-literate population.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The study was approved by the Human Subjects Committee of the University of California, Santa Barbara (protocol 14-0996). Participants provided their written informed consent to participate in this study.

Author contributions

Conceptualization, survey design, and writing: CK, AS, HH, and ES. Data analysis: CK and AS. Funding acquisition: ES. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support for this project was provided by the Chancellor's Sustainability Graduate Research Grant.

Acknowledgments

The authors wish to thank Katie Maynard, Lisa Berry, John Foran, Eve Darian-Smith, and the members of the UC Santa Barbara Campus Sustainability Committee for their help in the early stages of this project.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frsus.2023.1167041/full#supplementary-material

Footnotes

1. ^For example, the criteria for GreenMetric can be reviewed at: https://greenmetric.ui.ac.id/about/criteria-indicator.

References

Arnaud, B., Smarr, L., Sheehan, J., and DeFanti, T. (2009). Campuses as living laboratories for the greener future. Educause Rev. 44, 14–16.

Bamberg, S., and Möser, G. M. (2007). Twenty years after Hines, Hungerford, and Tomera: a new meta-analysis of psycho-social determinants of pro-environmental behaviour. J. Environ. Psychol. 27, 14–25. doi: 10.1016/j.jenvp.2006.12.002

Bord, R. J., O'Connor, R. E., and Fisher, A. (2000). In what sense does the public need to understand global climate change? Public Understand. Sci. 9, 205–218. doi: 10.1088/0963-6625/9/3/301

Boyer, R., Peterson, N., Arora, P., and Caldwell, K. (2016). Five approaches to social sustainability and an integrated way forward. Sustainability 8, 878. doi: 10.3390/su8090878

Brown, B. J., Hanson, M. E., Liverman, D. M., and Merideth, R. W. (1987). Global sustainability: toward definition. Environ. Manage. 11, 713–719. doi: 10.1007/BF01867238

Bullock, G., and Wilder, N. (2016). The comprehensiveness of competing higher education sustainability assessments. Int. J. Sustainab. Higher Educ. 17, 282–304. doi: 10.1108/IJSHE-05-2014-0078

Clark, L. A., and Watson, D. (1995). Constructing validity: basic issues in objective scale development. Psychol. Assess. 7, 309–319. doi: 10.1037/1040-3590.7.3.309

Corner, A., and Groves, C. (2014). Breaking the climate change communication deadlock. Nat. Clim. Chang. 4, 743–745. doi: 10.1038/nclimate2348

Cortese, A. D. (2003). The critical role of higher education in creating a sustainable future. Plann. High. Educ. 31, 15–22.

Coyle, K. (2005). Environmental Literacy in America: What Ten Years of NEETF/Roper Research and Related Studies Say About Environmental Literacy in the US. Washington, DC: National Environmental Education & Training Foundation.

Dahm, M. J., Samonte, A. V., and Shows, A. R. (2009). Organic foods: do eco-friendly attitudes predict eco-friendly behaviors? J. Am. College Health 58, 195–202. doi: 10.1080/07448480903295292

Décamps, A., Barbat, G., Carteron, J. C., Hands, V., and Parkes, C. (2017). Sulitest: a collaborative initiative to support and assess sustainability literacy in higher education. Int. J. Manag. Educ. 15, 138–152. doi: 10.1016/j.ijme.2017.02.006

Delli Carpini, M. X., and Keeter, S. (1996). What Americans Know About Politics and Why It Matters. New Haven, CT: Yale University Press.

DeVellis, R. F. (2012). Scale Development: Theory and Applications, 3rd ed., Applied Social Research Methods Series. Thousand Oaks, CA: SAGE.

Ehret, P. J., Hodges, H. E., Kuehl, C., Brick, C., Mueller, S., and Anderson, S. E. (2019). Systematic review of household water conservation interventions using the information–motivation–behavioral skills model. Environ. Behav. 53, 5. doi: 10.31234/osf.io/4s85b

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., and Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 4, 272–299. doi: 10.1037/1082-989X.4.3.272

Farley, H. M., and Smith, Z. A. (2020). Sustainability: If It's Everything, Is It Nothing? 2nd ed., Critical Issues in Global Politics. Abingdon, Oxon; New York, NY: Routledge.

Findler, F., Schönherr, N., Lozano, R., and Stacherl, B. (2018). Assessing the impacts of higher education institutions on sustainable development—an analysis of tools and indicators. Sustainability 11, 59. doi: 10.3390/su11010059

Gardner, G. T., and Stern, P. C. (1996). Environmental Problems and Human Behavior. Boston, MA: Allyn and Bacon.

GreenMetric (2022). UI GreenMetric World University Ranking 2022. Indonesia: Universitas Indonesia.

Haigh, M. (2007). Greening the university curriculum: appraising an international movement. J. Geography Higher Educ. 29, 31–48. doi: 10.1080/03098260500030355

Heeren, A. J., Singh, A. S., Zwickle, A., Koontz, T. M., Slagle, K. M., and McCreery, A. C. (2016). Is sustainability knowledge half the battle? An examination of sustainability knowledge, attitudes, norms, and efficacy to understand sustainable behaviours. Int. J. Sustainab. Higher Educ. 17, 5. doi: 10.1108/IJSHE-02-2015-0014

Herremans, I. M., and Reid, R. E. (2002). Developing awareness of the sustainability concept. J. Environ. Educ. 34, 16–20. doi: 10.1080/00958960209603477

Horvath, N., Stewart, M., and Shea, M. (2013). Toward instruments of assessing sustainability knowledge: assessment development, process, and results from a pilot survey at the University of Maryland. J. Sustainab. Educ. 5, 311–320.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., et al. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nat. Clim. Change 2, 732–735. doi: 10.1038/nclimate1547

Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educ. Psychol. Meas. 20, 141–151. doi: 10.1177/001316446002000116

Kidd, C. V. (1992). The evolution of sustainability. J. Agri. Environ. Ethics 5, 1–26. doi: 10.1007/BF01965413

Klick, H., and Smith, E. R. A. N. (2009). Public understanding of and support for wind power in the United States. Renewable Energy 35, 1585–1591. doi: 10.1016/j.renene.2009.11.028

Kraus, S. J. (1995). Attitudes and the prediction of behavior: a meta-analysis of the empirical literature. Pers. Soc. Psychol. Bull. 21, 58–75. doi: 10.1177/0146167295211007

Kuehl, C., Sparks, A. C., Hodges, H., and Smith, E. R. A. N. (2021). The incoherence of sustainability literacy assessed with the Sulitest. Nat. Sustainab. 4, 555–560. doi: 10.1038/s41893-021-00687-6

Lad, N., and Akerlof, K. (2022). Assessing campus sustainability literacy and culture: how are universities doing it and to what end? Front. Sustain. 3, 927294. doi: 10.3389/frsus.2022.927294

Leiserowitz, A. A., Smith, N., and Marlon, J. R. (2010). Americans' Knowledge of Climate Change. New Haven, CT: Yale Project on Climate Change Communication.

Leiva-Brondo, M., Lajara-Camilleri, N., Vidal-Meló, A., Atarés, A., and Lull, C. (2022). Spanish university students' awareness and perception of sustainable development goals and sustainability literacy. Sustainability 14, 4552. doi: 10.3390/su14084552

Lorenzoni, I., and Hulme, M. (2009). Believing is seeing: laypeople's views of future socio-economic and climate change in England and in Italy. Public Understand. Sci. 18, 383–400. doi: 10.1177/0963662508089540

Lozano, R. (2006). Incorporation and institutionalization of SD into universities: breaking through barriers to change. J. Clean. Prod. 14, 787–796. doi: 10.1016/j.jclepro.2005.12.010

Lozano, R., Ceulemans, K., Alonso-Almeida, M., Huisingh, D., Lozano, F. J., Waas, T., et al. (2015). A review of commitment and implementation of sustainable development in higher education: results from a worldwide survey. J. Clean. Prod. 108, 1–18. doi: 10.1016/j.jclepro.2014.09.048

Maibach, E. W., Nisbet, M., Baldwin, P., Akerlof, K., and Diao, G. (2010). Reframing climate change as a public health issue: an exploratory study of public reactions. BMC Public Health 10, 299. doi: 10.1186/1471-2458-10-299

Mayer, F. S., and Frantz, C. M. (2004). The connectedness to nature scale: a measure of individuals' feeling in community with nature. J. Environ. Psychol. 24, 503–515. doi: 10.1016/j.jenvp.2004.10.001

McBride, B. B., Brewer, C. A., Berkowitz, A. R., and Borrie, W. T. (2013). Environmental literacy, ecological literacy, ecoliteracy: what do we mean and how did we get here? Ecosphere 4, 1–20. doi: 10.1890/ES13-00075.1

Mebratu, D. (1998). Sustainability and sustainable development: historical and conceptual review. Environ. Impact Assess. Rev. 18, 493–520. doi: 10.1016/S0195-9255(98)00019-5

Miller, J. D. (1983). Scientific literacy: a conceptual and empirical review. Daedalus, 29–48.

Miller, J. D. (2004). Public understanding of, and attitudes toward, scientific research: what we know and what we need to know. Public Underst. Sci. 13, 273–294. doi: 10.1177/0963662504044908

Nisbet, M. C. (2009). Communicating climate change: why frames matter for public engagement. Environment 51, 12–23. doi: 10.3200/ENVT.51.2.12-23

Orr, D. W. (1992). Ecological Literacy: Education and the Transition to a Postmodern World. Albany, NY: State University of New York Press.

Portney, K. (2015). Sustainability. Cambridge, MA: MIT Press.

Purvis, B., Mao, Y., and Robinson, D. (2019). Three pillars of sustainability: in search of conceptual origins. Sustainability Sci. 14, 681–695. doi: 10.1007/s11625-018-0627-5

Shamos, M. H. (1995). The Myth of Scientific Literacy. New Brunswick, NJ: Rutgers University Press.

Smith, E. R. A. N. (2002). Energy, the Environment, and Public Opinion. Lanham, MD: Rowman & Littlefield.

Stern, P. C., Dietz, T., and Guagnano, G. A. (1995). The new ecological paradigm in social-psychological context. Environ. Behav. 27, 723–743. doi: 10.1177/0013916595276001

Sulitest (2020). Raising & Mapping Awareness of the Global Goals: 2020 Update. United Nations.

Tabachnick, B. G., Fidell, L. S., and Ullman, J. B. (2019). Using Multivariate Statistics, 7th ed. New York, NY: Pearson.

United Nations General Assembly (2015). Transforming Our World: The 2030 Agenda for Sustainable Development. New York, NY: United Nations.

Vallance, S., Perkins, H. C., and Dixon, J. E. (2011). What is social sustainability? A clarification of concepts. Geoforum 42, 342–348. doi: 10.1016/j.geoforum.2011.01.002

World Commission on Environment and Development (1987). Our Common Future. Oxford: Oxford University Press.

Worthington, R. L., and Whittaker, T. A. (2006). Scale development research: a content analysis and recommendations for best practices. Couns. Psychol. 34, 806–838. doi: 10.1177/0011000006288127

Zwickle, A., and Jones, K. (2018). "Sustainability knowledge and attitudes—assessing latent constructs," in Handbook of Sustainability and Social Science Research. Amsterdam: Springer, 435–451. doi: 10.1007/978-3-319-67122-2_25

Zwickle, A., Koontz, T. M., Slagle, K. M., and Bruskotter, J. T. (2014). Assessing sustainability knowledge of a student population. Int. J. Sustainab. Higher Educ. 15, 4. doi: 10.1108/IJSHE-01-2013-0008

Keywords: sustainability, education for sustainable development, sustainability literacy, higher education institutions, factor analysis

Citation: Kuehl C, Sparks AC, Hodges H and Smith ERAN (2023) Exploring sustainability literacy: developing and assessing a bottom-up measure of what students know about sustainability. Front. Sustain. 4:1167041. doi: 10.3389/frsus.2023.1167041

Received: 15 February 2023; Accepted: 19 May 2023;
Published: 05 June 2023.

Edited by:

Maria Jesus Muñoz-Torres, University of Jaume I, Spain

Reviewed by:

Núria Bautista-Puig, University of Gävle, Sweden
Idoya Ferrero-Ferrero, University of Jaume I, Spain

Copyright © 2023 Kuehl, Sparks, Hodges and Smith. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Colin Kuehl, ckuehl@niu.edu