PERSPECTIVE article

Front. Astron. Space Sci., 06 December 2022
Sec. Space Physics
This article is part of the Research Topic Driving Towards a More Diverse Space Physics Research Community – Perspectives, Initiatives, Strategies, and Actions

Thoughts from a past AGU SPA fellows committee

  • 1Goddard Space Flight Center, NASA, Greenbelt, MD, United States
  • 2Naval Research Laboratory, Washington, DC, United States
  • 3Space Science Application Laboratory, The Aerospace Corporation, El Segundo, CA, United States
  • 4Institute for Astrophysics, Georg-August-University of Göttingen, Göttingen, Germany
  • 5SPACE Research Centre, RMIT University, Melbourne, VIC, Australia
  • 6Harvard-Smithsonian Center for Astrophysics, Cambridge, MA, United States
  • 7High Altitude Observatory, National Center for Atmospheric Research, Boulder, CO, United States
  • 8CIRES, University of Colorado Boulder, Boulder, CO, United States
  • 9National Institute for Space Research—INPE, São José dos Campos, Brazil

Community honours, such as those bestowed by professional scientific societies like the American Geophysical Union (AGU), are an important element of individual career advancement and contribute to the historical record of scientific progress. However, the process by which honours are bestowed is not widely shared within the community. The purpose of this article is to share the recent experiences of several members of the AGU Space Physics and Aeronomy (SPA) Fellows committee. We outline the criteria for selection, the evaluation process, difficulties encountered by the committee, and steps taken to mitigate these difficulties. Of particular note is the impact of implicit bias in the award system. Steps could be taken by the awarding scientific societies to reduce the impact of these biases, but in the meantime individual award committees can employ some of the strategies we outline in this article. By sharing our experiences, we hope to improve the process of granting awards and honours for the scientists putting together award nominations, for future committee members, and for the scientific societies granting these awards.

1 Introduction from Dr. Halford, previous chair

I served on the AGU Space Physics and Aeronomy (SPA) Fellows committee from 2017 to 2020, chairing it in 2019 and 2020. Like many, I knew that I did not fully understand the award process. Today, I recognise that each section and committee works differently, and that the interpretation of the award criteria changes each year as members cycle in and out. I believe this subjectivity, along with the obfuscation of the definitions and interpretations of the award criteria, leads to confusion about why some nomination packages succeed while others fail. Here, we aim to shed light on how our committee approached this task, increase the transparency of the process, and detail the steps we took to mitigate systemic biases. We hope that future committees will continue to improve transparency and that this will encourage everyone to submit nomination packages.

A constant in each section’s committee from year to year is the solemnity that members bring to the table. All members show the highest respect for each nominee’s contribution to the field. However, each committee does, and must, work differently. Factors contributing to this include the number of packages, which can vary significantly, and the geographic distribution of members. The SPA section typically receives 20–30 packages to evaluate within a month, which falls roughly in the middle of the range across AGU sections. The time constraint means that each SPA package receives ∼12 min of group discussion. This does not include the time invested by individual committee members, who (during my leadership) read all of the packages and delved in depth into 3–5 of them. Leading and participating with such dedicated committee members has been an honour.

I want to applaud and acknowledge my fellow committee members. Our committee comprised 12 individuals from across the world and across SPA disciplines. They were asked to tackle a substantial workload in a short period, and they did so with complete professionalism and diligence. Committee members made great efforts to attend meetings while at conferences and on travel; many went above and beyond, meeting at times well outside reasonable work hours. As the chair, I am incredibly thankful for their dedication to this voluntary commitment, not least because many of the hours they dedicated to this committee came from their personal time. The rest of this paper reflects the work of all co-authors and of all committee members; thus the pronoun “we” will be used.

2 Committee criteria for selection

AGU has laid out three criteria for nominating an AGU member (https://www.agu.org/Honor-and-Recognize/Honors/Union-Fellows; AGU (2021)):

1 Breakthrough and/or discovery,

2 Innovation in disciplinary science, cross-disciplinary science, instrument development, or methods development, and

3 Sustained scientific impact.

Our committee did not prioritise one category over another, nor did we systematically consider whether or not a candidate met more than one category. As these criteria can be subjective, our committee established a common interpretation of them, and of how to handle different evaluation metrics, through a group discussion held before viewing the nomination packages. These interpretations will likely change from committee to committee. One example of our committee’s standards is how we handled the h-index, an optional metric that may be included in the nomination package AGU (2021). Listing the h-index as an option on the AGU website elevates its perceived value as a shortcut metric above other evaluation criteria. However, well-known biases are associated with the h-index, including biases that affect women and non-binary researchers, minorities, and fields or sub-fields that publish at different rates Rørstad and Aksnes (2015); Cameron et al. (2016); Tahamtan et al. (2016); Leydesdorff et al. (2019); Chapman et al. (2019); Pico et al. (2020). Given these well-documented biases, which cause the h-index to poorly reflect the quality of the research, we excluded it from consideration in our committee and strongly recommend others do so as well. Examples of other optional metrics that have been used, and that have their own issues, include the number of successful Ph.D. students and the number of instruments built and flown.
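
For readers unfamiliar with the metric, the h-index is simply the largest h such that an author has h papers with at least h citations each. The minimal sketch below is our own illustration with made-up citation counts (it is not part of the committee process or any AGU tool); it shows how strongly the value depends on publication volume and citation patterns, which is exactly where the documented biases enter.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical authors with identical total citations (145 each) but
# different publishing patterns.
prolific_team_author = [25, 22, 20, 18, 15, 12, 10, 9, 8, 6]  # many co-authored papers
focused_solo_author = [80, 40, 15, 10]                         # few, highly cited papers

print(h_index(prolific_team_author))  # 8
print(h_index(focused_solo_author))   # 4
```

The two hypothetical records have the same total citation count, yet the h-index differs by a factor of two, illustrating why publication rate and collaboration style, rather than research quality alone, drive this shortcut metric.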

2.1 Defining and interpreting the evaluation criteria

Our committee decided that there should not be any predetermined order or weight to the itemised definitions or criteria. Each evaluation criterion provided by AGU is defined in detail below.

2.1.1 Breakthrough or discovery

An idea that, once accepted, allows others to frame ideas or approach problems differently and more effectively than before.

2.1.2 Innovation in disciplinary science, cross-disciplinary science, instrument development, or methods development

• Enabling collaborations across many sub-fields.

• Development of new instruments that have been successful in the field and have led to new* understandings.

• Development of new* methods that other scientists have adopted and have led to new* understandings within the field.

• Produced a data product or a method that is used on a routine basis, even if not correctly cited (has an open data/code policy and has become so routine that people have forgotten it was produced by someone or was not previously a standard product).

∗New: something that deviates enough from ‘standard understandings’ in any one field in the presented form, even if the process to arrive at ‘new’ happened through a series of gradual improvements or advancements.

2.1.3 Sustained scientific impact

• Something that has changed the way other scientists approach a problem, perhaps on a smaller scope but cumulatively changes people’s perceptions over time.

• Enabled long-lasting collaborations leading to significant impact within the field.

• Mentored a significant number of collaborators/scientists/students, enabling their development as researchers.

• Produced continued excellent research over the course of their career.

The SPA committee definitions and interpretations are still general, and perhaps not fully inclusive. We used them to establish a lingua franca within the committee, aiding discussions throughout the evaluation process. For instance, within the sustained scientific impact criterion, discussions often included information on service activities and other best research practices and metrics, such as those discussed in the Australian Code for the Responsible Conduct of Research or the Danish Code of Conduct for Research Integrity National Health and Medical Research Council (2018); Ministry of Higher Education and Science (2022).

2.2 Evaluation process

After creating consensus on the evaluation criteria, our committee began by considering previous failures of the process: the SPA section has continuously failed to equitably recognise all portions of our community (e.g., across gender, race, or ethnicity) Jaynes et al. (2019). We acknowledged that each of us holds implicit biases as members of our own cultures and research sub-fields. The first step the chair took, with the help of the sitting SPA president, was to mitigate the impact of our implicit biases by constructing a balanced committee. For the last few years, our SPA Fellows nomination committee has been approximately gender balanced and has included nearly equal representation from the solar, interplanetary, magnetosphere, and ionosphere/atmosphere communities (the major sub-fields within SPA). We also included representation from across the globe and across career levels. Dr. Halford was the earliest-career committee member (in 2019, 7 years post Ph.D.), with others among the most senior ranks of our field. This committee construction aimed to gather people with contrasting implicit biases so that their impact, on average, could be mitigated. Our individual rankings showed that we still held implicit biases towards our own sub-fields; however, our diverse committee mitigated the impact, resulting in an equal distribution of each sub-field within the final rankings. For example, if we had had a persistent magnetospheric bias in our committee, we suspect that more magnetospheric nominations would have been put forward to the Union committee.

The broad time zone difference between committee members meant we needed to consider the best times and methods for discussions. We took two approaches: staggering meeting times and maintaining an online repository. Each week we had two meetings, one that was not at obscene hours for Europe/Africa and another that was not at obscene hours for Australia/Asia. In addition, our shared online repository was accessible and editable by all members. It allowed committee members to access notes about each package asynchronously. The two steps we took (thoughtful committee construction and moderated committee interactions) laid a solid foundation for successful meetings. Without these two steps, the nominees we put forward (while still accomplished) would not have represented our community.

During the first meeting, we discussed the different biases we each hold. We reminded ourselves to be conscious of them throughout the rest of the process. Below is the list of potential biases we identified and attempted to mitigate through a balanced committee and open discussion.

• Gender

• Nationality

• Race/Ethnicity

• Career level (retired/senior/expert vs. mid or even mid/expert/senior)

• Extrovert vs. Introvert (impacting who is seen, heard, and remembered)

• A country or institution’s socioeconomic status

• Large Mission participation vs. smaller projects such as CubeSats, rockets, and balloons.

• Experimentalist vs. theorist vs. observationalist

• Dependence on intrinsically biased, short-cut metrics

• Sub-field bias (familiarity)

• Publication/collaboration environment

• The Matthew Effect (credit being attributed to the most well-known name, not the person who necessarily had the ideas or did most of the work) Merton (1968).

• The Matthew/Matilda effect (where men tend to get credit more so than the women who did just as much or more of the work) Lincoln et al. (2012); Rossiter (1993).

• Work in “up-stream” fields. For example, much of solar physics impacts the other sub-fields, but the ionosphere does not impact the Sun.

• Work in a traditional academic environment

• Multidisciplinary work

• Number of other awards received.

We took a broad view and discussed how biases might affect our perspective on each nominee’s scientific impact. These biases can have positive or negative effects. For instance, we discussed how a mentee’s work should be considered in a nomination package for their mentor (the Matthew effect) Merton (1968). We questioned whether the credit given to the nominee should instead be attributed to the mentee, especially when the package presented work done by the mentee as a breakthrough or discovery by the mentor. Or should the nominee rather get credit for supporting and collaborating with the mentee, as an excellent example of sustained scientific impact? For cases like this, how a nomination package presented the work significantly influenced the committee’s perception.

Many of the identified biases were found to affect a package’s shortcut metrics (e.g., the h-index) Tahamtan et al. (2016). The types of projects and work environments a person engages in significantly affect their number of papers. For example, a person working within a larger collaborative group is likely to appear on more papers with a large number of co-authors Tahamtan et al. (2016). Specifically within space physics, the number of co-authors is correlated with the number of citations Moldwin and Liemohn (2018). Another factor that can affect the number of co-authors is visibility within the field, which in turn leads to more extensive and diverse collaborations Ale Ebrahim et al. (2014). For example, are the nominees able to attend conferences regularly, and are they invited to speak Ford et al. (2018)? The numbers of papers and citations were found to bias the perceived prestige of the project and of the nominee associated with that project, rather than reflecting the impact and quality of the work. Additionally, shortcut metrics such as the h-index moved the discussion away from the substance of the publications. They left no room for acknowledgement of essential but poorly cited scientific contributions, such as the improvement and curation of geomagnetic indices that are frequently improperly referenced. For each nominee’s package, we discussed similar data sets and tools that are now considered well-understood standards and “owned by the community” Chapman et al. (2019).

The biases we identified can affect how the impact of a nominee’s package is recognised. To mitigate this, our committee worked towards building a safe environment where all members felt empowered to speak up when they observed biases influencing the discussions. This was accomplished by first addressing the issue of bias via email; AGU also addresses these issues in the orientation for the committees. We further discussed, and were open with each other about, our own biases during the first meeting. As the chair, Dr. Halford asked a few of the committee members to make sure to call her out on her biases, demonstrating that it is okay to be called out. This helped ensure that we put forward the most accomplished scientists from our field. At least once during each meeting, we asked whether anyone had noticed any biases during the discussions, without needing to assign bias to any particular committee member.

Committee members read all nomination packages, and many read the papers referenced within the packages. The materials in a nomination package provide evidence for the nomination citation and the claims made within the package. Some members initially broke the packages into three groups (top, middle, and bottom) to help focus discussions. Many discussions revolved around what evidence was presented, what was omitted, and whether the nomination and supporting letters were consistent with the short citation, CV, and selected bibliography.

The meetings were timed to ensure each package received a similar amount of discussion time. If a particular package needed extra discussion, it was returned to when time allowed. Committee members presented the packages and led discussions about which achievements were described and supported by evidence related to the three criteria outlined above. If members could not attend a meeting or felt more comfortable providing written comments, they contributed asynchronously to the summary for the nominee so that other members could read their comments.

During the final meetings, we discussed the ordering of the nominations. We considered multiple ranking strategies, including mean rank, median rank, and ranked choice. We found that, with few exceptions, the ranking of each nominee changed minimally (typically a shift of no more than 1–3 positions) between methods. This provided confidence in the choices we put forward to the Union committee and in their final order. If a ranking did change significantly, or if the shift occurred at a critical boundary (e.g., changed who would be put forward to the Union committee), we examined the deviation between the rankings and discussed the reasons behind any scores that differed significantly from the majority opinion. We also took the time to check our potential biases. Given the distribution of submitted nomination packages, we found an even distribution across sub-fields, gender, and other underrepresented groups. We feel confident that, through a diverse committee and discussions about potential biases, we sufficiently mitigated our biases and put forth the most deserving nominees.
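
As a concrete illustration of the comparison described above, the sketch below aggregates individual members’ rankings by mean and by median rank and flags how far each nominee moves between the two orderings. The nominee names and scores are hypothetical, and this is not the committee’s actual software; it is a minimal sketch of the kind of cross-check we describe.

```python
from statistics import mean, median

# Hypothetical rankings: each committee member ranks the same nominees (1 = top).
rankings = {
    "Nominee A": [1, 2, 1, 3, 2],
    "Nominee B": [2, 1, 3, 1, 1],
    "Nominee C": [3, 3, 2, 2, 3],
    "Nominee D": [4, 4, 4, 4, 4],
}

def ordered_by(score_fn):
    """Return nominees sorted by an aggregate of their individual ranks."""
    return sorted(rankings, key=lambda nominee: score_fn(rankings[nominee]))

by_mean = ordered_by(mean)
by_median = ordered_by(median)

# Flag nominees whose position shifts between methods; a large shift, or one at
# the cut-off for forwarding to the Union committee, would trigger extra discussion.
for nominee in rankings:
    shift = abs(by_mean.index(nominee) - by_median.index(nominee))
    print(f"{nominee}: mean-rank position {by_mean.index(nominee) + 1}, "
          f"median-rank position {by_median.index(nominee) + 1}, shift {shift}")
```

In this toy example the two methods agree exactly (all shifts are zero), mirroring the stability we observed; a nonzero shift at the forwarding boundary is the case that would warrant revisiting the underlying scores.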

The top four candidates were typically unanimously supported by the committee. The most contentious packages were those whose nominated work undoubtedly contributed to our field but did not address the connection between that work and the SPA sub-fields. It is sometimes unclear what the best route is to take with these nominations; often they are dual submissions with another section, such as Planetary Sciences or Atmospheric and Space Electricity.

3 Committee recommendations for the program process

At the end of the committee’s work, we reflected on the process to identify issues that may have affected our discussions and rankings. These were added to the list of potential biases for the following year. For example, after 2019 we identified a new bias favouring science within the solar community. The data products and scientific results from this sub-field are frequently utilised by the magnetospheric and ionospheric/atmospheric communities, and so are perceived as valuable by members from those communities. However, solar scientists are frequently unaware of the work performed within other sub-fields. The impact of this physical reality was seen both in the applicability of a topic to interdisciplinary science and in the likelihood of journal articles obtaining higher citation counts. During committee discussions, we determined that some aspects of this bias are not actively harmful, as each SPA sub-field has a different scope. Unlike the solar community, the magnetospheric and ionospheric/atmospheric communities may be perceived as producing results with a more immediate impact on society. This could further interact with the experimentalist/theorist bias: scientific advances in these sub-fields may be unconsciously interpreted as more applied science and therefore less worthy of being considered a discovery or breakthrough. Although the committees have been unable to determine the best way to address these biases, they were identified and discussed.

Another example is the number of other fellowships or awards won by a nominee. This shortcut metric was not consistently perceived as good or bad. Some committee members interpreted a large number of awards as a reliable indicator of quality science. Others perceived the presentation of other awards negatively or neutrally: they did not consider it a reliable shortcut metric for excellence, and felt it took up space that could have been used to discuss the nominee’s scientific impact. Still others felt we should acknowledge those who did not have other awards but had an outstanding scientific impact.

We found it essential to discuss biases and the evaluation criteria we would use. We also found it beneficial to have these discussions before reading and ranking the nominations. It provided a moment for everyone to check their thought process before forming an opinion on a package.

AGU’s nomination software plays into one implicit bias: the first information displayed pertains to the nominator. It should not matter who the nominator is. Putting this information up front gives the impression that the nominator is more important than the nominee and perpetuates the idea that science is still a “good ol’ boys club”. Putting the nominee’s information first would help mitigate the Matthew bias and return the emphasis where it belongs: on the nominee’s skills and accomplishments.

4 Conclusion

The Fellows honour is the highest honour AGU bestows. Thus, it is paramount that the evaluation criteria reflect the values of our community. Most nomination packages deserve high praise for the nominee’s work and commitment to the AGU community. Historically, however, some very important values have been overlooked. These typically fall under the “sustained scientific impact” section of the AGU Honours nominating criteria and include the impact of service and sustained support activities, such as data curation, which enable countless others to lead breakthroughs and discoveries or perform cross-disciplinary work. Unfortunately, there is also a long history of ignoring the breakthroughs and contributions made by individuals from underrepresented groups. This includes women (∼12% of current SPA fellows) and racial/ethnic minorities (<12% of current SPA fellows), among others. These biases against marginalized groups and institutions can be mitigated by avoiding heavily weighting metrics such as the h-index and past awards Chapman et al. (2019); Leydesdorff et al. (2019). For example, within the SPA community, we have had years where zero women were nominated. This has led to discussions concerning who becomes, or more accurately, does not become, a Fellow. The situation has improved in recent years thanks to the efforts of the Nominating Task Force Jaynes et al. (2019), the Fellows committee, and AGU’s efforts to acknowledge and mitigate implicit biases. However, we must continue to be vigilant and work towards ensuring we recognise all who are deserving of becoming an AGU Fellow. We encourage the AGU community, Union sections, and AGU leadership to reflect as we continue to consider biases within our fields. Furthermore, we must continue to work towards ensuring that colleagues who have been forgotten because of the “invisible” work they do are honoured according to their contributions. The following are recommendations for other award committees based on our experience:

• Build a safe environment for people to become aware of their own biases and bring up biases that they see surface within the discussions

• Build in ways for biases to be checked throughout the process

• Develop and maintain a list of implicit and explicit biases to look out for

• Check for bias before finalisation of recommendations.

• Build a diverse committee:

  ◦ Discipline/expertise/sub-field

  ◦ Gender

  ◦ Institution type

  ◦ Geographic location

  ◦ Career level

• Ensure work can be done asynchronously.

• Provide useful feedback to nominators for improved nomination letters.

Author contributions

AH wrote the initial draft of the paper. All co-authors helped edit and refine the document and served on the SPA Fellows committee, which developed the best practices included in this document.

Funding

AH’s work on this paper was funded by the Space Precipitation Impacts project at Goddard Space Flight Center through the Heliophysics Internal Science Funding Model.

Acknowledgments

AB is supported by the Office of Naval Research.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi, F., Gholizadeh, H., and Motahar, S. M. (2014). Visibility and citation impact. Int. Educ. Stud. 7, 120–125. doi:10.5539/ies.v7n4p120

Cameron, E. Z., White, A. M., and Gray, M. E. (2016). Solving the productivity and impact puzzle: Do men outperform women, or are metrics biased? BioScience 66, 245–252. doi:10.1093/biosci/biv173

Chapman, C. A., Bicca-Marques, J. C., Calvignac-Spencer, S., Fan, P., Fashing, P. J., Gogarten, J., et al. (2019). Games academics play and their consequences: How authorship, h-index and journal impact factors are shaping the future of academia. Proc. R. Soc. B 286, 20192047. doi:10.1098/rspb.2019.2047

Ford, H. L., Brick, C., Blaufuss, K., and Dekens, P. S. (2018). Gender inequity in speaking opportunities at the American geophysical union fall meeting. Nat. Commun. 9, 1358–1366. doi:10.1038/s41467-018-03809-5

Jaynes, A. N., MacDonald, E. A., and Keesee, A. M. (2019). Equal representation in scientific honors starts with nominations. Eos 100. doi:10.1029/2019EO117855

Leydesdorff, L., Bornmann, L., and Opthof, T. (2019). hα: the scientist as chimpanzee or bonobo. Scientometrics 118, 1163–1166. doi:10.1007/s11192-019-03004-3

Lincoln, A. E., Pincus, S., Koster, J. B., and Leboy, P. S. (2012). The matilda effect in science: Awards and prizes in the us, 1990s and 2000s. Soc. Stud. Sci. 42, 307–320. doi:10.1177/0306312711435830

Merton, R. K. (1968). The Matthew effect in science. Science 159, 56–63. doi:10.1126/science.159.3810.56

Ministry of Higher Education and Science (2022). Danish code of conduct for research integrity.

Moldwin, M. B., and Liemohn, M. W. (2018). High-citation papers in space physics: Examination of gender, country, and paper characteristics. J. Geophys. Res. Space Phys. 123, 2557–2565. doi:10.1002/2018JA025291

National Health and Medical Research Council (2018). Australian code for the responsible conduct of research. Canberra: Australian Government Publishing Service.

Pico, T., Bierman, P., Doyle, K., and Richardson, S. (2020). First authorship gender gap in the geosciences. Earth Space Sci. 7, e2020EA001203. doi:10.1029/2020ea001203

Rørstad, K., and Aksnes, D. W. (2015). Publication rate expressed by age, gender and academic position – A large-scale analysis of Norwegian academic staff. J. Inf. 9, 317–333. doi:10.1016/j.joi.2015.02.003

Rossiter, M. W. (1993). The Matthew Matilda effect in science. Soc. Stud. Sci. 23, 325–341. doi:10.1177/030631293023002004

Tahamtan, I., Afshar, A. S., and Ahamdzadeh, K. (2016). Factors affecting number of citations: A comprehensive review of the literature. Scientometrics 107, 1195–1225. doi:10.1007/s11192-016-1889-2

Keywords: diversity & inclusion, committee, honors and awards, equitability, bias, recommendation

Citation: Halford AJ, Burrell AG, Yizengaw E, Bothmer V, Carter BA, Raymond JC, Maute A, Samara M, Maruyama N and Alves LR (2022) Thoughts from a past AGU SPA fellows committee. Front. Astron. Space Sci. 9:1054343. doi: 10.3389/fspas.2022.1054343

Received: 26 September 2022; Accepted: 28 October 2022;
Published: 06 December 2022.

Edited by:

Evgeny V. Mishin, Air Force Research Laboratory, New Mexico, United States

Reviewed by:

Bertil Fabricius Dorch, University of Southern Denmark, Denmark

Copyright © 2022 Halford, Burrell, Yizengaw, Bothmer, Carter, Raymond, Maute, Samara, Maruyama and Alves. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexa J. Halford, Alexa.J.Halford@nasa.gov
