EDITORIAL article

Front. Energy Res., 21 August 2023
Sec. Smart Grids
This article is part of the Research Topic Explainability in Knowledge-based Systems and Machine Learning Models for Smart Grids

Editorial: Explainability in knowledge-based systems and machine learning models for smart grids

Gabriel Santos1, Tiago Pinto2*, Carlos Ramos1 and Juan M. Corchado3

  • 1GECAD—Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development, LASI—Intelligent Systems Associate Laboratory, Polytechnic of Porto, Porto, Portugal
  • 2Universidade de Trás-os-Montes e Alto Douro, INESC-TEC, Vila Real, Portugal
  • 3Department of Engineering, School of Science and Technology, Universidad de Salamanca and AIR Institute, Salamanca, Spain

1 Introduction

The increasing use of non-dispatchable Renewable Energy Sources (RES) means that the load-generation balance can no longer be addressed exclusively in a centralized way, driven by rigid demand. Consumers gain new roles and importance, because their distributed generation capability and demand flexibility can mitigate the uncertainty and rapid variation of RES (Rusche et al., 2023). However, consumers do not have sufficient experience or technical knowledge to manage their generation and demand flexibility properly, and the lack of decision support and automated solutions is the main barrier to realizing the great potential of consumer participation (Pinto and Vale, 2019). Existing solutions to support consumers in energy management and trading are limited in terms of intelligence and automation, while most consumers do not have enough knowledge of, or trust in, the available solutions to use them. The explainability of intelligent decision support models thereby becomes essential to motivate consumers' widespread adoption of such tools (Miller, 2019).

Despite promising advances based on Artificial Intelligence (AI), particularly on Machine Learning (ML) and Knowledge-Based Systems, the conception and development of adequate decision support models for energy management and trading are still limited, as is the interpretability of such models. In fact, one of the main barriers to the successful adoption of AI-based solutions is the lack of users' trust. Ensuring models' explainability through Explainable AI (XAI) is a priority to address this issue (European Commission, 2020). Through automatic explanations, users are able to understand the reasons behind a system's decisions, increasing the acceptance of AI-based solutions (Zhang and Chen, 2020). This can be concluded from the results of the study conducted by Saranya and Subhashini (2023), which provides a systematic review of XAI across different applications, surveying 91 articles published from 2018 to 2022. This work resulted in an XAI taxonomy with four common approaches: functioning-based, result-based, conceptual, and mixed. Its discussion and results focus mainly on the concept of explainability, the methodology, the need for XAI, the principles of XAI, the properties of explanations, and the main associated challenges.

The need for AI explainability in the specific scope of energy and power systems is surveyed by Machlev et al. (2022), covering literature from 2019 to 2022. This analysis reveals interesting trends in current research, mainly regarding the way in which different XAI techniques are used and the challenges and limitations of adopting and implementing XAI techniques in the field of energy and power systems. In particular, the survey concludes that SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are the most widely used XAI techniques. As the main obstacles to the successful implementation of XAI, the study highlights standardization, security, and misplaced confidence. The work also suggests several potential applications and future research directions related to XAI and energy, including optimal energy management and control, energy consumer applications, and power system monitoring. Additionally, most of the ML models to which XAI is applied are traditional ML algorithms, while deep learning models are still rarely addressed. The application of XAI to ML models in the specific scope of smart grids is reviewed by Xu et al. (2022), using papers collected from Google Scholar over a 5-year period. In this study, three types of ML interpretability methods are studied: pre-model, in-model, and post-model. It is concluded that pre-model interpretability methods support the understanding of data, in-model interpretability methods are more faithful to the model, and post-model interpretability methods can interpret more complex deep models in different forms. Overall, the review concludes that post-model interpretability methods are the most widely used type of interpretability in smart grid-related ML works.
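
Since the survey identifies SHAP as the most widely used XAI technique in this field, a minimal sketch may help illustrate the typical post-hoc workflow. It is purely illustrative and not taken from any of the surveyed works: the load-forecasting model, the synthetic data, and the feature names are all assumptions.

```python
# Hedged sketch: post-hoc explanation of a tree-based load-forecasting model
# with SHAP. Data and feature names are synthetic and purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 3))  # hypothetical features: temperature, hour, lagged load
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
for name, v in zip(["temperature", "hour", "lagged_load"],
                   np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {v:.3f}")
```

Because post-hoc methods such as SHAP leave the underlying model untouched, they fit naturally within the post-model category that Xu et al. (2022) find to be the most widely used in smart grid ML works.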

In line with the identified need for quick advancement in this field, this Research Topic addresses the most recent advances regarding explainability models for intelligent decision support and management systems in the scope of power and energy systems. The Research Topic brings together the most recent and relevant contributions on XAI that improve the acceptability, trust, and willingness of users to adopt advanced models in this domain, as a way to foster their widespread use. It comprises both theoretical conceptual models and applied models that constitute significant contributions to the body of knowledge on the explainability of AI-based solutions related to energy management and decision support.

2 A short review of the contributions in this Research Topic

One of the main sources of mistrust in the field of AI concerns the results achieved by ML models, which are usually regarded as black boxes. Li et al. present a short-term load prediction model based on transfer learning. The proposed method combines an attention mechanism with a long short-term memory network with coupled input and forgetting gates to construct a novel AM-CIF-LSTM short-term load prediction model. A variational mode decomposition method is used to extract the trend component and certain periodic high-frequency components of the load datasets. The achieved results show that the AM-CIF-LSTM model surpasses the performance of state-of-the-art methods and adapts to rapid variations in the load trend when data are insufficient.
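
The coupled input-forget gates and the variational mode decomposition preprocessing are specific to that paper and are omitted here; the sketch below, written under those simplifying assumptions, shows only the generic pattern of an attention mechanism applied over LSTM hidden states for one-step-ahead load prediction, with all dimensions hypothetical.

```python
# Minimal sketch of attention over LSTM hidden states for short-term load
# forecasting. This is NOT the authors' AM-CIF-LSTM: the coupled
# input-forget gate and the VMD preprocessing from the paper are omitted.
import torch
import torch.nn as nn

class AttnLSTMForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # scores each time step
        self.head = nn.Linear(hidden, 1)  # one-step-ahead load value

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.lstm(x)               # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)      # weighted sum of hidden states
        return self.head(context).squeeze(-1)

# Hypothetical usage on a random batch of 24-step, 4-feature load windows.
model = AttnLSTMForecaster(n_features=4)
y_hat = model(torch.randn(32, 24, 4))     # -> (32,) predicted loads
```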

A load pattern extraction method based on multidimensional electrical consumption feature construction is presented in Wang et al. A convolutional autoencoder is created to extract the temporal features of industrial load data. These temporal features are combined with an industrial load characteristic set, which is created using an improved entropy weight method. A Self-Organizing Map network is then used to calculate the local density and distance attributes of nodes in order to select the initial clustering centers of the K-means algorithm and thereby achieve daily load clustering. Results show that the proposed model achieves good stability and clustering performance.
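
The SOM-based selection of initial centers is specific to that work; as a loose stand-in, the sketch below picks initial K-means centers using a simple local density and distance heuristic over hypothetical daily load profiles, then hands them to K-means. The data, the density cutoff, and the cluster count are all assumptions.

```python
# Hedged sketch: density-and-distance selection of K-means initial centers
# (a stand-in for the paper's SOM-based selection), on synthetic profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
profiles = rng.random((300, 96))  # 300 hypothetical daily curves, 15-min steps
k = 4

# Local density: number of neighbours within a small distance cutoff.
d = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
density = (d < np.quantile(d, 0.05)).sum(axis=1)

# Distance to the nearest point of higher density (density-peaks style).
delta = np.array([
    d[i][density > density[i]].min() if (density > density[i]).any() else d[i].max()
    for i in range(len(profiles))
])

# Points scoring high on both density and delta become the initial centers.
centers = profiles[np.argsort(density * delta)[-k:]]
labels = KMeans(n_clusters=k, init=centers, n_init=1).fit_predict(profiles)
print(np.bincount(labels))  # cluster sizes
```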

Wang et al. present a novel subsynchronous oscillation detection method for noisy synchrophasor data that treats detection as a binary classification (true or false) problem. To overcome the class imbalance caused by true subsynchronous oscillation samples being substantially scarcer than false ones, a weighted kernel extreme learning machine is constructed as the classifier that implements the detection. Results show the effectiveness of the proposed algorithm when dealing with imbalanced data.
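
A weighted kernel extreme learning machine can be written in a few lines of linear algebra. The sketch below uses the common inverse-class-frequency weighting and an RBF kernel; it is a generic illustration under those assumptions, not the authors' exact formulation, and the synthetic data merely mimics the imbalance described above.

```python
# Hedged sketch of a weighted kernel extreme learning machine (W-KELM) for
# an imbalanced binary task; weighting scheme and data are assumptions.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_wkelm(X, y, C=10.0, gamma=1.0):
    """y in {-1, +1}; returns a prediction function."""
    n = len(y)
    # Inverse-frequency class weights counteract the imbalance.
    w = np.where(y == 1, n / (2.0 * (y == 1).sum()), n / (2.0 * (y == -1).sum()))
    omega = rbf_kernel(X, X, gamma)
    # Solve (I/C + W * Omega) beta = W * y for the output weights.
    beta = np.linalg.solve(np.eye(n) / C + w[:, None] * omega, w * y)
    return lambda Xq: np.sign(rbf_kernel(Xq, X, gamma) @ beta)

# Hypothetical imbalanced data: few "true oscillation" samples (+1).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (190, 5)), rng.normal(2, 1, (10, 5))])
y = np.r_[-np.ones(190), np.ones(10)]
predict = train_wkelm(X, y)
print((predict(X) == y).mean())  # training accuracy on the sketch data
```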

Tian et al. present a non-embedded cable joint temperature inversion method. Uniform manifold approximation and projection is used for feature reduction. A novel meta-heuristic, the improved sparrow search algorithm, is then proposed by combining the Tent chaotic map with a population mutation perturbation strategy to optimize a back-propagation neural network. The temperature inversion performance of the proposed model is compared with state-of-the-art algorithms on a cable joint temperature-rise test, showing superior performance while improving the interpretability of the model.
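
Of the ingredients just listed, the Tent chaotic map is the simplest to show in isolation: it is typically used to spread the initial population of a metaheuristic more evenly over the search space than plain uniform sampling. The sketch below illustrates only that initialization step, with hypothetical bounds and sizes; the mutation perturbation strategy and the sparrow search update rules themselves are omitted.

```python
# Hedged sketch: Tent chaotic map initialization of a metaheuristic
# population. Bounds, sizes, and the seed value are assumptions.
import numpy as np

def tent_map_population(pop_size, dim, lower, upper, z0=0.37, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty((pop_size, dim))
    z = z0
    for i in range(pop_size):
        for j in range(dim):
            z = 2 * z if z < 0.5 else 2 * (1 - z)  # Tent map iteration
            z = (z + 1e-8 * rng.random()) % 1.0    # tiny jitter avoids the
                                                   # floating-point collapse to 0
            x[i, j] = lower + z * (upper - lower)  # scale to the search bounds
    return x

pop = tent_map_population(pop_size=30, dim=10, lower=-5.0, upper=5.0)
print(pop.shape, pop.min() >= -5.0, pop.max() <= 5.0)
```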

The interpretability and explainability capabilities of AI-based models applied to energy system problems are explored in Alsaigh et al. A total of 3,568 relevant papers were collected from the Scopus database, from which 15 parameters for AI governance in energy systems were automatically discovered and grouped into four macro-parameters: AI Behaviour and Governance, Technology, Design and Development, and Operations. The findings show that research on AI explainability in energy systems is segmented and focused on a few AI traits (fairness, interpretability, explainability, and trustworthiness) and energy system problems (stability and reliability analysis, energy forecasting, and power system flexibility). The study also highlights some specific challenges for XAI in energy systems, namely fault detection, diagnosis, and prediction, and points out that the security of ML models is another area of increasing importance.

3 Conclusion

The papers within this Research Topic address the field of explainability of AI-based solutions related to energy management and decision support, offering complementary views on several of the most important topics in this domain. The development of innovative XAI models applied to a diversity of AI-based models, with a focus on machine learning approaches, provides a solid array of solutions able to deal with several of the most challenging problems regarding the need for interpretability of AI solutions in future power systems. Within this Research Topic, models applying different ML approaches, namely classification, clustering, and forecasting, including transfer learning and deep learning technologies, are addressed with the aim of solving different application problems, specifically short-term load prediction, subsynchronous oscillation detection, and load pattern extraction. Meta-heuristic optimization models are also used and new approaches proposed, specifically for non-embedded cable joint temperature inversion.

Overall, this Research Topic provides a broad spectrum of works covering essential and complementary topics related to the role of interpretability and explainability as a booster of the acceptability of AI-based solutions in future power systems. The perspectives presented here are crucial for a more comprehensive understanding of the solutions already achieved in this domain, and also motivate the significant efforts that are still required in future research and development.

Author contributions

GS: Investigation, Writing–review and editing. TP: Methodology, Supervision, Writing–original draft. CR: Supervision, Writing–review and editing. JC: Funding acquisition, Investigation, Writing–review and editing.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

European Commission (2020). White paper on artificial intelligence: a European approach to excellence and trust. Available at: https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.

Machlev, R., Heistrene, L., Perl, M., Levy, K. Y., Belikov, J., Mannor, S., et al. (2022). Explainable artificial intelligence (XAI) techniques for energy and power systems: review, challenges and opportunities. Energy AI 9, 100169. doi:10.1016/j.egyai.2022.100169

Miller, C. (2019). What’s in the box?! Towards explainable machine learning applied to non-residential building smart meter classification. Energy Build. 199, 523–536. doi:10.1016/j.enbuild.2019.07.019

Pinto, T., and Vale, Z. (2019). “AiD-EM: adaptive decision support for electricity markets negotiations,” in Proceedings of the twenty-eighth international joint conference on artificial intelligence, {IJCAI-19}, 6563–6565. doi:10.24963/ijcai.2019/957

Rusche, S., Weissflog, J., Wenninger, S., and Häckel, B. (2023). How flexible are energy flexibilities? Developing a flexibility score for revenue and risk analysis in industrial demand-side management. Appl. Energy 345, 121351. doi:10.1016/j.apenergy.2023.121351

Saranya, A., and Subhashini, R. (2023). A systematic review of explainable artificial intelligence models and applications: recent developments and future trends. Decis. Anal. J. 7, 100230. doi:10.1016/j.dajour.2023.100230

Xu, C., Liao, Z., Li, C., Zhou, X., and Xie, R. (2022). Review on interpretable machine learning in smart grid. Energies 15 (12), 4427. doi:10.3390/en15124427

Zhang, Y., and Chen, X. (2020). Explainable recommendation: a survey and new perspectives. Found. Trends® Inf. Retr. 14 (1), 1–101. doi:10.1561/1500000066

Keywords: energy management, explainable artificial intelligence, knowledge-based systems, machine learning, smart grids

Citation: Santos G, Pinto T, Ramos C and Corchado JM (2023) Editorial: Explainability in knowledge-based systems and machine learning models for smart grids. Front. Energy Res. 11:1269397. doi: 10.3389/fenrg.2023.1269397

Received: 29 July 2023; Accepted: 14 August 2023;
Published: 21 August 2023.

Edited and reviewed by:

ZhaoYang Dong, Nanyang Technological University, Singapore

Copyright © 2023 Santos, Pinto, Ramos and Corchado. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tiago Pinto, tiagopinto@utad.pt
