
ORIGINAL RESEARCH article

Front. Energy Res., 07 February 2022
Sec. Smart Grids
This article is part of the Research Topic Advanced Digital Technologies in Digitalized Smart Grid.

A Reduced-Order RNN Model for Solving Lyapunov Equation Based on Efficient Vectorization Method

Zhiying Chen1, Zhaobin Du1*, Feng Li2 and Chengjun Xia1
  • 1School of Electric Power Engineering, South China University of Technology, Guangzhou, China
  • 2The Grid Planning and Research Center of Guangdong Power Grid Corporation, Guangzhou, China

With the trend of electronization of the power system, a traditional serial numerical algorithm is more and more difficult to adapt to the demand of real-time analysis of the power system. As one of the important calculation tasks in power systems, the online solution of Lyapunov equations has attracted much attention. A recurrent neural network (RNN) is more promising to become the online solver of the Lyapunov equation due to its hardware implementation capability and parallel distribution characteristics. In order to improve the performance of the traditional RNN, in this study, we have designed an efficient vectorization method and proposed a reduced-order RNN model to replace the original one. First, a new vectorization method is proposed based on the special structure of the vectorized matrix, which is more efficient than the traditional Kronecker product method. Second, aiming at the expanding effect of vectorization on the problem scale, a reduced-order RNN model based on symmetry is proposed to reduce the solution scale of the RNN. With regard to accuracy and robustness, it is proved theoretically that the proposed model maintains the same solution as the original model and that it is suitable for the Zhang neural network (ZNN) model and the gradient neural network (GNN) model under linear or non-linear activation functions. Finally, the effectiveness and superiority of the proposed method are verified by simulation examples, three of which are standard examples of power systems.

Introduction

With the trend of electronization of the power system, the scale of system computation is increasing day by day, while the demand for real-time analysis and calculation in the process of system operation remains unchanged. Traditional serial algorithms cannot resolve this contradiction well, so various parallel algorithms and distributed methods have appeared successively. In power system state estimation, Chen et al. (2017) have used the SuperLU_MT solver to estimate the state of an actual power grid, making full use of the parallel characteristics of the multicore, multi-thread solver. Liu Z. et al. (2020) have fully explored the parallelism in the calculation of continuation power flow and applied the continuous Newton method power flow model to realize a GPU-based parallel solution algorithm of continuation power flow for large-scale grids under multiple operating conditions. Moreover, a novel distributed dynamic event-triggered Newton–Raphson algorithm has been proposed to solve the double-mode energy management problem in a fully distributed fashion (Li et al., 2020). Similarly, Li Y. et al. (2019) proposed an event-triggered distributed algorithm with some desirable features, namely, distributed execution, asynchronous communication, and independent calculation, which can solve the issues of day-ahead and real-time cooperative energy management for multienergy systems. Given that software algorithms are essentially run by hardware, implementing functions directly in hardware is also an option for real-time computing. For example, Hafiz et al. (2020) proposed a real-time stochastic optimization of energy storage management using deep learning–based forecasts for residential PV applications, where the key to the real-time computation is the hardware controller. It is worth pointing out that, compared with the aforementioned methods, the neural dynamics method has greater potential in the field of real-time calculation of power systems (Le et al., 2019), and its time constant can reach tens of milliseconds (Chicca et al., 2014) because of its parallel distribution characteristics and the convenience of hardware implementation.

The Lyapunov equation is widely used in some scientific and engineering fields to analyze the stability of dynamic systems (He et al., 2017; He and Zhang, 2017; Liu J. et al., 2020). In addition, the Lyapunov equation plays an important role in the controller design and robustness analysis of non-linear systems (Zhou et al., 2009; Raković and Lazar, 2014). In the field of power systems, the balanced truncation method, controller design, and stability analysis are also inseparable from the solution of the Lyapunov equation (Zhao et al., 2014; Zhu et al., 2016; Shanmugam and Joo, 2021). Therefore, many solving algorithms have been proposed to solve the Lyapunov equation. For example, Bartels and Stewart proposed the Bartels–Stewart method (Bartels and Stewart, 1972), which is a numerically stable solution. Lin and Simoncini (Lin and Simoncini, 2013) proposed the minimum residual method for solving the Lyapunov equation. Stykel (2008) used the low-rank iterative method to solve the Lyapunov equation and verified the effectiveness of the method through numerical examples. However, the efficiency of these serial processing algorithms is not high in large-scale applications and related real-time processing (Xiao and Liao, 2016).

Recently, due to their parallelism and convenience of hardware implementation, recurrent neural networks have been proposed and designed to solve the Lyapunov equation (Zhang et al., 2008; Yi et al., 2011; Yi et al., 2013; Xiao et al., 2019). The RNN mainly includes the Zhang neural network (ZNN) and the gradient neural network (GNN) (Zhang et al., 2008). Most of the research studies on the RNN focus on the improvement of model convergence. For example, Yi et al. (2013) point out that when solving a stationary or a non-stationary Lyapunov equation, the convergence of the ZNN is better than that of the GNN. Yi et al. (2011) used a power-sigmoid activation function (PSAF) to build an improved GNN model to accelerate the iterative convergence for the Lyapunov equation. In Xiao and Liao (2016), the sign-bi-power activation function (SBPAF) is used to accelerate the convergence of the ZNN model for solving the Lyapunov equation, and the proposed ZNN model has finite-time convergence, which is obviously better than the previous ZNN and GNN models. In recent years, some studies have considered the noise-tolerant ZNN model. In Xiao et al. (2019), two robust non-linear ZNN (RNZNN) models are established to find the solution of the Lyapunov equation under various noise conditions. Different from previous ZNN models activated by the typical activation functions (such as the linear activation function, the bipolar sigmoid activation function, and the power activation function), these two RNZNN models have predefined-time convergence in the presence of various noises.

However, both GNN and ZNN need to transform the solution matrix from the matrix form to the vector form through the Kronecker product, which is called vectorization of the RNN model (Yi et al., 2011). The use of the Kronecker product will make the scale of the problem to be solved larger. As the size of the problem increases, the scaling effect of the Kronecker product becomes more obvious. The enlargement effect of the Kronecker product on the model size will not only lead to insufficient memory when the RNN is simulated on software but also make the hardware implementation of the RNN model need more devices and wiring, which increases the volume of hardware, the complexity of hardware production, and the failure rate of hardware. However, no study has discussed the order reduction of the RNN model.

It should be pointed out that the vectorized RNN model needs to be solved using a hardware circuit. However, as the relevant research of the RNN for solving the Lyapunov equation is still in the stage of theoretical exploration and improvement, there are no reports about hardware products of the RNN solver of the Lyapunov equation. Relevant studies (Zhang et al., 2008; Yi et al., 2011; Yi et al., 2013; Xiao and Liao, 2016; Xiao et al., 2019) simulate the execution process of the RNN hardware circuit through the form of software simulation, and this study also adopts this form. It is undeniable that the results of software simulation are consistent with those of hardware implementation. Therefore, the theoretical derivation and simulation results of the RNN in this article and in the literature (Zhang et al., 2008; Yi et al., 2011; Yi et al., 2013; Xiao and Liao, 2016; Xiao et al., 2019) can be extended to the scenarios of hardware implementation.

The RNN is used to solve the Lyapunov equation with the ultimate goal of developing an effective online calculation model, so it is of great significance to improve the calculation speed of the RNN. Current studies focus on improving the computational speed of the RNN by improving its convergence. However, how to efficiently realize vectorization of the RNN model is also a breakthrough point for improving the computational efficiency of the RNN method. At present, the Kronecker product is generally used to transform the solution matrix into the vector form (Horn and Johnson, 1991). The Kronecker product actually performs multiple matrix multiplication operations, and the time complexity of multiplying two n×n matrices is O(n³), so the time complexity of the Kronecker product increases rapidly as the scale increases. This means that the traditional matrix vectorization method based on the Kronecker product still has room for optimization.

In summary, this article proposes an efficient method for vectorizing the RNN model based on the special structure of the vectorized matrix, which is more efficient than the traditional expansion method by the Kronecker product. Aiming at the expanding effect of vectorization on the problem scale, a reduced-order RNN model based on symmetry is proposed for solving the time-invariant Lyapunov equation, and the validity and applicability of the reduced-order RNN model are proved theoretically. The main contributions of this article are as follows.

1) An efficient method for vectorization of RNN model is proposed. Compared with the traditional vectorization method, this method has higher efficiency and less time consumption.

2) The reduced-order RNN model for solving the Lyapunov equation based on symmetry is proposed, which greatly reduces the solution scale. It is proved theoretically that the proposed model can maintain the same solution as that of the original model. Meanwhile, it is proved theoretically that the proposed model is suitable for the ZNN model and GNN model under linear or non-linear activation functions.

3) Several simulation examples are given to verify the effectiveness and superiority of the proposed efficient method for vectorization of the RNN and the reduced-order RNN model. It is also verified that the neural dynamics method is suitable for solving the Lyapunov equation of power systems through three standard examples of power systems.

In order to show the contributions of this study more clearly, the logical graph using the RNN model for solving the Lyapunov equation is shown in Figure 1, and the main novelties and differences of this article from Refs Yi et al. (2011); Yi et al. (2013); Xiao and Liao (2016); Xiao et al. (2019) are shown in Table 1.

FIGURE 1. Logical graph of using an RNN model for solving the Lyapunov equation.

TABLE 1. Main novelties and differences of this article from the relevant references.

In Table 1, the items and numbers correspond to the three steps of Figure 1. The relevant references include Yi et al. (2011); Yi et al. (2013); Xiao and Liao (2016); Xiao et al. (2019).

In conclusion, Refs (Yi et al., 2011; Yi et al., 2013; Xiao and Liao, 2016; Xiao et al., 2019) focus on constructing a stronger RNN model to improve the convergence and noise-tolerant ability, including using different activation functions and neural networks. However, this study focuses on the vectorization method and the reduced-order RNN model.

Problem Formulation and Related Work

Problem Formulation

Consider the following well-known Lyapunov equation (Zhang and Jiang, 1995):

$$A^{T}X(t)+X(t)A=-C \qquad (1)$$

where A ∈ ℝn×n is a constant stable real matrix and C ∈ ℝn×n is a constant symmetric positive-definite matrix. The objective is to find the unknown matrix X(t) ∈ ℝn×n that makes the Lyapunov matrix Eq. 1 hold true. Let $X^{*}\in\mathbb{R}^{n\times n}$ denote the theoretical solution of Eq. 1. In addition, two of the most relevant works (i.e., the GNN and ZNN models) for solving the Lyapunov Eq. 1 are presented in the following.
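For reference, the theoretical solution X* can also be obtained with a conventional serial solver and used to check the neural models discussed below. The following is a minimal sketch assuming NumPy/SciPy (the article itself relies on MATLAB); the test matrices are illustrative and not taken from the article's examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Small illustrative stable A and symmetric positive-definite C (assumptions).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.eye(2)

# solve_continuous_lyapunov solves M X + X M^H = Q; with M = A^T and Q = -C
# this is exactly Eq. 1: A^T X + X A = -C.
X_star = solve_continuous_lyapunov(A.T, -C)
residual = np.linalg.norm(A.T @ X_star + X_star @ A + C, ord="fro")
assert residual < 1e-10
```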

GNN

According to the principle of GNN (Yi et al., 2011) and combined with the characteristics of Lyapunov equation, a corresponding GNN model can be designed to solve the Lyapunov equation. The design steps are as follows:

First, construct an energy function based on norm as follows:

$$\Delta=\frac{1}{2}\left\|A^{T}X(t)+X(t)A+C\right\|_{F}^{2} \qquad (2)$$

where $\|\cdot\|_{F}$ denotes the Frobenius norm (F-norm). The minimizer of the energy function is the solution of the Lyapunov equation.

Second, based on the principle of the negative gradient descent of the GNN, the following formula can be constructed:

$$\Delta X=-A\left(A^{T}X(t)+X(t)A+C\right)-\left(A^{T}X(t)+X(t)A+C\right)A^{T} \qquad (3)$$

By introducing the adjustable positive parameter γ, the following GNN model can be obtained:

$$\dot{X}(t)=-\gamma A\left(A^{T}X(t)+X(t)A+C\right)-\gamma\left(A^{T}X(t)+X(t)A+C\right)A^{T} \qquad (4)$$

where $\gamma>0$, $X(t)\in\mathbb{R}^{n\times n}$, and $X(0)\in\mathbb{R}^{n\times n}$ is the initial value of $X(t)$.

Finally, the conventional linear GNN (Eq. 4) can be improved into the following non-linear expression by employing a non-linear activation function array $\mathcal{F}(\cdot)$:

$$\dot{X}(t)=-\gamma\left(A\,\mathcal{F}\!\left(A^{T}X(t)+X(t)A+C\right)+\mathcal{F}\!\left(A^{T}X(t)+X(t)A+C\right)A^{T}\right) \qquad (5)$$

where $\mathcal{F}(\cdot):\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}$ denotes a matrix-valued activation function array of the GNN models. In this study, the bipolar sigmoid activation function (BPAF) is selected as the representative non-linear activation function of the GNN model for simulation because of its strong convergence (Yi et al., 2011). The expression of the BPAF is as follows:

$$\mathcal{F}(x)=\frac{1-\exp(-\delta x)}{1+\exp(-\delta x)} \qquad (6)$$

where δ is a constant and δ>1.
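To make Eqs. 4–6 concrete, a minimal numerical sketch of the GNN right-hand side with the BPAF is given below, assuming NumPy; the function names are illustrative and not from the article.

```python
import numpy as np

def bpaf(x, delta=4.0):
    """Bipolar sigmoid activation function of Eq. 6, applied element-wise."""
    return (1.0 - np.exp(-delta * x)) / (1.0 + np.exp(-delta * x))

def gnn_rhs(X, A, C, gamma=10.0, act=bpaf):
    """Right-hand side of the non-linear GNN of Eq. 5; with act as the
    identity map it reduces to the linear GNN of Eq. 4."""
    E = A.T @ X + X @ A + C      # residual of the Lyapunov equation
    FE = act(E)                  # activated residual
    return -gamma * (A @ FE + FE @ A.T)
```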

ZNN

First, following Zhang et al.’s design method (Zhang et al., 2002), we can define the following matrix-valued error function to monitor the solution process of Lyapunov Eq. 1:

$$E(t)=A^{T}X(t)+X(t)A+C \qquad (7)$$

Then, in view of the definition of $E(t)$ and the design formula $dE(t)/dt=-\gamma\varphi(E(t))$, the dynamic equation of the ZNN model for solving the online Lyapunov Eq. 1 is derived as follows:

$$A^{T}\dot{X}(t)+\dot{X}(t)A=-\gamma\,\varphi\!\left(A^{T}X(t)+X(t)A+C\right) \qquad (8)$$

where $\varphi(\cdot):\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}$ denotes a matrix-valued activation function array of the ZNN models. The definition of $\gamma$ in the ZNN model is the same as that in the GNN model.

In this study, the RNZNN-1 model is selected as the representative of the non-linear activation function of the ZNN model for simulation because of its strong convergence (Xiao et al., 2019). The expression of the non-linear activation function in the RNZNN-1 model is as follows:

$$\varphi(x)=\left(a_{1}|x|^{\eta}+a_{2}|x|^{\omega}\right)\operatorname{sign}(x)+a_{3}x+a_{4}\operatorname{sign}(x) \qquad (9)$$

where the design parameters satisfy $0<\eta<1$, $\omega>1$, $a_{1}>0$, $a_{2}>0$, $a_{3}>0$, $a_{4}>0$, and $\operatorname{sign}(x)$ denotes the signum function.
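As a companion to Eq. 9, a minimal element-wise implementation is sketched below (NumPy assumed); the default parameter values follow the simulation settings reported later (γ=10, η=0.25, ω=4, a1=a2=a3=a4=1).

```python
import numpy as np

def rnznn1_act(x, eta=0.25, omega=4.0, a1=1.0, a2=1.0, a3=1.0, a4=1.0):
    """Non-linear activation function of the RNZNN-1 model (Eq. 9),
    applied element-wise to the error matrix E(t)."""
    s = np.sign(x)
    ax = np.abs(x)
    return (a1 * ax**eta + a2 * ax**omega) * s + a3 * x + a4 * s
```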

An Efficient Method for Vectorization of RNN Model

General Method of Vectorizing RNN Model

The RNN model needs to be transformed to the vector form so that it can be used for software simulation (Li X. et al., 2019) and hardware implementation.

Vectorization of the GNN Model

Yi et al. (2011) pointed out that the vectorization of GNN model is as follows:

$$\dot{\mathrm{vec}X}(t)=-\gamma\Big((A\otimes I)\big((A^{T}\otimes I)\,\mathrm{vec}X(t)+(I\otimes A^{T})\,\mathrm{vec}X(t)+\mathrm{vec}C\big)+(I\otimes A)\big((A^{T}\otimes I)\,\mathrm{vec}X(t)+(I\otimes A^{T})\,\mathrm{vec}X(t)+\mathrm{vec}C\big)\Big)=-\gamma\,(A\oplus A)\big((A^{T}\oplus A^{T})\,\mathrm{vec}X(t)+\mathrm{vec}C\big) \qquad (10)$$

where

$$A\oplus A=A\otimes I+I\otimes A \qquad (11)$$
$$A^{T}\oplus A^{T}=A^{T}\otimes I+I\otimes A^{T} \qquad (12)$$

where $\oplus$ denotes the Kronecker sum and $\otimes$ denotes the Kronecker product. Given $X=[x_{ij}]\in\mathbb{R}^{n\times n}$, we can vectorize X as a column vector $\mathrm{vec}(X)\in\mathbb{R}^{n^{2}\times 1}$, which is defined as $\mathrm{vec}(X)=[x_{11},\dots,x_{1n},x_{21},\dots,x_{n1},\dots,x_{nn}]^{T}$.

Since the order of matrix addition and matrix transpose is interchangeable (Cheng and Chen, 2017),

$$(Y+Z)^{T}=Y^{T}+Z^{T} \qquad (13)$$

Applying this property to Eq. 11, we can get

$$(A\otimes I+I\otimes A)^{T}=(A\otimes I)^{T}+(I\otimes A)^{T} \qquad (14)$$

According to Chen and Zhou (2012), the relationship between the matrix transpose and Kronecker product is as follows:

$$(Y\otimes Z)^{T}=Y^{T}\otimes Z^{T} \qquad (15)$$

Applying this property to Eq. 14, we can get

$$(A\otimes I)^{T}+(I\otimes A)^{T}=A^{T}\otimes I^{T}+I^{T}\otimes A^{T} \qquad (16)$$

Considering $I=I^{T}$ and combining Eqs 11, 12, 14, and 16, we can get

$$(A\oplus A)^{T}=A^{T}\oplus A^{T} \qquad (17)$$
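These identities are easy to check numerically. The sketch below (NumPy assumed, using the row-major vec(·) convention defined above) verifies Eqs. 11, 12, and 17, as well as the vectorization identity underlying Eqs. 10 and 18.

```python
import numpy as np

n = 4
A, X, C = (np.random.randn(n, n) for _ in range(3))
I = np.eye(n)

K_sum  = np.kron(A, I) + np.kron(I, A)        # A (+) A, Eq. 11
K_sumT = np.kron(A.T, I) + np.kron(I, A.T)    # A^T (+) A^T, Eq. 12

# Transpose identity of Eq. 17: (A (+) A)^T = A^T (+) A^T.
assert np.allclose(K_sum.T, K_sumT)

# Row-major vectorization, vec(X) = [x11, ..., x1n, x21, ..., xnn]^T.
vec = lambda M: M.flatten()
# Identity used in Eqs. 10 and 18: vec(A^T X + X A + C) = (A^T (+) A^T) vec(X) + vec(C).
assert np.allclose(vec(A.T @ X + X @ A + C), K_sumT @ vec(X) + vec(C))
```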

Vectorization of the ZNN Model

The vectorization process of the ZNN model is similar to that of the GNN. Applying the same Kronecker-product vectorization to Eq. 8, we can get:

$$(A^{T}\oplus A^{T})\,\dot{\mathrm{vec}X}(t)=-\gamma\,\varphi\big((A^{T}\otimes I)\,\mathrm{vec}X(t)+(I\otimes A^{T})\,\mathrm{vec}X(t)+\mathrm{vec}C\big)=-\gamma\,\varphi\big((A^{T}\oplus A^{T})\,\mathrm{vec}X(t)+\mathrm{vec}C\big) \qquad (18)$$

Vectorization of the RNN Model

By comparing Eqs 10, 17, and 18, it can be seen that the key to vectorization of the RNN model is to compute $A^{T}\oplus A^{T}$.

According to Eq. 12, the calculation of $A^{T}\oplus A^{T}$ can be divided into three steps:

1) Calculate $A^{T}\otimes I$

$$A^{T}\otimes I=\begin{bmatrix} a_{11}I & a_{21}I & \cdots & a_{n1}I\\ a_{12}I & a_{22}I & \cdots & a_{n2}I\\ \vdots & \vdots & & \vdots\\ a_{1n}I & a_{2n}I & \cdots & a_{nn}I \end{bmatrix} \qquad (19)$$

where each block $a_{ij}I$ is a diagonal matrix with $n$ rows and $n$ columns, and $A^{T}\otimes I$ is a matrix with $n^{2}$ rows and $n^{2}$ columns.

2) Calculate $I\otimes A^{T}$

$$I\otimes A^{T}=\begin{bmatrix} A^{T} & 0 & \cdots & 0\\ 0 & A^{T} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & A^{T} \end{bmatrix} \qquad (20)$$

where $I\otimes A^{T}$ is a matrix with $n^{2}$ rows and $n^{2}$ columns.

3) Add $A^{T}\otimes I$ to $I\otimes A^{T}$

$$A^{T}\otimes I+I\otimes A^{T}=\begin{bmatrix} a_{11}I+A^{T} & a_{21}I & \cdots & a_{n1}I\\ a_{12}I & a_{22}I+A^{T} & \cdots & a_{n2}I\\ \vdots & \vdots & \ddots & \vdots\\ a_{1n}I & a_{2n}I & \cdots & a_{nn}I+A^{T} \end{bmatrix} \qquad (21)$$

An Efficient Method for Vectorization of RNN Model

According to the previous analysis, no matter how the matrix A changes, the structure of $A^{T}\oplus A^{T}$ is fixed. Based on this special structure of $A^{T}\oplus A^{T}$, an efficient method for vectorization of the RNN model is proposed in this article. The steps are as follows.

1) Create a matrix with $n^{2}$ rows and $n^{2}$ columns named K and fill K with the elements of A according to Eq. 19.

2) Fill K with the elements of A according to Eq. 20.

3) Add the corresponding element of A to the diagonal element of K according to Eq. 21.

The vectorization method of the RNN model proposed in this article is still based on the Kronecker product, but its time complexity is greatly reduced, because it replaces matrix multiplication with assignment and addition.
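A minimal sketch of this assembly idea is given below (NumPy assumed, row-major vec(·) convention); the loop structure only illustrates replacing multiplications with assignments and additions and is not the authors' exact implementation, and the Kronecker-product result is computed here solely to verify the output.

```python
import numpy as np

def assemble_K(A):
    """Assemble K = A^T (+) A^T = A^T⊗I + I⊗A^T directly from the entries of A,
    using only assignments and additions (Eqs. 19-21)."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    r = np.arange(n)
    # Pattern of A^T ⊗ I: the diagonal of block (i, j) holds a_{ji} (Eq. 19).
    for i in range(n):
        for j in range(n):
            K[i * n + r, j * n + r] += A[j, i]
    # Pattern of I ⊗ A^T: add A^T on every diagonal block (Eqs. 20 and 21).
    for b in range(n):
        K[b * n:(b + 1) * n, b * n:(b + 1) * n] += A.T
    return K

A = np.random.randn(5, 5)
I = np.eye(5)
assert np.allclose(assemble_K(A), np.kron(A.T, I) + np.kron(I, A.T))
```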

The Reduced-Order RNN Model for Solving Lyapunov Equations Based on Symmetry

Since the solution of Eq. 1, X*, is always symmetric, as long as the upper triangular elements of X* are solved, the lower triangular elements of X* can be obtained correspondingly, which can greatly reduce the computational burden of solving the Lyapunov equation. Based on this idea, a reduced-order RNN model for solving the Lyapunov equation based on symmetry is proposed in this article.

The Reduced-Order ZNN Model With Linear Activation Function

Vectorization

Let’s consider a ZNN model with linear activation function after vectorization. The formula is as follows:

$$\begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1,n+1} & \cdots & k_{1,n^{2}}\\ k_{21} & k_{22} & \cdots & k_{2,n+1} & \cdots & k_{2,n^{2}}\\ \vdots & \vdots & & \vdots & & \vdots\\ k_{n+1,1} & k_{n+1,2} & \cdots & k_{n+1,n+1} & \cdots & k_{n+1,n^{2}}\\ \vdots & \vdots & & \vdots & & \vdots\\ k_{n^{2},1} & k_{n^{2},2} & \cdots & k_{n^{2},n+1} & \cdots & k_{n^{2},n^{2}} \end{bmatrix} \begin{bmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \vdots\\ \dot{x}_{n+1}\\ \vdots\\ \dot{x}_{n^{2}} \end{bmatrix} = -\gamma\left( \begin{bmatrix} k_{11} & \cdots & k_{1,n^{2}}\\ \vdots & & \vdots\\ k_{n^{2},1} & \cdots & k_{n^{2},n^{2}} \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n^{2}} \end{bmatrix} + \begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \right) \qquad (22)$$

where $K=A^{T}\oplus A^{T}$.

For the convenience of later discussion, a matrix $S\in\mathbb{R}^{n\times n}$ is constructed and assigned values as follows:

$$S=\begin{bmatrix} 1 & 2 & \cdots & n\\ n+1 & n+2 & \cdots & 2n\\ \vdots & \vdots & & \vdots\\ n(n-1)+1 & n(n-1)+2 & \cdots & n^{2} \end{bmatrix} \qquad (23)$$

Each element of S is the index number of the element of A at the same position.

We can expand X to $\mathrm{vec}X\in\mathbb{R}^{n^{2}\times 1}$. Assume that $x_{2}$ and $x_{n+1}$ are, respectively, the elements in the 2nd row and the (n+1)th row of $\mathrm{vec}X$. Due to the symmetry, $x_{2}$ will be equal to $x_{n+1}$.

Reduce the Column Number of K

If the Kronecker product is directly carried out on Eq. 1, then

$$\begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1,n^{2}}\\ k_{21} & k_{22} & \cdots & k_{2,n^{2}}\\ \vdots & \vdots & & \vdots\\ k_{n^{2},1} & k_{n^{2},2} & \cdots & k_{n^{2},n^{2}} \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n^{2}} \end{bmatrix} = -\begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \qquad (24)$$

We can use $K\,\mathrm{vec}X=-\mathrm{vec}C$ to express Eq. 24. Lan (2017) points out that if A is stable and C is symmetric positive definite, then the Lyapunov Eq. 1 has a unique symmetric positive definite solution. Therefore, the K matrix of Eq. 24 must be invertible.

Multiply both sides of Eq. 22 by the inverse matrix of K, then we get

$$\begin{bmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \vdots\\ \dot{x}_{n^{2}} \end{bmatrix} = -\gamma\left( \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n^{2}} \end{bmatrix} + K^{-1}\begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \right) \qquad (25)$$

From Eq. 25, we can see that $-K^{-1}\mathrm{vec}C$ is the solution of the Lyapunov Eq. 1, which means $-K^{-1}\mathrm{vec}C=\mathrm{vec}X^{*}$. Since X* is a symmetric matrix, the differential equations of $x_{2}(t)$ and $x_{n+1}(t)$ are the same. If $x_{2}(0)$ and $x_{n+1}(0)$ are equal, then the time-domain trajectories of $x_{2}(t)$ and $x_{n+1}(t)$ are the same, namely, $\dot{x}_{2}(t)=\dot{x}_{n+1}(t)$ and $x_{2}(t)=x_{n+1}(t)$. Therefore, for Eq. 22, column $n+1$ of K can be added to the second column. Similarly, the same column addition can be performed on the other columns in symmetric positions, so the column number of K reduces to $0.5n(n+1)$. Eq. 22 becomes

$$\begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1,0.5n(n+1)}\\ k_{21} & k_{22} & \cdots & k_{2,0.5n(n+1)}\\ \vdots & \vdots & & \vdots\\ k_{n^{2},1} & k_{n^{2},2} & \cdots & k_{n^{2},0.5n(n+1)} \end{bmatrix} \begin{bmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \vdots\\ \dot{x}_{0.5n(n+1)} \end{bmatrix} = -\gamma\left( \begin{bmatrix} k_{11} & \cdots & k_{1,0.5n(n+1)}\\ \vdots & & \vdots\\ k_{n^{2},1} & \cdots & k_{n^{2},0.5n(n+1)} \end{bmatrix} \begin{bmatrix} x_{1}\\ \vdots\\ x_{0.5n(n+1)} \end{bmatrix} + \begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \right) \qquad (26)$$

Reduce the Row Number of K

When the steady state is considered, the differential term of Eq. 26 is 0, and we can get:

$$\begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1,0.5n(n+1)}\\ k_{21} & k_{22} & \cdots & k_{2,0.5n(n+1)}\\ \vdots & \vdots & & \vdots\\ k_{n^{2},1} & k_{n^{2},2} & \cdots & k_{n^{2},0.5n(n+1)} \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{0.5n(n+1)} \end{bmatrix} = -\begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \qquad (27)$$

As mentioned before, if A is stable and C is symmetric positive definite, then the Lyapunov Eq. 1 has a unique symmetric positive definite solution. Therefore, the number of independent equations should be the same as the number of unknowns (Cheng and Chen, 2017); namely, the rank of the coefficient matrix of Eq. 27 is equal to $0.5n(n+1)$, which means the row vectors of the coefficient matrix of Eq. 27 are linearly dependent.

We can construct the augmented matrix of Eq. 27 and name it G. Define the first row of G as the vector $\alpha_{1}$, the second row as the vector $\alpha_{2}$, and so on, with row $n^{2}$ defined as the vector $\alpha_{n^{2}}$.

According to linear algebra, the vector set $\alpha_{1},\alpha_{2},\dots,\alpha_{n^{2}}$ is linearly dependent if and only if at least one of the vectors in the set can be represented as a linear combination of the other vectors.

Let us define the vectors which can be represented linearly by the other vectors as the redundant vectors. Suppose that

$$\alpha_{n^{2}}=h_{1}\alpha_{1}+h_{2}\alpha_{2}+\cdots+h_{n^{2}-1}\alpha_{n^{2}-1} \qquad (28)$$

where $h_{1},h_{2},\dots,h_{n^{2}-1}$ are real numbers and at least one of them is not equal to 0. Then $\alpha_{n^{2}}$ is a redundant vector. As long as the redundant vectors are found and the equations in their corresponding rows are deleted, a new augmented matrix with full row rank can be obtained.

According to the aforementioned analysis, as long as A is stable and C is symmetric positive definite, the redundant vectors must exist, which means there are always some vectors that satisfy Eq. 28. However, A and C are independent of each other. Considering that some vectors must satisfy Eq. 28 for any stable A and any symmetric positive definite C, the only possibility is that, for each redundant vector, there is another vector equal to it, and the row indexes of the two are located at symmetric positions in the matrix S. Only in this way, based on the symmetric structure of matrix C, can the redundant vectors always satisfy Eq. 28 when A and C are independent of each other.

For a redundant vector, its own row index and the row index of another vector equal to it can form a pair of indexes. We can use these index pairs to find the redundant vectors and delete the corresponding rows. In general, in an index pair, the equation corresponding to the index whose value is larger is selected for deletion.

After the row deletion, the row number of K also reduces to $0.5n(n+1)$. Eq. 26 becomes

$$\begin{bmatrix} k_{11} & \cdots & k_{1,0.5n(n+1)}\\ \vdots & & \vdots\\ k_{0.5n(n+1),1} & \cdots & k_{0.5n(n+1),0.5n(n+1)} \end{bmatrix} \begin{bmatrix} \dot{x}_{1}\\ \vdots\\ \dot{x}_{0.5n(n+1)} \end{bmatrix} = -\gamma\left( \begin{bmatrix} k_{11} & \cdots & k_{1,0.5n(n+1)}\\ \vdots & & \vdots\\ k_{0.5n(n+1),1} & \cdots & k_{0.5n(n+1),0.5n(n+1)} \end{bmatrix} \begin{bmatrix} x_{1}\\ \vdots\\ x_{0.5n(n+1)} \end{bmatrix} + \begin{bmatrix} c_{1}\\ \vdots\\ c_{0.5n(n+1)} \end{bmatrix} \right) \qquad (29)$$

We can use $K_{r}\,\dot{\mathrm{vec}X_{r}}(t)=-\gamma\left(K_{r}\,\mathrm{vec}X_{r}(t)+\mathrm{vec}C_{r}\right)$ to express Eq. 29.

Reduced-Order GNN Model With Linear Activation Function

Consider a GNN model with linear activation function after vectorization. The formula is as follows:

$$\begin{bmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \vdots\\ \dot{x}_{n^{2}} \end{bmatrix} = -\gamma \begin{bmatrix} k_{11} & \cdots & k_{1,n^{2}}\\ \vdots & & \vdots\\ k_{n^{2},1} & \cdots & k_{n^{2},n^{2}} \end{bmatrix}^{T} \left( \begin{bmatrix} k_{11} & \cdots & k_{1,n^{2}}\\ \vdots & & \vdots\\ k_{n^{2},1} & \cdots & k_{n^{2},n^{2}} \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n^{2}} \end{bmatrix} + \begin{bmatrix} c_{1}\\ c_{2}\\ \vdots\\ c_{n^{2}} \end{bmatrix} \right) \qquad (30)$$

When the steady state is considered, the differential term of Eq. 30 is 0, and Eq. 30 is changed into Eq. 24. From the aforementioned derivation, it can be known that both $K\,\mathrm{vec}X=-\mathrm{vec}C$ and $K_{r}\,\mathrm{vec}X_{r}=-\mathrm{vec}C_{r}$ yield the solution of the Lyapunov Eq. 1. On this basis, an attempt is made to construct a reduced-order GNN model with linear activation function after vectorization, as follows:

$$\begin{bmatrix} \dot{x}_{1}\\ \vdots\\ \dot{x}_{0.5n(n+1)} \end{bmatrix} = -\gamma \begin{bmatrix} k_{11} & \cdots & k_{1,0.5n(n+1)}\\ \vdots & & \vdots\\ k_{0.5n(n+1),1} & \cdots & k_{0.5n(n+1),0.5n(n+1)} \end{bmatrix}^{T} \left( \begin{bmatrix} k_{11} & \cdots & k_{1,0.5n(n+1)}\\ \vdots & & \vdots\\ k_{0.5n(n+1),1} & \cdots & k_{0.5n(n+1),0.5n(n+1)} \end{bmatrix} \begin{bmatrix} x_{1}\\ \vdots\\ x_{0.5n(n+1)} \end{bmatrix} + \begin{bmatrix} c_{1}\\ \vdots\\ c_{0.5n(n+1)} \end{bmatrix} \right) \qquad (31)$$

When the steady state is considered, Eq. 31 is changed into $K_{r}\,\mathrm{vec}X_{r}=-\mathrm{vec}C_{r}$, which means the solution of the Lyapunov Eq. 1 can finally be obtained by solving Eq. 31.

Reduced-Order RNN Model With Non-Linear Activation Functions

Before a non-linear activation function is introduced into the linear RNN, it must be proved theoretically that its introduction can guarantee the correct convergence of the RNN. The reduced-order RNN in this article does not change the structure of the RNN but only its size. Because both the problem scale and the specific values of the matrices are generally expressed in symbolic form in such theoretical derivations (Xiao and Liao, 2016; Xiao et al., 2019), the reduced-order RNN model in this article remains covered by the existing theoretical proofs for introducing non-linear activation functions into the linear RNN. In other words, the order-reduction method in this article can be applied to the RNN model with non-linear activation functions.

The Generation of the Reduced-Order RNN Model

The steps for generating the reduced-order RNN model are as follows:

1) S is constructed, and the construction logic is as described above.

2) Considering that the indexes in the symmetric positions of S can form $0.5n(n-1)$ index pairs, we construct a matrix L and fill L with the $0.5n(n-1)$ index pairs. It is important to note that all of the elements in the first column of L must be the upper triangular elements (excluding diagonal elements) of S. For the convenience of presentation, suppose M is the first column of L and N is the second column of L.

3) For K and vecC of RNN model, the rows corresponding to the element values of N are deleted.

4) For the K and vecC obtained in step 3, add the symmetric columns according to the element values of M and N; the columns obtained by the addition replace the columns corresponding to the element values of M, while the columns corresponding to the element values of N are deleted. For $\dot{\mathrm{vec}X}(t)$ and $\mathrm{vec}X(t)$, the rows corresponding to the element values of N are deleted.

It should be pointed out that, in the mathematical proof, if the order of row reduction and column reduction is exchanged, the correctness of the reduced-order RNN model cannot be proved in the same way, or another proof method is needed. However, once the mathematical proof has been completed, the order of row reduction and column reduction does not affect the final result, since we know in advance which rows and columns are to be deleted. In the aforementioned steps for generating the reduced-order RNN model, step 3 is carried out first because it reduces the computational amount of the column addition and thus achieves higher computational efficiency.
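A minimal sketch of steps 1–4 is given below (NumPy assumed, row-major vec(·) convention, 0-based indices); the small test matrix and the helper names are illustrative assumptions, not the article's data.

```python
import numpy as np

def reduce_order(K, vec_c, n):
    """Steps 1-4: build the index pairs from S, delete the redundant rows (N),
    and merge the symmetric columns (N into M)."""
    S = np.arange(n * n).reshape(n, n)                                   # step 1
    M = np.array([S[i, j] for i in range(n) for j in range(i + 1, n)])   # step 2: upper indices
    N = np.array([S[j, i] for i in range(n) for j in range(i + 1, n)])   # ...and their mirrors
    keep = np.setdiff1d(np.arange(n * n), N)       # indices that remain after reduction
    K3, c3 = K[keep, :], vec_c[keep]               # step 3: delete rows indexed by N
    K3[:, M] += K3[:, N]                           # step 4: add symmetric columns...
    return K3[:, keep], c3, keep                   # ...and drop the columns indexed by N

# Check on a small assumed test case: solve K_r vecX_r = -vecC_r and rebuild X.
n = 3
A = -np.eye(n) - np.diag(np.ones(n - 1), -1)       # a stable test matrix (assumption)
C = np.eye(n)
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
Kr, cr, keep = reduce_order(K, C.flatten(), n)
xr = np.linalg.solve(Kr, -cr)                      # steady state of the reduced model
X = np.zeros((n, n))
X.flat[keep] = xr                                  # upper-triangular part of X*
X = np.triu(X) + np.triu(X, 1).T                   # mirror by symmetry
assert np.allclose(A.T @ X + X @ A, -C)
```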

The Significance of the Reduced-Order RNN Model

In order to better explain the value and significance of the reduced-order RNN model proposed in this article, the differences before and after the order reduction are shown from the perspectives of software simulation and hardware implementation, respectively. For the convenience of discussion, the GNN is taken as an example to illustrate.

Simulation on the Software

We use the ode45 function of MATLAB to solve the GNN model after vectorization. By comparing Eq. 30 and Eq. 31, it can be seen that the memory requirement of the reduced-order GNN model is much smaller than that of the original GNN model. Therefore, the reduced-order GNN model greatly alleviates the problem of insufficient memory that may occur in the software simulation of the GNN.
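As an illustration of this simulation step, the sketch below integrates the reduced-order linear GNN of Eq. 31 with SciPy's RK45 (an ode45-style solver); the article itself uses MATLAB's ode45, and the zero initial state, tolerances, and time horizon here are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_reduced_gnn(Kr, cr, gamma=10.0, t_end=2.0):
    """Integrate d(vecXr)/dt = -gamma * Kr^T (Kr vecXr + vecCr), i.e. Eq. 31."""
    rhs = lambda t, x: -gamma * (Kr.T @ (Kr @ x + cr))
    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(Kr.shape[1]),
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.t, sol.y   # sol.y[:, -1] approximates the steady state vecXr*
```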

Hardware Implementation

When we use the traditional GNN model, the structure of the circuit diagram is shown as Figure 2 (Yi et al., 2011).

FIGURE 2. Circuit schematic that realizes the GNN model.

where $M=[m_{ij}]\in\mathbb{R}^{n^{2}\times n^{2}}=A\oplus A$, $P=[p_{1},p_{2},\dots,p_{n^{2}}]^{T}=\mathrm{vec}C\in\mathbb{R}^{n^{2}\times 1}$, and $Y=[y_{1},y_{2},\dots,y_{n^{2}}]^{T}=\mathrm{vec}X\in\mathbb{R}^{n^{2}\times 1}$.

When we use the reduced-order GNN model, the structure of the circuit diagram is also as shown in Figure 2, except that $n^{2}$ in the diagram becomes $0.5n(n+1)$.

So the reduced-order GNN model greatly reduces the number of devices and wiring required for the hardware realization of GNN model, which is conducive to reducing the volume of hardware, the complexity of hardware production, and the failure rate of hardware.

Illustrative Verification

The simulation examples in this article are all completed on the MATLAB 2013b platform. In this article, the ode45 function of MATLAB is used to simulate the iterative process of RNN (Zhang et al., 2008). The corresponding computing performance is tested on a personal computer with Intel Core i7-4790 CPU @3.2GHz and 8 GB RAM.

Since there are great differences between software and hardware in the principle of realizing the integral function, there will be a big gap between the time cost of simulating the RNN process in software and the time cost of implementing the RNN model in hardware. Considering that the research on the RNN model for solving the Lyapunov equation is still in the stage of theoretical exploration and has not yet reached the stage of hardware production, this article does not discuss the influence of the proposed reduced-order RNN model on the time consumption of the RNN.

The Reduced-Order RNN Model for Solving Lyapunov Equation Based on Symmetry

Example 1

Let us consider the Lyapunov Eq. 1 with the following coefficient matrices:

A=[11234761812]andC=I3×3

where A is similar to Example II in Xiao et al. (2019). However, A and C of Example II in Xiao et al. (2019) do not fit the definition of Eq. 1, so we change A a little bit and set C to be the identity matrix.

In this example, we set γ=10, η=0.25, ω=4, a1=a2=a3=a4=1, and δ=4 (Yi et al., 2011; Xiao et al., 2019).

In order to demonstrate the advantages of the reduced-order RNN model, this article compares the performance of the reduced-order RNN and the original-order RNN, as shown in Table 2. In Table 2, linearity and non-linearity, respectively, mean the linear activation function and the non-linear activation function. Scale means the row number of $\mathrm{vec}X$ or $\mathrm{vec}X_{r}$. Proportion means the row number of $\mathrm{vec}X_{r}$ divided by the row number of $\mathrm{vec}X$. The ZNN F-norm and GNN F-norm, respectively, mean $\|A^{T}X+XA+C\|_{F}$ at the end of the simulation of the ZNN and the GNN.

TABLE 2. Comparison of the original-order RNN and the reduced-order RNN under Example 1.

In order to study the effect of the order-reduction method proposed in this article on the convergence of the RNN model, the F-norm curves of the original-order RNN model and the reduced-order RNN model are drawn, as shown in Figure 3. In Figure 3A, LAF means the ZNN model with the linear activation function. NAF means the ZNN model with the non-linear activation function of Eq. 9. In Figure 3B, LAF means the GNN model with the linear activation function. BPAF means the GNN model with the non-linear activation function of Eq. 6. In both Figure 3A and Figure 3B, F-norm refers to $\|A^{T}X(t)+X(t)A+C\|_{F}$.

FIGURE 3. Comparison of the convergence between the original-order and the reduced-order RNN models under Example 1. (A) is ZNN, (B) is GNN.

Example 2

To enlarge the scale of the example, we consider Lyapunov Eq. 1 with the following coefficient matrices:

A=[173456317345431734543173654317]andC=I5×5

where A is similar to Example III in Xiao et al. (2019). However, A and C of Example III in Xiao et al. (2019) do not fit the definition of Eq. 1, so we change A a little bit and set C to be the identity matrix.

In this example, RNN’s model parameters are the same as Example 1. Similar to Example 1, we can get Table 3 and Figure 4. The definitions of all nouns in Table 3 are the same as those in Table 2, and the definitions of all nouns in Figure 4 are the same as those in Figure 3.

TABLE 3. Comparison of the original-order RNN and the reduced-order RNN under Example 2.

FIGURE 4. Comparison of the convergence between the original-order and the reduced-order RNN models under Example 2. (A) is ZNN, (B) is GNN.

Example 3
A=[50525958361085110107144384351241039856108496105125229247511757992147222436193442746710462101044767136246744985923215144]andC=I10×10

A 10×10 matrix is randomly generated, and then an α-shift is applied to the matrix to make it stable (Yang et al., 1993); this is the generation process of A in Example 3. C is set to be the identity matrix.
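A minimal sketch of such a generation procedure is given below (NumPy assumed); the random distribution and the shift margin are assumptions, since the article does not report the exact values used.

```python
import numpy as np

def alpha_shift_stable(M, margin=1.0):
    """Shift M to M - alpha*I so that every eigenvalue has a negative real part."""
    alpha = np.max(np.linalg.eigvals(M).real) + margin
    return M - alpha * np.eye(M.shape[0])

rng = np.random.default_rng(0)
A = alpha_shift_stable(rng.integers(-10, 10, size=(10, 10)).astype(float))
assert np.all(np.linalg.eigvals(A).real < 0)
```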

In this example, RNN’s model parameters are the same as Example 1. Similar to Example 1, we can get Table 4 and Figure 5. The definitions of all nouns in Table 4 are the same as those in Table 2, and the definitions of all nouns in Figure 5 are the same as those in Figure 3.

TABLE 4. Comparison of the original-order RNN and the reduced-order RNN under Example 3.

FIGURE 5. Comparison of the convergence between the original-order and the reduced-order RNN models under Example 3. (A) is ZNN, (B) is GNN.

Based on the information in the aforementioned three tables, we can draw the following conclusions:

a) The reduced-order RNN model has a very obvious effect, with the scale reduced by about 33–45%. Moreover, the effect of the reduced-order RNN model becomes more obvious with the increase in the size of the example. According to $\lim_{n\to\infty}\frac{0.5n(n+1)}{n^{2}}=0.5$, it can be seen that when the size of the example is larger, the percentage of the scale decrease is closer to 50%. The reduced-order RNN model not only greatly alleviates the problem of insufficient memory in the software simulation of the RNN but also greatly reduces the number of devices and wiring required for the hardware realization of the RNN model, which is conducive to reducing the volume of hardware, the complexity of hardware production, and the failure rate of hardware.

b) Under different case scales, whether it is ZNN or GNN, whether it is linear activation function or non-linear activation function, the steady-state errors of the reduced-order RNN model are very close to 0, which means the reduced-order RNN model can always converge to the correct solution of the Lyapunov equation. This indicates that the reduced-order RNN model is applicable to ZNN and GNN, as well as the scenarios of linear activation function and non-linear activation function, which is consistent with the theoretical derivation results above.

c) Under different case scales, the difference in the steady-state accuracy between the reduced-order RNN and the original-order RNN is very small, indicating that the reduced-order RNN basically does not affect the steady-state accuracy of RNN.

Based on the information in the aforementioned three figures, we can draw the following conclusions:

a) Under different case scales, the reduced-order RNN models with linear or non-linear activation functions either have little effect on the iterative convergence characteristics or enhance the convergence at the beginning of the iteration process while having little effect on the convergence at the end of it.

b) Under the non-linear activation functions, the convergence of the ZNN model is always stronger than that of the GNN model when other conditions are fixed.

c) Under the linear activation function, the convergence of the ZNN model is weaker than that of the GNN model when the size of the examples is small (e.g., Example 1 and Example 2). The convergence of the linear ZNN model is stronger than that of the linear GNN model when the size of the examples is large (e.g., Example 3).

d) For both ZNN and GNN, the convergence of the RNN model with non-linear activation function is always stronger than that of the linear RNN model.

e) With the increase in the size of the examples, the convergence of ZNN is basically unchanged, while the convergence of GNN will become significantly worse.

Example 4

In order to verify the applicability of neural dynamics method to the power system, the corresponding Lyapunov equation describing system controllability is generated for the IEEE three-machine nine-node system according to the principle of the balanced truncation method in (Zhao et al., 2014). The input signal is the rotor speed deviation and the output signal is the auxiliary stabilizing signal (Zhu et al., 2016). It should be noted that the IEEE standard systems used in this article come from the examples of the PST toolkit (Lan, 2017), and the linearization process of the system is realized by the svm_mgen.m of PST toolkit. We set γ=10. A ∈R15×15 and C ∈R15×15 of the Lyapunov equation are detailed in Supplementary Material.

In this example, the linear ZNN was selected for testing. Similar to Example 1, we can get Table 5 and Figure 6. The definitions of all nouns in Table 5 are the same as those in Table 2 and the definitions of all nouns in Figure 6 are the same as those in Figure 3.

TABLE 5. Comparison of the original-order ZNN and the reduced-order ZNN under Examples 4-6.

FIGURE 6. Comparison of the convergence between the original-order and the reduced-order ZNN models under Example 4.

Example 5

Similar to Example 4, we generate the corresponding Lyapunov equation describing system controllability of the IEEE 16-machine system. A ∈ R35×35 and C ∈ R35×35 of the Lyapunov equation are detailed in Supplementary Material. We set γ=100. The simulation results are shown in Table 5 and Figure 7. The definitions of all nouns in Figure 7 are the same as those in Figure 3.

FIGURE 7. Comparison of the convergence between the original-order and the reduced-order ZNN models under Example 5.

Example 6

Similar to Example 4, we generate the corresponding Lyapunov equation describing system controllability of the IEEE 48-machine system. A ∈ R97×97 and C ∈ R97×97 of the Lyapunov equation are detailed in Supplementary Material. We set γ=100. The simulation results are shown in Table 5 and Figure 8. The definitions of all nouns in Figure 8 are the same as those in Figure 3.

FIGURE 8. Comparison of the convergence between the original-order and the reduced-order ZNN models under Example 6.

It can be seen from Table 5 and Figures 6–8 that the neural dynamics method used to solve Lyapunov equations is also suitable for solving Lyapunov equations in power systems, and the reduced-order RNN model proposed in this article is effective in the examples of power systems. Moreover, with the increase in the power system scale, the convergence and steady-state accuracy of the ZNN model are almost unchanged, indicating the applicability of the RNN model to power systems of different scales.

It is worth mentioning that the integration between the electric power and natural gas systems has been steadily enhanced in recent decades. The incorporation of natural gas systems brings, in addition to a cleaner energy source, greater reliability and flexibility to the power system (Liu et al., 2021). Since the dynamic model of the electricity–gas coupled system can be expressed by differential-algebraic equations (Zhang, 2005; Yang, 2020), which means the dynamic model of the electricity–gas coupled system is of the same form as that of the power system, the aforementioned applicability analysis of the methods proposed in this article for large power systems is also applicable to large electricity–gas coupled systems.

An Efficient Method for Vectorization of RNN Model

Table 6 compares the time cost of the RNN model vectorization method proposed in this article and the traditional RNN model vectorization method. For the sake of convenience, the former is called method A and the latter is called method B. In order to better demonstrate the effect of the vectorization method of RNN model proposed in this article, four examples are added, as shown in Table 6. Four newly added examples are generated in the same way as Example 3 and are detailed in Supplementary Material, where ms means millisecond; scale means the order of A; and proportion refers to the time taken by method A divided by the time taken by method B.

TABLE 6. Comparison of the time cost of two vectorization methods for RNN models.

It can be seen from Table 6 that method A is significantly better than method B in terms of time cost, with the decrease in time cost between 48 and 98%. With the increase in the size of the examples, the proportion of time cost improvement generally increases. It should be pointed out that when the system sizes are 15, 35, and 97, the corresponding examples are the IEEE standard systems mentioned before, which indicates that the vectorization method proposed in this article is also effective in the example of power systems.

Conclusion

1) We propose an efficient method for vectorizing RNN models, which can achieve higher computational efficiency than the traditional method of vectorizing RNN based on the Kronecker product.

2) In order to reduce the solving scale of the RNN model, a reduced-order RNN model for solving the Lyapunov equation was proposed based on symmetry. At the same time, it is proved theoretically that the proposed model can maintain the same solution as that of the original model, and it is also proved that the proposed model is suitable for both the ZNN model and GNN model under linear or non-linear activation functions.

3) Several simulation examples are given to verify the effectiveness and superiority of the proposed method, while three standard examples of power systems are given to verify that the neural dynamics method is suitable for solving the Lyapunov equation of power systems.

Because the neural dynamics method has parallel distribution characteristics and hardware implementation convenience, its convergence and computation time are not sensitive to the system scale. Considering the current development level and trend of the very large-scale integration (VLSI) chip and the ultra large-scale integration (ULSI) chip, the wide application of the neural dynamics method in large-scale systems is expected.

In addition, the research on the RNN model used to solve the Lyapunov equation is mainly in the stage of theoretical improvement and exploration, and there are few reports about hardware products. The hardware product design will be the main content of the next stage.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (2019B111109001), the National Natural Science Foundation of China (51577071), and the Southern Power Grid Corporation’s Science and Technology Project (Project No. 037700KK52190015 (GDKJXM20198313)).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fenrg.2022.796325/full#supplementary-material

References

Bartels, R. H., and Stewart, G. W. (1972). Solution of the Matrix Equation AX + XB = C [F4]. Commun. ACM 15 (9), 820–826. doi:10.1145/361573.361582


Chen, Z., and Zhou, J. (2012). Introduction to Matrix Theory. Beijing: Beihang University Press.


Chen, Q., Gong, C., Zhao, J., Wang, Y., and Zou, D. (2017). Application of Parallel Sparse System Direct Solver Library Super LU_MT in State Estimation. Automation Electric Power Syst. 41 (3), 83–88. doi:10.7500/AEPS20160607008


Cheng, K., and Chen, X. (2017). Linear Algebra. Chongqing: Chongqing University Press.


Chicca, E., Stefanini, F., Bartolozzi, C., and Indiveri, G. (2014). Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems. Proc. IEEE 102 (9), 1367–1388. doi:10.1109/JPROC.2014.2313954


Hafiz, F., Awal, M. A., Queiroz, A. R. d., and Husain, I. (2020). Real-Time Stochastic Optimization of Energy Storage Management Using Deep Learning-Based Forecasts for Residential PV Applications. IEEE Trans. Ind. Applicat. 56 (3), 2216–2226. doi:10.1109/TIA.2020.2968534


He, W., and Zhang, S. (2017). Control Design for Nonlinear Flexible Wings of a Robotic Aircraft. IEEE Trans. Contr. Syst. Technol. 25 (1), 351–357. doi:10.1109/TCST.2016.2536708


He, W., Ouyang, Y., and Hong, J. (2017). Vibration Control of a Flexible Robotic Manipulator in the Presence of Input Deadzone. IEEE Trans. Ind. Inf. 13 (1), 48–59. doi:10.1109/TII.2016.2608739


Horn, R. A., and Johnson, C. R. (1991). Topics in Matrix Analysis. Cambridge: Cambridge University Press.


Lan, X. (2017). Research on Model Order Reduction Method and Predictive Control Algorithm of Grid Voltage Control System (Beijing: North China Electric Power University). [dissertation/master’s thesis].


Le, X., Chen, S., Li, F., Yan, Z., and Xi, J. (2019). Distributed Neurodynamic Optimization for Energy Internet Management. IEEE Trans. Syst. Man. Cybern, Syst. 49 (8), 1624–1633. doi:10.1109/TSMC.2019.2898551


Li, X., Yu, J., Li, S., Shao, Z., and Ni, L. (2019a). A Non-linear and Noise-Tolerant ZNN Model and its Application to Static and Time-Varying Matrix Square Root Finding. Neural Process. Lett. 50 (2), 1687–1703. doi:10.1007/s11063-018-9953-y


Li, Y., Zhang, H., Liang, X., and Huang, B. (2019b). Event-Triggered-Based Distributed Cooperative Energy Management for Multienergy Systems. IEEE Trans. Ind. Inf. 15 (4), 2008–2022. doi:10.1109/TII.2018.2862436


Li, Y., Gao, D. W., Gao, W., Zhang, H., and Zhou, J. (2020). Double-Mode Energy Management for Multi-Energy System via Distributed Dynamic Event-Triggered Newton-Raphson Algorithm. IEEE Trans. Smart Grid 11 (6), 5339–5356. doi:10.1109/TSG.2020.3005179


Lin, Y., and Simoncini, V. (2013). Minimal Residual Methods for Large Scale Lyapunov Equations. Appl. Numer. Maths. 72, 52–71. doi:10.1016/j.apnum.2013.04.004


Liu, J., Zhang, J., and Li, Q. (2020a). Upper and Lower Eigenvalue Summation Bounds of the Lyapunov Matrix Differential Equation and the Application in a Class Time-Varying Nonlinear System. Int. J. Control. 93 (5), 1115–1126. doi:10.1080/00207179.2018.1494389


Liu, Z., Chen, Y., Song, Y., Wang, M., and Gao, S. (2020b). Batched Computation of Continuation Power Flow for Large Scale Grids Based on GPU Parallel Processing. Power Syst. Techn. 44 (3), 1041–1046. doi:10.13335/j.1000-3673.pst.2019.2050


Liu, H., Shen, X., Guo, Q., and Sun, H. (2021). A Data-Driven Approach towards Fast Economic Dispatch in Electricity-Gas Coupled Systems Based on Artificial Neural Network. Appl. Energ. 286, 116480. doi:10.1016/j.apenergy.2021.116480


Raković, S. V., and Lazar, M. (2014). The Minkowski-Lyapunov Equation for Linear Dynamics: Theoretical Foundations. Automatica 50 (8), 2015–2024. doi:10.1016/j.automatica.2014.05.023


Shanmugam, L., and Joo, Y. H. (2021). Stability and Stabilization for T-S Fuzzy Large-Scale Interconnected Power System with Wind Farm via Sampled-Data Control. IEEE Trans. Syst. Man. Cybern, Syst. 51 (4), 2134–2144. doi:10.1109/TSMC.2020.2965577


Stykel, T. (2008). Low-rank Iterative Methods for Projected Generalized Lyapunov Equations. Electron. Trans. Numer. Anal. 30 (1), 187–202. doi:10.1080/14689360802423530


Xiao, L., and Liao, B. (2016). A Convergence-Accelerated Zhang Neural Network and its Solution Application to Lyapunov Equation. Neurocomputing 193, 213–218. doi:10.1016/j.neucom.2016.02.021


Xiao, L., Zhang, Y., Hu, Z., and Dai, J. (2019). Performance Benefits of Robust Nonlinear Zeroing Neural Network for Finding Accurate Solution of Lyapunov Equation in Presence of Various Noises. IEEE Trans. Ind. Inf. 15 (9), 5161–5171. doi:10.1109/TII.2019.2900659


Yang, J., Chen, C. S., Abreu-garcia, J. A. D., and Xu, Y. (1993). Model Reduction of Unstable Systems. Int. J. Syst. Sci. 24 (12), 2407–2414. doi:10.1080/00207729308949638


Yang, H. (2020). Dynamic Modeling and Stability Studies of Integrated Energy System of Electric, Gas and Thermal on Multiple Time Scales (Hunan: Changsha University of Science & Technology). [dissertation/master’s thesis].

Yi, C., Chen, Y., and Lu, Z. (2011). Improved Gradient-Based Neural Networks for Online Solution of Lyapunov Matrix Equation. Inf. Process. Lett. 111 (16), 780–786. doi:10.1016/j.ipl.2011.05.010


Yi, C., Chen, Y., and Lan, X. (2013). Comparison on Neural Solvers for the Lyapunov Matrix Equation with Stationary & Nonstationary Coefficients. Appl. Math. Model. 37 (4), 2495–2502. doi:10.1016/j.apm.2012.06.022


Zhang, Y., and Jiang, D. (1995). A Recurrent Neural Network for Solving Sylvester Equation with Time-Varying Coefficients. New York: Academic Press.


Zhang, Y., Jiang, D., and Wang, J. (2002). A Recurrent Neural Network for Solving Sylvester Equation with Time-Varying Coefficients. IEEE Trans. Neural Netw. 13 (5), 1053–1063. doi:10.1109/TNN.2002.1031938


Zhang, Y., Chen, K., Li, X., Yi, C., and Zhu, H. (2008). “Simulink Modeling and Comparison of Zhang Neural Networks and Gradient Neural Networks for Time-Varying Lyapunov Equation Solving,” in Proceedings of IEEE International Conference on Natural Computation. Jinan. IEEE, 521–525. doi:10.1109/ICNC.2008.47


Zhang, Y. (2005). Study on the Methods for Analyzing Combined Gas and Electricity Networks (Beijing: China Electric Power Research Institute). [dissertation/master’s thesis].

Zhao, H., Lan, X., Xue, N., and Wang, B. (2014). Excitation Prediction Control of Multi‐machine Power Systems Using Balanced Reduced Model. IET Generation, Transm. Distribution 8 (6), 1075–1081. doi:10.1049/iet-gtd.2013.0609


Zhou, B., Duan, G.-R., and Li, Z.-Y. (2009). Gradient Based Iterative Algorithm for Solving Coupled Matrix Equations. Syst. Control. Lett. 58 (5), 327–333. doi:10.1016/j.sysconle.2008.12.004


Zhu, Z., Geng, G., and Jiang, Q. (2016). Power System Dynamic Model Reduction Based on Extended Krylov Subspace Method. IEEE Trans. Power Syst. 31 (6), 4483–4494. doi:10.1109/TPWRS.2015.2509481


Keywords: Lyapunov equation, vectorization, reduced-order RNN, symmetry, ZNN, GNN

Citation: Chen Z, Du Z, Li F and Xia C (2022) A Reduced-Order RNN Model for Solving Lyapunov Equation Based on Efficient Vectorization Method. Front. Energy Res. 10:796325. doi: 10.3389/fenrg.2022.796325

Received: 16 October 2021; Accepted: 03 January 2022;
Published: 07 February 2022.

Edited by:

Yan Xu, Nanyang Technological University, Singapore

Reviewed by:

Yushuai Li, University of Oslo, Norway
Dazhong Ma, Northeastern University, China

Copyright © 2022 Chen, Du, Li and Xia. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhaobin Du, epduzb@scut.edu.cn
