
ORIGINAL RESEARCH article

Front. Robot. AI, 08 November 2022
Sec. Space Robotics

NN-Poly: Approximating common neural networks with Taylor polynomials to imbue dynamical system constraints

Frances Zhu1*, Dongheng Jing2, Frederick Leve3 and Silvia Ferrari2
  • 1Hawaii Institute of Geophysics and Planetology, University of Hawaii, Honolulu, HI, United States
  • 2Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY, United States
  • 3Air Force Office of Scientific Research, Arlington, VA, United States

Recent advances in deep learning have bolstered our ability to forecast the evolution of dynamical systems, but common neural networks do not adhere to physical laws, critical information that could lead to sounder state predictions. This contribution addresses this concern by proposing a neural network to polynomial (NN-Poly) approximation, a method that furnishes algorithmic guarantees of adhering to physics while retaining state prediction accuracy. To achieve these goals, this article shows how to represent trained fully connected perceptron, convolutional, and recurrent neural networks with various activation functions as Taylor polynomials of arbitrary order. This solution is not only analytic in nature but also least-squares optimal. The NN-Poly system identification or state prediction method is evaluated against a single-layer neural network and a polynomial trained on data generated by dynamic systems. Across our test cases, the proposed method maintains minimal root mean-squared state error, requires few parameters to form, and provides a model structure amenable to verification and safety. Future work will incorporate safety constraints into state predictions with this new model structure and will test the method on high-dimensional dynamical system data.

1 Introduction

Neural networks have emerged as general-purpose regression models and revolutionized fields such as computer vision and machine translation, where occasional errors are unlikely to jeopardize human lives. To extend deep neural networks (NNs) to high-consequence dynamic system applications while retaining their predictive capabilities, neural networks should be verifiable and conform to physical laws. The original dynamical system adheres to physical laws, but the exact form of its transition model is not known. A neural network excels at learning a representation of the transition model, but this representation can disagree with physical laws: within the bounds of training because the learned features are agnostic of the physical laws, and particularly outside the bounds of training due to a lack of generalization. Furthermore, measurement (input) uncertainty or disturbance gives rise to inaccurate state predictions; garbage in, garbage out. For example, if a camera obscurant leads to an incorrect state estimate (not propagating position) while other sensors continue to propagate, a physically uninformed model will incorporate the anomalous sensor reading and generate a physically infeasible state prediction. Restructuring a neural network, or transforming the NN into a different abstraction, could ensure that the model output adheres to physical laws just as the original system that generated the data does.

An ideal, interpretable solution leverages the power of neural networks and incorporates physics through 1) trust: safety guarantees on the output prediction and algorithm behavior, 2) causality: deriving relationships between the input and the output, and 3) information: analyzing the abstraction learned by the neural network and inferring system characteristics from the learned parameters and relationships (Lipton, 2018). Trust, or safety, can come in the form of adhering to constraints set by the user to mitigate the impact of a disturbance spike in the input signal on the model prediction. Added domain information in the form of physical laws could counter undesirable behavior. Furthermore, adding constraints has the benefit that the user imposes some user-defined information on the prediction; that is, there is some component of the system whose behavior the user can explain and guarantee. Causality may be captured by tracing the contribution of an input to an output: in a set of linear equations, that contribution is straightforward to read off the coefficient matrix, whereas the nested layers of a neural network make input/output causal relationships far less obvious to draw (saliency maps). Information can include the number of parameters/terms in a model, the familiarity of the bases, and the existence of another abstraction: “Computing a tractable function model from the original model can also be viewed as a form of knowledge distillation from the verification perspective, as the function model should be able to produce comparable results or replicate the outputs of the target neural network on specific inputs” (Huang et al., 2019).

This article’s contribution is to solve the system identification or state prediction problem by constructing a mapping from an NN function to a polynomial function from which users can more easily infer behavior (information and causality) and handle changes in constraints (trust). The proposed method approximates a trained NN function of various common architectures (fully connected perceptron, convolutional, and recurrent) as a system of linear differential equations in a polynomial basis space. As neural networks are universal approximators, the derivation assumes that the general NN structure approximates the system mapping well (Hornik et al., 1989; Pinkus, 1999). The final form of the representation used to approximate the NN is a set of matrix equations with polynomial entries. Constraints or invariant quantities derived from physical laws may then be applied to offer safety and viability to the system dynamics. The polynomial approximation of neural networks is a straightforward mapping, executes in real time, requires minimal data storage, and limits overfitting by limiting the polynomial's expressive power. Both the neural network and the polynomial abstraction increase the interpretability of the dynamic system that generated the data.

Linear models, the simplest polynomial, are a favored abstraction over neural networks by scientists for their familiarity and analytic traceability. A polynomial basis is capable of expressing nonlinear relationships with a linear model. Polynomials are more computationally tractable, which enables verification (Sidrane et al., 2022). Their analytic traceability enables theoretical guarantees, inference rules (Dutta et al., 2019), and analysis (Lyapunov stability and coefficient stability). In polynomial space, users may apply safety criteria or domain knowledge in the context of constraints, invariant physical quantities, input/output relations, and continuity or bounded sensitivity properties (Narasimhamurthy et al., 2019). The polynomial form appears in many physical contexts (energy, heat transfer, and friction), which may be applied to the approximation as domain knowledge or safety constraints, and could inform scientists of the underlying physical system characteristics.

There is value in obtaining both the universal (NN) approximation and a polynomial representation. Polynomials and NNs with polynomial activation functions are not universal approximators the way NNs with sigmoid or tanh activation functions are (Hornik et al., 1989). A generic dataset generated by a dynamic system does not guarantee a sum-of-squares polynomial solution, which is why a neural network universal approximation is necessary to capture the transition model from the dynamic data (Ahmadi and Parrilo, 2011). Training a deep NN is a non-convex optimization problem, and constraints do not usually make the solution more tractable because the training protocol typically relies on stochastic gradient search in the first place. Turning the mapping for the dynamics into a polynomial allows the dynamics to be imposed as a semi-algebraic constraint (i.e., p(x) ≥ 0 or p(x) = 0, along with many other polynomial constraints). Semi-algebraic optimization has powerful new results (e.g., sum-of-squares and moment sequences) that allow the highly non-convex optimization program to be converted into an iterative set of semi-definite programs (SDPs) that converge to the global minimum. This article's optimization solution provides the initial mapping for the dynamics represented as a polynomial constraint. Tools such as semi-definite programs make such optimization problems tractable.

The proposed method advances upon previous methods (Ferrari et al., 2013; Ferrari and Stengel, 2005) by deriving a polynomial abstraction of a trained neural network that is capable of incorporating knowledge and constraints into state prediction, solving the end-to-end polynomial function and constraints simultaneously. Outputs, like a state prediction, can be constrained to adhere to guarantees if the function is in polynomial form, whereas neural networks traditionally lack that ability (Rudy et al., 2017; Lagaris et al., 1998; Psichogios and Ungar, 1992). Recently, a class of neural networks has ingrained physics directly into the network structure, such as PINNs (Raissi et al., 2019; Wang et al., 2022), Lagrangian/Hamiltonian NNs (Cranmer et al., 2020; Greydanus et al., 2019), neural ODEs (Djeumou et al., 2022), and deep Markov models (Liu et al., 2022). A difference in our proposed work lies in when the imposition of physics occurs: physics-guided NNs impose it during training, while NN-Poly imposes it post-training. Our work also differs from strictly learning a polynomial directly from data in that we learn a polynomial from a neural network. Furthermore, the polynomial that results from this approach does not have to adhere to the properties of a Lyapunov function, as is the case when learning homogeneous polynomial Lyapunov functions: V(x) does not need to be strictly positive and V̇(x) does not need to be strictly negative, where V is traditionally a function of energy (Ahmadi and Parrilo, 2011). While for some state definitions and systems this may be the most convenient form to use ultimately, we do not impose that form in our formulation. The constraint equations that are used may not involve energy and may instead use momentum or distance.

The following sections detail the derivation to approximate a neural network model by transforming the trained NN parameters into coefficients of polynomial form. The derivation consists of four general steps detailed in Sections 2–5:

• Section 2 derives a general Taylor series expansion for a vector function in vector domain (i.e., tensor form) from a trained NN model; higher than second-order derivatives are tensor derivatives.

• Section 3 simplifies tensor derivatives and states to matrix and vector form. An important contribution of this article is unfolding the tensor derivatives into a matrix form and the tensor states into a vector form, which results in matrix manipulability and computation savings.

• Section 4 rewrites the general Taylor series expansion containing tensor derivatives into a Taylor series expansion with only matrix- and vector-form derivatives. This form is desirable because modern scientific programming languages are optimized for vectorized computations. The Taylor series expansion can then be presented as an expression in polynomial form, mapping inputs to outputs. Coefficients of each polynomial entry are derived, and the subsequent dynamic system that the NN approximates can be interpreted.

• Section 5 extends the single-layer polynomial approximation to a multi-layer network, resulting in a polynomial that represents an arbitrarily deep network.

The remaining article sections give context as to how to apply this methodology and show results for simulated dynamic systems.

• Section 6 relates physical constraints for Newtonian dynamics to semi-algebraic constraints that can be applied to the output of the polynomial approximated function.

• Section 7 demonstrates how to solve for state prediction simultaneously with the proposed constraints.

• Section 8 analyzes the proposed NN-extracted polynomial method under various cases. Results show high accuracy and efficiency when dealing with test cases; we discuss how to extend NN-Poly to various other cases.

2 Problem formulation: Taylor expansion of a neural network

Representing the NN model as a Taylor polynomial involves two steps. First, a Taylor expansion of the vector function f must be derived in the state vector domain x. The input to the NN model is the state vector at some time step, x_k. The output of the NN model is either the state vector at the next time step, x_{k+1}, or the derivative of the state vector, ẋ_k. The expansion includes tensors of various dimensions and redundant multinomial cross terms. Next, the general derivatives of a neural network are derived in tensor form. The two efforts together produce the final polynomial coefficients for the Taylor polynomial. To validate the method and offer context, the results section compares model fidelity and computation against other numerical system identification methods.

Given a vector input x ∈ ℝ^{m×1} and output y ∈ ℝ^{n×1}, a function f maps the input state to the output state, f(x): ℝ^{m×1} → ℝ^{n×1}. Assume the function f(⋅) is a smooth, continuous vector function, where all derivatives with respect to x exist and are smooth.

y = f(x)    (1)

Given the training pairs (x, y), a neural network predicts ŷ_NN with a mapping f_NN(W, b, x), given in Eq. 2, where a hat (^) over a variable signifies a prediction of that variable. The learned parameters of the neural network are W and b.

\hat{y}_{NN} = f_{NN}(W, b, x)    (2)

A polynomial p(x) in the form of a Taylor expansion approximates the neural network, where the order d of the polynomial that approximates the NN is defined by the NN structure (Rolnick and Tegmark, 2017). The polynomial output ŷ_p is given in Eq. 3, where the expression consists of polynomial coefficients {a_0, A_1, …, A_d}.

\hat{y}_p \equiv p(x) = a_0 + A_1 x + \frac{1}{2!} A_2 x^{\otimes 2} + \cdots + \frac{1}{d!} A_d x^{\otimes d}    (3)

The goal is to find the coefficients a of a polynomial expression that minimize the error relative to the neural network model, defined by the cost function C given in Eq. 4, where the coefficients are the set a = {a_0, A_1, …, A_d}.

C = \arg\min_a \left\| f_{NN}(W, b, x) - p(x, a, d) \right\|    (4)

The polynomial form is a terminal form of the Taylor expansion, given in Eq. 5, where the kth partial derivative of the function f is given by ∂^k f/∂x^k, or analogously by the Jacobian term J_f^k. The polynomial terminates at order d, and the remaining higher-order terms are captured in R(x).

f(x) = f(0) + \frac{\partial f}{\partial x} x + \frac{1}{2!}\frac{\partial^2 f}{\partial x^2} \odot x^{\otimes 2} + \cdots + R(x) = f(0) + J_f^1(0)\, x + \frac{1}{2!} J_f^2(0) \odot x^{\otimes 2} + \cdots + R(x)    (5)

The outer product ⊗ is used in this manuscript to exponentiate a vector, for which an exponentiated vector equation example is given in Eq. 6 and the index notation in Eq. 7, adopted from Granados (2015).

x^{\otimes 3} \equiv x \otimes x \otimes x    (6)
\left[ x \otimes x \otimes x \right]_{ijk} = x_i x_j x_k    (7)

The inner product ⊙ is used in this manuscript to multiply the function derivatives with the exponentiated states, for which index notation is given in Eq. 8, adopted from Granados (2015).

J_f^2 \odot x^{\otimes 2} = \sum_i \sum_j J^2_{ijk}\, \left[ x^{\otimes 2} \right]_{ij}    (8)

The polynomial expression is a vectorial series defined with Jacobian terms and exponentiated state vector terms in Eq. 9, where the expansion is expressed in a summation over d + 1 Jacobian terms. For simplicity, the expansion is about the zero state, assuming x0 = 0, which simplifies the lower dimension terms. Note the equivalency of Eqs 3, 9.

p(x) = \sum_{k=0}^{d} \frac{J_f^k(x_0)}{k!} \odot (x - x_0)^{\otimes k} \equiv \sum_{k=0}^{d} \frac{1}{k!} A_k \odot (x - x_0)^{\otimes k}    (9)
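For concreteness, the sketch below evaluates Eq. 9 numerically for a small second-order example; it is a minimal illustration only, and the coefficient tensors A_k here are random placeholders rather than derivatives extracted from a trained network.

```python
import numpy as np

m, n = 3, 2                            # input and output dimensions
rng = np.random.default_rng(0)

# Placeholder coefficient tensors A_k = J_f^k(x0); random stand-ins for illustration.
a0 = rng.standard_normal(n)            # order 0: (n,)
A1 = rng.standard_normal((n, m))       # order 1: (n, m)
A2 = rng.standard_normal((n, m, m))    # order 2: (n, m, m)

def taylor_eval(x):
    """Evaluate p(x) = a0 + A1 x + (1/2!) A2 (.) x^(x)2, i.e., Eq. 9 truncated at d = 2."""
    first = A1 @ x
    second = 0.5 * np.einsum('jik,i,k->j', A2, x, x)   # inner product with the outer product of x with itself
    return a0 + first + second

x = rng.standard_normal(m)
print(taylor_eval(x))
```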

Although Eq. 5 is elegant and may be able to derive a closed-form expression for the original function, the subsequent derivative terms incrementally increase in dimension, seen in Eq. 10.

f(x) = \begin{bmatrix} f_1(x_0) \\ f_2(x_0) \\ \vdots \\ f_n(x_0) \end{bmatrix} + \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_m} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_m} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} + \frac{1}{2!} \left[ \begin{bmatrix} \frac{\partial^2 f_1}{\partial x_1^2} & \frac{\partial^2 f_1}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 f_1}{\partial x_m \partial x_1} \\ \frac{\partial^2 f_2}{\partial x_1^2} & \frac{\partial^2 f_2}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 f_2}{\partial x_m \partial x_1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f_n}{\partial x_1^2} & \frac{\partial^2 f_n}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 f_n}{\partial x_m \partial x_1} \end{bmatrix}, \ldots, \begin{bmatrix} \frac{\partial^2 f_1}{\partial x_1 \partial x_m} & \frac{\partial^2 f_1}{\partial x_2 \partial x_m} & \cdots & \frac{\partial^2 f_1}{\partial x_m^2} \\ \frac{\partial^2 f_2}{\partial x_1 \partial x_m} & \frac{\partial^2 f_2}{\partial x_2 \partial x_m} & \cdots & \frac{\partial^2 f_2}{\partial x_m^2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f_n}{\partial x_1 \partial x_m} & \frac{\partial^2 f_n}{\partial x_2 \partial x_m} & \cdots & \frac{\partial^2 f_n}{\partial x_m^2} \end{bmatrix} \right] \odot \begin{bmatrix} x_1^2 & x_1 x_2 & \cdots & x_1 x_m \\ x_2 x_1 & x_2^2 & \cdots & x_2 x_m \\ \vdots & \vdots & \ddots & \vdots \\ x_m x_1 & x_m x_2 & \cdots & x_m^2 \end{bmatrix} + \cdots + R(x)    (10)

Instead, we would like a set of linear equations that simply solves for the polynomial coefficients with a single matrix operation, given in Eq. 11. To transform the various tensors into a set of linear equations, the derivative tensor terms must be unfolded and compressed into matrices and vectors to achieve the desired form, given in Eq. 11, described in the following sections.

f(x) = \begin{bmatrix} f_1(x_0) & \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_m} & \frac{\partial^2 f_1}{\partial x_1^2} & \cdots & \frac{\partial^2 f_1}{\partial x_1 \partial x_m} & \cdots & \frac{\partial^2 f_1}{\partial x_m^2} & \cdots & R_1(x^k) \\ \vdots & \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ f_n(x_0) & \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_m} & \frac{\partial^2 f_n}{\partial x_1^2} & \cdots & \frac{\partial^2 f_n}{\partial x_1 \partial x_m} & \cdots & \frac{\partial^2 f_n}{\partial x_m^2} & \cdots & R_n(x^k) \end{bmatrix} \begin{bmatrix} 1 \\ x_1 \\ \vdots \\ x_m \\ \frac{1}{2!} x_1^2 \\ \vdots \\ \frac{2}{2!} x_1 x_m \\ \vdots \\ \frac{1}{2!} x_m^2 \\ \vdots \\ 1 \end{bmatrix}    (11)

3 Unfolding and compressing tensors into matrices and vectors

The tensor equation that approximates the original function may be collapsed into a set of linear equations, consisting of a coefficient matrix and a state vector, a useful form for a linear least-squares solution or semi-algebraic optimization. Exponentiated states of order higher than two are in the form of tensors and contain redundant multinomial terms because they are symmetric. By analogy, the upper triangular part of a symmetric matrix contains all of its unique values. To unfold and reshape the high-dimensional tensors into matrices and vectors, the tensor indices of each higher dimension are related to a single index in the vector and the matrix. The derivation may be intuited visually: the columns of the upper triangular matrix are sequentially appended to a vector, seen in Figure 1A. This process is extended to symmetric tensors, Figure 1B, to yield the augmented state vector.


FIGURE 1. (A) Unfolding an upper triangular matrix into a vector. (B) Unfolding a tensor into a vector. (C) Reduction ratio between number of reductions and number of full terms. The blue and orange arrows represent the movement of the reduction ratio curve at two extremes: blue as the state vector dimension approaches infinity; orange as the state vector dimension approaches one.

By grouping redundant multinomial cross terms together into a coefficient vector, the input vector contains only the unique states. The first step is gathering redundant state terms in the same equation, highlighted in Eq. 12. The desired matrix equation (Eq. 11) yields the same set of linear equations as the tensor equation, yet reduces the number of total state terms and is a more useful representation. The augmented state vector in Eq. 11 contains scalar coefficients, which represent the number of redundant terms in the multinomial expansion of the state. These scalar coefficients are separated into a new coefficient vector. Defining this new coefficient vector is necessary to reform the exponentiated states into a state vector of only unique states, and the process reveals the reduction in computation from using only unique states. For intuition, the unique multinomial state vector of third order is x̃^{⊗3}, given in Eq. 13, where the tilde annotates that the uniqueness operation has been performed on the original state vector x^{⊗3}. The corresponding multinomial coefficient vector, a_3, contains the respective coefficients, representing the combinatorial number of redundant multinomial states, given in Eq. 14.

f(x) = \begin{bmatrix} f_1(x_0) + \left( \frac{\partial f_1}{\partial x_1} x_1 + \frac{\partial f_1}{\partial x_2} x_2 + \cdots + \frac{\partial f_1}{\partial x_m} x_m \right) + \frac{1}{2!} \left( \frac{\partial^2 f_1}{\partial x_1^2} x_1^2 + \frac{\partial^2 f_1}{\partial x_2 \partial x_1} x_2 x_1 + \cdots + \frac{\partial^2 f_1}{\partial x_m \partial x_1} x_m x_1 + \cdots + \frac{\partial^2 f_1}{\partial x_1 \partial x_m} x_1 x_m + \frac{\partial^2 f_1}{\partial x_2 \partial x_m} x_2 x_m + \cdots + \frac{\partial^2 f_1}{\partial x_m^2} x_m^2 \right) + \cdots + R_1(x^k) \\ f_2(x_0) + \left( \frac{\partial f_2}{\partial x_1} x_1 + \frac{\partial f_2}{\partial x_2} x_2 + \cdots + \frac{\partial f_2}{\partial x_m} x_m \right) + \frac{1}{2!} \left( \frac{\partial^2 f_2}{\partial x_1^2} x_1^2 + \frac{\partial^2 f_2}{\partial x_2 \partial x_1} x_2 x_1 + \cdots + \frac{\partial^2 f_2}{\partial x_m \partial x_1} x_m x_1 + \cdots + \frac{\partial^2 f_2}{\partial x_1 \partial x_m} x_1 x_m + \frac{\partial^2 f_2}{\partial x_2 \partial x_m} x_2 x_m + \cdots + \frac{\partial^2 f_2}{\partial x_m^2} x_m^2 \right) + \cdots + R_2(x^k) \\ \vdots \\ f_n(x_0) + \left( \frac{\partial f_n}{\partial x_1} x_1 + \frac{\partial f_n}{\partial x_2} x_2 + \cdots + \frac{\partial f_n}{\partial x_m} x_m \right) + \frac{1}{2!} \left( \frac{\partial^2 f_n}{\partial x_1^2} x_1^2 + \frac{\partial^2 f_n}{\partial x_2 \partial x_1} x_2 x_1 + \cdots + \frac{\partial^2 f_n}{\partial x_m \partial x_1} x_m x_1 + \cdots + \frac{\partial^2 f_n}{\partial x_1 \partial x_m} x_1 x_m + \frac{\partial^2 f_n}{\partial x_2 \partial x_m} x_2 x_m + \cdots + \frac{\partial^2 f_n}{\partial x_m^2} x_m^2 \right) + \cdots + R_n(x^k) \end{bmatrix}    (12)
\tilde{x}^{\otimes 3} = \left[ x_1^3,\; x_1^2 x_2,\; \ldots,\; x_1^2 x_m,\; x_1 x_2^2,\; \ldots,\; x_1 x_2 x_m,\; \ldots,\; x_1 x_m^2,\; x_2^3,\; x_2^2 x_3,\; \ldots,\; x_2^2 x_m,\; \ldots,\; x_2 x_m^2,\; \ldots,\; x_m^3 \right]^\top    (13)
a_3 = \frac{1}{3!} \left[ \binom{3}{3},\; \binom{3}{2,1},\; \ldots,\; \binom{3}{2,1},\; \binom{3}{1,2},\; \ldots,\; \binom{3}{1,1,1},\; \ldots,\; \binom{3}{1,2},\; \binom{3}{3},\; \binom{3}{2,1},\; \ldots,\; \binom{3}{3} \right]^\top = \frac{1}{3!} \left[ 1,\; 3,\; \ldots,\; 3,\; 3,\; \ldots,\; 6,\; \ldots,\; 3,\; 1,\; 3,\; \ldots,\; 1 \right]^\top    (14)

This augmented state vector is the unique multinomial state vector, x̃d, which is x̃ exponentiated to degree d in Eq. 15, where j is the vector term index and i1, i2, …, id are the tensor dimension indices.

\tilde{x}^{\otimes d}(j) = x^{\otimes d}(i_1, i_2, \ldots, i_d) = x_{i_1} x_{i_2} \cdots x_{i_d}, \quad \text{where } j = i_1 + \frac{i_2 (i_2 - 1)}{2} + \cdots + \sum_{k=1}^{d} \binom{i_d + k - 2}{k}, \quad \text{for } i_1 = 1{:}m,\; i_2 = i_1{:}m,\; \ldots,\; i_{d-1} = i_{d-2}{:}m,\; i_d = i_{d-1}{:}m    (15)
n_d = \prod_{i=1}^{d} \frac{m + i - 1}{i}    (16)

This augmented state vector, x̃d, is of size nd, given in Eq. 16 and calculated with the multinomial theorem (Hildebrand, 2009). Unfolding tensors is a non-unique process, for which a different unfolding yields a different vector of coefficients. These solution sets are all minima for the same unfolding problem. Supplementary Appendix Section S10.1 gives the explicit description of augmented state vectors in ascending degree and corresponding multinomial coefficient vector.

The general solution for the multinomial coefficient vector a_d(j) is given in Eq. 17, where the bracketed operator is the multinomial coefficient of choosing d states out of m total states and n_i is the number of individual x_i states in the multinomial state x̃^{⊗d}(j) (Hildebrand, 2009). The size of a_d also follows Eq. 16. The multinomial coefficient vector index j in Eq. 17 aligns with the state index j in Eq. 15. Explicit coefficient definitions for ascending orders of x̃^{⊗d}(j) are given in Table 8 in Supplementary Appendix Section S10.1.

a_d(j) = \frac{1}{d!} \binom{d}{n_1, n_2, \ldots, n_m}, \quad \text{for } n_1 = O_{x_1}\!\left( \tilde{x}^{\otimes d}(j) \right),\; \ldots,\; n_m = O_{x_m}\!\left( \tilde{x}^{\otimes d}(j) \right)    (17)

The general solution for the modified Jacobian matrix J̃_f^d(:, j) is given in Eq. 18. The index j of the modified Jacobian J̃_f^d mapping follows the index j of the multinomial state vector x̃^{⊗d} mapping, with one additional rule: the first dimension's index i_0 in the original Jacobian tensor J_f^d directly translates to the first dimension's index i_0 in the modified Jacobian matrix J̃_f^d. An example of the explicit definition of J̃_f^2(:, j) can be found in Table 8 in Supplementary Appendix Section S10.1. Note that for d ≥ m, the indices i_1 to i_{d−m+1} do not increment but stay at index 1 as dummy dimensions for the multinomial state, coefficient, and Jacobian derivations.

\tilde{J}_f^d(i_0, j) = J_f^d(i_0, i_1, i_2, \ldots, i_d), \quad \text{for } j = i_1 + \frac{i_2 (i_2 - 1)}{2} + \cdots + \sum_{k=1}^{d} \binom{i_d + k - 2}{k}, \quad \text{for } i_0 = 1{:}n,\; i_1 = 1{:}m,\; i_2 = i_1{:}m,\; \ldots,\; i_{d-1} = i_{d-2}{:}m,\; i_d = i_{d-1}{:}m    (18)

Compressing redundant state terms saves an immense amount of computation, especially for the number of states in real-world applications and for deriving approximations with more than two derivatives. The general reduction ratio, r, between the number of unique terms and the full expansion is given in Eq. 19.

r = 1 - \frac{\prod_{i=1}^{d} \frac{m + i - 1}{i}}{m^d}    (19)

The reduction approaches 100% for large state dimensions and 0% for a scalar domain, Figure 1C. The real reduction rate falls somewhere between the m → ∞ bounding curve and the m = 1 origin. A significant computation reduction of 25% is already achieved at m = 2 states and d = 2 derivatives. Table 6 in Supplementary Appendix Section S10.1 illustrates how the reduction ratio scales with the derivative order, explicitly calculating the corresponding number of states for the full state tensor expansion compared to the unique state terms in the augmented state vector x̃^{⊗d}. Motivated by linear solutions and the significant computational cost, a framework was derived such that the tensor derivatives and states in the Taylor expansion can be reshaped into a unique multinomial matrix and vectors, respectively.
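The bookkeeping in Eqs 13–19 can be prototyped in a few lines; the sketch below is a minimal illustration that enumerates the unique degree-d monomials, their multinomial coefficients, and the reduction ratio r. It assumes the lexicographic ordering produced by combinations_with_replacement, which need not match the exact index map of Eq. 15.

```python
import math
from itertools import combinations_with_replacement
from collections import Counter

def unique_monomials(m, d):
    """Index tuples (i1 <= i2 <= ... <= id) of the unique degree-d monomials in m variables."""
    return list(combinations_with_replacement(range(m), d))

def multinomial_coeffs(m, d):
    """a_d(j) = (1/d!) * d! / (n1! n2! ... nm!) for each unique monomial (Eq. 17)."""
    coeffs = []
    for idx in unique_monomials(m, d):
        counts = Counter(idx)                 # n_i: multiplicity of x_i in the monomial
        multinom = math.factorial(d)
        for c in counts.values():
            multinom //= math.factorial(c)
        coeffs.append(multinom / math.factorial(d))
    return coeffs

def reduction_ratio(m, d):
    """r = 1 - n_d / m^d, with n_d the number of unique monomials (Eqs 16, 19)."""
    n_d = math.comb(m + d - 1, d)
    return 1.0 - n_d / m**d

# m = 2 states, d = 2 derivatives: 3 unique terms instead of 4, i.e., a 25% reduction.
print(len(unique_monomials(2, 2)), multinomial_coeffs(2, 2), reduction_ratio(2, 2))
```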

4 Tensor derivatives of a neural network

This section derives the tensor derivatives J_f^d of different single-layer neural networks that populate the Taylor series. The tensor derivatives evaluated at the origin, J_f^d|_{x=0}, are the coefficient tensors A_d. This section's derivation is the next step in the overall process of approximating a single-layer neural network with a polynomial function. The derivative formulation covers a wide range of the most popular networks, classified by network type and activation function. Network layers covered in this section are feedforward, convolution, and recurrent layers. Activation functions include binary, max, linear, ReLU, softmax, sigmoid, tanh, and probabilistic, referenced in Table 1.


TABLE 1. Activation function type by ascending complexity with their associated vector and index expressions.

The tensor derivatives that approximate the neural network layer are dictated by the pairing of cell layer type and activation function type; a feedforward layer with a ReLU activation has different tensor derivatives than a feedforward layer with a sigmoid activation. The following subsections vary the relevant transformation σ(⋅) and calculate ascending orders of the different NN layer derivatives, which then populate a Taylor approximation in tensor form (Eq. 5) and then explicitly in matrix form (Eq. 18). The layer types are illustrated in Figure 2.


FIGURE 2. Basis single layer types transforming input to output: (A) perceptron layer, (B) convolution layer, and (C) recurrent layer.

4.1 Perceptron layer

A single-layer feedforward network with n neurons in the layer is depicted in Figure 2A. The output from a hidden layer, y_F, is a transformation f_F of the input x and the learned weight matrix W and bias vector b. The input and output are in vector form, where x is of size ℝ^{m×1} and y_F is of size ℝ^{n×1}. The vector equation is given in Eq. 20, where σ(⋅) is the activation function.

y_F = f_F(x) = \sigma(W, b, x)    (20)

The index equation is given in Eq. 21, where the order of indices follows the dimension. The index, i, corresponds to a value in input state x and the index, j, corresponds to a value in output state yF.

y_j = f_j(x_i) = \sigma(w_{ji}, b_j, x_i)    (21)

For many activation functions, derivatives truncate in finite order, such as binary, linear, and ReLU functions. The coefficient tensors for these terminating derivatives are trivial solutions as their Taylor series are equivalent to their original function expressions. These derivations may be found in Supplementary Appendix Section S10.2. For continuously differentiable activation functions, like the sigmoid, tanh, softmax, and probabilistic, the Taylor approximation series is artificially truncated at a user-defined order d (Eq. 22).

y_j \approx a_j^0 + \sum_{i=1}^{m} A_{ji}^1 x_i + \frac{1}{2!} \sum_{i_1=1}^{m} \sum_{i_2=1}^{m} A_{j i_1 i_2}^2\, x_{i_1 i_2}^{\otimes 2} + \cdots + \frac{1}{d!} \sum_{i_1=1}^{m} \cdots \sum_{i_d=1}^{m} A_{j i_1 \cdots i_d}^d\, x_{i_1 \cdots i_d}^{\otimes d}    (22)

The higher order polynomial coefficient tensors are iteratively derived with immediately previous polynomial coefficient tensors. The general derivative relationships for each activation function are proved by induction from lower order derivatives. The general sigmoid derivative is given in Eq. 23 with the 0th to 2nd order terms given in Eqs 88–90 in Supplementary Appendix as proof of the iterative pattern.

A_{j i_1 \cdots i_d}^d = \frac{\partial A_{j i_1 \cdots i_{d-1}}^{d-1}}{\partial \sigma}\, w_{j i_d}\, \sigma(b_j)\left( 1 - \sigma(b_j) \right)    (23)

The tanh derivative is given by Eq. 24 with 0th to 2nd derivatives in Eqs 91–93 in the Supplementary Appendix.

A_{j i_1 \cdots i_d}^d = \frac{\partial A_{j i_1 \cdots i_{d-1}}^{d-1}}{\partial \sigma}\, w_{j i_d}\left( 1 - \sigma^2(b_j) \right)    (24)

The softmax derivative is given by Eq. 25 with 0th to 1st derivatives in Eqs 94, 95 in Supplementary Appendix.

A_{j i_1 \cdots i_d}^d = A_{j i_1 \cdots i_{d-1}}^{d-1}\, w_{j i_d}    (25)

Probabilistic activation functions come in many different forms, of which the Gaussian or radial basis function is the most popular, given in Table 1. The polynomial coefficients for this function are complicated expressions for which two intermediate expressions, α_t^D and s_t^D, simplify and reveal the iterative patterns more clearly in Eq. 26. Each polynomial coefficient tensor is a sum of n_t terms composed of α_t^D and s_t^D, where t is the increment of a subexpression and D is the term order. The number of terms n_t depends only on the term's degree order D, defined in Eq. 27.

A_{j i_1 \cdots i_d}^d = \sum_{t=1}^{n_t} \alpha_t^D s_t^D    (26)
n_t = \frac{D + 1}{2}    (27)

The most general terms, α_t^D and s_t^D, for any term and degree definition are given in Eqs 28, 29, with explicit definitions in Table 9 in Supplementary Appendix Section S10.2; the 0th- to 4th-order terms are explicitly given in Eqs 99–103 in the Supplementary Appendix.

\alpha_t^D = \prod_{k=1}^{t-1} (2k - 1)\, \left( \beta_j c_j \right)^{t-1} \alpha_1^{D-t+1}    (28)
s_t^D = \sum_{l=1}^{D-2(t-1)} \prod_{k=l}^{D} \beta_j\, c_{j i_k}\, c_j    (29)
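As a numerical sanity check on the perceptron-layer coefficients, the sketch below builds the zeroth- through second-order Taylor coefficients of a tanh layer of the form σ(Wx + b) from its closed-form derivatives at x = 0 and compares the truncated polynomial to the layer output near the origin. It is a minimal illustration with random placeholder weights, not code from the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
W = 0.5 * rng.standard_normal((n, m))
b = 0.5 * rng.standard_normal(n)

def nn_layer(x):
    """Single-layer perceptron with tanh activation: y = tanh(Wx + b)."""
    return np.tanh(W @ x + b)

# Taylor coefficients of tanh(Wx + b) about x = 0, truncated at second order.
t = np.tanh(b)
a0 = t                                                    # 0th order
A1 = (1.0 - t**2)[:, None] * W                            # 1st order: (1 - tanh^2(b_j)) w_ji
A2 = (-2.0 * t * (1.0 - t**2))[:, None, None] * \
     np.einsum('ji,jk->jik', W, W)                        # 2nd order: -2 tanh (1 - tanh^2) w_ji1 w_ji2

def poly_layer(x):
    return a0 + A1 @ x + 0.5 * np.einsum('jik,i,k->j', A2, x, x)

x = 0.05 * rng.standard_normal(m)                         # small input near the expansion point
print(np.max(np.abs(nn_layer(x) - poly_layer(x))))        # residual is O(||x||^3)
```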

4.2 Convolution layer

The convolution layer commonly uses two types of activation functions, linear and max, to achieve filtering and pooling. The derivatives of these activation functions truncate at either the 0th- or 1st-order derivative. The coefficient tensors, when injected into the Taylor series, yield exact representations of the original activation functions but offer standard indexing for matrix and vector conversion. A single convolution layer with n filters of spatial extent f, stride s, and zero padding p is depicted in Figure 2B. Like the perceptron layer, the output from the hidden layer, Y, is a transformation of the input, X, and the learned weight and bias tensors, W and b. Unlike the perceptron layer, the input and output are typically in matrix or third-order tensor form, where X is of size ℝ^{w_1×h_1×d_1} and Y is of size ℝ^{w_2×h_2×d_2}. The tensor equation is given in Eq. 30.

Y = f_C(X) = \sigma(W, b, X)    (30)

The tensor equation with indices is given in Eq. 31. The output indices i, j, k range over i = 1, 2, …, w_2, j = 1, 2, …, h_2, and k = 1, 2, …, d_2. The input indices l, m, n range over l = 1, 2, …, w_1, m = 1, 2, …, h_1, and n = 1, 2, …, d_1.

Y_{ijk} = \sigma(W_{ijk}, b_k, X_{lmn})    (31)

The linear activation function within a convolution layer is often called a convolution filter and offers feature extraction of input data. The convolution filter function utilizing the convolution operator in matrix and index form is given in Eq. 32, where the definition of X̃ij is given in Eq. 33.

Y_{ijk} = W_k * \tilde{X}_{ij} + b_k    (32)
\tilde{X}_{ij} = X\!\left( s(i-1) - p + 1 : s(i-1) - p + f,\; s(j-1) - p + 1 : s(j-1) - p + f,\; 1 : d_1 \right)    (33)

The relationship between input and output sizes is defined by the network hyperparameters, reiterated here: number of filters n, height and width of the filter f, stride length s, and padding p, given in Eq. 34.

w_2 = \frac{w_1 - f + 2p}{s} + 1, \quad h_2 = \frac{h_1 - f + 2p}{s} + 1, \quad d_2 = n    (34)

The hyperparameters n, f, s, and p are user-defined; a common setting is f = 3, s = 1, and p = 1. The constraint p = (f − 1)/2 preserves the spatial size from input to output. The learned parameter W is of size ℝ^{f×f×n} and b is of size ℝ^{n}. The output equation in strictly index form is given in Eq. 35.

Y_{ijk} = \sum_{l=1}^{f} \sum_{m=1}^{f} \sum_{n=1}^{d_1} W_{lmnk}\, X_{l + s(i-1) - p,\; m + s(j-1) - p,\; n} + b_k    (35)

The linear activation function is a continuously differentiable function with a Taylor approximation series that truncates after the 1st order term, given in Eq. 36.

Y_{ijk} = a_{ijk}^0 + \sum_{l=1}^{w_1} \sum_{m=1}^{h_1} \sum_{n=1}^{d_1} A_{ijklmn}^1 X_{lmn}    (36)

The 0th- and 1st-order terms are given in Eqs 37, 38. A^1_{ijklmn} is a sparse six-dimensional tensor that performs a convolution operation equivalent to W_k. The Taylor approximation yields an exact representation of the original function expression despite the difference in operation representation.

a_{ijk}^0 = f_{ijk}(X_{lmn} = 0) = b_k    (37)
A_{ijklmn}^1 = \frac{\partial f_{ijk}}{\partial X_{lmn}}\bigg|_{X_{lmn}=0} = \begin{cases} W^k_{\,l - s(i-1) + p,\; m - s(j-1) + p,\; n}, & \text{if } 1 + s(i-1) - p \le l \le f + s(i-1) - p \text{ and } 1 + s(j-1) - p \le m \le f + s(j-1) - p \\ 0, & \text{otherwise} \end{cases}    (38)
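The unfolding of Eqs 36–38 can be checked numerically; the sketch below assumes a single input channel, one filter, stride s = 1, and no padding, builds the sparse first-order tensor A¹, and verifies that it reproduces a direct convolution loop. The sizes and weights are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
w1 = h1 = 5                           # input spatial size (single channel assumed)
f, s, p = 3, 1, 0                     # filter size, stride, padding (p = 0 assumed here)
w2 = (w1 - f + 2 * p) // s + 1        # output sizes (Eq. 34)
h2 = (h1 - f + 2 * p) // s + 1

X = rng.standard_normal((w1, h1))
W = rng.standard_normal((f, f))
bias = rng.standard_normal()

# Direct convolution (Eq. 35 specialized to one channel and one filter).
Y_direct = np.zeros((w2, h2))
for i in range(w2):
    for j in range(h2):
        Y_direct[i, j] = np.sum(W * X[s*i : s*i + f, s*j : s*j + f]) + bias

# Unfolded first-order tensor A1 (Eq. 38): a copy of W placed at each output's receptive field.
A1 = np.zeros((w2, h2, w1, h1))
for i in range(w2):
    for j in range(h2):
        A1[i, j, s*i : s*i + f, s*j : s*j + f] = W

Y_unfolded = np.einsum('ijlm,lm->ij', A1, X) + bias
print(np.max(np.abs(Y_direct - Y_unfolded)))   # ~0: the unfolded linear map matches the filter
```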

The maximum activation function within a convolution layer is often called a pooling layer. A pooling layer typically follows a filter layer, offering translation invariance in terms of convolution filter output. The max function outputs the maximum value of the input that falls within the kernel. The matrix form is given in Eq. 39.

Y_{ijk} = \max\, X\!\left( s(i-1) + 1 : s(i-1) + f,\; s(j-1) + 1 : s(j-1) + f,\; k \right)    (39)

As there are no distinct kernels and no padding, the only hyperparameters are field size f and stride s for which the relationship between input and output sizes is defined in Eq. 40.

w_2 = \frac{w_1 - f}{s} + 1, \quad h_2 = \frac{h_1 - f}{s} + 1, \quad d_2 = d_1    (40)

Common settings for the hyperparameters are f = 2 and s = 2. The Taylor approximation follows the same form as Eq. 36, which truncates after the 1st-order term. The 0th- and 1st-order terms are given in Eqs 41, 42. The Taylor approximation yields an exact representation of the original function expression despite the difference in operation representation.

a_{ijk}^0 = f_{ijk}(X_{lmn} = 0) = 0    (41)
A_{ijklmn}^1 = \begin{cases} 1, & \text{if } X_{lmn} = \max\left( \tilde{X}_{ijk} \right) \\ 0, & \text{otherwise} \end{cases}    (42)

4.3 Recurrent layer

The structure of a single layer with n recurrent units is depicted in Figure 2C. The output from the recurrent hidden layer, y_C^t, or equivalently s^t, is a transformation of the current time step's input, x^t, the previous time step's internal state, s^{t−1}, and the learned weight matrices and bias, W, U, and b. The input, internal state, and output are typically in vector form, where x^t is of size ℝ^{m×1} while s^{t−1} and y_C^t are of size ℝ^{n×1}. The weights are typically in matrix form, where W is of size ℝ^{n×n}, U is of size ℝ^{n×m}, and b is of size ℝ^{n×1}. The vanilla recurrent unit vector relationship is given in Eq. 43.

y = f(s^{t-1}, x^t) = \sigma(W, s^{t-1}, U, x^t, b)    (43)

The index relationship is given in Eq. 44, where the order of indices follows the dimension. The index i corresponds to a value in the internal state st−1, the index k corresponds to a value in the input state xt, and the index j corresponds to a value in the output state yCt.

y_j = f_j(s_i^{t-1}, x_k^t) = \sigma(w_{ji}, s_i^{t-1}, u_{jk}, x_k^t, b_j)    (44)

The explicit vanilla RNN vector equation and index equation are given in Eqs 45, 46, respectively.

y = \sigma\!\left( W s^{t-1} + U x^t + b \right)    (45)
y_j = \sigma\!\left( \sum_i w_{ji} s_i^{t-1} + \sum_k u_{jk} x_k^t + b_j \right)    (46)

The vanilla RNN vector equation can be reformed into a feed-forward layer with analogous tensor and coefficient derivations. The previous state st−1 and the current input xt may be joined together into one input z, given in Eq. 47.

z^t = \begin{bmatrix} s^{t-1} \\ x^t \end{bmatrix}    (47)

The combined input z is of additive size ℝ^{(n+m)×1} with associated dimension index l. The combined weight matrix, V, is given in Eq. 48 and is of size ℝ^{n×(n+m)}.

V = \begin{bmatrix} W & U \end{bmatrix}    (48)

The reformed state prediction vector equation and index equation are given in Eqs 49, 50.

y_C^t = \sigma\!\left( V z^t + b \right)    (49)
y_j = \sigma\!\left( \sum_l v_{jl} z_l + b_j \right)    (50)
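The equivalence of Eqs 45 and 49 is easy to confirm numerically; the sketch below, with random placeholder weights, stacks [s^{t−1}; x^t] and [W U] and checks that the reformed layer reproduces the vanilla recurrent update.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 3                                  # input size and number of recurrent units
W = rng.standard_normal((n, n))              # recurrent weights
U = rng.standard_normal((n, m))              # input weights
b = rng.standard_normal(n)
s_prev = rng.standard_normal(n)              # internal state s^{t-1}
x_t = rng.standard_normal(m)                 # current input x^t

y_vanilla = np.tanh(W @ s_prev + U @ x_t + b)    # Eq. 45

z = np.concatenate([s_prev, x_t])                # Eq. 47: combined input
V = np.hstack([W, U])                            # Eq. 48: combined weight matrix
y_reformed = np.tanh(V @ z + b)                  # Eq. 49

print(np.max(np.abs(y_vanilla - y_reformed)))    # ~0: the two forms are identical
```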

The similarity between the reformed equations of the recurrent layer and feed-forward layer is apparent. To be explicit, the modified Taylor series expansion about the reformed state z is given in Eqs 51, 52.

f(z) = f(z_0) + J_f^1(z_0)\, z + \frac{1}{2!} J_f^2(z_0) \odot z^{\otimes 2} + \cdots + R_n(z)    (51)
y_j \approx a_j^0 + \sum_{l=1}^{n+m} A_{jl}^1 z_l + \frac{1}{2!} \sum_{l_1=1}^{n+m} \sum_{l_2=1}^{n+m} A_{j l_1 l_2}^2\, z_{l_1 l_2}^{\otimes 2} + \cdots + \frac{1}{d!} \sum_{l_1=1}^{n+m} \cdots \sum_{l_d=1}^{n+m} A_{j l_1 \cdots l_d}^d\, z_{l_1 \cdots l_d}^{\otimes d}    (52)

The tensor and coefficient derivations can be found in Supplementary Appendix Section S10.2, as the form of the equations is identical and the input state, weight matrix, and respective indices are directly analogous to the feedforward derivation.

5 Multiple layer network approximation

The output of the multi-layer network is an embedded Taylor approximation of each individual layer. Given a network with k layers, the final output y is a transformation of the kth intermediate state zk. The output expression is given in Eq. 53.

y = f_o(z^k) = a^{k,0} + A^{k,1} z^k + A^{k,2} (z^k)^{\otimes 2} + \cdots + A^{k,d} (z^k)^{\otimes d}    (53)

The last hidden layer prior to the output layer transforms intermediate state zk−1 to zk, given in Eq. 54, repeated for all intermediate states z for hidden layers 1 to k − 1. The input layer transforms the data input x to the first intermediate state z1, given in Eq. 55.

z^k = f_k(z^{k-1}) = a^{k-1,0} + A^{k-1,1} z^{k-1} + A^{k-1,2} (z^{k-1})^{\otimes 2} + \cdots + A^{k-1,d} (z^{k-1})^{\otimes d}    (54)
z^1 = f_1(x) = a^{1,0} + A^{1,1} x + A^{1,2} x^{\otimes 2} + \cdots + A^{1,d} x^{\otimes d}    (55)

The mapping from the input to the output is a recursive function embedding intermediate functions backward from the final layer, given in Eq. 56.

y = f(x) = f_o \circ f_k \circ \cdots \circ f_2 \circ f_1(x)    (56)
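Since Eq. 56 is ordinary function composition, a multi-layer approximation can be evaluated by chaining per-layer polynomial maps; the sketch below is a minimal illustration in which each layer's second-order coefficients are random placeholders standing in for coefficients extracted from a trained network.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_poly_layer(n_in, n_out):
    """Second-order polynomial layer z -> a0 + A1 z + (1/2!) A2 (.) z^(x)2, with placeholder coefficients."""
    a0 = 0.1 * rng.standard_normal(n_out)
    A1 = 0.1 * rng.standard_normal((n_out, n_in))
    A2 = 0.1 * rng.standard_normal((n_out, n_in, n_in))
    return lambda z: a0 + A1 @ z + 0.5 * np.einsum('jik,i,k->j', A2, z, z)

# Three-layer network (Eq. 56): input (4) -> hidden (6) -> hidden (6) -> output (2).
layers = [make_poly_layer(4, 6), make_poly_layer(6, 6), make_poly_layer(6, 2)]

def multilayer_poly(x):
    z = x
    for layer in layers:        # y = f_o(f_k(... f_1(x) ...))
        z = layer(z)
    return z

print(multilayer_poly(rng.standard_normal(4)))
```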

A verification of the multi-layer approximation may be found in Supplementary Appendix Section S10.4.

6 Physical constraints in the form of semi-algebraic constraints

The development of new semi-algebraic optimization routines motivates a polynomial representation of an unknown dynamical system extracted from an NN. Since many physical vector fields are at least piecewise smooth, we can use Taylor's theorem to construct a polynomial that uniformly approximates the underlying ground truth arbitrarily tightly as the degree increases.

In the context of dynamical systems, physical laws provide valuable context that may be applied to state prediction. A more informed state prediction is found by combining the NN-derived polynomial state equations and user-defined semi-algebraic constraints. This is especially enticing when learning and updating the polynomial expression on the fly as new data are gathered. Hierarchies of semi-definite, and sometimes even linear, program relaxations may make it possible to converge to a global minimizer over the set of coefficients that parameterize such a problem. Moreover, incorporating such constraints on the admissible set of the domain may indeed collapse the degree and required state dimension of the learned polynomial. Indeed, after the initial and possibly large polynomial is accurately extracted from the NN differential equation with the analytic, unconstrained methods presented in this article, a new optimization goal would be to set up a hierarchy of optimization problems that simultaneously minimize degree and state dimension, thus compressing the dimension of the parameter space. The dimension of the parameter space, n_s, is given in Eq. 57, where n is the size of the state s and d is the degree of the polynomial.

n_s(d) = \binom{n + d}{d} = \frac{(n + d)!}{d!\, n!}    (57)

This section offers common constraints, but in application, any constraint in polynomial form may be defined and appended to this larger set of linear equations. A comprehensive state for rigid-body dynamics includes translation [x y z] and rotation [θ_x θ_y θ_z] in all six degrees of freedom and the associated velocity states, denoted by a derivative dot above the state. The full state vector is shown in Eq. 58.

s = \begin{bmatrix} x & y & z & \dot{x} & \dot{y} & \dot{z} & \theta_x & \theta_y & \theta_z & \dot{\theta}_x & \dot{\theta}_y & \dot{\theta}_z \end{bmatrix}^\top    (58)

For a predictive dynamics model, the neural network could be trained to develop a discrete transition model fd to yield the next state sk+1 from the current state sk or a continuous model fc to yield the current state derivative ṡk, given in Eqs 59, 60, respectively.

s_{k+1} = f_d(s_k)    (59)
\dot{s}_k = f_c(s_k)    (60)

Some suggested constraints of interest can be separated into systems under no external influence, systems under external influence, and physical constraints. Under no external influence, a system's total energy (kinetic energy and potential energy U) is conserved, given in Eq. 61. Similarly, both translational and angular momentum are conserved, given in Eq. 62. If rotational degrees of freedom are constrained, the θ terms in Eq. 62 are zero, and for constrained translation, the x, y, z terms are zero.

\frac{1}{2} \begin{bmatrix} \dot{x}_k & \dot{y}_k & \dot{z}_k \end{bmatrix} M \begin{bmatrix} \dot{x}_k \\ \dot{y}_k \\ \dot{z}_k \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \dot{\theta}_{x,k} & \dot{\theta}_{y,k} & \dot{\theta}_{z,k} \end{bmatrix} I \begin{bmatrix} \dot{\theta}_{x,k} \\ \dot{\theta}_{y,k} \\ \dot{\theta}_{z,k} \end{bmatrix} + U(s_k) = \frac{1}{2} \begin{bmatrix} \dot{x}_{k+1} & \dot{y}_{k+1} & \dot{z}_{k+1} \end{bmatrix} M \begin{bmatrix} \dot{x}_{k+1} \\ \dot{y}_{k+1} \\ \dot{z}_{k+1} \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \dot{\theta}_{x,k+1} & \dot{\theta}_{y,k+1} & \dot{\theta}_{z,k+1} \end{bmatrix} I \begin{bmatrix} \dot{\theta}_{x,k+1} \\ \dot{\theta}_{y,k+1} \\ \dot{\theta}_{z,k+1} \end{bmatrix} + U(s_{k+1})    (61)
M \begin{bmatrix} \dot{x}_k \\ \dot{y}_k \\ \dot{z}_k \end{bmatrix} + I \begin{bmatrix} \dot{\theta}_{x,k} \\ \dot{\theta}_{y,k} \\ \dot{\theta}_{z,k} \end{bmatrix} = M \begin{bmatrix} \dot{x}_{k+1} \\ \dot{y}_{k+1} \\ \dot{z}_{k+1} \end{bmatrix} + I \begin{bmatrix} \dot{\theta}_{x,k+1} \\ \dot{\theta}_{y,k+1} \\ \dot{\theta}_{z,k+1} \end{bmatrix}    (62)

Under external influence, these matrix equalities turn into matrix inequalities and additional external terms are integrated. Regarding conservation of energy, if this external influence has a known energy relationship, the external work term W(s) may be seen in Eq. 63. Likewise, the momentum conservation equation is modified with external influences in the form of force and torque, seen in Eq. 64. If rotational degrees of freedom are constrained, the θ terms in Eq. 64 are zero, and for constrained translation, the x, y, z terms are zero.

\frac{1}{2} \begin{bmatrix} \dot{x}_k & \dot{y}_k & \dot{z}_k \end{bmatrix} M \begin{bmatrix} \dot{x}_k \\ \dot{y}_k \\ \dot{z}_k \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \dot{\theta}_{x,k} & \dot{\theta}_{y,k} & \dot{\theta}_{z,k} \end{bmatrix} I \begin{bmatrix} \dot{\theta}_{x,k} \\ \dot{\theta}_{y,k} \\ \dot{\theta}_{z,k} \end{bmatrix} + U(s_k) + W(s_k) \le \frac{1}{2} \begin{bmatrix} \dot{x}_{k+1} & \dot{y}_{k+1} & \dot{z}_{k+1} \end{bmatrix} M \begin{bmatrix} \dot{x}_{k+1} \\ \dot{y}_{k+1} \\ \dot{z}_{k+1} \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \dot{\theta}_{x,k+1} & \dot{\theta}_{y,k+1} & \dot{\theta}_{z,k+1} \end{bmatrix} I \begin{bmatrix} \dot{\theta}_{x,k+1} \\ \dot{\theta}_{y,k+1} \\ \dot{\theta}_{z,k+1} \end{bmatrix} + U(s_{k+1}) + W(s_{k+1})    (63)
M \begin{bmatrix} \dot{x}_k \\ \dot{y}_k \\ \dot{z}_k \end{bmatrix} + F(s_k)\,\Delta t + I \begin{bmatrix} \dot{\theta}_{x,k} \\ \dot{\theta}_{y,k} \\ \dot{\theta}_{z,k} \end{bmatrix} + \tau(s_k)\,\Delta t = M \begin{bmatrix} \dot{x}_{k+1} \\ \dot{y}_{k+1} \\ \dot{z}_{k+1} \end{bmatrix} + F(s_{k+1})\,\Delta t + I \begin{bmatrix} \dot{\theta}_{x,k+1} \\ \dot{\theta}_{y,k+1} \\ \dot{\theta}_{z,k+1} \end{bmatrix} + \tau(s_{k+1})\,\Delta t    (64)

Outside of physical laws of dynamics, a more general constraint is a state bound in the form of an inequality. A state boundary could result from a physical boundary, in which a body cannot intersect another body. A lower bound and higher bound are given by Eq. 65, where r is a reference point.

s_{k+1} \ge r \quad \text{OR} \quad s_{k+1} \le r    (65)

These two constraints can be superimposed as long as the constraints do not conflict with each other. Another set of bounding conditions results from a state relationship and sign of state derivative, given in state inequality (Eq. 66).

s_{k+1} \ge s_k \ \text{if } \dot{s}_k \ge 0 \quad \text{OR} \quad s_{k+1} \le s_k \ \text{if } \dot{s}_k \le 0    (66)

7 Solving for state prediction simultaneously with constraints

The proposed NN-Poly method can be used as a system identification method and/or a state prediction method. The state prediction solution in matrix form and in polynomial basis space is given in Eq. 67, where y_k is either the state derivative ṡ_k or the propagated state s_{k+1}.

y_k = \left( \tilde{J}_f^d\, a_d \right) \tilde{s}^{\otimes d}    (67)

As derived previously, J̃_f^d and a_d are populated only with numerical coefficients that approximate the NN function. Together, J̃_f^d a_d forms an analogous state transition matrix, where the traditional input state s_k is augmented into the previously defined higher-order polynomial basis space s̃^{⊗d}. As previously shown, constraints may be represented as linear equations in the same polynomial basis space. To solve equality constraints simultaneously with state prediction, the constraint equations (with matrix H_e and right-hand side h_e) are appended to the polynomial state equation, shown in Eq. 68, and solved with semi-definite programs (SDPs).

\begin{bmatrix} y_k \\ h_e \end{bmatrix} = \begin{bmatrix} \tilde{J}_f^d\, a_d \\ H_e \end{bmatrix} \tilde{s}^{\otimes d}    (68)
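One simple way to realize Eq. 68 when the equality constraint is affine in the predicted state (H_e y = h_e) is to project the unconstrained polynomial prediction onto the constraint set by solving the corresponding KKT system. The sketch below is a minimal illustration of that idea with placeholder matrices; it is not the semi-definite programming machinery referenced in the text.

```python
import numpy as np

def constrained_prediction(J_a, s_tilde, H_e, h_e):
    """Predict y = (J~ a_d) s~, then enforce H_e y = h_e by a least-squares (KKT) projection.

    J_a     : (n, q) polynomial coefficient matrix (J~_f^d scaled by a_d)
    s_tilde : (q,)   augmented multinomial state vector
    H_e     : (c, n) equality-constraint matrix on the predicted state
    h_e     : (c,)   equality-constraint right-hand side
    """
    y_free = J_a @ s_tilde                       # unconstrained polynomial prediction
    n, c = len(y_free), len(h_e)
    # KKT system of: min ||y - y_free||^2  subject to  H_e y = h_e
    K = np.block([[np.eye(n), H_e.T],
                  [H_e, np.zeros((c, c))]])
    rhs = np.concatenate([y_free, h_e])
    return np.linalg.solve(K, rhs)[:n]

# Toy usage: a 2-state prediction forced to keep the sum of its entries equal to 1.
rng = np.random.default_rng(5)
J_a, s_tilde = rng.standard_normal((2, 6)), rng.standard_normal(6)
y = constrained_prediction(J_a, s_tilde, np.array([[1.0, 1.0]]), np.array([1.0]))
print(y, y.sum())    # the constrained prediction sums to 1
```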

For inequality constraints, numerical programs may be utilized to solve the semi-algebraic optimization problem given in Eq. 69, where h_i are the inequality constraint equations. Solving the semi-algebraic optimization problem is outside the scope of this study, but the authors refer the reader to prevalent tools, such as linear programming and semi-definite programming packages (Lasserre, 2015). These SDPs are convex but grow arbitrarily large upon iteration. Future research includes exploiting the information incorporated in constraints known about the system to reduce the size of these SDPs, building upon the work of Ahmadi and El Khadir (2020).

y_k = \left( \tilde{J}_f^d\, a_d \right) \tilde{s}^{\otimes d} \quad \text{such that} \quad h_i \le H_i\, \tilde{s}^{\otimes d}    (69)
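For inequality constraints of the form in Eq. 69, a general-purpose constrained solver can stand in for the semi-algebraic tooling cited above. The sketch below is a minimal illustration that keeps the prediction close to the unconstrained polynomial output while enforcing a hypothetical elementwise floor r (as in Eq. 65), using SciPy's SLSQP rather than an SDP hierarchy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
J_a, s_tilde = rng.standard_normal((2, 6)), rng.standard_normal(6)
y0 = J_a @ s_tilde                        # unconstrained polynomial prediction
r = 0.0                                   # hypothetical lower bound on every state entry

res = minimize(
    lambda y: np.sum((y - y0) ** 2),      # stay close to the polynomial prediction
    x0=np.maximum(y0, r),                 # feasible starting point
    method='SLSQP',
    constraints=[{'type': 'ineq', 'fun': lambda y: y - r}],   # enforce y >= r elementwise
)
print(y0, res.x)                          # res.x respects the floor while tracking y0
```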

One can implement constrained state prediction in real-time on an embedded dynamic system in a learning application. In an algorithmic loop, a neural network may be trained with every iteratively collected measurement, then the subsequent polynomial may be generated and the state prediction/control effort constrained with predefined constraints. Other applications include post-processing system identification for data-constrained applications, in which neural network parameters are more effective to communicate than large data sets, like space or deep-sea applications.

8 NN-Poly state prediction performance

Our aim in these results is to show the minimal approximation loss between NN-Poly and the models derived directly from data: the neural network and the polynomial. In the unconstrained case, NN-Poly can only be as accurate as the neural network from which the polynomial is derived, as the polynomial approximates the neural network. Similarly, NN-Poly can only be as accurate as a polynomial fit to the original data because information is lost between the original data and the neural network representation. In the constrained case, NN-Poly can incorporate domain knowledge that the neural network cannot capture, which allows NN-Poly to better represent the underlying dynamic system generating the data; for example, a neural network trained on data generated from a bouncing ball may predict the ball's state to intersect the ground, but constraints placed on the polynomial model would bound predictions such that the ball never intersects the ground, yielding a more accurate state prediction. The proposed Taylor expansion with the NN parameters is coded in MATLAB (https://github.com/alexdhjing/NNX_matlab). Metrics to evaluate the performance of each method are efficiency, accuracy, and complexity. This section discusses the method of comparison, the metrics used to evaluate performance, and the subsequent results of the state prediction methods.

8.1 Methods

Several dynamic models are used to measure efficiency and accuracy of each state prediction method, not only in the state prediction but also in the model representation. Each algorithm is trained with the same pairwise input and output data, generated from the true dynamic model, to predict a state for a range of dynamic systems. The input–output data pairs are of form {xk, xk+1}, in which the input is state vector at time step k and the output is of time k + 1. A majority of data pairs train the various models and the rest of the data pairs are set aside to evaluate state error and computation time. Other metrics evaluate the model coefficients and structure after training. All metrics are formally defined in the next subsection.

The selected dynamic models were chosen in ascending complexity, from a two-dimensional feature space with linear dynamics to non-linear dynamics. The 1DOF underdamped linear spring–mass–damper system has a linear transition matrix from the previous state to the next state, constituting the simplest dynamic system to identify, described in Eq. 70. The 1DOF non-linear spring system has non-linear spring stiffness and is governed by a differential equation, given in Eq. 71. A 2DOF spring pendulum demonstrates non-linearity and coupling of dynamics, given in Eq. 72. In both the 1DOF non-linear spring and 2DOF spring pendulum cases, the differential equations of motion propagate the state to generate data. The system identification methods do not generate expressions for acceleration but generate a mapping from the previous state to the next state, implicitly integrating the double derivative.

\begin{bmatrix} x_{t+1} \\ v_{t+1} \end{bmatrix} = \begin{bmatrix} 0.9995 & 0.01 \\ -0.0999 & 0.9985 \end{bmatrix} \begin{bmatrix} x_t \\ v_t \end{bmatrix}    (70)
\ddot{x} = \frac{3\mu_0}{2\pi (x + a)^4}    (71)
\begin{bmatrix} \ddot{x} \\ \ddot{y} \end{bmatrix} = \begin{bmatrix} -5\left( \sqrt{x^2 + y^2} - 1 \right) \dfrac{x}{\sqrt{x^2 + y^2}} \\ -5\left( \sqrt{x^2 + y^2} - 1 \right) \dfrac{y}{\sqrt{x^2 + y^2}} - 10 \end{bmatrix}    (72)
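To make the data-generation step concrete, the sketch below rolls out the discrete 1DOF linear spring–mass–damper of Eq. 70 and assembles the {x_k, x_{k+1}} training pairs; the initial condition and trajectory length are assumptions, as the text does not specify them.

```python
import numpy as np

# Discrete transition matrix of the 1DOF linear spring-mass-damper (Eq. 70).
A = np.array([[0.9995, 0.01],
              [-0.0999, 0.9985]])

def rollout(x0, steps):
    """Propagate the state and return the {x_k, x_{k+1}} training pairs."""
    states = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        states.append(A @ states[-1])
    states = np.array(states)
    return states[:-1], states[1:]          # inputs x_k and targets x_{k+1}

X_in, X_out = rollout(x0=[1.0, 0.0], steps=500)   # assumed initial condition and horizon
print(X_in.shape, X_out.shape)                    # (500, 2) and (500, 2)
```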

8.2 Metrics of evaluation

Metrics for evaluation are computation time, state error (mean squared error), coefficient error, length of solution, and coefficient stability, given in Table 2. The state error is the mean squared error between the algorithm's state prediction and the true state, for which a smaller error represents a more accurate approximation. Coefficient error refers to a coefficient vector c of length m, geared toward polynomial solutions, for which a smaller error again signifies a more accurate model. Parameter stability refers to the learned parameters, relevant for all methods, for which values closer to 1 represent high stability and values approaching infinity represent instability. The length of solution refers to the number of terms h(x), for which a smaller number of terms represents a more parsimonious solution.


TABLE 2. Metrics for evaluating system identification methods on dynamic systems.

8.3 Complexity

We will explore the complexity of our proposed method by deriving and comparing the number of flops for the following methods: fitting a polynomial directly from data, training an NN, and NN-Poly. Intuitively, even before a rigorous derivation, the NN-Poly method explicitly relates NN parameters to polynomial coefficients, emulating a "hard-coded" computation. We added computing time to the evaluation metrics because calling table elements from memory would likely be the most computationally intensive task in the NN-Poly method. Calculating polynomial coefficients directly from a large data set with the least-squares method will be computationally intensive due to the matrix inverse operation, which scales cubically with the dimension of the data set. A neural network's flops depend on the number of training epochs until convergence, which is stochastically related to the initial parameterization of the neural network. For all flop calculations, we will assume that:

• The input state size m and output state size n are equivalent, m = n

• The state size m is much smaller than the number of training points t, m ≪ t

The least-squares method to calculate polynomial coefficients A with a matrix of inputs X and outputs Y is given in Eq. 73, where Ap is the set of polynomial coefficients derived from the least-squares method.

A_p = Y X_d^\top \left( X_d X_d^\top \right)^{-1}    (73)

The number of flops of a polynomial derived directly from data is on the order given in Eq. 74. Intermediate steps are given in Eqs 135–138 in Supplementary Appendix Section S10.5.

O(A_p) \approx t\,\frac{m^d}{d!} + \frac{m^{3d}}{d!}    (74)

To train neural networks, we must account for forward passes and backward propagation per epoch over all training data. In implementation, the neural network does not necessarily train across all the data, but this flop derivation will assume so to simplify the calculation. We will also assume that the network has only a single layer with k neurons. The total number of flops to train a neural network is given in Eq. 75.

O(NN) \approx m\,n\,k\,t    (75)

To convert this neural network into a polynomial, the number of polynomial coefficients for each polynomial degree scales with the number of neurons in the layer. The highest-order polynomial term dominates the number of flops, which approximates the total number of flops in converting a single-layer NN with k neurons to a polynomial of degree d, given in Eq. 76. The set of polynomial coefficients derived from the NN is given by A_NN^d. Intermediate steps are shown in Eqs 140–145 in Supplementary Appendix Section S10.5.

O(A_{NN}^d) \approx k\, m^{d+1}    (76)

The combined complexity of training a neural network and converting those NN parameters to polynomial coefficients is given in Eqs 77, 78.

O(NN + A_{NN}^d) \approx k m^2 t + k m^{d+1}    (77)
= k\left( m^2 t + m^{d+1} \right)    (78)

The condition for the neural network training to be more computationally complex than the NN-Poly conversion is given in Eq. 79.

O(NN) > O(A_{NN}^d) \ \text{if } t > m^{d-1}; \quad O(A_{NN}^d) > O(NN) \ \text{otherwise}    (79)

Given the order of complexity of each method that ultimately yields a polynomial (least-squares polynomial directly from data and polynomial from NN parameters), what are the conditions on state size m, training data size t, and neural network size by neurons k that make either method more or less complex than the other? The condition for the NN-Poly method to be less complex than a polynomial from the raw data is given in Eq. 80.

O(A_p) > O(NN + A_{NN}^d) \ \text{if } m k < \frac{t + m^d}{d!}; \quad O(NN + A_{NN}^d) > O(A_p) \ \text{otherwise}    (80)

Intuitively, the NN-Poly method is computationally advantageous for the following conditions:

• There are many training points

• The neural network is small

• The state size is large

• The state size is sufficiently large and the polynomial degree is large
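These crossover conditions can be tabulated directly from the order estimates in Eqs 74 and 77; the sketch below is a rough flop-count comparison, not a measurement of runtime, and the example settings are arbitrary.

```python
import math

def flops_poly_from_data(m, t, d):
    """Order estimate for a least-squares polynomial fit from raw data (Eq. 74)."""
    return t * m**d / math.factorial(d) + m**(3 * d) / math.factorial(d)

def flops_nn_poly(m, t, d, k):
    """Order estimate for training a single-layer NN plus the NN-to-polynomial conversion (Eq. 77)."""
    return k * (m**2 * t + m**(d + 1))

for m, t, d, k in [(2, 1_000, 3, 32), (12, 100_000, 4, 64)]:
    direct, nn_poly = flops_poly_from_data(m, t, d), flops_nn_poly(m, t, d, k)
    cheaper = 'NN-Poly' if nn_poly < direct else 'direct polynomial'
    print(f"m={m:3d}  t={t:7d}  d={d}  k={k:3d}  ->  cheaper: {cheaper}")
```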

Complexity does not fully capture computational load though, as the NN-Poly method relies heavily on memory calls. The next section incorporates computing time for this reason.

8.4 Results

For each dynamic system, Tables 3–5 report the performance of the strict polynomial, the sole NN, and NN-Poly. Wherever an ϵ is listed in the tables, it denotes an immensely small value within machine precision. The computation time for the NN-Poly expansion does not include the NN training time, only the time to transform the NN parameters into polynomial coefficients.


TABLE 3. 1DOF spring evaluation of performance for all system identification methods.

The method that best predicts the 1DOF linear spring state is the strict polynomial, although, curiously, NN-Poly exactly recovers the original system's state matrix coefficients. The performance across all methods is shown in Table 3. The proposed NN-Poly approximation method's performance in each state's time series and across increasing polynomial degrees is shown in Figure 3. As expected, increasing the order of the polynomial approximation yields less state error, although not much performance is gained past the second-order approximation, seen in Figure 3. The small error bars for each degree signal that the final solution from each NN random initialization does not vary much in error. The NN-polynomial conversion is comparable in computation but includes more state error, as expected. This straightforward test case is best solved with the simplest solver, the direct polynomial solution; even a least-squares solution is sufficient. This system does not need a complex, expressive system identification method due to the simplicity of its linear form.


FIGURE 3. (A) Third-order NN-Poly approximation of 1DOF linear system and error as difference of each state. (B) MSE error of position and velocity state across different orders of approximation for NN-Poly.

The methods that best predict the 1DOF non-linear spring state are the NN and NN-Poly, with performance reported in Table 4. Unlike the linear system approximation, the proposed NN-Poly approximation varies with the NN parameter initialization and does not converge to a steady-state error until the fourth order, seen in Figure 4. The sole NN and NN-Poly approximate the data with the least state error, generate a minimal representation, and yield the most stable parameters of all methods. Subsequently, NN-Poly retains the same state error as the NN, demonstrating the accuracy of the transformation from NN form to polynomial form, while also offering a form to which domain knowledge and safety guarantees can be applied.


TABLE 4. 1 DOF flux-pinned system evaluation of performance for all system identification methods.


FIGURE 4. (A) Third-order NN-Poly approximation of 1DOF non-linear spring system and error as difference of each state. (B) MSE error of position and velocity state across different orders of approximation for NN-Poly.

For the 2DOF system, the NN and NN-Poly predict the state accurately, seen in Table 5. The third-order NN-Poly approximation for each state is shown in Figure 5A. The NN-Poly MSE decreases with increasing polynomial degree and does not converge until after the fourth order, seen in Figure 5B. The NN and NN-Poly predict the 2DOF system with fewer terms. Each method has comparable parameter stability and computation time. This coupled and slightly non-linear dynamic system straddles the boundary in deciding which model to use.


FIGURE 5. (A) Third-order NN-Poly approximation of a 2DOF non-linear coupled spring–pendulum system and error as difference of each state. (B) 2DOF spring–pendulum individual MSE error of position and velocity state across different orders of NN-Poly approximation.


TABLE 5. 2 DOF spring pendulum system evaluation of performance for all system identification methods.

9 Conclusion

Leveraging recent advances in deep learning, NN-Poly provides accurate predictions of non-linear, coupled system dynamics with minimal context. An NN-to-polynomial mapping avoids the need to download an immense amount of data and fit a polynomial directly to a large dataset by exploiting the compactness of the NN. A polynomial enhances the interpretability of the neural network by making its feature dependence explicit, as polynomials have a long history of analysis and mathematical literature, including safety verification and guarantees. This work's major contribution is offering a polynomial form and semi-algebraic constraints, such as polynomial inequality and equality constraints, to capture the NN model and system context in the final predictive function solution. These semi-algebraic constraints represent the application of domain knowledge, such as conservation of energy in the form of quadratic rates, and safety constraints, such as linear inequalities that bound specific state values. The results of this effort show comparable prediction and computation performance between a sole NN, a sole polynomial, and the proposed method for linear systems, but great improvement with the proposed method for highly non-linear systems. Future work also includes approximation of more complex systems with increasing degrees of freedom, degree of non-linearity, and coupling of states.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found as follows: https://github.com/alexdhjing/NNX_matlab.

Author contributions

FZ was the primary author for the majority of this manuscript and derived all the theory that produced all equations. DJ wrote the code that generated the plots in the results. FL contributed the insight and section on semi-algebraic constraints. SF reviewed the paper and assisted in identifying gaps in explanation. FL and SF guided the trajectory of the research contributions.

Funding

FZ thanks the NASA Space Technology Research Fellowship Grant NNX15AP55H and the NSF AI Institute in Dynamics Systems award number 2112085 for supporting this research. SF’s work was funded by the Office of Naval Research (ONR) grant N00014-19-1-2266.

Acknowledgments

FZ would like to thank Chelsea Sidrane and Anish Potnis for their valuable feedback.

Conflict of interest

Author DJ was employed by the company Huawei.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frobt.2022.968305/full#supplementary-material

References

Ahmadi, A. A., and El Khadir, B. (2020). “Learning dynamical systems with side information,” in Learning for Dynamics and Control (PMLR), 718.


Ahmadi, A. A., and Parrilo, P. A. (2011). “Converse results on existence of sum of squares lyapunov functions,” in 2011 50th IEEE conference on decision and control and European control conference (IEEE), 6516.


Cranmer, M., Greydanus, S., Hoyer, S., Battaglia, P., Spergel, D., and Ho, S. (2020). Lagrangian neural networks. arXiv preprint arXiv:2003.04630.


Djeumou, F., Neary, C., Goubault, E., Putot, S., and Topcu, U. (2022). “Neural networks with physics-informed architectures and constraints for dynamical systems modeling,” in Learning for Dynamics and Control Conference (PMLR), 263–277.


Dutta, S., Chen, X., and Sankaranarayanan, S. (2019). “Reachability analysis for neural feedback systems using regressive polynomial rule inference,” in Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control. (IEEE).


Ferrari, S., Rudd, K., and Muro, G. (2013). "A constrained backpropagation (CPROP) approach to function approximation and approximate dynamic programming," in Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, 162–181.


Ferrari, S., and Stengel, R. F. (2005). Smooth function approximation using neural networks. IEEE Trans. Neural Netw. 16, 24–38. doi:10.1109/tnn.2004.836233


Granados, A. (2015). Taylor series for multi-variable functions.


Greydanus, S., Dzamba, M., and Yosinski, J. (2019). “Hamiltonian neural networks,” in Advances in neural information processing systems. Editors H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alche-Buc, E. Fox, and R. Garnett (Curran Associates, Inc.), 32.


Hildebrand, A. (2009). Multinomial coefficients.


Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366. doi:10.1016/0893-6080(89)90020-8


Huang, C., Fan, J., Li, W., Chen, X., and Zhu, Q. (2019). Reachnn: Reachability analysis of neural-network controlled systems. ACM Trans. Embed. Comput. Syst. 18, 1–22. doi:10.1145/3358228


Lagaris, I. E., Likas, A., and Fotiadis, D. I. (1998). Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 9, 987–1000. doi:10.1109/72.712178


Lasserre, J. B. (2015). An introduction to polynomial and semi-algebraic optimization, 52. Cambridge: Cambridge University Press.


Lipton, Z. C. (2018). The mythos of model interpretability. Queue 16, 31–57. doi:10.1145/3236386.3241340


Liu, W., Lai, Z., Bacsa, K., and Chatzi, E. (2022). Physics-guided deep markov models for learning nonlinear dynamical systems with uncertainty. Mech. Syst. Signal Process. 178, 109276. doi:10.1016/j.ymssp.2022.109276


Narasimhamurthy, M., Kushner, T., Dutta, S., and Sankaranarayanan, S. (2019). “Verifying conformance of neural network models,” in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (IEEE), 1.


Pinkus, A. (1999). Approximation theory of the mlp model in neural networks. Acta Numer. 8, 143–195. doi:10.1017/s0962492900002919


Psichogios, D. C., and Ungar, L. H. (1992). A hybrid neural network-first principles approach to process modeling. AIChE J. 38, 1499–1511. doi:10.1002/aic.690381003


Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707. doi:10.1016/j.jcp.2018.10.045


Rolnick, D., and Tegmark, M. (2017). The power of deeper networks for expressing natural functions. arXiv preprint arXiv:1705.05502.


Rudy, S. H., Brunton, S. L., Proctor, J. L., and Kutz, J. N. (2017). Data-driven discovery of partial differential equations. Sci. Adv. 3, e1602614. doi:10.1126/sciadv.1602614


Sidrane, C., Katz, S., Corso, A., and Kochenderfer, M. J. (2022). Verifying inverse model neural networks. arXiv preprint arXiv:2202.02429.


Wang, S., Sankaran, S., and Perdikaris, P. (2022). Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404.


Keywords: neural networks, safety, interpretability, polynomial, dynamic systems, prediction

Citation: Zhu F, Jing D, Leve F and Ferrari S (2022) NN-Poly: Approximating common neural networks with Taylor polynomials to imbue dynamical system constraints. Front. Robot. AI 9:968305. doi: 10.3389/frobt.2022.968305

Received: 13 June 2022; Accepted: 05 August 2022;
Published: 08 November 2022.

Edited by:

Marcello Romano, Naval Postgraduate School, United States

Reviewed by:

Pitoyo Hartono, Chukyo University, Japan
Muhammad Khan, Atılım University, Turkey

Copyright © 2022 Zhu, Jing, Leve and Ferrari. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Frances Zhu, zhuf@hawaii.edu
