
ORIGINAL RESEARCH article

Front. Comput. Neurosci., 02 May 2024
This article is part of the Research Topic The Applications of Advanced Artificial Neural Networks in Unmanned Key Subsystems.

Feedback stabilization of probabilistic finite state machines based on deep Q-network

  • 1Key Laboratory of Industrial Internet of Things and Networked Control, Ministry of Education, Chongqing University of Posts and Telecommunications, Chongqing, China
  • 2School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China

Background: As an important mathematical model, the finite state machine (FSM) has been used in many fields, such as manufacturing systems and health care. This paper analyzes the current development status of FSMs and points out that traditional methods are often inconvenient for analysis and design, or encounter high computational complexity, when studying FSMs.

Method: The deep Q-network (DQN) technique, which is a model-free optimization method, is introduced to solve the stabilization problem of probabilistic finite state machines (PFSMs). In order to better understand the technique, some preliminaries, including Markov decision process, ϵ-greedy strategy, DQN, and so on, are recalled.

Results: First, a necessary and sufficient stabilizability condition for PFSMs is derived. Next, the feedback stabilization problem of PFSMs is transformed into an optimization problem. Finally, by using the stabilizability condition and deep Q-network, an algorithm for solving the optimization problem (equivalently, computing a state feedback stabilizer) is provided.

Discussion: Compared with traditional Q-learning, DQN avoids the limited capacity problem of Q-tables, so our method can deal with high-dimensional complex systems efficiently. The effectiveness of our method is further demonstrated through an illustrative example.

1 Introduction

The finite state machine (FSM), also known as the finite automaton (Yan et al., 2015b), is an important mathematical model that has been used in many different fields, such as manufacturing systems (Wang et al., 2017; Piccinini et al., 2018), health care (Shah et al., 2017; Zhang, 2018; Fadhil et al., 2019), and so on. The deterministic finite state machine (DFSM) is known for its deterministic behavior, in which each subsequent state is uniquely determined by its input event and preceding state (Vayadande et al., 2022). However, DFSMs may not be effective in dealing with random behaviors (Ratsaby, 2019), for example, the randomness caused by component failures in sequential circuits (El-Maleh and Al-Qahtani, 2014). To address this challenge, the probabilistic finite state machine (PFSM) was proposed in the study by Vidal et al. (2005), which provides a more flexible framework for systems that exhibit random behaviors. In particular, it provides an effective solution to practical issues such as the reliability assessment of sequential circuits (Li and Tan, 2019). Therefore, the PFSM offers a new perspective for the theoretical research of FSMs.

On the other hand, the stabilization of systems is an important and fundamental research topic, and there have been many excellent research results in various fields, for example, Boolean control networks (Tian et al., 2017; Tian and Hou, 2019), time-delay systems (Tian and Wang, 2020), neural networks (Ding et al., 2019), and so on. The stabilization research of FSMs is no exception and has also attracted the attention of many scholars. The concepts of stability and stabilization of discrete event systems described by FSMs were given in the study by Özveren et al. (1991), where a polynomial solution for stability detection and a method for constructing stabilizers were presented. Passino et al. (1994) utilized the Lyapunov method to study the stability and stabilization of FSMs. Tarraf et al. (2008) proposed some new concepts, including gain stability, incremental stability, and external stability, and then established a research framework for robust stability of FSMs. Kobayashi et al. developed a linear state equation representation method for modeling DFSMs in the study by Kobayashi (2006) and Kobayashi and Imura (2007) and derived a necessary and sufficient condition for a DFSM to be stabilizable at a target equilibrium node in the study by Kobayashi et al. (2011).

However, as we know, the FSM is most often non-linear, and none of the above methods are convenient for analyzing and designing various FSMs. In the last decade, scholars have applied the semi-tensor product (STP) of matrices to FSMs and derived many excellent results. First, with the help of STP, an algebraic form of DFSMs was given in the study by Xu et al. (2013). This algebraic form is a discrete-time bilinear equation, so classic control theory can be used to investigate FSMs. In particular, under the algebraic form, necessary and sufficient conditions for the stabilizability of DFSMs were derived in the study by Xu et al. (2013), and a state feedback controller was obtained by computing a corresponding matrix inequality. Moreover, Yan et al. (2015a) provided a necessary and sufficient condition to check whether a set of states can be stabilized. Han and Chen (2018) considered the set stabilization of DFSMs and provided an optimal design approach for stabilizing controllers. Later, Zhang et al. used the STP method to investigate PFSMs and non-deterministic FSMs. Specifically, a necessary and sufficient condition for stabilization with probability one and a design method for an optimal state feedback controller were provided in the study by Zhang et al. (2020a). Moreover, a systematic procedure was designed to obtain a static output feedback stabilizer for non-deterministic FSMs in the study by Zhang et al. (2020b). Although the STP method is very useful in analyzing discrete event systems, including various FSMs, it suffers from high computational complexity and can only handle small-scale or even micro-scale discrete event systems. To address this problem, this study draws on techniques developed by Acernese et al. (2019) to solve the stabilization problem of high-dimensional PFSMs, and then provides a reinforcement learning algorithm to compute a state feedback stabilizer for PFSMs. The algorithm is especially advantageous in dealing with high-dimensional systems.

The rest of this study is arranged as follows: Section 2 introduces some preliminary knowledge, including the PFSM, the Markov decision process (MDP), the deep Q-network (DQN), and the ϵ-greedy strategy. In Section 3, a stabilizability condition is derived and an algorithm based on DQN is provided. An illustrative example is employed in Section 4 to show the effectiveness of our results, which is followed by a brief conclusion in Section 5.

2 Methods

For the convenience of statement, some symbol explanations are provided first.

Notation: ℝ denotes the set of all real numbers. ℤ⁺ stands for the set of all positive integers. $[a,b]_{\mathbb{Z}^+}$ denotes the set {a, a+1, ⋯, b}, where a, b ∈ ℤ⁺ and a ≤ b. |A| is the cardinality of the set A.

2.1 Probabilistic finite state machine

A PFSM is a five-tuple

$\Lambda = (X, U, P, f, X_0)$,    (1)

where the set $X := \{X_1, X_2, \cdots, X_n\}$ represents a finite set of states, and $X_0 \in X$ is the initial state. $U := \{U_1, U_2, \cdots, U_m\}$ denotes a finite set of events. $P: X \times U \times X \to [0, 1]$ is a transition probability function, and $P(X_i, U_k, X_j) := P^{U_k}_{X_i, X_j}$ expresses the probability of PFSM (1) transiting from state $X_i \in X$ to state $X_j \in X$ under the input event $U_k \in U$, satisfying

$\sum_{X_j \in X} P^{U_k}_{X_i, X_j} = 1$

or

$\sum_{X_j \in X} P^{U_k}_{X_i, X_j} = 0.$

The state transition function $f: X \times U \to 2^X$ describes that PFSM (1) may reach different states from one state under the same input event, where $2^X$ is the power set of $X$.
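
To make the definition concrete, the following Python sketch shows one possible way to encode a PFSM of the form (1); the dictionary layout and the helper names (`sample_next_state`, `successors`) are our own illustrative choices rather than anything prescribed in the paper.

```python
import random

# Hypothetical encoding of PFSM (1): states and events are strings, and the
# transition probability function P is stored as a dictionary that maps a
# (state, event) pair to a list of (next_state, probability) outcomes.
pfsm = {
    ("X1", "U1"): [("X2", 0.5), ("X5", 0.5)],
    ("X1", "U2"): [("X1", 1.0)],
    ("X1", "U3"): [("X1", 0.6), ("X3", 0.4)],
    # ... remaining (state, event) pairs
}

def sample_next_state(transitions, state, event):
    """Sample the next state X_j with probability P^{U_k}_{X_i, X_j}."""
    outcomes = transitions[(state, event)]
    next_states = [s for s, _ in outcomes]
    probs = [p for _, p in outcomes]
    return random.choices(next_states, weights=probs, k=1)[0]

def successors(transitions, state, event):
    """The state transition function f: X x U -> 2^X (positive-probability successors)."""
    return {s for s, p in transitions[(state, event)] if p > 0}
```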

2.2 Markov decision process and optimization methods

A Markov decision process (MDP) is characterized by a quintuple

$\Omega = (S, A, P, R, \gamma)$,    (2)

where S is a set of states, A is a set of actions, P is a state transition probability function, R is a reward function, and γ ∈ [0, 1] is a discount factor that determines the trade-off between short-term and long-term gains.

MDP (2) may reach state $s_{t+1}$ from state $s_t \in S$ under the chosen action $a_t \in A$, and its probability is determined by the function $P^{a_t}_{s_t, s_{t+1}} = P(s_{t+1} \mid s_t, a_t)$. The expected one-step reward from state $s_t$ to state $s_{t+1}$ via action $a_t$ is as follows:

$R^{a_t}_{s_t, s_{t+1}} = \mathbb{E}[r_{t+1} \mid s_t, a_t]$,

where $r_{t+1} = r_{t+1}(s_t, a_t, s_{t+1})$ represents the immediate return after adopting action $a_t$ at time $t$, and $\mathbb{E}[\cdot]$ is the expected value of $[\cdot]$.

The objective of MDP (2) is to determine an optimal policy π that maximizes the expected return $\mathbb{E}_\pi[G_t]$, where

$G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}.$

For a given policy π, the value function of a state $s_t$, denoted by $v_\pi(s_t)$, is the expected return of MDP (2) when actions are taken according to the policy π from time step t onward:

$v_\pi(s_t) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\middle|\, s_t\right], \quad \forall s_t \in S.$    (3)

The optimal policy is as follows:

$\pi^*(s_t, a_t) = \arg\max_{\pi \in \Pi} v_\pi(s_t), \quad \forall s_t \in S,$    (4)

where Π is the set of all admissible policies.

From (4), it is easy to see that $v^*(s_t) = v_{\pi^*}(s_t)$. Since $v_\pi(\cdot)$ satisfies the Bellman equation, we have

$v^*(s_t) = \max_{a \in A} \sum_{s' \in S} P^{a}_{s_t, s'}\left[R^{a}_{s_t, s'} + \gamma v^*(s')\right].$    (5)

Similarly, the action-value function describes the cumulative return from the state-action pair $(s_t, a_t)$ under policy π:

$q_\pi(s_t, a_t) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\middle|\, s = s_t, a = a_t\right], \quad \forall a_t \in A.$    (6)

By substituting (3) into (6), we can obtain

$q_\pi(s_t, a_t) = \mathbb{E}_\pi\left[r_{t+1} + \gamma v_\pi(s_{t+1})\right],$

which represents the expected return of action $a_t$ adopted by MDP (2) at state $s_t$, following policy π. The action-value function under the optimal policy π* is called the optimal action-value function, i.e., $q^*(s_t, a_t) := q_{\pi^*}(s_t, a_t)$, $\forall s_t \in S$, $\forall a_t \in A$. Since $v^*(s_t) = \max_a q^*(s_t, a)$, from (5), we can get

$q^*(s_t, a_t) = \sum_{s' \in S} P^{a_t}_{s_t, s'}\left[R^{a_t}_{s_t, s'} + \gamma \max_{a'} q^*(s', a')\right].$

Therefore, if MDP (2) admits an optimal deterministic policy, it can be expressed as follows:

$\mu^*(s_t) = \arg\max_{a \in A} q^*(s_t, a), \quad \forall s_t \in S.$
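
For a finite MDP whose transition probabilities P and rewards R are known, Equation (5) and the optimality equation for q* can be solved by simple fixed-point iteration. The sketch below is only meant to illustrate these equations; it is not the method used later in the paper, where P is unknown.

```python
import numpy as np

def q_value_iteration(P, R, gamma=0.99, tol=1e-8):
    """Iterate q(s,a) <- sum_s' P[s,a,s'] * (R[s,a,s'] + gamma * max_a' q(s',a')).

    P: array of shape (n_states, n_actions, n_states) with transition probabilities.
    R: array of the same shape with one-step rewards.
    """
    n_states, n_actions, _ = P.shape
    q = np.zeros((n_states, n_actions))
    while True:
        v = q.max(axis=1)                                   # v*(s') = max_a' q*(s', a')
        q_new = (P * (R + gamma * v[None, None, :])).sum(axis=2)
        if np.abs(q_new - q).max() < tol:
            return q_new
        q = q_new

# The optimal deterministic policy mu*(s) = argmax_a q*(s, a):
# policy = q_value_iteration(P, R).argmax(axis=1)
```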

DQN is a technique that combines Q-learning with artificial neural networks (ANNs), providing an effective approach to decision-making problems in dynamic and uncertain environments. It uses ANNs to construct parametric models and estimate action-value functions online. Compared with Q-learning, the main advantages of DQN are as follows: (1) DQN uses ANNs to approximate Q functions, overcoming the limited capacity of Q-tables and enabling the algorithm to handle high-dimensional state spaces. (2) DQN makes full use of past experience by storing and replaying it.

Q learning updates the value function according to the following temporal difference (TD) formula:

$q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha\left[r_{t+1} + \gamma \max_a q(s_{t+1}, a) - q(s_t, a_t)\right],$    (7)

where $r_{t+1} + \gamma \max_a q(s_{t+1}, a)$ is the TD target, $r_{t+1} + \gamma \max_a q(s_{t+1}, a) - q(s_t, a_t)$ is the TD error δ, and 0 < α ≤ 1 is a constant that determines how quickly past experiences are forgotten.
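
The TD update (7) amounts to a one-line correction of the Q-table. The snippet below is a generic tabular Q-learning step; the table `q`, the state and action indices, and the default learning rate are illustrative assumptions.

```python
import numpy as np

def q_learning_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step, Equation (7): q <- q + alpha * TD error."""
    td_target = r + gamma * np.max(q[s_next])   # r_{t+1} + gamma * max_a q(s_{t+1}, a)
    td_error = td_target - q[s, a]              # TD error delta
    q[s, a] += alpha * td_error
    return q
```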

When dealing with high-dimensional complex systems, the action-value function q(s, a), as described in Equation (7), is approximated by an ANN to reduce computational complexity. This can be achieved by minimizing the following loss function

$L(\theta_t) = \left(r_{t+1} + \gamma \max_a q(s_{t+1}, a; \theta_t^-) - q(s_t, a_t; \theta_t)\right)^2,$    (8)

where the parameter $\theta_t^-$ is a periodic copy of the current network parameter $\theta_t$.

By differentiating Equation (8), we have

$\nabla_{\theta_t} L(\theta_t) = -2\left(r_{t+1} + \gamma \max_a q(s_{t+1}, a; \theta_t^-) - q(s_t, a_t; \theta_t)\right)\nabla_{\theta_t} q(s_t, a_t; \theta_t),$    (9)

where $\nabla_{\theta_t} q(s_t, a_t; \theta_t)$ represents the gradient of $q(s_t, a_t; \theta_t)$ with respect to the parameter $\theta_t$.

We choose the gradient descent method as the optimization strategy

$\theta_{t+1} = \theta_t - \frac{\alpha}{2} \nabla_{\theta_t} L(\theta_t).$    (10)

By substituting Equation (9) into Equation (10), we obtain an update formula for the parameter $\theta_t$:

$\theta_{t+1} = \theta_t + \alpha\left[r_{t+1} + \gamma \max_a q(s_{t+1}, a; \theta_t^-) - q(s_t, a_t; \theta_t)\right]\nabla_{\theta_t} q(s_t, a_t; \theta_t).$
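
In practice, the update (8)-(10) is usually implemented with automatic differentiation rather than the explicit gradient formula. The following TensorFlow/Keras sketch performs one mini-batch update with a periodically copied target network playing the role of θ⁻; the tensor shapes and names are assumptions, not the authors' code.

```python
import tensorflow as tf

def dqn_update(model, target_model, optimizer, batch, gamma=0.99):
    """One gradient step on the loss (8), using a periodic copy theta^- (target_model)."""
    states, actions, rewards, next_states = batch           # tensors of one mini-batch
    n_actions = model.output_shape[-1]
    with tf.GradientTape() as tape:
        q_all = model(states)                               # q(s_t, .; theta_t)
        q_sa = tf.reduce_sum(q_all * tf.one_hot(actions, n_actions), axis=1)
        q_next = target_model(next_states)                  # q(s_{t+1}, .; theta_t^-)
        td_target = rewards + gamma * tf.reduce_max(q_next, axis=1)
        loss = tf.reduce_mean(tf.square(tf.stop_gradient(td_target) - q_sa))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```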

Finally, the ϵ-greedy strategy is used for action selection. Specifically, an action is chosen randomly with probability ϵ (0 < ϵ ≤ 1), and the action with the best estimated value is chosen with probability 1 − ϵ. As learning progresses, ϵ gradually decreases, and the policy shifts from exploring the action space to exploiting the learned Q values. The policy $\pi(a \mid s)$ is as follows:

$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \dfrac{\epsilon}{|A|}, & \text{if } a = \arg\max_{a' \in A} q(s, a') \\ \dfrac{\epsilon}{|A|}, & \text{otherwise}, \end{cases}$

where $\pi(a \mid s)$ is the probability of MDP (2) selecting action a at state s, and $\arg\max_{a' \in A} q(s, a')$ stands for the action with the highest estimated Q value at state s.
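
The ϵ-greedy rule translates directly into code; the sketch below selects an action from estimated Q values, with the decay schedule shown in the comment being an illustrative assumption.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    n_actions = len(q_values)
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)        # explore
    return int(np.argmax(q_values))                # exploit: argmax_a q(s, a)

# Shifting from exploration to exploitation as learning progresses
# (the decay schedule is an illustrative assumption):
# epsilon = max(eps_min, epsilon * decay_rate)
```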

3 Results

We first give a definition.

Definition 1: Assume that $X_e$ is an equilibrium state of PFSM (1). The PFSM is said to be feedback stabilizable to $X_e$ with probability one if, for any initial state $X_i \in X$, there exists a control sequence $\mathbf{U} := U_{l_1}, U_{l_2}, \cdots, U_{l_k}$ with each $U_{l_j} \in U$, such that $P^{\mathbf{U}}_{X_i, X_e} = 1$.

We define an attraction domain $\mathcal{A}_k(X_e)$ for an equilibrium state $X_e$, which is the set of states that can be driven to $X_e$ in k steps with probability one:

$\mathcal{A}_k(X_e) = \{X_i \in X \mid \text{there exists a control sequence } \mathbf{U} := U_{l_1}, U_{l_2}, \cdots, U_{l_k} \text{ such that } P^{\mathbf{U}}_{X_i, X_e} = 1\}.$    (11)

Next, we give an important result.

Theorem 1: Assume that $X_e$ is an equilibrium state of PFSM (1). The PFSM is feedback stabilizable to $X_e$ with probability one if and only if there exists an integer ρ ≤ n − 1 such that

$\mathcal{A}_\rho(X_e) = X.$    (12)

Proof (Necessity): Assume that PFSM (1) is feedback stabilizable to the equilibrium state $X_e$ with probability one. Then, according to Definition 1, for any initial state $X_i$ there exists a control sequence $\mathbf{U} := U_{l_1}, U_{l_2}, \cdots, U_{l_{k_i}}$ such that $P^{\mathbf{U}}_{X_i, X_e} = 1$, namely $X_i \in \mathcal{A}_{k_i}(X_e)$. Since $X_e$ is an equilibrium state, any such sequence can be extended by the event that keeps $X_e$ fixed, so $\mathcal{A}_k(X_e) \subseteq \mathcal{A}_{k+1}(X_e)$ for every k. Because the state space is finite, there must then be an integer ρ such that $\mathcal{A}_\rho(X_e) = X$ holds.

(Sufficiency): Assume that Equation (12) holds. For any initial state $X_i \in X$, we have $X_i \in \mathcal{A}_\rho(X_e)$. From Equation (11), there exists a control sequence $\mathbf{U} := U_{l_1}, U_{l_2}, \cdots, U_{l_\rho}$ such that $X_i$ can be driven to $X_e$ by $\mathbf{U}$ in ρ steps with probability one. According to Definition 1, PFSM (1) is feedback stabilizable to $X_e$ with probability one.      ■
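
When the supports of the transition probabilities are known, condition (12) of Theorem 1 can be checked by growing the attraction domain backward from $X_e$ until it stops changing or covers X. The sketch below implements this idea for the state feedback setting (a state joins the domain once some event sends all of its positive-probability successors into the current domain); it is only an illustration of Theorem 1, since Algorithm 1 itself works model-free.

```python
def is_stabilizable(states, events, successors, x_e):
    """Check condition (12): grow the attraction domain of the equilibrium state x_e.

    successors(x, u) must return the set of states reachable from x under event u
    with positive probability (the state transition function f of PFSM (1)).
    """
    domain = {x_e}
    for _ in range(len(states) - 1):               # rho <= n - 1 by Theorem 1
        new_domain = set(domain)
        for x in states:
            if x in domain:
                continue
            # x joins the domain if some event drives it into the domain w.p. one
            if any(successors(x, u) <= domain for u in events):
                new_domain.add(x)
        if new_domain == domain:
            break
        domain = new_domain
    return domain == set(states)
```

With the `successors` helper from the PFSM sketch in Section 2.1, this check can be run directly on a transition table such as the one in Example 1.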

We cast the feedback control problem of PFSM (1) into a model-free reinforcement learning framework. The main aim is to find a state feedback controller that guarantees the finite-time stabilization of PFSM (1), that is, all states can be driven to an equilibrium state within a finite number of steps. Therefore, PFSM (1) is rewritten as $(X, U, P, R, \gamma)$, where P is unknown. The stabilization problem of PFSM (1) is formulated as follows:

$\max_{\mu(\cdot)} \mathbb{E}_\mu\!\left[\sum_{t=0}^{\infty} \gamma^t r_{t+1}(X_t, U_t, X_{t+1})\right], \quad \forall X_0 \in X,$    (13)

                  subject to (1),

where

$r_{t+1} = \begin{cases} 1, & \text{if } X_{t+1} = X_e \\ 0.1, & \text{otherwise.} \end{cases}$

The objective of Equation (13) is to find an action U that maximizes the action-value function q* among all possible actions in U. Therefore, for any state $X_t$ and target equilibrium state $X_e$, the optimal state feedback control law of PFSM (1) is as follows:

$\mu^*(X_t, X_e) = \arg\max_{U \in U} q^*(X_t, U, X_e; \theta^-), \quad \forall X_t \in X.$
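
In this learning formulation, PFSM (1) is interacted with as a black-box environment that only returns a sampled next state and the reward defined in (13). A minimal environment wrapper could look as follows; the class and method names are our own, and it reuses the `sample_next_state` helper from Section 2.1.

```python
class PFSMEnv:
    """Treat PFSM (1) as a black-box environment: only sampled transitions are observed."""

    def __init__(self, transitions, states, x_e):
        self.transitions = transitions   # dictionary as in the PFSM sketch of Section 2.1
        self.states = list(states)
        self.x_e = x_e                   # target equilibrium state

    def reset(self, x0):
        self.state = x0
        return self.state

    def step(self, event):
        # Sample X_{t+1} according to the probabilities P (unknown to the learner).
        next_state = sample_next_state(self.transitions, self.state, event)
        reward = 1.0 if next_state == self.x_e else 0.1   # reward of problem (13)
        self.state = next_state
        return next_state, reward
```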

Based on the above discussion, we are ready to introduce an algorithm to design an optimal feedback controller (see Algorithm 1). It should be noted that in this algorithm, DQN uses two ANNs. The structure diagram of DQN is shown in Figure 1.

Algorithm 1. State feedback stabilization of PFSM (1) based on deep Q-network.

Figure 1. Structure diagram of DQN.
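
Algorithm 1 itself is given as a figure in the original article and is not reproduced here. To convey the ingredients described in the text (two ANNs, a replay memory B, mini-batches M, and ϵ-greedy exploration), the following is only a schematic training loop built from the `dqn_update` and `epsilon_greedy` sketches above; the network architecture, episode length, and learning rate are assumptions, not the authors' exact settings.

```python
import random
from collections import deque

import numpy as np
import tensorflow as tf

def build_q_network(n_states, n_actions):
    # Small fully connected network; the architecture is an illustrative assumption.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_states,)),           # one-hot encoded state
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions),                    # q(s, .; theta)
    ])

def train(env, states, events, episodes=500, horizon=50,
          gamma=0.99, eps=1.0, eps_min=0.05, eps_decay=0.995,
          buffer_size=10_000, batch_size=128, sync_every=100):
    n_s, n_a = len(states), len(events)
    one_hot = {s: np.eye(n_s, dtype=np.float32)[i] for i, s in enumerate(states)}
    model = build_q_network(n_s, n_a)
    target_model = build_q_network(n_s, n_a)
    target_model.set_weights(model.get_weights())            # theta^- <- theta
    optimizer = tf.keras.optimizers.Adam(1e-3)
    buffer, step_count = deque(maxlen=buffer_size), 0

    for _ in range(episodes):
        state = env.reset(random.choice(states))
        for _ in range(horizon):
            q_vals = model(one_hot[state][None, :])[0].numpy()
            action = epsilon_greedy(q_vals, eps)              # epsilon-greedy selection
            next_state, reward = env.step(events[action])
            buffer.append((one_hot[state], action, reward, one_hot[next_state]))
            state = next_state
            step_count += 1
            if len(buffer) >= batch_size:
                batch = random.sample(buffer, batch_size)
                s, a, r, s2 = map(np.array, zip(*batch))
                dqn_update(model, target_model, optimizer,
                           (s, a.astype(np.int32), r.astype(np.float32), s2), gamma)
            if step_count % sync_every == 0:
                target_model.set_weights(model.get_weights())  # periodic copy of theta
        eps = max(eps_min, eps * eps_decay)
    return model
```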

Remark 1: This algorithm is mainly intended for the stabilization problem of high-dimensional PFSMs. For small-scale or even micro-scale PFSMs, it is comparatively cumbersome, and the STP method can be chosen instead. In this sense, Algorithm 1 and the STP method complement each other.

According to the results calculated by Algorithm 1, a state feedback controller can be given. Specifically, the output of Algorithm 1 is an optimal policy. Assume that $\mu^*(X_i, X_e)$ is the calculation result. Then, we obtain a state feedback controller $\mu_i^* := \mu^*(X_i, X_e)$, $i \in [1, n]_{\mathbb{Z}^+}$.
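
Reading off the state feedback controller from the trained network is then a greedy evaluation of the learned Q values at every state; a short sketch (with the same assumed names as above) is given below.

```python
import numpy as np

def extract_controller(model, states, events, one_hot):
    """mu*_i = argmax_U q(X_i, U; theta) for every state X_i."""
    controller = {}
    for s in states:
        q_vals = model(one_hot[s][None, :])[0].numpy()
        controller[s] = events[int(np.argmax(q_vals))]
    return controller
```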

4 Discussion

Example 1: Consider a PFSM

X1(t+1): f(X1(t), U1) = X2 (P = 0.5) or X5 (P = 0.5); f(X1(t), U2) = X1 (P = 1.0); f(X1(t), U3) = X1 (P = 0.6) or X3 (P = 0.4);
X2(t+1): f(X2(t), U1) = X2 (P = 1.0); f(X2(t), U2) = X3 (P = 0.7) or X6 (P = 0.3); f(X2(t), U3) = X2 (P = 0.5) or X4 (P = 0.5);
X3(t+1): f(X3(t), U1) = X3 (P = 1.0); f(X3(t), U2) = X3 (P = 1.0); f(X3(t), U3) = X3 (P = 1.0);
X4(t+1): f(X4(t), U1) = X4 (P = 0.4) or X7 (P = 0.6); f(X4(t), U2) = X5 (P = 1.0); f(X4(t), U3) = X4 (P = 0.3) or X6 (P = 0.7);
X5(t+1): f(X5(t), U1) = X5 (P = 1.0); f(X5(t), U2) = X3 (P = 0.9) or X6 (P = 0.1); f(X5(t), U3) = X5 (P = 0.8) or X9 (P = 0.2);
X6(t+1): f(X6(t), U1) = X3 (P = 1.0); f(X6(t), U2) = X6 (P = 1.0); f(X6(t), U3) = X6 (P = 0.7) or X7 (P = 0.3);
X7(t+1): f(X7(t), U1) = X7 (P = 1.0); f(X7(t), U2) = X5 (P = 1.0); f(X7(t), U3) = X7 (P = 0.6) or X8 (P = 0.4);
X8(t+1): f(X8(t), U1) = X7 (P = 0.7) or X8 (P = 0.3); f(X8(t), U2) = X9 (P = 1.0); f(X8(t), U3) = X8 (P = 0.9) or X1 (P = 0.1);
X9(t+1): f(X9(t), U1) = X6 (P = 1.0); f(X9(t), U2) = X9 (P = 1.0); f(X9(t), U3) = X9 (P = 0.5) or X2 (P = 0.5);
X10(t+1): f(X10(t), U1) = X11 (P = 0.3) or X12 (P = 0.7); f(X10(t), U2) = X10 (P = 1.0); f(X10(t), U3) = X10 (P = 0.4) or X13 (P = 0.6);
X11(t+1): f(X11(t), U1) = X11 (P = 1.0); f(X11(t), U2) = X12 (P = 0.5) or X14 (P = 0.5); f(X11(t), U3) = X11 (P = 0.7) or X15 (P = 0.3);
X12(t+1): f(X12(t), U1) = X16 (P = 0.6) or X17 (P = 0.4); f(X12(t), U2) = X12 (P = 1.0); f(X12(t), U3) = X12 (P = 0.8) or X18 (P = 0.2);
X13(t+1): f(X13(t), U1) = X13 (P = 1.0); f(X13(t), U2) = X14 (P = 0.9) or X19 (P = 0.1); f(X13(t), U3) = X13 (P = 0.5) or X20 (P = 0.5);
X14(t+1): f(X14(t), U1) = X15 (P = 0.8) or X16 (P = 0.2); f(X14(t), U2) = X14 (P = 1.0); f(X14(t), U3) = X17 (P = 0.6) or X18 (P = 0.4);
X15(t+1): f(X15(t), U1) = X15 (P = 1.0); f(X15(t), U2) = X16 (P = 0.7) or X20 (P = 0.3); f(X15(t), U3) = X19 (P = 0.5) or X15 (P = 0.5);
X16(t+1): f(X16(t), U1) = X16 (P = 1.0); f(X16(t), U2) = X17 (P = 0.8) or X18 (P = 0.2); f(X16(t), U3) = X16 (P = 0.6) or X19 (P = 0.4);
X17(t+1): f(X17(t), U1) = X17 (P = 1.0); f(X17(t), U2) = X18 (P = 0.9) or X20 (P = 0.1); f(X17(t), U3) = X17 (P = 0.7) or X19 (P = 0.3);
X18(t+1): f(X18(t), U1) = X18 (P = 1.0); f(X18(t), U2) = X19 (P = 0.8) or X20 (P = 0.2); f(X18(t), U3) = X18 (P = 0.5) or X1 (P = 0.5);
X19(t+1): f(X19(t), U1) = X19 (P = 1.0); f(X19(t), U2) = X20 (P = 0.6) or X1 (P = 0.4); f(X19(t), U3) = X19 (P = 0.8) or X2 (P = 0.2);
X20(t+1): f(X20(t), U1) = X20 (P = 1.0); f(X20(t), U2) = X1 (P = 0.7) or X3 (P = 0.3); f(X20(t), U3) = X20 (P = 0.9) or X4 (P = 0.1),    (14)

where Xi(t) represents the i-th state of PFSM (14) at time step t. It is easy to observe that X3 is an equilibrium state.

We now use Algorithm 1 to compute a state feedback controller that stabilizes PFSM (14) to $X_3$. The computation is performed on a computer with an Intel i5-11300H processor (2.6 GHz), 16 GB RAM, and Python 3.7. We adopt TensorFlow with the Keras API to train the DQN model, where the discount factor γ is 0.99, the range for ϵ in the ϵ-greedy policy is from 0.05 to 1.0, and the sizes of the memory buffer B and mini-batch M are 10,000 and 128, respectively.
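
For reference, the reported settings map directly onto the training sketch from Section 3. The call below is illustrative only; the episode budget, learning rate, and network size are not reported in the paper and are assumptions of the sketch.

```python
import numpy as np

# Assumed instantiation of the earlier sketches with the reported settings:
# gamma = 0.99, epsilon decaying from 1.0 to 0.05, |B| = 10,000, |M| = 128.
states = [f"X{i}" for i in range(1, 21)]
events = ["U1", "U2", "U3"]
env = PFSMEnv(pfsm, states, x_e="X3")        # pfsm holds the transitions of (14)
model = train(env, states, events,
              gamma=0.99, eps=1.0, eps_min=0.05,
              buffer_size=10_000, batch_size=128)
controller = extract_controller(
    model, states, events,
    one_hot={s: np.eye(len(states), dtype=np.float32)[i] for i, s in enumerate(states)},
)
```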

Through calculation, we obtain a state feedback controller

$\mu_i^* = \mu^*(X_i, X_3), \quad i \in [1, 20]_{\mathbb{Z}^+},$    (15)

which is shown in Table 1.

Table 1. A state feedback controller of PFSM (14).

Model (14) is a PFSM with 20 states, which is not a simple system. Here, we utilize the average reward to track performance during training (see Figure 2). It is easy to observe that as training goes on, the performance increases and tends to be stable. We put the state feedback controller (15), shown in Table 1, into PFSM (14) and obtain the following closed-loop system:

X1(t+1) = X2 (P = 0.5) or X5 (P = 0.5);    X2(t+1) = X3 (P = 0.7) or X6 (P = 0.3);
X3(t+1) = X3 (P = 1.0);    X4(t+1) = X4 (P = 0.4) or X7 (P = 0.6);
X5(t+1) = X3 (P = 0.9) or X6 (P = 0.1);    X6(t+1) = X3 (P = 1.0);
X7(t+1) = X5 (P = 1.0);    X8(t+1) = X9 (P = 1.0);
X9(t+1) = X6 (P = 1.0);    X10(t+1) = X10 (P = 0.4) or X13 (P = 0.6);
X11(t+1) = X12 (P = 0.5) or X14 (P = 0.5);    X12(t+1) = X12 (P = 0.8) or X18 (P = 0.2);
X13(t+1) = X13 (P = 0.5) or X20 (P = 0.5);    X14(t+1) = X17 (P = 0.6) or X18 (P = 0.4);
X15(t+1) = X16 (P = 0.7) or X20 (P = 0.3);    X16(t+1) = X17 (P = 0.8) or X18 (P = 0.2);
X17(t+1) = X18 (P = 0.9) or X20 (P = 0.1);    X18(t+1) = X18 (P = 0.5) or X1 (P = 0.5);
X19(t+1) = X20 (P = 0.6) or X1 (P = 0.4);    X20(t+1) = X1 (P = 0.7) or X3 (P = 0.3).    (16)

Figure 2. Performance of Algorithm 1 in Example 1.

The state transition trajectory of the closed-loop system (16) starting from any initial state is shown in Figure 3. It can be observed from Figure 3 that all states reach X3 after a finite number of steps and then stay at X3 forever with probability one. This demonstrates the effectiveness of our controller. The number of steps required to reach X3 for each state is shown in Figure 4. From these results, we can observe that based on DQN, Algorithm 1 can solve the stabilization problem of non-small-scale PFSMs.

Figure 3. Evolution of the closed-loop system (16).

Figure 4. The number of steps required to stabilize PFSM (14) to X3.

5 Conclusion

This article studied the state feedback stabilization of PFSMs using the DQN method. The feedback stabilization problem of PFSMs was first transformed into an optimization problem. A DQN was built, whose two key parts, the TD target and the Q function, are approximated by neural networks. Then, based on the DQN and a stabilizability condition derived in this paper, an algorithm was developed. The algorithm solves the optimization problem mentioned above and thus the feedback stabilization problem of PFSMs. Since DQN avoids the limited capacity problem of Q-learning, our algorithm can handle high-dimensional complex systems. Finally, an illustrative example was provided to show the effectiveness of our method.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

HT: Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Writing—review & editing. XS: Conceptualization, Formal analysis, Investigation, Methodology, Validation, Writing—original draft. YH: Formal analysis, Investigation, Writing—review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research was funded by the National Key R&D Program of China (2021YFB3203202) and Chongqing Nature Science Foundation (cstc2020jcyj-msxmX0708).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Acernese, A., Yerudkar, A., Glielmo, L., and Vecchio, C. D. (2020). Double deep-Q learning-based output tracking of probabilistic Boolean control networks. IEEE Access 8, 199254–199265. doi: 10.1109/ACCESS.2020.3035152


Ding, S., Wang, Z., and Zhang, H. (2019). Quasi-synchronization of delayed memristive neural networks via region-partitioning-dependent intermittent control. IEEE Trans. Cybern. 49, 4066–4077. doi: 10.1109/TCYB.2018.2856907


El-Maleh, A., and Al-Qahtani, A. (2014). A finite state machine based fault tolerance technique for sequential circuits. Microelectron. Reliab. 54, 654–661. doi: 10.1016/j.microrel.2013.10.022


Fadhil, A., Wang, Y., and Reiterer, H. (2019). Assistive conversational agent for health coaching: a validation study. Methods Inf. Med. 58, 9–23. doi: 10.1055/s-0039-1688757


Han, X., and Chen, Z. (2018). A matrix-based approach to verifying stability and synthesizing optimal stabilizing controllers for finite-state automata. J. Franklin Inst. 355, 8642–8663. doi: 10.1016/j.jfranklin.2018.09.009


Kobayashi, K. (2006). “Modeling of discrete dynamics for computational time reduction of model predictive control,” in Proceedings of the 17th International Symposium on Mathematical Theory of Networks and Systems (Tokyo), 628–633.


Kobayashi, K., and Imura, J. (2007). “Minimality of finite automata representation in hybrid systems control,” in Hybrid Systems: Computation and Control (Berlin, Heidelberg: Springer), 343–356. doi: 10.1007/978-3-540-71493-4_28


Kobayashi, K., Imura, J., and Hiraishi, K. (2011). Stabilization of finite automata with application to hybrid systems control. Discret. Event Dyn. Syst. 21, 519–545. doi: 10.1007/s10626-011-0110-2


Li, J., and Tan, Y. (2019). A probabilistic finite state machine based strategy for multi-target search using swarm robotics. Appl. Soft Comput. 77, 467–483. doi: 10.1016/j.asoc.2019.01.023


Özveren, C., Willsky, A., and Antsaklis, P. (1991). Stability and stabilizability of discrete event dynamic systems. J. ACM 38, 729–751. doi: 10.1145/116825.116855


Passino, K., Michel, A., and Antsaklis, P. (1994). Lyapunov stability of a class of discrete event systems. IEEE Trans. Automat. Contr. 39, 269–279. doi: 10.1109/9.272323


Piccinini, A., Previdi, F., Cimini, C., Pinto, R., and Pirola, F. (2018). Discrete event simulation for the reconfiguration of a flexible manufacturing plant. IFAC-PapersOnLine 51, 465–470. doi: 10.1016/j.ifacol.2018.08.362


Ratsaby, J. (2019). On deterministic finite state machines in random environments. Probab. Eng. Inf. Sci. 33, 528–563. doi: 10.1017/S0269964818000451


Shah, S., Velardo, C., Farmer, A., and Tarassenko, L. (2017). Exacerbations in chronic obstructive pulmonary disease: identification and prediction using a digital health system. J. Med. Internet Res. 19:e69. doi: 10.2196/jmir.7207


Tarraf, D., Megretski, A., and Dahleh, M. (2008). A framework for robust stability of systems over finite alphabets. IEEE Trans. Automat. Contr. 53, 1133–1146. doi: 10.1109/TAC.2008.923658


Tian, H., and Hou, Y. (2019). State feedback design for set stabilization of probabilistic boolean control networks. J. Franklin Inst. 356, 4358–4377. doi: 10.1016/j.jfranklin.2018.12.027


Tian, H., Zhang, H., Wang, Z., and Hou, Y. (2017). Stabilization of k-valued logical control networks by open-loop control via the reverse-transfer method. Automatica 83, 387–390. doi: 10.1016/j.automatica.2016.12.040


Tian, Y., and Wang, Z. (2020). A new multiple integral inequality and its application to stability analysis of time-delay systems. Appl. Math. Lett. 105:106325. doi: 10.1016/j.aml.2020.106325


Vayadande, K., Sheth, P., Shelke, A., Patil, V., Shevate, S., Sawakare, C., et al. (2022). Simulation and testing of deterministic finite automata machine. Int. J. Comput. Sci. Eng. 10, 13–17. doi: 10.26438/ijcse/v10i1.1317


Vidal, E., Thollard, F., de la Higuera, C., Casacuberta, F., and Carrasco, R. (2005). Probabilistic finite-state machines - part I. IEEE Trans. Pattern Anal. Mach. Intell., 27, 1013–1025. doi: 10.1109/TPAMI.2005.147


Wang, L., Zhu, B., Wang, Q., and Zhang, Y. (2017). Modeling of hot stamping process procedure based on finite state machine (FSM). Int. J. Adv. Manuf. Technol. 89, 857–868. doi: 10.1007/s00170-016-9097-z


Xu, X., Zhang, Y., and Hong, Y. (2013). “Matrix approach to stabilizability of deterministic finite automata,” in 2013 American Control Conference (Washington, DC), 3242–3247.


Yan, Y., Chen, Z., and Liu, Z. (2015a). Semi-tensor product approach to controllability and stabilizability of finite automata. J. Syst. Eng. Electron. 26, 134–141. doi: 10.1109/JSEE.2015.00018


Yan, Y., Chen, Z., and Yue, J. (2015b). STP approach to controllability of finite state machines. IFAC-PapersOnLine 48, 138–143. doi: 10.1016/j.ifacol.2015.12.114


Zhang, X. (2018). Application of discrete event simulation in health care: a systematic review. BMC Health Serv. Res. 18, 1–11. doi: 10.1186/s12913-018-3456-4


Zhang, Z., Chen, Z., Han, X., and Liu, Z. (2020a). Stabilization of probabilistic finite automata based on semi-tensor product of matrices. J. Franklin Inst. 357, 5173–5186. doi: 10.1016/j.jfranklin.2020.02.028


Zhang, Z., Xia, C., and Chen, Z. (2020b). On the stabilization of nondeterministic finite automata via static output feedback. Appl. Math. Comput. 365:124687. doi: 10.1016/j.amc.2019.124687


Keywords: probabilistic finite state machine (PFSM), deep Q-network (DQN), feedback stabilization, artificial neural network (ANN), controller

Citation: Tian H, Su X and Hou Y (2024) Feedback stabilization of probabilistic finite state machines based on deep Q-network. Front. Comput. Neurosci. 18:1385047. doi: 10.3389/fncom.2024.1385047

Received: 11 February 2024; Accepted: 08 April 2024;
Published: 02 May 2024.

Edited by:

Yang Cui, University of Science and Technology Liaoning, China

Reviewed by:

Geyang Xiao, Zhejiang Lab, China
Junqi Yang, Henan Polytechnic University, China

Copyright © 2024 Tian, Su and Hou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Hui Tian, tianhui@cqupt.edu.cn
