
METHODS article

Front. Robot. AI, 22 March 2022
Sec. Robot Vision and Artificial Perception

Deep Learning-Based Complete Coverage Path Planning With Re-Joint and Obstacle Fusion Paradigm

  • 1Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS, United States
  • 2Department of Electrical Engineering, National Taipei University, and Tainan National University of the Arts, Taipei, Taiwan
  • 3Department of Civil and Mechanical Engineering, Purdue University Fort Wayne, Fort Wayne, IN, United States

With the introduction of autonomy into precision agriculture, environmental exploration, disaster response, and other fields, a global demand is to navigate autonomous vehicles to completely cover entire unknown environments. In previous complete coverage path planning (CCPP) research, however, autonomous vehicles must consider mapping, obstacle avoidance, and route planning simultaneously while operating in the workspace, which results in an extremely complicated and computationally expensive navigation system. In this study, a new framework is developed that proceeds in a hierarchical manner with the obtained environmental information, gradually solving the navigation problems layer by layer: environmental mapping, path generation, CCPP, and dynamic obstacle avoidance. The first layer, based on satellite images, utilizes a deep learning method to generate the CCPP trajectory through the position of the autonomous vehicle. In the second layer, an obstacle fusion paradigm in the map is developed based on the unmanned aerial vehicle (UAV) onboard sensors, and a nature-inspired algorithm is adopted for obstacle avoidance and CCPP re-joining. In the third layer, autonomous vehicles equipped with onboard LIDAR dynamically avoid moving obstacles. Simulated experiments validate the effectiveness and robustness of the proposed framework.

1 Introduction

In real-world applications such as environmental exploration (Rose and Chilvers, 2018), environmental sensing (Stolfi et al., 2021), and disaster response (Carrillo-Zapata et al., 2020), as well as other autonomous vehicle applications such as agricultural harvesting, forest surveillance, prospecting, and search and rescue, concurrent complete coverage path planning (CCPP) and mapping are needed to navigate a vehicle over every part of the terrain in unknown environments (Wang et al., 2019; Iqbal et al., 2020; Cèsar-Tondreau et al., 2021; Meng, 2021). In previous CCPP research, the vehicle needs to consider mapping, obstacle avoidance, and route planning concurrently while traversing the workspace, which makes the entire navigation system fairly complicated and computationally expensive (Lee et al., 2014; Poonawala and Spong, 2017; Niyaz et al., 2019; Jiang et al., 2020). In particular, in real-time navigation, re-planning around unforeseen moving obstacles may be computationally expensive. This study proposes a new framework that tackles environment mapping, path generation, CCPP, and dynamic obstacle avoidance in a hierarchical manner.

1.1 Related Work

For decades, CCPP has undergone extensive research, and many algorithms have emerged, such as the bio-inspired neural network (BNN) approach, the Boustrophedon Cellular Decomposition (BCD) method, and the deep reinforcement learning (DRL) approach. Luo and Yang (2008) developed the BNN method to navigate robots performing CCPP while avoiding obstacles in dynamic environments in real time (Zhu et al., 2017). The robot is attracted to unscanned areas and repelled by already covered areas and obstacles based on the neuron activity in the BNN given by the shunting equation (Yang and Luo, 2004; Li et al., 2018). Without any prior knowledge of the environment, the next position of the robot depends on its current position and the neuron activity associated with that position (Luo et al., 2016). However, this approach is time- and energy-consuming for the vehicles and requires substantial computing resources to process fine-resolution maps (Sun et al., 2018). Unlike the BNN approach, the BCD method and the DRL approach adopt a boundary representation to define the workspace. The BCD method, proposed by Acar and Choset (2002), decomposes the environment into line-scan partitions, each explored through a back-and-forth path (BFP) in the same direction. BCD is an effective CCPP method for workspaces with diverse, non-polygonal obstacles: each cell of the trapezoidal decomposition is covered in a back-and-forth pattern. For a complex configuration space with irregularly shaped obstacles, BCD needs to construct a graph that represents the adjacency connections of the cells in the boustrophedon decomposition; therefore, a deep learning-based method may promote it to a more efficient CCPP method (Sünderhauf et al., 2018; Valiente et al., 2020; Rawashdeh et al., 2021). Similarly, Nasirian et al. (2021) utilized traditional graph theory to segment the workspace and proposed a deep reinforcement learning approach to solve the CCPP problem in complex workspaces. The most common workspace shapes are represented by polygons, and even irregular, non-convex polygonal areas can be decomposed into multiple convex polygons (Li et al., 2011). Thus, the polygon representation is also adopted in this study to express most workspaces that need to be explored; such a representation simplifies complex environments and resolves coverage irregularity for vehicles (Quin et al., 2021).

Faster R-CNN originated from R-CNN and Fast R-CNN and uses a unified neural network (NN) for object detection, as shown in Figure 4A. Faster R-CNN avoids selective search, which accelerates region selection and further reduces computational costs. The detector is mainly composed of a region proposal network (RPN), which generates region proposals, and a network that uses the generated feature patches (FP) for object detection. The region of interest (ROI) pooling layer resizes the feature patches (RFP), which are finally concatenated with a set of fully connected (FC) layers in our study. Two fully connected NN layers refine the location of the bounding box and classify the objects. In our studies, Faster R-CNN effectively uses the bounding box to identify and locate vehicles and obstacles in the images. This is also applied to maps obtained from farms and search-and-rescue scenes to distinguish the vehicles, machines, and human beings in the image.

Although the above-mentioned CCPP approaches have achieved remarkable results, they may still be sub-optimal when the starting and target positions required by the vehicle are included in the path. This is especially true for multiple sub-region exploration tasks such as that shown in Figure 1A, where the task is to explore the four sub-regions continuously and the starting point of the next sub-region to be explored is the target point of the last one, as shown by the red circles in Figure 1. The selection of intermediate target points for multiple polygonal exploration areas is still an open problem because it must consider the shape and relative position of each sub-region, as well as the entrance and exit of the exploration area (Graves and Chakraborty, 2018). For simplicity, the entrances of the next sub-region are selected as target points here. The length of the connection path from the starting point to the target point should be considered, as shown by the blue lines in Figure 1B; ignoring the connection path may increase the complete path length of the overall exploration task (Xie et al., 2019). Thus, it is vital to consider the starting and target points of the vehicle as part of the exploration task and to obtain a shorter path that effectively utilizes the limited onboard resources. Another challenging problem that arises in CCPP is obstacle avoidance (An et al., 2018; Wang et al., 2021). Building on the excellent optimization and search capabilities of nature-inspired algorithms, researchers have recently explored many nature-inspired computational approaches to vehicle collision-free navigation (Deng et al., 2016; Ewerton et al., 2019; Lei et al., 2019, 2021; Segato et al., 2019). For instance, a hybrid fireworks algorithm with LIDAR-based local navigation was developed by Lei et al. (2020a), capable of generating short collision-free trajectories in unstructured environments. Zhou et al. (2019) developed a modified firefly algorithm with a self-adaptive step factor to avoid premature convergence and improve the operational efficiency of autonomous vehicles. Lei et al. (2020b) proposed a graph-based model integrated with ant colony optimization (ACO) to navigate the robot under its kinematic constraints. Xiong et al. (2021) further improved ACO using a time taboo strategy to improve the convergence speed and global search ability in dynamic environments. Cèsar-Tondreau et al. (2021) proposed a human-demonstrated navigation system, which integrates a behavioral cloning model into an off-the-shelf navigation stack.


FIGURE 1. (A) Illustration of a multiple sub-region exploration task. (B) The entire CCPP trajectory of multiple sub-regions with connection paths.

1.2 Proposed Framework and Original Contributions

This study proposes a progressive three-layer framework for the CCPP navigation of autonomous vehicles. In the first layer, a new type of deep learning-based complete coverage path generation method is developed to generate complete coverage trajectories without considering obstacles. A feature learning-enabled fully convolutional deep neural network (FCNN) model is developed to identify the edges of the workspace to be explored and, in combination with the starting and target positions of the vehicle, to estimate waypoints on an occupancy grid map and generate the CCPP paths. The generated paths serve as references that guide the vehicle in the following layers to reset and continue CCPP with obstacle avoidance once it traverses the vicinity of obstacles, which improves the computational efficiency of vehicle re-planning.

In the second layer, the obstacles in the environment are considered. A nature-inspired path planning method is proposed to perform autonomous navigation of vehicles in the environment; in particular, the vehicle re-plans when it traverses the vicinity of obstacles. In this study, the Bat algorithm is utilized to plan a collision-free trajectory according to the size and shape of the obstacles. Once the vehicle completes the re-planning near the obstacles, a new re-joint mechanism enables the vehicle to re-join the complete coverage trajectories. Additionally, an environment-based obstacle approximation and fusion paradigm is developed using image-processing-based feature extraction. Based on the proposed obstacle approximation and fusion method and the nature-inspired path planning method integrated with the re-joint mechanism, the autonomous vehicle takes less computational effort for optimal path planning on a map populated with obstacles.

Furthermore, a reactive local navigator in the third layer dynamically updates the path and map in real time so as to avoid moving and unknown obstacles in the dynamic environment. It dynamically adjusts the speed and direction based on onboard LIDAR sensors to navigate autonomous vehicles locally, thereby benefiting obstacle avoidance and safety assurance.

Overall, the framework composed of three layers advances accurately and efficiently, exploiting the environmental information layer by layer. Specifically, in the first layer, only satellite images are needed to provide the size and shape of the search area and the vehicle's initial and final positions. In the second layer, images obtained from an unmanned aerial vehicle (UAV) are required to gather detailed information about the obstacles in the environment, such as minecarts, planters, and vehicles. The third layer is based on onboard LIDAR sensors, used for real-time local reactive navigation of autonomous vehicles, avoiding moving obstacles, and building maps simultaneously. The contributions of this study are summarized as follows:

1) A hierarchical framework is proposed for the autonomous vehicle CCPP navigation in real-time environments;

2) A deep learning-based complete coverage path generation method is developed to generate complete coverage trajectories without considering obstacles;

3) For the problem of obstacle avoidance, an obstacle fusion paradigm and a Bat algorithm-based path re-joint method are proposed;

4) For avoiding dynamic and unknown obstacles in real-time environments, a local reactive navigator is introduced.

The rest of this study is organized as follows: Section 2 addresses the deep learning-based complete coverage path generation method. The second layer, concerning the nature-inspired algorithm and the re-joint mechanism, is explained in Section 3. Section 4 presents the reactive local navigator based on LIDAR sensors, the third layer of the proposed framework. Simulation and comparison studies are presented in Section 5, and several important properties of the presented framework are summarized in Section 6.

2 Deep Learning-Based Complete Coverage Path Planning

In the first layer, a deep learning-based method is proposed to generate a path that includes the starting and end positions while considering the shape of the explored areas, creating optimal back-and-forth path (BFP) coverage trajectories.

2.1 Preliminaries

In this section, the required assumptions for the proposed method are described. The region to be explored is assumed to lie in a 2D environment, and the configuration space for the autonomous vehicle Δ is formulated as $\mathbb{R}^2$. For this study, the boundary of the area to be explored is first obtained through image processing. There are many existing studies on edge detection (Poma et al., 2020; Nasirian et al., 2021) that prove its practicability and reliability (Wagner and Oppelt, 2020); hence, this study omits that step and analyzes the workspace directly. Each region is described in the standard form of a convex polygon, $\zeta = \{V, E\}$, $V = \{1, 2, \ldots, n\}$, $E = \{(1, 2), \ldots, (n, 1)\}$, where V is a set of vertices in clockwise order and E a set of edges. The vehicle's exploration range (for the task of seeding, cleaning, rescuing, etc.) is a circle with a diameter of d. The vehicle's starting position is denoted as $P_s$ and its end position as $P_e$. The CCPP path is denoted as $\omega = \{F_1, F_2, \ldots, F_n\}$, while the full CCPP path is $\Omega = \{P_s, \omega, P_e\}$. There are infinitely many potential solutions for covering an area, a problem known to be NP-hard (Arkin et al., 2000). Therefore, a variety of search patterns have been developed, such as star, zigzag, spiral, or BFP patterns. The BFP is utilized to establish the complete coverage path because of its low spatial complexity, which allows it to be tracked easily by the autonomous vehicle.
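To fix the notation in code for the sketches that follow, here is a minimal setup (shapely is an assumed helper library, not prescribed by the paper, and all coordinates are illustrative):

```python
from shapely.geometry import Polygon

# Convex polygonal region ζ = {V, E}: V given as vertices in clockwise
# order; the edge set E = {(1, 2), ..., (n, 1)} is implicit in the ordering.
region = Polygon([(0.0, 0.0), (0.0, 60.0), (80.0, 60.0), (80.0, 0.0)])

p_start = (-5.0, -5.0)   # Ps: vehicle starting position
p_end = (90.0, 65.0)     # Pe: vehicle end position
d = 5.0                  # exploration range (diameter of the covered circle)
```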

2.2 Search Direction

Previous research has mainly focused on CCPP exploration within the workspace while ignoring the vehicle's starting and end positions in real-world scenarios. However, for energy optimization and under resource constraints, the entire trajectory needs to be considered. Therefore, across the multiple edges of the polygon, the starting and end points of the vehicle should be combined to determine the vehicle's search direction. Meanwhile, owing to the properties of BFP-based CCPP trajectories, the optimal trajectory lines are parallel to one of the edges of the area (Torres et al., 2016). The search-direction procedure is developed in Algorithm 1, and the details are discussed in the following sections. The algorithm searches the set of opposite vertex pairs η, such as vertices (i, j) in Figure 2A. For a vertex such as i, its adjacent vertex $i_{adj}$ is found, and the BFP is formed parallel to the line $\overline{i\,i_{adj}}$ with a gap distance based on the vehicle exploration range d. In this case, the search direction θ is perpendicular to $\overline{i\,i_{adj}}$, pointing toward j. The function Dist(i, j) calculates $L_R$, the total length of the BFP trajectory in the workspace.

$L_R = \sum_{s=1}^{n} \sqrt{\left(x_{F_{s+1}} - x_{F_s}\right)^2 + \left(y_{F_{s+1}} - y_{F_s}\right)^2}$ (1)
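In code, Eq. 1 is a plain Euclidean chain length over the BFP waypoints (a minimal sketch):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def path_length(waypoints: List[Point]) -> float:
    """L_R of Eq. 1: the sum of Euclidean distances between consecutive waypoints."""
    return sum(math.dist(waypoints[s], waypoints[s + 1])
               for s in range(len(waypoints) - 1))
```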


FIGURE 2. (A) Construction of BFP path. (B) BFP search direction based on the vehicle’s starting and end position.

Then, the starting and end points of the autonomous vehicle are included in the total length L to obtain the optimal CCPP path Ω. Notably, the optimal BFP-based CCPP trajectory is first obtained with respect to the edges of the explored polygon before being combined with the start and end points to minimize the total length. Thus, the search direction θ of the BFP is obtained in the range [−π, π], represented by the dashed lines together with the vehicle's BFP segmentation lines in Figure 2B.

Algorithm 1. Pseudo-code for search direction.

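Algorithm 1 itself is rendered as an image in the article; the following is a plausible reconstruction of the procedure it describes, not the authors' exact pseudo-code. It considers a BFP parallel to each polygon edge and keeps the direction that minimizes the total length including the connections to Ps and Pe, reusing path_length from the sketch above (shapely is an assumed helper for the line clipping):

```python
import math
from shapely.geometry import LineString, Polygon

def bfp_waypoints(poly: Polygon, theta: float, d: float):
    """Back-and-forth waypoints parallel to edge direction theta: sweep
    lines spaced d apart along the normal are clipped to the polygon and
    their endpoints chained in alternating (boustrophedon) order."""
    ux, uy = math.cos(theta), math.sin(theta)        # along-track direction
    nx, ny = -uy, ux                                 # sweep (normal) direction
    coords = list(poly.exterior.coords)
    n_offs = [nx * x + ny * y for x, y in coords]    # polygon extent along normal
    u_offs = [ux * x + uy * y for x, y in coords]    # polygon extent along track
    ulo, uhi = min(u_offs) - 1.0, max(u_offs) + 1.0
    wps, flip, o = [], False, min(n_offs) + d / 2
    while o <= max(n_offs):
        line = LineString([(nx * o + ulo * ux, ny * o + ulo * uy),
                           (nx * o + uhi * ux, ny * o + uhi * uy)])
        seg = poly.intersection(line)
        if not seg.is_empty and seg.geom_type == "LineString":
            a, b = seg.coords[0], seg.coords[-1]
            wps += [b, a] if flip else [a, b]
            flip = not flip
        o += d
    return wps

def best_search_direction(poly: Polygon, ps, pe, d: float):
    """Algorithm 1 (sketch): keep the edge direction minimizing
    |Ps -> F1| + L_R + |Fn -> Pe|."""
    coords = list(poly.exterior.coords)
    best = (math.inf, None, None)
    for (x1, y1), (x2, y2) in zip(coords, coords[1:]):
        theta = math.atan2(y2 - y1, x2 - x1)
        wps = bfp_waypoints(poly, theta, d)
        if not wps:
            continue
        total = (math.dist(ps, wps[0]) + path_length(wps)
                 + math.dist(wps[-1], pe))
        if total < best[0]:
            best = (total, theta, wps)
    return best   # (total length L, search direction θ, waypoints ω)
```

With the setup from Section 2.1, a call such as best_search_direction(region, p_start, p_end, d) returns the minimizing direction θ and its waypoint sequence ω.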

2.3 Deep Learning-Based Path Generation

From the obtained BFP path segmentation lines, we treat locating the intersections of the segmentation lines with the workspace edges as a regression problem, and a fully convolutional deep neural network is utilized to estimate the positions of these points. Taking into account the turning radius of the vehicle, as shown in Figure 3B, the global CCPP trajectory is predicted by the neural network (NN) from the input image. The input image with resolution M × N is first divided into an $I_M \times I_N$ grid map (Figure 3A); the grid map is h times smaller than the input image, so each grid cell contains h × h pixels. The confidence probability C(s) denotes the confidence that a waypoint lies in grid cell s: C(s) tends to zero when there is no point in cell s, while C(s) > T(c), where T(c) denotes the confidence threshold, indicates a probable point in cell s. In each cell, the location of the final CCPP trajectory point is further refined by $\delta_m$ and $\delta_n$ in accordance with the vehicle turning radius.
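To make the decoding step concrete, a sketch of how the output grid could be turned back into pixel waypoints (the exact offset scaling and the illustrated threshold are assumptions of this sketch):

```python
import numpy as np

def decode_waypoints(y: np.ndarray, h: int, t_c: float = 0.9):
    """Convert the output grid Y (I_M x I_N x 3) into pixel waypoints.

    Channel 0 holds C(s); channels 1-2 hold (delta_m, delta_n) in [-1, 1].
    A cell yields a waypoint only when C(s) > T(c); the waypoint is the
    cell centre shifted by half a cell times the offsets.
    """
    conf, dm, dn = y[..., 0], y[..., 1], y[..., 2]
    points = []
    for m, n in zip(*np.nonzero(conf > t_c)):
        row = (m + 0.5 + 0.5 * dm[m, n]) * h   # pixel row of the waypoint
        col = (n + 0.5 + 0.5 * dn[m, n]) * h   # pixel column of the waypoint
        points.append((row, col))
    return points
```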


FIGURE 3. (A) The occupancy grid map of the workspace. (B) The computation of the CCPP path waypoint location based on vehicle turning radius. (C) Overview of the deep learning architecture.

In the fully convolutional neural network (FCN), the input tensor $X^{(i)}$ is gradually convolved by a stack of n residual reduction modules, as shown in Figure 3C. Each module is composed of a series of two-dimensional convolutional layers with Mish as the activation function and a channel and spatial attention layer, allowing the network to highlight the more relevant features. In addition, each module ends with a convolutional layer with stride 2 to reduce the spatial dimensions of the input tensor. After the n residual reduction modules, the two spatial dimensions of the input tensor have been reduced by a factor of 2h; therefore, a transposed convolutional layer with a stride of 2 is inserted to obtain a two-dimensional output tensor of size $I_M \times I_N$ (refer to Figure 3). A residual connection from the output tensor of the (n − 1)th block is added to include important spatial information in the tensor before the last layer. Finally, similar to a single-stage object detection network, the output tensor $Y^{(i)}$ of shape $I_M \times I_N \times 3$ is computed by a 1 × 1 convolution, with sigmoid and tanh utilized as the activations of the first channel and the last two channels, respectively. The confidence probability C(s), obtained by the sigmoid, predicts the existence of a possible waypoint, whereas the tanh function, bounded between −1 and +1, produces the two coordinate offsets $\delta_m$ and $\delta_n$ of each cell.
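A compact PyTorch sketch of these two components; the attention layers are omitted, and the layer sizes as well as the module names (ReductionModule, WaypointHead) are assumptions of this sketch rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ReductionModule(nn.Module):
    """One residual reduction block: 3x3 convolutions with Mish activations,
    a residual connection, then a stride-2 convolution halving the spatial size."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.Mish(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.Mish())
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(x + self.body(x))

class WaypointHead(nn.Module):
    """Output stage: a 1x1 convolution to 3 channels, sigmoid on the
    confidence channel C(s), tanh on the two offset channels (δm, δn)."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)
        return torch.cat([torch.sigmoid(y[:, :1]), torch.tanh(y[:, 1:])], dim=1)
```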

3 Path Re-Joint and Obstacle Fusion

In the second layer, obstacles in the environment are considered on top of the deep learning-based coverage paths generated in the first layer.

3.1 Obstacle Detection and Approximation

In the field of autonomous vehicles, map information is highly important, especially for global path planning, as it determines the accuracy of the trajectory. However, in complex environments such as disaster sites, many obstacles are scattered or clustered in various places, which brings safety risks and computational difficulties to the path planning of autonomous vehicles. When acquiring a map of a disaster area or agricultural field through drones, we need to retrieve map information to obtain the specific locations of obstacles and to approximate and merge large numbers of obstacles into a few large convex obstacles, thereby improving the efficiency of search and exploration tasks. Especially at a disaster site, rescue time is limited, and it is important to quickly locate and approximate obstacles in the complex environment. Therefore, this section proposes an effective method for obstacle detection, approximation, and fusion.

3.1.1 Object Detection With Bounding Box

Many methods have been developed for object detection. The most commonly used include the single shot detector (SSD), the faster region-based convolutional neural network (Faster R-CNN), and the region-based fully convolutional network (R-FCN). When these deep learning CNN models perform object detection and classification, they produce a bounding box based on the object's shape. The bounding box provides the specific location of the object in the image together with its classification. In this study, we use Faster R-CNN as our object detection method, which has been proven to be efficient and accurate in many fields (Alzadjali et al., 2021).
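For illustration, such bounding boxes can be obtained from an off-the-shelf pretrained detector in torchvision; this is a stand-in setup (the paper's detector would be fine-tuned on aerial imagery of its own obstacle classes, and the 0.8 score cut-off is an assumption):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone (torchvision >= 0.13).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 800, 800)      # stand-in for a UAV frame in [0, 1]
with torch.no_grad():
    pred = model([image])[0]         # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.8          # assumed confidence cut-off
boxes = pred["boxes"][keep]          # (x1, y1, x2, y2) per detected obstacle
```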

3.1.2 Obstacle Approximation and Fusion

In this section, obstacles are approximated and merged into larger convex-shaped obstacles. Through object detection, we can obtain a large amount of information from the images, such as inaccessible and dangerous areas. The resulting map greatly assists the subsequent search and deployment of ground vehicles. However, excessive, unorganized obstacle information on the map increases the computational cost of vehicle path planning, especially with overlapping obstacles and many tiny obstacles that lie very close to each other. Therefore, it is essential to merge multiple tiny or overlapping obstacles into an approximation of the overall obstacle.

Finding the approximated obstacles amounts to finding the obstacles to be integrated in the area. For example, in Figure 4B, the trucks parked in the mining site are treated as a single larger obstacle in the environment. The red bounding boxes are generated by the object detection method before the approximation method merges multiple bounding boxes into convex obstacles enclosed by the blue lines. Unmanned aerial vehicles (UAVs) are particularly suitable for searching large-scale farms and dangerous areas (Hassler and Baysal-Gurel, 2019), and the detailed scene information generated from the photos of a drone's onboard camera has made great contributions to agriculture, search, and rescue (Alzadjali et al., 2021). To merge a large number of scattered or even overlapping bounding boxes in the image into an approximating convex obstacle, we need to select a suitable set of points from the bounding boxes.


FIGURE 4. (A) Schematic illustration of the faster region-based convolutional neural network (Faster R-CNN) object detection method. (B) The obstacle approximation for a real map. Red boxes are the bounding boxes for object detection, and the green boxes are the approximation of the objects.

Assuming that the image has been gathered and recognized, the points that approximate the obstacles will not lie in the interior of a bounding box; thus, only the corners of the bounding boxes need to be considered. We then define the four reference points as the leftmost ($R_l$), topmost ($R_t$), rightmost ($R_r$), and bottommost ($R_b$) points of the convex hull, as shown in Figure 5. The reference points are found by initially identifying the bounding boxes of the obstacles to be integrated in the area, then taking the midpoint of the bounding box of each boundary obstacle and expanding by half of the short side of the rectangular bounding box. These four reference points are extended to an axis-aligned rectangle ABCD, where A, B, C, and D are the intersections between the vertical line through one reference point and the horizontal line through another. The connection lines of the four reference points decompose the rectangle ABCD into four triangular sections, such as the top-left triangle $\Delta D R_l R_t$. The vertex points in the top-left, top-right, bottom-right, and bottom-left sections are denoted as $P_{tl}$, $P_{tr}$, $P_{br}$, and $P_{bl}$, respectively. Thus, the four reference points are also denoted as $R_l = P_{tl}^0$, $R_t = P_{tr}^0$, $R_r = P_{br}^0$, and $R_b = P_{bl}^0$. Different values of h, i, j, and k imply different numbers of vertices contained in the top-left, top-right, bottom-right, and bottom-left sections, respectively. Define the structure $P_{hijk}$ to be the set of vertices of the convex hull that runs from $R_l$ to $P_{tl}^{h}$ via a number of top-left corners of the bounding boxes with $m < h$. Thus, $P_{hijk}$ is computed in linear time using the structures $P_{mijk}$ for $m < h$:

$P_{hijk} = \max_{m < h} \left( P_{mijk} + \Delta P_{tl}^{m} P_{tl}^{h} R_l \right)$ (2)


FIGURE 5. The illustration of obstacle fusion. (A) The final obstacle fusion for a set of non-overlapping obstacles. (B) The obstacle fusion for a set of overlapping obstacles.

The initial convex polygon $R_l R_t R_r R_b$ is expanded with multiple triangles starting from the reference point $R_l$, and there are O(log n) structures to compute in linear time; therefore, the algorithm runs in O(n log n) time. Because the bounding boxes in the images may overlap, this algorithm is also applicable to overlapping bounding boxes, as shown in Figure 5B. Our obstacle approximation and fusion approach adaptively fuses the bounding boxes of the detected obstacles according to the size and shape of the vehicle to rule out gaps that are infeasible for the vehicle to pass through. Based on the proposed obstacle approximation and fusion method and the nature-inspired path planning method integrated with the re-joint mechanism, the autonomous vehicle takes less computational effort for optimal path planning on a map populated with obstacles.
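A simplified sketch of the fusion idea: cluster the boxes whose mutual gap is too narrow for the vehicle, then wrap each cluster in the convex hull of its corner points. This greedy quadratic-time clustering is a stand-in for the paper's O(n log n) reference-point construction, and scipy is an assumed helper:

```python
import math
import numpy as np
from scipy.spatial import ConvexHull

def boxes_close(a, b, gap: float) -> bool:
    """True when two (x1, y1, x2, y2) boxes overlap or their separation is
    below `gap` (e.g., the vehicle width), so they must be fused."""
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return math.hypot(dx, dy) < gap

def fuse_obstacles(boxes, gap: float):
    """Cluster mutually close boxes, then take one convex hull per cluster."""
    clusters = []
    for box in boxes:
        hit = [c for c in clusters
               if any(boxes_close(box, other, gap) for other in c)]
        for c in hit:
            clusters.remove(c)
        clusters.append([box] + [b for c in hit for b in c])
    hulls = []
    for c in clusters:
        corners = np.array([(x, y) for x1, y1, x2, y2 in c
                            for x, y in ((x1, y1), (x1, y2),
                                         (x2, y2), (x2, y1))])
        hulls.append(corners[ConvexHull(corners).vertices])  # hull vertices
    return hulls
```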

3.2 Bat Algorithm-Based Path Re-Joint

3.2.1 Bat Algorithm

The Bat algorithm (BA) is a nature-inspired, population-based meta-heuristic optimization algorithm (Yang, 2010). The search strategy of the BA is inspired by the social behavior of bats and their use of echolocation in foraging and avoiding obstacles. The echolocation process of bats is idealized as follows: 1) all bats apply echolocation to sense the distance between the current position and different sources, and all bats can distinguish food/prey from background barriers intelligently; 2) bats automatically adjust the wavelength and frequency of their emitted ultrasonic pulses while foraging; they fly randomly at position $X_i$ with velocity $V_i$, fixed frequency $Q_{min}$, and loudness $A_0$, and continuously adjust the pulse emission rate $R \in [0, 1]$ depending on the proximity to the target; 3) the loudness of the bats varies from a minimum positive constant $A_{min}$ to $A_0$. Hence, the update rules for the ith bat's frequency $Q_i$, velocity $V_i^{\tau}$, and new solution $X_i^{\tau}$ at time step τ are given by

$Q_i = Q_{min} + \zeta \left( Q_{max} - Q_{min} \right)$ (3)
$V_i^{\tau} = V_i^{\tau-1} + \left( X_i^{\tau-1} - X_{gbest} \right) Q_i$ (4)
$X_i^{\tau} = X_i^{\tau-1} + V_i^{\tau}$ (5)

where ζ denotes a randomly generated number within the interval [0, 1] and $X_{gbest}$ represents the current global best position, obtained by comparing the positions of all the bats. Because the bats also have speed limits, the velocity is bounded in $[V_{min}, V_{max}]$, where $V_{min} = -V_{max}$.

In order to achieve a balance between local and global search capabilities, a random walk procedure is performed in the local search with a certain probability. The new solution $X_{new}$ replacing the original solution $X_i^{\tau}$ is governed by

$X_{new} = X_i^{\tau} + \rho \bar{A}^{\tau}$ (6)

where $\rho \in [-1, 1]$ is a random number acting as a scaling factor that confines the random walk's step size, and $\bar{A}^{\tau}$ is the average loudness of all bats at time step τ. As bats approach their targets, the amplitude of their ultrasonic pulses decreases while the pulse rate increases; hence, the loudness $A_i^{\tau+1}$ and the pulse emission rate $R_i^{\tau+1}$ are updated as the iterations proceed:

$A_i^{\tau+1} = \gamma A_i^{\tau}$ (7)
$R_i^{\tau+1} = R_0 \left[ 1 - \exp\left( -\eta \tau \right) \right]$ (8)

where γ and η are positive constants, and $A_0$ and $R_0$ are the initial values of loudness and pulse rate, respectively.
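Putting Eqs. 3–8 together, a minimal BA sketch (γ, η, and the search bounds are assumed values here; the remaining defaults follow the parameter settings reported in Section 5):

```python
import numpy as np

def bat_algorithm(fitness, dim, n_bats=50, iters=200, q_min=0.1, q_max=0.75,
                  a0=0.1, r0=0.65, gamma=0.9, eta=0.9, bounds=(0.0, 1.0)):
    """Minimize `fitness` over a dim-dimensional box following Eqs. 3-8."""
    rng = np.random.default_rng()
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_bats, dim))       # bat positions X_i
    v = np.zeros((n_bats, dim))                  # bat velocities V_i
    loud = np.full(n_bats, a0)                   # loudness A_i
    rate = np.zeros(n_bats)                      # pulse emission rate R_i
    fit = np.array([fitness(xi) for xi in x])
    best = x[np.argmin(fit)].copy()
    for tau in range(1, iters + 1):
        for i in range(n_bats):
            q = q_min + rng.random() * (q_max - q_min)       # Eq. 3
            v[i] = v[i] + (x[i] - best) * q                  # Eq. 4
            cand = np.clip(x[i] + v[i], lo, hi)              # Eq. 5
            if rng.random() > rate[i]:
                # Local random walk around the best solution, Eq. 6.
                cand = np.clip(best + rng.uniform(-1, 1, dim) * loud.mean(),
                               lo, hi)
            f = fitness(cand)
            if f <= fit[i] and rng.random() < loud[i]:       # accept solution
                x[i], fit[i] = cand, f
                loud[i] = gamma * loud[i]                    # Eq. 7
                rate[i] = r0 * (1 - np.exp(-eta * tau))      # Eq. 8
        best = x[np.argmin(fit)].copy()
    return best, float(fit.min())
```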

3.2.2 Obstacle Avoidance and Path Re-Joint

In order to fulfill a high degree of autonomy in autonomous vehicle navigation, environment modeling or map construction is necessary to enable autonomous vehicles to generate collision-free trajectories. Therefore, in this section, the BA is utilized to perform autonomous navigation of vehicles in the grid-based environment; in particular, the vehicles re-plan when they traverse the vicinity of obstacles. The grid map is composed of equal-sized grids and is referenced to the generated CCPP path of Section 2. It should be noted that grids occupied by obstacles are inaccessible areas in Figure 6. When an obstacle is present in front of the vehicle, the current grid is defined as the initial point S, and the next unoccupied grid on the CCPP path is defined as the target point T. Then, the re-joint path P is defined by the initial point S, the target point T, and n waypoints between them:

$P = \left\{ S, wp_1, wp_2, \ldots, wp_n, T \right\}$ (9)


FIGURE 6. The illustration of the path re-joint mechanism.

Each point is defined by its grid coordinates (x, y), and the center of the grid pixel is regarded as a grid point. Path length is defined by the sum of the Euclidean distance between two adjacent points on the trajectory:

$L_P = \sum_{i=0}^{n} \sqrt{\left(x_{wp_{i+1}} - x_{wp_i}\right)^2 + \left(y_{wp_{i+1}} - y_{wp_i}\right)^2}$ (10)

where $wp_0$ and $wp_{n+1}$ denote the starting and destination points. The BA is utilized to cut down the length of the point-to-point navigation. The trajectory is established between two points, which can be selected from the grid centroids of the decomposed workspace. Each point is recursively connected with the remaining points, whereas connection lines passing through obstacles are assigned infinite distances. As a result, point-to-point connections blocked by obstacles are excluded, and the feasible solutions are retained. The shortest paths between each pair of points are selected from these feasible solutions (Figure 6). The procedure of the proposed deep learning-based CCPP is summarized in Algorithm 2.

Algorithm 2. Procedure of proposed deep learning-based CCPP.

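A sketch of a fitness function that the BA of Section 3.2.1 could minimize for the re-joint segment; the waypoint count, the straight-line occupancy sampling, and the toy grid are all assumptions of this sketch, not the paper's exact formulation:

```python
import math
import numpy as np

def make_rejoint_fitness(start, target, grid, n_wp=2):
    """Build a BA fitness: a candidate vector encodes n_wp waypoints
    (x1, y1, x2, y2, ...) in grid coordinates. Eq. 10 gives the length;
    any segment crossing an occupied cell is penalized with infinity."""
    def blocked(a, b):
        steps = max(2, int(math.dist(a, b)) * 2)    # sub-cell sampling
        for t in np.linspace(0.0, 1.0, steps):
            x = a[0] + t * (b[0] - a[0])
            y = a[1] + t * (b[1] - a[1])
            if grid[int(round(y)), int(round(x))]:  # 1 marks an obstacle
                return True
        return False

    def fitness(vec):
        pts = [start] + list(zip(vec[0::2], vec[1::2])) + [target]
        if any(blocked(p, q) for p, q in zip(pts, pts[1:])):
            return math.inf
        return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))  # Eq. 10
    return fitness

# Usage with the bat_algorithm sketch: re-join from S to T on a 20 x 20 grid.
grid = np.zeros((20, 20), dtype=int)
grid[8:12, 5:15] = 1                                # a block of fused obstacles
f = make_rejoint_fitness((2.0, 2.0), (18.0, 18.0), grid, n_wp=2)
waypoints, length = bat_algorithm(f, dim=4, bounds=(0.0, 19.0))
```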

4 Real-Time Navigation of Autonomous Vehicles

In the third layer, once the coverage trajectories are planned, a velocity-based local reactive navigator with mapping capability is used to avoid moving obstacles while locally constructing an environmental map. The environment of autonomous vehicle navigation is dynamic, including both static and moving obstacles. A local navigator reacts only to its local environment at any moment in time and aims to create velocity commands that drive the autonomous vehicle toward a destination; examples include the dynamic window approach of Fox et al. (1997) and the vector field histogram of Borenstein and Koren (1991). Including a sequence of bread crumbs as local waypoints in the path planning, which decomposes the coverage trajectories into a sequence of segments, makes the model particularly efficient for environments densely populated by obstacles. In this study, the velocity obstacle approach (VOA) for real-time autonomous vehicle navigation is utilized as our LIDAR-based local navigator (Fiorini and Shiller, 1998). The required information is the other sensed agents' current positions, velocities, and exact shapes. The velocity obstacle (VO) is defined as follows.

Δ represents the autonomous vehicle to be navigated, and M and N represent dynamic obstacles moving in the environment. Let $P_\Delta$, $P_M$, and $P_N$ denote the current positions of the autonomous vehicle Δ and the dynamic obstacles M and N, respectively. Similarly, $V_\Delta$, $V_M$, and $V_N$ denote their current velocities. The autonomous vehicle has a fixed radius $R_\Delta$, a goal located at $P_\Delta^{goal}$, and a preferred speed $V_\Delta^{pref}$ according to the road condition. To compute the velocity obstacle (VO), Δ, M, and N are mapped into the configuration space: the autonomous vehicle Δ is shrunk to a point while the obstacles M and N are expanded by the radius of Δ. The VO can be interpreted geometrically in Figure 7A. The VO of autonomous vehicle Δ induced by dynamic obstacle M, written as $VO_{\Delta|M}$, is the set of all velocities of Δ resulting in a collision between Δ and M at some moment in time, assuming that M maintains its velocity $V_M$. Let $P \oplus Q$ represent the Minkowski sum of two objects P and Q, and let −P represent the object P reflected in its reference point:

$P \oplus Q = \left\{ p + q \mid p \in P,\ q \in Q \right\}, \qquad -P = \left\{ -p \mid p \in P \right\}$ (11)


FIGURE 7. The illustration of velocity obstacle approach (VOA) in our model. (A) Velocity obstacles VOΔ|M and VOΔ|N for moving obstacles M and N. (B) Admissible velocity set (AS) and collision-free velocity set (FS).

Let λ(P,V) represent a ray starting at position P and heading in the direction of velocity V:

$\lambda\left(P, V\right) = \left\{ P + tV \mid t \geq 0 \right\}$ (12)

As shown in Figure 7A, $\lambda(P_\Delta, V_\Delta - V_M)$ is the ray starting from $P_\Delta$ and heading in the direction of the relative velocity $V_\Delta - V_M$. If this ray intersects the Minkowski sum of M and −Δ centered at $P_M$, then the velocity $V_\Delta$ is in the VO of M. It follows that if Δ chooses a velocity inside $VO_{\Delta|M}$ or $VO_{\Delta|N}$, then Δ will collide with M or N at some point in time; if the chosen velocity lies outside both $VO_{\Delta|M}$ and $VO_{\Delta|N}$, such a collision will never occur. Therefore, the VO of M with respect to Δ can be represented as

$VO_{\Delta|M}\left(V_M\right) = \left\{ V_\Delta \mid \lambda\left(P_\Delta, V_\Delta - V_M\right) \cap \left(M \oplus -\Delta\right) \neq \emptyset \right\}$ (13)

The kinematic and dynamic constraints of the autonomous vehicle restrict the admissible set of new velocities given the current velocity $V_\Delta$; this set is denoted $AS(V_\Delta)$. Depending on the conditions of the autonomous vehicle, such as maximum speed and maximum acceleration, $AS(V_\Delta)$ can have any shape. In Figure 7B, an arylide yellow rectangle represents the admissible velocity set for the current velocity $V_\Delta$. In each planning cycle, the reactive navigator selects a velocity that lies outside all velocity obstacles caused by moving obstacles. As shown in Figure 7B, the maroon areas form the collision-free velocity set $FS(V_\Delta)$ in which the autonomous vehicle avoids the moving obstacles M and N.
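As a concrete illustration of Eqs. 12 and 13 for disc-shaped agents, a small sketch; the sampling of the admissible set AS(V_Δ) into a candidate list is an assumption of this sketch:

```python
import math

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r_sum: float) -> bool:
    """True when v_a lies in the VO: the ray λ(p_a, v_a - v_b) of Eq. 12
    hits the disc M ⊕ -Δ of radius r_sum centred at p_b (Eq. 13)."""
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]   # offset to expanded obstacle
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]   # relative velocity
    if rx * rx + ry * ry <= r_sum * r_sum:
        return True                              # already overlapping
    vv = vx * vx + vy * vy
    if vv == 0.0:
        return False                             # no relative motion
    t = (rx * vx + ry * vy) / vv                 # time of closest approach
    if t < 0.0:
        return False                             # moving apart, ray points away
    cx, cy = rx - t * vx, ry - t * vy
    return cx * cx + cy * cy <= r_sum * r_sum    # passes within r_sum

def pick_velocity(p_a, candidates, obstacles, v_pref):
    """From sampled admissible velocities AS(V_Δ), keep the collision-free
    set FS(V_Δ) and return the member closest to the preferred velocity."""
    free = [v for v in candidates
            if not any(in_velocity_obstacle(p_a, v, p_b, v_b, r)
                       for p_b, v_b, r in obstacles)]
    return min(free, key=lambda v: math.dist(v, v_pref), default=None)
```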

Our approach uses both the current positions and velocities of the other moving obstacles to compute collision-free trajectories, and it accounts for uncertainty in obstacle radius, position, and velocity, as well as the dynamics and kinematics of the vehicles. The proposed velocity-based local navigator avoids unforeseen moving obstacles on the planned trajectory and re-joins the previously planned route after the vehicle traverses the vicinity of the obstacle. Furthermore, each layer takes the results of the previous layer as a reference to decrease the computational effort.

5 Simulated Experiments and Results

In this section, two simulation studies are conducted to validate the feasibility and merit of the proposed framework. The first simulation investigates the CCPP obtained by the deep learning method. The second simulation, using more detailed images obtained by drones, undertakes obstacle avoidance and path re-joining; moreover, onboard LIDAR is utilized to identify moving obstacles in the environment. The parameters of the proposed framework are listed below. For the CCPP deep learning training, 3000 environment maps with polygonal search areas are utilized, with a resolution of 1000 × 1000 and h = 10. The prediction made by the NN for each image is based on a spatial dimension of 100 × 100. The parameters of the BA are set as follows: $A_0 = 0.1$, $R_0 = 0.65$, $Q_{min} = 0.1$, and $Q_{max} = 0.75$. Each algorithm runs 200 times per case, and the population size is 50.

5.1 Simulation and Comparative Studies in CCPP Without Obstacles

To benchmark the proposed model, we compare the proposed CCPP method with the well-known Boustrophedon Cellular Decomposition (BCD) method (Acar and Choset, 2002). In this section, the map is a satellite image of the North Farm of Mississippi State University, as shown in Figure 8. The starting and target points are randomly selected, and differently shaped target areas are selected for the coverage searches. The edges of the targeted areas in the three scenarios are highlighted in yellow, blue, and green in Figure 8; the starting and target points are represented by squares and stars, respectively. The exploration range d of the autonomous vehicle is set to 5 m. The shape of the targeted areas and the starting and target positions of the autonomous vehicle are considered, and the resulting search directions of the proposed CCPP in the three scenarios are shown in Figures 9A–C, respectively.


FIGURE 8. Real-world satellite images taken from Google Maps. Three targeted areas with specific starting and target points are assigned to CCPP (image from Mississippi State University North Farm).


FIGURE 9. Real-world scenarios of autonomous vehicle CCPP trajectories in Figure 8. (A–C) Trajectories generated by proposed CCPP method. (D–F) Trajectories generated by Boustrophedon Cellular Decomposition (BCD) method.

In all three scenarios, the proposed CCPP method yields a shorter path length for the coverage path within the targeted areas, for the path connecting the starting and target points, and for the final total path. The trajectories of the proposed CCPP method are shown in Figures 9A–C, and the trajectories of the BCD method in Figures 9D–F. The comparative studies are summarized in Table 1.


TABLE 1. Performance analysis of the proposed CCPP method with Boustrophedon Cellular Decomposition (BCD) method (Acar and Choset, 2002) under different scenarios.

The average precision (AP) metric is utilized to evaluate the training results. A total of 3000 synthetic images with a resolution of 800 × 800 are utilized for training, with h = 8; the network is then evaluated with 1000 synthetic images. The network is trained for 200 epochs using the Adam optimizer, with a learning rate of 3e-4 and a batch size of 16. A turning-point prediction within the selected distance of a ground truth is counted as a true positive (TP); when several predictions fall within the selected range, only one is counted as a TP and all others as false positives (FP). All ground truths not covered by a prediction are counted as false negatives (FN). Because different confidence thresholds yield different recall and precision values, and the AP is computed from the common definitions of recall and precision, we vary the threshold from 0 to 1 with a step size of 0.1. The results obtained by modifying the threshold show that recall and precision are inversely related. The final confidence threshold is set to 0.9. At a distance range of 8 pixels, the average precision equals 0.9735.
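The matching rule above can be written down directly; in this sketch the greedy one-to-one matching order is an assumption:

```python
import math

def precision_recall(pred, truth, dist_px=8.0):
    """Match predictions to ground truths within dist_px pixels: the first
    prediction near an unmatched ground truth is a TP, surplus predictions
    are FPs, and unmatched ground truths are FNs."""
    matched, tp, fp = set(), 0, 0
    for p in pred:
        hit = next((i for i, g in enumerate(truth)
                    if i not in matched and math.dist(p, g) <= dist_px), None)
        if hit is None:
            fp += 1
        else:
            matched.add(hit)
            tp += 1
    fn = len(truth) - len(matched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Sweeping the confidence threshold from 0 to 1 in steps of 0.1 and recomputing these pairs traces the recall-precision curve from which the AP is computed.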

Consequently, through the deep learning method, the turning points of the autonomous vehicle are generated and the final CCPP paths are obtained, as shown in Figures 10A,B. The neural network training and testing procedure is similar to that of the Deepway model (Mazzia et al., 2021). However, the Deepway model relies on identifying row-based crops and manually sorting the order of waypoints to generate the final CCPP result, which remarkably limits its usage scenarios and requires additional labor time. Our proposed model extends the range of usage to arbitrary environments with arbitrarily shaped search areas and considers the relative positions of the autonomous vehicle to obtain the optimal CCPP path, as shown in Figure 10.


FIGURE 10. Illustration of the final trajectory in light of the deep learning-based CCPP. (A) The final deep learning-based CCPP path regarding the targeted area in Figure 9B. (B) The final deep learning-based CCPP path regarding the targeted area in Figure 9C.

5.2 CCPP Amid Stationary and Dynamic Obstacles

In this section, simulation studies are carried out to validate the second and third layers of the proposed framework, which handle CCPP re-joining and obstacle avoidance in environments with stationary and dynamic obstacles. Because the environment is relatively large, a part of the map is cropped to better show the re-joining path of the autonomous vehicle. More detailed images are obtained from the drones, as shown in Figure 11A, in which haystacks and trucks are detected as obstacles. The obstacles are then approximated and merged into a grid-based map. The CCPP with obstacle avoidance uses the CCPP path obtained in the first layer as a reference to decrease the computational effort. The proposed BA-based obstacle avoidance method is only triggered when an obstacle is present in front of the vehicle: the grid at the current position is taken as the starting point, and the next grid on the CCPP reference path unoccupied by obstacles is taken as the target point for planning a collision-free trajectory. The proposed method does not need to recalculate the complete map and can flexibly adapt to alterations of obstacles in the map. The CCPP trajectory with static obstacle avoidance is shown in Figure 11B. When there are unknown and moving obstacles in the environment, such as the trucks in Figure 11, the autonomous vehicle can still rely on the onboard LIDAR to dynamically avoid obstacles and return to the original coverage path to search the entire environment. Two specific maneuvers of the autonomous vehicle avoiding moving trucks are shown in Figure 12. The autonomous vehicle, the first truck, and the second truck are represented by dark blue circles, yellow rectangles, and light blue rectangles, respectively. The autonomous vehicle performs dynamic avoidance twice for the first truck, successfully avoiding it and returning to the planned CCPP trajectory. The second truck first stops at its original position, and the vehicle avoids it according to the obstacle avoidance path planned by the second layer of the framework; during the return, the truck starts to move, and the vehicle still avoids the updated truck position. These results demonstrate that the proposed CCPP framework is effective and efficient for coverage navigation in real-world applications.


FIGURE 11. (A) Real-world images taken from UAVs. (B) Re-joint and collision-free CCPP trajectories.


FIGURE 12. Illustration of autonomous vehicle CCPP navigation with unknown and moving obstacles in the real-world environment. The dashed boxes depict the avoidance of the moving trucks.

6 Conclusion and Future Work

A new framework that tackles environment mapping, path generation, CCPP, and dynamic obstacle avoidance in a hierarchical manner has been proposed. The proposed framework comprises three layers that advance accurately and efficiently based on environmental information; each layer treats the results of the previous layer as a reference to reduce the computational effort. Simulation studies validated the effectiveness and robustness of the proposed framework. We are working on a ROS-based sensor configuration and an implementation of the proposed model on an actual mobile robot. The sensor suite being integrated includes five components: a camera, a Hokuyo LIDAR, a differential global positioning system, a digital compass, and an inertial measurement unit.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors would like to thank the editor-in-chief, the associate editor, and the reviewers for their valuable comments and suggestions, which improved the manuscript.

References

Acar, E. U., and Choset, H. (2002). Sensor-based Coverage of Unknown Environments: Incremental Construction of morse Decompositions. Int. J. Robotics Res. 21, 345–366. doi:10.1177/027836402320556368


Alzadjali, A., Alali, M. H., Sivakumar, A. N. V., Deogun, J. S., Scott, S., Schnable, J. C., et al. (2021). Maize Tassel Detection from UAV Imagery Using Deep Learning. Front. Robotics AI 8, 136. doi:10.3389/frobt.2021.600410


An, V., Qu, Z., Crosby, F., Roberts, R., and An, V. (2018). A Triangulation-Based Coverage Path Planning. IEEE Trans. Syst. Man, Cybernetics: Syst. 50, 2157–2169.


Arkin, E. M., Fekete, S. P., and Mitchell, J. S. B. (2000). Approximation Algorithms for Lawn Mowing and Milling. Comput. Geometry 17, 25–50. doi:10.1016/s0925-7721(00)00015-8


Borenstein, J., and Koren, Y. (1991). The Vector Field Histogram-Fast Obstacle Avoidance for mobile Robots. IEEE Trans. Robot. Automat. 7, 278–288. doi:10.1109/70.88137


Carrillo-Zapata, D., Milner, E., Hird, J., Tzoumas, G., Vardanega, P. J., Sooriyabandara, M., et al. (2020). Mutual Shaping in Swarm Robotics: User Studies in Fire and rescue, Storage Organization, and Bridge Inspection. Front. Robot. AI 7, 53. doi:10.3389/frobt.2020.00053


Cèsar-Tondreau, B., Warnell, G., Stump, E., Kochersberger, K., and Waytowich, N. R. (2021). Improving Autonomous Robotic Navigation Using Imitation Learning. Front. Robotics AI 8, 46.


Deng, L., Ma, X., Gu, J., Li, Y., Xu, Z., and Wang, Y. (2016). Artificial Immune Network-Based Multi-Robot Formation Path Planning with Obstacle Avoidance. Int. J. Robotics Automation 31, 233–242. doi:10.2316/journal.206.2016.3.206-4746


Ewerton, M., Arenz, O., Maeda, G., Koert, D., Kolev, Z., Takahashi, M., et al. (2019). Learning Trajectory Distributions for Assisted Teleoperation and Path Planning. Front. Robot. AI 6, 89. doi:10.3389/frobt.2019.00089


Fiorini, P., and Shiller, Z. (1998). Motion Planning in Dynamic Environments Using Velocity Obstacles. Int. J. Robotics Res. 17, 760–772. doi:10.1177/027836499801700706


Fox, D., Burgard, W., and Thrun, S. (1997). The Dynamic Window Approach to Collision Avoidance. IEEE Robot. Automat. Mag. 4, 23–33. doi:10.1109/100.580977


Graves, R., and Chakraborty, S. (2018). A Linear Objective Function-Based Heuristic for Robotic Exploration of Unknown Polygonal Environments. Front. Robot. AI 5, 19. doi:10.3389/frobt.2018.00019


Hassler, S. C., and Baysal-Gurel, F. (2019). Unmanned Aircraft System (UAS) Technology and Applications in Agriculture. Agronomy 9, 618. doi:10.3390/agronomy9100618


Iqbal, J., Xu, R., Halloran, H., and Li, C. (2020). Development of a Multi-Purpose Autonomous Differential Drive mobile Robot for Plant Phenotyping and Soil Sensing. Electronics 9, 1550. doi:10.3390/electronics9091550


Jiang, M., Chi, G., Pan, G., Guo, S., and Tan, K. C. (2020). Evolutionary Gait Transfer of Multi-Legged Robots in Complex Terrains. arXiv preprint arXiv:2012.13320.


Lee, S.-M., Kim, H., Myung, H., and Yao, X. (2014). Cooperative Coevolutionary Algorithm-Based Model Predictive Control Guaranteeing Stability of Multirobot Formation. IEEE Trans. Control. Syst. Technol. 23, 37–51.


Lei, T., Luo, C., Ball, J. E., and Bi, Z. (2020a). “A Hybrid Fireworks Algorithm to Navigation and Mapping,” in Handbook of Research on Fireworks Algorithms and Swarm Intelligence (Pennsylvania, United States: IGI Global), 213–232. doi:10.4018/978-1-7998-1659-1.ch010


Lei, T., Luo, C., Ball, J. E., and Rahimi, S. (2020b). “A Graph-Based Ant-like Approach to Optimal Path Planning,” in 2020 IEEE Congress on Evolutionary Computation (CEC), 1–6. doi:10.1109/cec48606.2020.9185628


Lei, T., Luo, C., Jan, G. E., and Fung, K. (2019). “Variable Speed Robot Navigation by an ACO Approach,” in International Conference on Swarm Intelligence (Berlin, Germany: Springer), 232–242. doi:10.1007/978-3-030-26369-0_22


Lei, T., Luo, C., Sellers, T., and Rahimi, S. (2021). A Bat-pigeon Algorithm to Crack Detection-Enabled Autonomous Vehicle Navigation and Mapping. Intell. Syst. Appl. 12, 200053. doi:10.1016/j.iswa.2021.200053


Li, Y., Chen, H., Joo Er, M., and Wang, X. (2011). Coverage Path Planning for UAVs Based on Enhanced Exact Cellular Decomposition Method. Mechatronics 21, 876–885. doi:10.1016/j.mechatronics.2010.10.009


Li, Y., Cui, R., Li, Z., and Xu, D. (2018). Neural Network Approximation Based Near-Optimal Motion Planning with Kinodynamic Constraints Using RRT. IEEE Trans. Ind. Electron. 65, 8718–8729. doi:10.1109/tie.2018.2816000


Luo, C., and Yang, S. X. (2008). A Bioinspired Neural Network for Real-Time Concurrent Map Building and Complete Coverage Robot Navigation in Unknown Environments. IEEE Trans. Neural Netw. 19, 1279–1298. doi:10.1109/tnn.2008.2000394


Luo, C., Yang, S. X., Li, X., and Meng, M. Q.-H. (2016). Neural-dynamics-driven Complete Area Coverage Navigation through Cooperation of Multiple mobile Robots. IEEE Trans. Ind. Electron. 64, 750–760.


Mazzia, V., Salvetti, F., Aghi, D., and Chiaberge, M. (2021). Deepway: A Deep Learning Waypoint Estimator for Global Path Generation. Comput. Electron. Agric. 184, 106091. doi:10.1016/j.compag.2021.106091


Meng, M. Q.-H. (2021). Bridging AI to Robotics via Biomimetics. Biomimetic Intelligence and Robotics 1, 100006. doi:10.1016/j.birob.2021.100006


Nasirian, B., Mehrandezh, M., and Janabi-Sharifi, F. (2021). Efficient Coverage Path Planning for mobile Disinfecting Robots Using Graph-Based Representation of Environment. Front. Robotics AI 8, 4. doi:10.3389/frobt.2021.624333


Niyaz, S., Kuntz, A., Salzman, O., Alterovitz, R., and Srinivasa, S. S. (2019). Optimizing Motion-Planning Problem Setup via Bounded Evaluation with Application to Following Surgical Trajectories. Rep. U S 2019, 1355–1362. doi:10.1109/IROS40897.2019.8968575


Poma, X. S., Riba, E., and Sappa, A. (2020). “Dense Extreme Inception Network: Towards a Robust Cnn Model for Edge Detection,” in IEEE/CVF Winter Conference on Applications of Computer Vision, 1923–1932.


Poonawala, H. A., and Spong, M. W. (2017). Time-optimal Velocity Tracking Control for Differential Drive Robots. Automatica 85, 153–157. doi:10.1016/j.automatica.2017.07.038


Quin, P., Nguyen, D. D. K., Vu, T. L., Alempijevic, A., and Paul, G. (2021). Approaches for Efficiently Detecting Frontier Cells in Robotics Exploration. Front. Robot. AI 8, 1. doi:10.3389/frobt.2021.616470


Rawashdeh, N. A., Bos, J. P., and Abu-Alrub, N. J. (2021). “Drivable Path Detection Using CNN Sensor Fusion for Autonomous Driving in the Snow,” in Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021. Vol. 11748, 1174806. doi:10.1117/12.2587993


Rose, D. C., and Chilvers, J. (2018). Agriculture 4.0: Broadening Responsible Innovation in an Era of Smart Farming. Front. Sustain. Food Syst. 2, 87. doi:10.3389/fsufs.2018.00087


Segato, A., Pieri, V., Favaro, A., Riva, M., Falini, A., De Momi, E., et al. (2019). Automated Steerable Path Planning for Deep Brain Stimulation Safeguarding Fiber Tracts and Deep gray Matter Nuclei. Front. Robot. AI 6, 70. doi:10.3389/frobt.2019.00070


Stolfi, D. H., Brust, M. R., Danoy, G., and Bouvry, P. (2021). UAV-UGV-UMV Multi-Swarms for Cooperative Surveillance. Front. Robotics AI 8, 5. doi:10.3389/frobt.2021.616950


Sun, B., Zhu, D., Tian, C., and Luo, C. (2018). Complete Coverage Autonomous Underwater Vehicles Path Planning Based on Glasius Bio-Inspired Neural Network Algorithm for Discrete and Centralized Programming. IEEE Trans. Cogn. Dev. Syst. 11, 73–84.


Sünderhauf, N., Brock, O., Scheirer, W., Hadsell, R., Fox, D., Leitner, J., et al. (2018). The Limits and Potentials of Deep Learning for Robotics. Int. J. Robotics Res. 37, 405–420.


Torres, M., Pelta, D. A., Verdegay, J. L., and Torres, J. C. (2016). Coverage Path Planning with Unmanned Aerial Vehicles for 3D Terrain Reconstruction. Expert Syst. Appl. 55, 441–451. doi:10.1016/j.eswa.2016.02.007


Valiente, R., Zaman, M., Fallah, Y. P., and Ozer, S. (2020). “Connected and Autonomous Vehicles in the Deep Learning Era: A Case Study on Computer-Guided Steering,” in Handbook of Pattern Recognition and Computer Vision (Singapore: World Scientific), 365–384. doi:10.1142/9789811211072_0019


Wagner, M. P., and Oppelt, N. (2020). Extracting Agricultural fields from Remote Sensing Imagery Using Graph-Based Growing Contours. Remote Sensing 12, 1205. doi:10.3390/rs12071205


Wang, C., Chi, W., Sun, Y., and Meng, M. Q.-H. (2019). Autonomous Robotic Exploration by Incremental Road Map Construction. IEEE Trans. Automat. Sci. Eng. 16, 1720–1731. doi:10.1109/tase.2019.2894748


Wang, J., Chi, W., Li, C., and Meng, M. Q.-H. (2021). “Efficient Robot Motion Planning Using Bidirectional-Unidirectional RRT Extend Function,” in IEEE Transactions on Automation Science and Engineering. doi:10.1109/tase.2021.3130372


Xie, J., Jin, L., and Garcia Carrillo, L. R. (2019). “Optimal Path Planning for Unmanned Aerial Systems to Cover Multiple Regions,” in AIAA Scitech 2019 Forum, 1794. doi:10.2514/6.2019-1794


Xiong, N., Zhou, X., Yang, X., Xiang, Y., and Ma, J. (2021). Mobile Robot Path Planning Based on Time Taboo Ant colony Optimization in Dynamic Environment. Front. Neurorobot 15, 642733. doi:10.3389/fnbot.2021.642733


Yang, S. X., and Luo, C. (2004). A Neural Network Approach to Complete Coverage Path Planning. IEEE Trans. Syst. Man. Cybern. B 34, 718–724. doi:10.1109/tsmcb.2003.811769


Yang, X.-S. (2010). “A New Metaheuristic Bat-Inspired Algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010) (Berlin, Germany: Springer), 65–74. doi:10.1007/978-3-642-12538-6_6


Zhou, J., Chen, P., Liu, H., Gu, J., Zhang, H., Chen, H., et al. (2019). “Improved Path Planning for mobile Robot Based on Firefly Algorithm,” in 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2885–2889. doi:10.1109/robio49542.2019.8961442


Zhu, D., Cao, X., Sun, B., and Luo, C. (2017). Biologically Inspired Self-Organizing Map Applied to Task Assignment and Path Planning of an AUV System. IEEE Trans. Cogn. Develop. Syst. 10, 304–313.


Keywords: Deep learning-based path generation, complete coverage path planning, obstacle approximation and fusion, nature-inspired path planning, velocity-based local navigator, re-joint paradigm

Citation: Lei T, Luo C, Jan GE and Bi Z (2022) Deep Learning-Based Complete Coverage Path Planning With Re-Joint and Obstacle Fusion Paradigm. Front. Robot. AI 9:843816. doi: 10.3389/frobt.2022.843816

Received: 27 December 2021; Accepted: 11 February 2022;
Published: 22 March 2022.

Edited by:

Jason Gu, Dalhousie University, Canada

Reviewed by:

Chaoqun Wang, Shandong University, China
Guoming Li, Iowa State University, United States

Copyright © 2022 Lei, Luo, Jan and Bi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Chaomin Luo, Chaomin.Luo@ece.msstate.edu
