ORIGINAL RESEARCH article

Front. Built Environ., 21 February 2022

Sec. Structural Engineering and Design

Volume 8 - 2022 | https://doi.org/10.3389/fbuil.2022.855112

Predicting the Response of Laminated Composite Beams: A Comparison of Machine Learning Algorithms

  • 1. Department of Mathematics, University of Patras, Patras, Greece

  • 2. Department of Civil Engineering, University of West Attica, Athens, Greece

Abstract

A comparative study of machine learning regression algorithms for predicting the deflection of laminated composite beams is presented herein. The problem of the scarcity of experimental data is addressed with ample numerically generated data, which are necessary for the training, validation, and testing of the algorithms. To this end, the pertinent geometric and material properties of the beam are discretized appropriately, and a refined higher-order beam theory is employed for the accurate evaluation of the deflection in each case. The results indicate that the Extra-Trees algorithm performs best, demonstrating excellent predictive capabilities.

Introduction

Beams as structural components are crucial in many structural systems. The prediction of their deflection is essential since excessive values can lead to the structural system losing its operational serviceability (Serviceability Limit State—SLS). On the other hand, composite materials are increasingly used in structural engineering due to their enhanced stiffness combined with reduced weight. Several shear deformation theories have been developed so far to evaluate the response of thin, moderately thick, or deep beams. They fall into three main categories: the Euler-Bernoulli beam theory (or Classical Beam Theory—CBT), the Timoshenko beam theory (or First Order Beam Theory—FOBT) and the Higher-Order Beam Theories (HOBTs). CBT is applicable for thin beams with no shear effect. In the FOBT, a constant state of transverse shear strain is assumed that does not satisfy the zero shear stress condition at the top and bottom edges of the beam and thus requires a shear correction factor to compensate for this error (see, e.g., Wang et al., 2000; Eisenberger, 2003; Civalek and Kiracioglu, 2010; Lin and Zhang, 2011; Endo, 2016). In general, the HOBTs adopt a specific function (parabolic, trigonometric, exponential, or hyperbolic) to more accurately represent the shear stress distribution along the beam’s thickness and do not require the shear correction factor (see e.g., Reddy, 1984; Heyliger and Reddy, 1988; Khdeir and Reddy, 1997; Murthy et al., 2005; Vo and Thai, 2012; Pawar et al., 2015; Nguyen et al., 2017; Srinivasan et al., 2019). The literature contains a plethora of publications on the subject, and the interested reader is referred to the excellent review paper of Liew et al. (2019). In this investigation, a refined higher-order beam theory is utilized for the analysis of laminated composite beams based on Reddy-Bickford’s third-order beam theory (Wang et al., 2000) which was derived independently by Bickford (1982) and Reddy (1984).

Utilizing higher-order beam theories for more accurate analyses entails a significant increase in complexity as compared to low-order theories, as the latter are mathematically simpler and more widely used. The main motivation of this work is to bridge this gap and provide a simple computational tool to allow for the fast design of beams while keeping the best of both worlds, i.e., the more accurate results of a refined high-order theory and the ease of application of the low-order theories. In order to achieve that, the geometric and material variables are discretized within fairly wide, yet reasonable ranges. After applying the high-order analyses, the results are collected, tabulated, and used as input for multiple machine learning algorithms, i.e., regression models. These models provide a fast and easy-to-use computational tool that can be used for preliminary design and optimization. Regression analysis also yields important insights regarding the performance of each model, the effect of boundary conditions, and the relative importance of each input variable for the problem at hand.

The rest of the paper is organized as follows. A theoretical formulation of the problem is carried out and explained in detail next, followed by a summary of the regression methods utilized in this work. The numerical results are presented next, along with their discussion. Finally, the conclusions drawn based on the findings of this work are presented.

Theoretical Formulation

Consider an elastic symmetric cross-ply laminated rectangular beam of cross-section b × h and length L, with x being the axial coordinate and z being the coordinate along the thickness of the beam. The fibers of each ply are aligned at an angle θ with respect to the x axis (see Figure 1).

FIGURE 1

The beam is subjected to a transverse distributed loading q(x). Based on the higher-order theory for laminated composite plates introduced by Reddy (1984), the displacement field of an arbitrary point of the beam cross-section is given by

u_1(x,z) = z\varphi(x) - \frac{4z^3}{3h^2}\left[\varphi(x) + \frac{dw}{dx}\right] \quad (1)

u_2(x,z) = 0 \quad (2)

u_3(x,z) = w(x) \quad (3)

where w is the transverse displacement of the midplane (z = 0); \varphi is the rotation of a normal to the midplane, and x, z are the axial and thickness coordinates of the beam.

Splitting the transverse displacement into a bending and a shear component, i.e., (Vo and Thai, 2012)

w(x) = w_b(x) + w_s(x)

and introducing the transformation

\varphi(x) = -\frac{dw_b}{dx}

Equations 1–3 can be rewritten in the following form

u_1(x,z) = -z\frac{dw_b}{dx} - \alpha z^3 \frac{dw_s}{dx}, \qquad u_3(x,z) = w_b(x) + w_s(x)

where \alpha = 4/(3h^2). The displacement field given above yields the following nonzero components of the strain tensor

\varepsilon_x = -z\frac{d^2 w_b}{dx^2} - \alpha z^3 \frac{d^2 w_s}{dx^2} \quad (9)

\gamma_{xz} = \left(1 - \beta z^2\right)\frac{dw_s}{dx} \quad (10)

where, for reasons of brevity, \beta = 4/h^2 = 3\alpha.

Substituting Eqs 9, 10 into the stress-strain relations for the kth lamina in the laminate coordinate system, we obtain (Khdeir and Reddy, 1997)

\sigma_x^{(k)} = \bar{Q}_{11}^{(k)} \varepsilon_x \quad (11)

\tau_{xz}^{(k)} = \bar{Q}_{55}^{(k)} \gamma_{xz} \quad (12)

with \bar{Q}_{11}, \bar{Q}_{55} being the well-known transformed elastic stiffnesses

\bar{Q}_{11} = Q_{11}\cos^4\theta + 2\left(Q_{12} + 2Q_{66}\right)\sin^2\theta\cos^2\theta + Q_{22}\sin^4\theta

\bar{Q}_{55} = Q_{55}\cos^2\theta + Q_{44}\sin^2\theta

and Q_{11}, Q_{12}, Q_{22}, Q_{44}, Q_{55}, and Q_{66} are

Q_{11} = \frac{E_1}{1-\nu_{12}\nu_{21}}, \quad Q_{12} = \frac{\nu_{12}E_2}{1-\nu_{12}\nu_{21}}, \quad Q_{22} = \frac{E_2}{1-\nu_{12}\nu_{21}}, \quad Q_{44} = G_{23}, \quad Q_{55} = G_{13}, \quad Q_{66} = G_{12}

while \theta is the angle between the principal material axis and the x coordinate axis.

Applying the Principle of Virtual Work

\int_0^L \int_A \left(\sigma_x\,\delta\varepsilon_x + \tau_{xz}\,\delta\gamma_{xz}\right) dA\,dx = \int_0^L q\left(\delta w_b + \delta w_s\right) dx \quad (18)

and substituting Eqs 9, 10 yields the weak form of the problem in terms of w_b and w_s.

Introducing now the following stress resultants

M_b = \int_A \sigma_x z\, dA, \qquad M_s = \alpha \int_A \sigma_x z^3\, dA, \qquad Q = \int_A \left(1-\beta z^2\right)\tau_{xz}\, dA \quad (19)

Eq. 18 becomes

\int_0^L \left(-M_b \frac{d^2\delta w_b}{dx^2} - M_s \frac{d^2\delta w_s}{dx^2} + Q \frac{d\delta w_s}{dx}\right) dx = \int_0^L q\left(\delta w_b + \delta w_s\right) dx \quad (20)

Integrating by parts the appropriate terms in the above equation and collecting the coefficients of \delta w_b and \delta w_s, we obtain the following governing equations

\frac{d^2 M_b}{dx^2} + q = 0 \quad (21)

\frac{d^2 M_s}{dx^2} + \frac{dQ}{dx} + q = 0 \quad (22)

together with the associated boundary conditions, which at each end of the beam require that we specify

w_b \quad \text{or} \quad \frac{dM_b}{dx} \quad (23)

\frac{dw_b}{dx} \quad \text{or} \quad M_b \quad (24)

w_s \quad \text{or} \quad \frac{dM_s}{dx} + Q \quad (25)

\frac{dw_s}{dx} \quad \text{or} \quad M_s \quad (26)

Substituting Eqs 11, 12 into Eq. 19 and using Eqs 9, 10 yields the stress resultants in terms of the displacements as

M_b = -\left(D_{11}\frac{d^2 w_b}{dx^2} + \alpha F_{11}\frac{d^2 w_s}{dx^2}\right), \qquad M_s = -\left(\alpha F_{11}\frac{d^2 w_b}{dx^2} + \alpha^2 H_{11}\frac{d^2 w_s}{dx^2}\right) \quad (27)

Q = \bar{A}_{55}\frac{dw_s}{dx} \quad (28)

where

\left(D_{11}, F_{11}, H_{11}\right) = \int_A \bar{Q}_{11}\left(z^2, z^4, z^6\right) dA \quad (29)

\bar{A}_{55} = \int_A \bar{Q}_{55}\left(1-\beta z^2\right)^2 dA \quad (30)

Finally, after the substitution of the stress resultants, Eqs 27, 28, into Eqs 21, 22, we arrive at the equilibrium equations in terms of the displacements

D_{11}\frac{d^4 w_b}{dx^4} + \alpha F_{11}\frac{d^4 w_s}{dx^4} = q \quad (31)

\alpha F_{11}\frac{d^4 w_b}{dx^4} + \alpha^2 H_{11}\frac{d^4 w_s}{dx^4} - \bar{A}_{55}\frac{d^2 w_s}{dx^2} = q \quad (32)

which, together with the pertinent boundary conditions (23)–(26), constitute the boundary value problem that is solved using the Analog Equation Method (AEM), a robust numerical method based on an integral equation technique (Katsikadelis and Tsiatas, 2003; Tsiatas et al., 2018).

Regression Models

In this work, several linear and nonlinear regression models are comparatively examined. Linear regression is a linear model that assumes a linear relationship between the input variables and the output variable, so that the predicted value can be calculated as a linear combination of the input variables (Narula and Wellington, 1982). The distance from each data point to the fitted value is calculated and squared, and all these squared errors are summed together. This quantity is minimized by the ordinary least squares method to estimate the optimal values of the coefficients of each independent variable.
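As an illustration, the following is a minimal sketch of ordinary least squares using scikit-learn (not the paper's actual code); the synthetic dataset and the true coefficient vector are arbitrary assumptions for the demo.

```python
# Minimal OLS sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 5))                    # 5 input variables
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)

model = LinearRegression().fit(X, y)                        # minimizes the sum of squared errors
print(model.coef_, model.intercept_)                        # estimated coefficients
```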

There are extensions of the linear model called regularization methods. These methods seek both to minimize the sum of the squared errors of the model on the training set and to reduce the complexity of the model. Two popular regularization methods for linear regression are Lasso Regression (Zou et al., 2007), where Ordinary Least Squares is modified to also minimize the absolute sum of the coefficients (L1 regularization), and Ridge Regression (Hoerl et al., 1985), where Ordinary Least Squares is modified to also minimize the sum of the squared coefficients (L2 regularization). A Bayesian view of ridge regression is obtained by noting that the minimizer can be considered as the posterior mean of a linear model with a Gaussian prior on the coefficients (Tipping, 2001). The elastic net (Friedman et al., 2010) is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods. Huber's criterion is a hybrid of the squared error for relatively small errors and the absolute error for relatively large ones; Lambert-Lacroix and Zwald (2011) proposed the Huber regressor, which combines Huber's criterion with a concomitant scale and the Lasso.
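A hedged sketch comparing the regularized linear models discussed above, using their scikit-learn implementations; the hyperparameter values and the synthetic dataset are illustrative assumptions, not the paper's settings.

```python
# Cross-validated comparison of regularized linear models (illustrative).
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet, HuberRegressor, BayesianRidge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
models = {
    "ridge": Ridge(alpha=1.0),                              # L2 penalty
    "lasso": Lasso(alpha=0.1),                              # L1 penalty
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),     # mixed L1/L2 penalty
    "huber": HuberRegressor(epsilon=1.35),                  # robust hybrid loss
    "bayesian ridge": BayesianRidge(),                      # probabilistic view of ridge
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5, scoring="r2").mean())
```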

An L1 penalty shrinks the size of all coefficients and allows any coefficient to go to zero, acting as a type of feature selection method, since it removes input features from the model. Least Angle Regression (Efron et al., 2004) is a forward stepwise version of feature selection for regression. It can be adapted for the Lasso so as not to require a hyperparameter controlling the weight of the penalty in the loss function, since the weighting is discovered automatically by the Least Angle Regression method via cross-validation. LassoLars is a lasso model implemented using the Least Angle Regression algorithm; unlike the implementation based on coordinate descent, it yields the exact solution, which is piecewise linear as a function of the norm of its coefficients (see the sketch after the next paragraph).

Orthogonal matching pursuit (Pati et al., 1993) tries to find the solution of the L0-norm minimization problem, while Least Angle Regression solves the L1-norm minimization problem. Although these methods solve different minimization problems, they both rely on a greedy framework. They start from an all-zero solution and then iteratively construct a sparse solution based on the correlation between the features of the training set and the output variable. They converge to the final solution when the norm of the residual approaches zero.
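An illustrative sketch of the greedy sparse solvers mentioned in the last two paragraphs; the number of non-zero coefficients and the regularization strength are arbitrary choices for the demo.

```python
# Greedy sparse regression: LARS, LassoLars, and orthogonal matching pursuit.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lars, LassoLars, OrthogonalMatchingPursuit

X, y = make_regression(n_samples=300, n_features=20, n_informative=4,
                       noise=5.0, random_state=0)

lars = Lars().fit(X, y)                                     # forward stepwise path
lasso_lars = LassoLars(alpha=0.05).fit(X, y)                # exact piecewise-linear lasso path
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4).fit(X, y)  # greedy L0 approximation

for name, m in [("LARS", lars), ("LassoLars", lasso_lars), ("OMP", omp)]:
    print(name, (abs(m.coef_) > 0).sum(), "active coefficients")
```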

The K Neighbors Regressor (KNN) algorithm uses feature similarity to predict the values of new instances (Altman, 1992). The distance between the new instance and each training instance is calculated, the closest k instances are selected based on the preferred distance metric and, finally, the prediction for the new instance is the average value of the dependent variable over these k instances.
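A minimal sketch of k-nearest-neighbors regression on synthetic data; k = 5 and the Euclidean metric are illustrative choices.

```python
# KNN regression: the prediction is the average target of the k closest instances.
from sklearn.datasets import make_regression
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5, metric="euclidean").fit(X, y)
print(knn.predict(X[:3]))                                   # average of the 5 nearest neighbors
```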

Unlike linear regression, Classification and Regression Tree (CART) does not create a prediction equation; instead, the data are partitioned into subsets at each node according to homogeneous values of the dependent variable, and a decision tree is built to be used for making predictions about new instances (Breiman et al., 1984). We can enlarge the tree until it always gives the correct value on the training set. However, such a tree would overfit the data and not generalize well to new data. The correct policy is to use some combination of a minimum number of instances per tree node and a maximum tree depth to avoid overfitting.
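A sketch of a regression tree with the two overfitting controls named above; the specific values of the depth and leaf-size limits are arbitrary.

```python
# CART regression tree with a maximum depth and a minimum leaf size.
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5).fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())                # resulting tree size
```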

The basic idea of Boosting is to combine several weak learners into a stronger one. AdaBoost (Freund and Schapire, 1997) fits a regression tree on the training set and then trains a new regression tree on the same dataset, but with the weights of each instance adjusted according to the error of the previous tree's predictions. In this way, subsequent regressors focus more on difficult instances.
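A sketch of AdaBoost regression with scikit-learn's default shallow-tree weak learners; instance weights are re-adjusted after each boosting round, as described above. The number of estimators is an arbitrary choice.

```python
# AdaBoost: sequentially reweighted regression trees.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
ada = AdaBoostRegressor(n_estimators=100, random_state=0).fit(X, y)
print(ada.score(X, y))                                      # R^2 on the training set
```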

The Random Forests algorithm (Breiman, 2001) builds several trees with the CART algorithm, using for each tree a bootstrap replica of the training set, with one modification: at each test node, the optimal split is derived by searching a random subset of size K of candidate features, drawn without replacement from the full feature set.
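A sketch of a random forest with the two ingredients just described: bootstrap replicas and a random feature subset per split (max_features plays the role of K). The parameter values are illustrative.

```python
# Random forest: bagged trees with random feature subsets at each split.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, max_features=2, bootstrap=True,
                           random_state=0).fit(X, y)
print(rf.feature_importances_)                              # impurity-based importances
```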

Like Random Forests, Gradient Boosting (Friedman, 2001) is an ensemble of trees; however, there are two main differences. Firstly, the Random Forests algorithm builds each tree independently, while Gradient Boosting builds one tree at a time, working in a forward stage-wise manner and introducing a weak learner to improve on the shortcomings of the existing weak learners. Secondly, Random Forests combine results at the end of the process (by averaging the result of each tree), while Gradient Boosting combines results along the way.
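A sketch of gradient boosting, in which each new shallow tree is fitted to the residual errors of the current ensemble; the learning rate and tree counts are arbitrary demo values.

```python
# Gradient boosting: stage-wise additive ensemble of shallow trees.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                max_depth=3, random_state=0).fit(X, y)
print(gbm.score(X, y))                                      # R^2 on the training set
```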

LightGBM (Ke et al., 2017) extends the gradient boosting algorithm by adding automatic feature selection and focusing on instances with larger gradients to speed up training and sometimes even improve predictive performance.
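A minimal sketch of the LightGBM regressor (requires the lightgbm package); the hyperparameters are illustrative defaults, not the paper's settings.

```python
# LightGBM: gradient boosting with histogram-based, gradient-aware training.
import lightgbm as lgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
lgbm = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.1,
                         random_state=0).fit(X, y)
print(lgbm.predict(X[:3]))
```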

The Extra-Trees algorithm (Geurts et al., 2006) creates an ensemble of unpruned regression trees according to the well-known top-down procedure of regression trees. The main differences with respect to other tree-based ensemble methods are that the Extra-Trees algorithm splits nodes by choosing cut-points fully at random and that it uses the whole learning set (instead of a bootstrap replica) to grow the trees.
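A sketch of Extra-Trees with the two distinguishing choices made explicit: no bootstrap (the whole learning set grows every tree) and fully random cut-points (built into the estimator). The ensemble size is arbitrary.

```python
# Extra-Trees: extremely randomized trees grown on the full training set.
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
et = ExtraTreesRegressor(n_estimators=200, bootstrap=False,
                         random_state=0).fit(X, y)
print(et.score(X, y))                                       # R^2 on the training set
```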

Passive-Aggressive regressor (Crammer et al., 2006) is generally used for large-scale learning since it is an online learning algorithm. In online learning, the input data come sequentially, and the learning model is updated step-by-step, as opposed to batch learning, where the entire dataset is used at once.
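A sketch of the online-learning mode described above: the Passive-Aggressive regressor is updated incrementally with partial_fit as mini-batches arrive. The batch size, aggressiveness parameter C, and data stream are illustrative.

```python
# Online learning: incremental updates with partial_fit on streaming batches.
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

rng = np.random.default_rng(0)
pa = PassiveAggressiveRegressor(C=1.0, random_state=0)
for _ in range(10):                                         # ten incoming mini-batches
    X_batch = rng.uniform(size=(32, 5))
    y_batch = X_batch @ np.array([2.0, -1.0, 0.5, 0.0, 3.0])
    pa.partial_fit(X_batch, y_batch)                        # step-by-step model update
print(pa.coef_)
```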

Numerical Results and Discussion

The scope of the current study is to exploit predictive models for the maximum deflection w_max of a symmetric cross-ply (θ₁/θ₂/θ₃) rectangular beam for various span-to-depth ratios and boundary conditions, subjected to a uniformly distributed load. All laminates are of equal thickness and made of the same orthotropic material. The main parameters that influence the response of the composite beams are the moduli of elasticity E₁ and E₂, the span-to-depth ratio L/h, and the ply angles θ₁/θ₂/θ₃, with θ₃ = θ₁ due to symmetry. Each parameter is discretized within a fairly wide, yet reasonable range of values. For the given ranges of the parameters, Eqs 31, 32 are solved numerically, producing a comprehensive database for each one of the examined boundary conditions presented in Table 1. This dataset contains the values of w_max which are used in the regression analysis.

TABLE 1

Boundary conditions | At x = 0 | At x = L
Clamped-Clamped (CC) | w_b = w_s = 0, dw_b/dx = dw_s/dx = 0 | w_b = w_s = 0, dw_b/dx = dw_s/dx = 0
Simply Supported (SS) | w_b = w_s = 0, M_b = M_s = 0 | w_b = w_s = 0, M_b = M_s = 0
Clamped-Roller (CR) | w_b = w_s = 0, dw_b/dx = dw_s/dx = 0 | w_b = w_s = 0, M_b = M_s = 0
Clamped-Free (CF) | w_b = w_s = 0, dw_b/dx = dw_s/dx = 0 | M_b = M_s = 0, dM_b/dx = 0, dM_s/dx + Q = 0

Boundary conditions examined for the prediction of the maximum deflection w_max.
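A hedged sketch of how such a database could be assembled. The grid values and the solver stub below are hypothetical placeholders, not the paper's actual discretization; in the study, w_max is obtained from the AEM solution of Eqs 31, 32.

```python
# Illustrative assembly of one regression dataset per boundary condition.
import itertools
import pandas as pd

def solve_beam(E1, E2, Lh, t1, t2):
    """Placeholder for the AEM solver of Eqs 31, 32 (returns a dummy value)."""
    return Lh**4 / (E1 + 0.1 * E2) * (1.0 + 0.001 * (t1 + t2))

grid = {                                                    # illustrative grid, not the paper's ranges
    "E1": [50, 100, 150], "E2": [5, 10, 15], "L/h": [5, 10, 20, 50],
    "theta1": [0, 30, 60, 90], "theta2": [0, 30, 60, 90],
}
rows = [dict(zip(grid, combo), wmax=solve_beam(*combo))
        for combo in itertools.product(*grid.values())]
df = pd.DataFrame(rows)                                     # features plus target column "wmax"
```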

A plethora of regression algorithms, presented in the previous section, were employed for building corresponding predictive models of w_max using PyCaret (Ali, 2020), an open-source machine learning library. A 5-fold cross-validation resampling procedure was used for evaluating the performance of the predictive models: the dataset was randomly divided into five folds of equal size, each fold was used for evaluating the performance of the model trained on the remaining folds, and the final measure was the average value of the evaluation metrics computed on each test fold. Evaluation metrics measure how well a model performs. The most widely used evaluation metrics for regression problems are the mean absolute error (MAE), the mean absolute percentage error (MAPE), the mean square error (MSE), the root mean square error (RMSE), the root mean squared log error (RMSLE), and the coefficient of determination (R²). For the error metrics, lower values indicate a better model, with 0 corresponding to a perfect prediction; for R², values closer to 1 are better. To quantify the accuracy of the examined algorithms, the following evaluation metrics are used herein:

Mean absolute error (MAE)

MAE = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|

Mean absolute percentage error (MAPE)

MAPE = \frac{1}{n}\sum_{i=1}^{n} \frac{\left| y_i - \hat{y}_i \right|}{\left| y_i \right|}

Mean square error (MSE)

MSE = \frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2

Root mean square error (RMSE)

RMSE = \sqrt{MSE}

Root mean squared log error (RMSLE)

RMSLE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left[ \ln\left(1+\hat{y}_i\right) - \ln\left(1+y_i\right) \right]^2}

Coefficient of determination (R²)

R^2 = \frac{SS_{reg}}{SS_{tot}}

where \hat{y}_i refers to predicted values and y_i refers to true values. SS_{reg} = \sum_i (\hat{y}_i - \bar{y})^2 is the regression sum of squares (i.e., explained sum of squares), and SS_{tot} = \sum_i (y_i - \bar{y})^2 is the total sum of squares, which is proportional to the variance of the data. The coefficient of determination (R²) is the square of the correlation between the actual and predicted variable and ranges from 0 to 1. A zero value indicates that the model cannot explain any of the variability of the predicted variable, whereas a value of 1 indicates that the regression model explains the predicted variable perfectly.
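For concreteness, the metrics defined above can be computed with scikit-learn as in the following sketch; the true/predicted vectors are dummy values.

```python
# Computing the regression evaluation metrics on a dummy prediction.
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, mean_squared_log_error, r2_score)

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.2, 6.7])

mae = mean_absolute_error(y_true, y_pred)
mape = mean_absolute_percentage_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(mae, mape, mse, rmse, rmsle, r2)
```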

Apart from the evaluation metrics of the machine learning algorithms, two other useful tools are presented for the predictive analysis of w_max. First, feature importance is a technique for assigning scores to input features that indicate the relative importance of each feature for the prediction. The scores can highlight which features are most relevant to the target and, conversely, which are least relevant. Most importance scores are calculated using the most accurate predictive model that has been fit on the data (Louppe et al., 2013). Second, the correlation matrix heatmap illustrates the correlation dependence between the variables of the database; that is, each square of the matrix represents the correlation between the attributes paired on the two axes. A value of +1 (or −1) indicates a perfect correlation between two variables, positive values indicating a positive correlation and negative values an inverse correlation; magnitudes close to 1 indicate a strong correlation, intermediate magnitudes a moderate correlation, and magnitudes close to 0 a weak correlation.
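The end-to-end evaluation pipeline described in this section can be sketched with PyCaret as follows; the synthetic stand-in dataset, the column names, and the session seed are illustrative assumptions, and compare_models produces a cross-validated ranking of the kind shown in Tables 2-5.

```python
# PyCaret workflow: 5-fold CV comparison of all available regressors.
import pandas as pd
from sklearn.datasets import make_regression
from pycaret.regression import setup, compare_models, pull

X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)
df = pd.DataFrame(X, columns=["E1", "E2", "L/h", "theta1", "theta2"])
df["wmax"] = y                                              # stand-in for the real database

s = setup(data=df, target="wmax", fold=5, session_id=123)   # 5-fold cross-validation
best = compare_models()                                     # trains and ranks all regressors
print(pull())                                               # table of MAE, MSE, RMSE, R2, RMSLE, MAPE
```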

Clamped-Clamped Beam

First, a clamped-clamped beam is analyzed. The evaluation metrics of the employed regression algorithms are tabulated in Table 2. The Extra-Trees Regressor algorithm is the most effective algorithm, reaching an R² value of 0.9994, followed by the Random Forest Regressor and the Decision Tree Regressor. By examination of the evaluation metrics, it is obvious that there are significant differences in effectiveness between the algorithms. Nevertheless, the algorithms that perform best do so consistently for all problems, as will be demonstrated.

TABLE 2

Model | MAE | MSE | RMSE | R2 | RMSLE | MAPE
Extra Trees Regressor | 0.0251 | 0.0074 | 0.0834 | 0.9994 | 0.0132 | 0.0148
Random Forest Regressor | 0.0381 | 0.0135 | 0.1148 | 0.9988 | 0.0157 | 0.0187
Decision Tree Regressor | 0.0556 | 0.0301 | 0.1705 | 0.9975 | 0.0242 | 0.0271
Light Gradient Boosting Machine | 0.0598 | 0.0203 | 0.1407 | 0.9983 | 0.0257 | 0.1170
Gradient Boosting Regressor | 0.2771 | 0.3469 | 0.5881 | 0.9706 | 0.1271 | 0.8407
K Neighbors Regressor | 0.3146 | 1.3540 | 1.1630 | 0.8856 | 0.1017 | 0.0909
AdaBoost Regressor | 1.0111 | 2.0725 | 1.4199 | 0.8252 | 0.3944 | 3.6655
Huber Regressor | 1.0685 | 7.6780 | 2.7694 | 0.3521 | 0.3831 | 4.0658
Elastic Net | 1.3421 | 7.9422 | 2.8167 | 0.3297 | 0.4896 | 3.1981
Lasso Regression | 1.4131 | 8.3905 | 2.8951 | 0.2919 | 0.5120 | 3.6771
Bayesian Ridge | 1.4203 | 6.2225 | 2.4931 | 0.4749 | 0.5315 | 8.7274
Ridge Regression | 1.4205 | 6.2225 | 2.4931 | 0.4749 | 0.5316 | 8.7319
Linear Regression | 1.4206 | 6.2225 | 2.4931 | 0.4749 | 0.5317 | 8.7329
Least Angle Regression | 1.4206 | 6.2225 | 2.4931 | 0.4749 | 0.5317 | 8.7329
Orthogonal Matching Pursuit | 1.5371 | 8.2780 | 2.8759 | 0.3011 | 0.5044 | 4.2803
Passive Aggressive Regressor | 1.9945 | 11.1782 | 3.3257 | 0.0605 | 0.7021 | 10.8439
Lasso Least Angle Regression | 1.9986 | 11.8425 | 3.4402 | 0.0001 | 0.7724 | 11.0424

Evaluation metrics for the clamped-clamped beam.

From the feature importance plot (see Figure 2A), it is observed that the most important parameters for predicting the target attribute are the modulus of elasticity E₁ and the span-to-depth ratio L/h. Next comes the ply angle θ₁, which is more important than θ₂ and E₂. Moreover, the correlation matrix heatmap has been evaluated for this problem; in this figure, the blue color indicates a negative correlation between two parameters, while the red one indicates a positive correlation, and the intensity of the color implies how strongly the attributes are correlated, a deeper color corresponding to a stronger correlation. The correlation matrix heatmap of Figure 2B reveals that the maximum deflection w_max is positively correlated with the parameters L/h, θ₁, θ₂ and negatively correlated with E₁ and E₂. This means that an increase of the span-to-depth ratio or of the ply angles leads to an increase of the maximum deflection. Conversely, an increase of either elastic modulus leads to a decrease of the maximum deflection. Nevertheless, E₁ is more strongly correlated with w_max than E₂. Finally, the ply angle θ₁ seems to be more important than the angle θ₂ with respect to the beam stiffness, yet the difference is small.
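The two diagnostics used above can be reproduced in outline as follows, assuming a synthetic stand-in dataset with the same five features; the resulting importances and correlations are illustrative only.

```python
# Feature importances from a fitted Extra-Trees model and a correlation heatmap.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)
df = pd.DataFrame(X, columns=["E1", "E2", "L/h", "theta1", "theta2"])
df["wmax"] = y                                              # stand-in for the real database

et = ExtraTreesRegressor(n_estimators=200, random_state=0)
et.fit(df.drop(columns="wmax"), df["wmax"])
print(dict(zip(df.columns[:-1], et.feature_importances_)))  # relative importance scores

sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.show()                                                  # correlation matrix heatmap
```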

FIGURE 2

Simply Supported Beam

In this second example, a simply supported beam is analyzed. The Extra-Trees Regressor algorithm outperforms the other regression algorithms once again (see Table 3). The feature importance plot (see Figure 3A) shows an importance sequence different from that of the previous example. That is, the span-to-depth ratio L/h is more important than the modulus of elasticity E₁, while the ply angle θ₁ is more important than θ₂ and E₂. Furthermore, the correlation matrix heatmap shown in Figure 3B reveals that, again, the maximum deflection is positively correlated with the parameters L/h, θ₁, θ₂ and negatively correlated with E₁ and E₂. As previously, the correlation of E₁ is significantly stronger than that of E₂. The ply angles exhibit weak positive correlations with the maximum deflection, with θ₁ being the prevailing one.

TABLE 3

Model | MAE | MSE | RMSE | R2 | RMSLE | MAPE
Extra Trees Regressor | 0.0749 | 0.0767 | 0.2718 | 0.9994 | 0.0157 | 0.0135
Random Forest Regressor | 0.1127 | 0.1591 | 0.3935 | 0.9987 | 0.0187 | 0.0180
Decision Tree Regressor | 0.1465 | 0.2294 | 0.4735 | 0.9981 | 0.0265 | 0.0258
Light Gradient Boosting Machine | 0.2106 | 0.3258 | 0.5617 | 0.9973 | 0.0479 | 0.1556
K Neighbors Regressor | 0.8682 | 12.9351 | 3.5942 | 0.8948 | 0.1158 | 0.0934
Gradient Boosting Regressor | 1.1417 | 6.0163 | 2.4496 | 0.9510 | 0.2931 | 1.8173
AdaBoost Regressor | 3.0602 | 22.0956 | 4.6723 | 0.8184 | 0.5993 | 4.5067
Huber Regressor | 3.5208 | 82.9872 | 9.1049 | 0.3260 | 0.6283 | 7.3794
Elastic Net | 4.1196 | 77.5224 | 8.7998 | 0.3705 | 0.7620 | 5.9456
Lasso Regression | 4.2209 | 71.8424 | 8.4713 | 0.4166 | 0.7876 | 10.1221
Bayesian Ridge | 4.6000 | 68.3098 | 8.2607 | 0.4452 | 0.9092 | 16.0915
Ridge Regression | 4.6008 | 68.3098 | 8.2607 | 0.4452 | 0.9094 | 16.1014
Linear Regression | 4.6010 | 68.3098 | 8.2607 | 0.4452 | 0.9094 | 16.1033
Least Angle Regression | 4.6010 | 68.3098 | 8.2607 | 0.4452 | 0.9094 | 16.1033
Passive Aggressive Regressor | 4.6651 | 90.8374 | 9.5039 | 0.2608 | 0.8824 | 10.9468
Orthogonal Matching Pursuit | 4.8283 | 83.9710 | 9.1597 | 0.3178 | 0.8392 | 9.1774
Lasso Least Angle Regression | 6.4298 | 123.0726 | 11.0899 | 0.0002 | 1.2440 | 19.9475

Evaluation metrics for the simply supported beam.

FIGURE 3

Clamped-Roller Beam

In this example, a clamped-roller beam is analyzed. In Table 4 it is shown that the Extra-Trees Regressor algorithm is again the most effective, as compared to the other regression algorithms. The feature importance plot (see Figure 4A) shows once more an importance sequence similar to that of the clamped-clamped beam. That is, the most important parameter is the modulus of elasticity E₁, followed closely by the span-to-depth ratio L/h. The ply angle θ₁ is more important than θ₂ and E₂. Furthermore, the correlation matrix heatmap shown in Figure 4B reveals that, again, the maximum deflection is positively correlated with the parameters L/h, θ₁, θ₂ and negatively correlated with E₁ and E₂. The elastic modulus E₁ exhibits a stronger correlation with the maximum deflection than E₂. As in the case of the clamped-clamped beam, the ply angle θ₁ is more important than the angle θ₂.

TABLE 4

Model | MAE | MSE | RMSE | R2 | RMSLE | MAPE
Extra Trees Regressor | 0.0397 | 0.0210 | 0.1403 | 0.9993 | 0.0142 | 0.0142
Random Forest Regressor | 0.0601 | 0.0376 | 0.1918 | 0.9987 | 0.0172 | 0.0186
Decision Tree Regressor | 0.0852 | 0.0721 | 0.2638 | 0.9976 | 0.0254 | 0.0268
Light Gradient Boosting Machine | 0.0993 | 0.0666 | 0.2549 | 0.9978 | 0.0332 | 0.1292
K Neighbors Regressor | 0.4656 | 3.2736 | 1.8083 | 0.8909 | 0.1071 | 0.0919
Gradient Boosting Regressor | 0.5088 | 1.1518 | 1.0700 | 0.9615 | 0.1890 | 1.2013
Huber Regressor | 1.7188 | 19.7571 | 4.4425 | 0.3424 | 0.4678 | 5.1283
AdaBoost Regressor | 1.9549 | 6.6004 | 2.5493 | 0.7802 | 0.5716 | 4.7368
Lasso Regression | 2.0176 | 18.5644 | 4.3062 | 0.3821 | 0.5470 | 4.0458
Elastic Net | 2.0676 | 19.4144 | 4.4038 | 0.3538 | 0.5769 | 3.7886
Bayesian Ridge | 2.2615 | 16.1679 | 4.0188 | 0.4618 | 0.6690 | 11.0123
Ridge Regression | 2.2619 | 16.1679 | 4.0188 | 0.4618 | 0.6691 | 11.0184
Linear Regression | 2.2620 | 16.1679 | 4.0188 | 0.4618 | 0.6691 | 11.0197
Least Angle Regression | 2.2620 | 16.1679 | 4.0188 | 0.4618 | 0.6691 | 11.0197
Passive Aggressive Regressor | 2.3236 | 22.4124 | 4.7312 | 0.2528 | 0.6799 | 8.8763
Orthogonal Matching Pursuit | 2.4143 | 20.6503 | 4.5424 | 0.3123 | 0.6195 | 5.8559
Lasso Least Angle Regression | 3.1842 | 30.0259 | 5.4778 | 0.0001 | 0.9493 | 13.8389

Evaluation metrics for the clamped-roller beam.

FIGURE 4

Clamped-Free Beam

In the case of a clamped-free beam (cantilever), while the evaluation metrics designate once more the superiority of the Extra-Trees Regressor algorithm (see Table 5), the feature importance plot (see Figure 5A) presents an importance sequence similar to that of the simply supported beam. That is, the most important parameter is the span-to-depth ratio L/h, followed by the modulus of elasticity E₁. The ply angle θ₁ is more important than θ₂ and E₂.

TABLE 5

Model | MAE | MSE | RMSE | R2 | RMSLE | MAPE
Extra Trees Regressor | 0.5528 | 4.8885 | 2.1609 | 0.9995 | 0.0207 | 0.0122
Random Forest Regressor | 0.8671 | 11.6799 | 3.3623 | 0.9987 | 0.0230 | 0.0165
Decision Tree Regressor | 1.0436 | 17.3124 | 4.0909 | 0.9982 | 0.0314 | 0.0228
Light Gradient Boosting Machine | 1.8359 | 25.0335 | 4.9130 | 0.9973 | 0.1280 | 0.2147
K Neighbors Regressor | 7.1088 | 960.7922 | 30.9752 | 0.8967 | 0.1458 | 0.0958
Gradient Boosting Regressor | 10.4671 | 513.5992 | 22.6138 | 0.9448 | 0.7340 | 3.1057
AdaBoost Regressor | 27.1617 | 1772.9870 | 41.8579 | 0.8069 | 1.1216 | 6.1978
Huber Regressor | 30.9933 | 6409.2110 | 80.0145 | 0.3120 | 1.2242 | 11.7426
Passive Aggressive Regressor | 32.4646 | 6467.0355 | 80.3816 | 0.3054 | 1.3298 | 14.5124
Elastic Net | 36.0246 | 5755.0187 | 75.8199 | 0.3823 | 1.4109 | 11.3018
Lasso Regression | 39.6877 | 5288.4303 | 72.6848 | 0.4323 | 1.6179 | 24.8280
Bayesian Ridge | 40.2698 | 5283.6727 | 72.6527 | 0.4328 | 1.6414 | 26.0519
Ridge Regression | 40.2777 | 5283.6730 | 72.6527 | 0.4328 | 1.6417 | 26.0694
Linear Regression | 40.2791 | 5283.6732 | 72.6527 | 0.4328 | 1.6418 | 26.0725
Least Angle Regression | 40.2791 | 5283.6734 | 72.6527 | 0.4328 | 1.6418 | 26.0725
Orthogonal Matching Pursuit | 41.8498 | 6342.4469 | 79.6055 | 0.3189 | 1.5624 | 15.6819
Lasso Least Angle Regression | 55.7829 | 9312.0379 | 96.4636 | 0.0002 | 2.0693 | 32.0586

Evaluation metrics for the clamped-free beam.

FIGURE 5

The correlation matrix heatmap (see Figure 5B) again shows that w_max is positively correlated with the parameters L/h, θ₁, θ₂ and negatively correlated with E₁ and E₂. In this case, the ply angle θ₁ is significantly more strongly correlated with the maximum deflection than the angle θ₂.

Friedman Ranking

Finally, to better assess the results obtained from each algorithm, the Friedman test methodology proposed by Demšar (2006) was employed for the comparison of the algorithms over multiple datasets (Table 6). As expected, the Extra-Trees Regressor algorithm is the most accurate in our case. A simple computational tool, written in the Java programming language using the Weka API (Hall et al., 2009), is provided along with the relevant data to the interested reader as Supplementary Data to this article.
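As an illustration of the ranking procedure, the sketch below applies the Friedman test to the MAE values of the three top models across the four boundary-condition datasets (values copied from Tables 2-5); the paper's full ranking uses all seventeen models.

```python
# Friedman test over multiple datasets (Demšar, 2006): each list holds one
# algorithm's MAE on the CC, SS, CR, and CF datasets, in that order.
from scipy.stats import friedmanchisquare

extra_trees = [0.0251, 0.0749, 0.0397, 0.5528]
random_forest = [0.0381, 0.1127, 0.0601, 0.8671]
decision_tree = [0.0556, 0.1465, 0.0852, 1.0436]

stat, p = friedmanchisquare(extra_trees, random_forest, decision_tree)
print(stat, p)                                              # low p: rankings differ significantly
```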

TABLE 6

Model | Rank (w.r.t. MAE) | Model | Rank (w.r.t. MAPE) | Model | Rank (w.r.t. R2)
Extra-Trees Regressor | 1 | Extra-Trees Regressor | 1 | Extra-Trees Regressor | 1
Random Forest Regressor | 2 | Random Forest Regressor | 2 | Random Forest Regressor | 2
Decision Tree Regressor | 3 | Decision Tree Regressor | 3 | Light Gradient Boosting Machine | 3.5
Light Gradient Boosting Machine | 4 | K Neighbors Regressor | 4 | Decision Tree Regressor | 3.5
K Neighbors Regressor | 5.25 | Light Gradient Boosting Machine | 5 | Gradient Boosting Regressor | 5
Gradient Boosting Regressor | 5.75 | Gradient Boosting Regressor | 6 | K Neighbors Regressor | 6
AdaBoost Regressor | 7.25 | Elastic Net | 7.5 | AdaBoost Regressor | 7
Huber Regressor | 7.75 | AdaBoost Regressor | 7.75 | Ridge Regression | 9.5
Elastic Net | 9.5 | Huber Regressor | 9.5 | Linear Regression | 9.5
Lasso Regression | 10 | Lasso Regression | 10 | Bayesian Ridge | 9.5
Bayesian Ridge | 11.25 | Orthogonal Matching Pursuit | 10.75 | Least Angle Regression | 9.5
Ridge Regression | 12.25 | Passive Aggressive Regressor | 12.5 | Lasso Regression | 12.75
Passive Aggressive Regressor | 13.75 | Bayesian Ridge | 12.75 | Elastic Net | 13
Linear Regression | 13.75 | Ridge Regression | 13.75 | Huber Regressor | 13.75
Least Angle Regression | 13.75 | Linear Regression | 15.25 | Orthogonal Matching Pursuit | 14.5
Orthogonal Matching Pursuit | 15.75 | Least Angle Regression | 15.25 | Passive Aggressive Regressor | 16
Lasso Least Angle Regression | 17 | Lasso Least Angle Regression | 17 | Lasso Least Angle Regression | 17

Friedman ranking.

Conclusion

In this paper, several machine learning regression models were employed for the prediction of the deflection of symmetric laminated composite beams subjected to a uniformly distributed load. Training, validation, and testing of the models require large amounts of data that cannot be provided by the scarce experiments. Instead, ample amounts of data are generated numerically using a refined higher-order beam theory for various span-to-depth ratios and boundary conditions, by appropriate discretization of all pertinent geometric and material properties.

The main conclusions that can be drawn from this investigation are as follows:

  • Regarding the regression models, the Extra-Trees algorithm is, without doubt, the best performer for all cases of boundary conditions, followed by the Random Forest Regressor, the Decision Tree Regressor, the Light Gradient Boosting Machine, and the K Neighbors Regressor.

  • The prediction errors of the best-performing models are adequately small for engineering purposes. This allows for the rapid design of composite beams without resorting to a mathematical implementation of higher-order beam theories. Moreover, these models can be integrated into modern metaheuristic optimization algorithms which use only payoff data (i.e., no derivative data) to allow for the fast and reliable optimization of such beams.

  • Regarding the relative importance of the design variables for the evaluation of the deflection, the span-to-depth ratio L/h and the modulus of elasticity E₁ are unambiguously the most important features. The next level of importance includes the ply angle θ₁ and the modulus of elasticity E₂. Surprisingly, the angle θ₂ is the least important variable.

  • The span-to-depth ratio L/h has the strongest positive correlation with the target attribute for all cases of boundary conditions, as evidenced by the correlation matrices. In all cases, the maximum deflection is positively correlated with the parameters L/h, θ₁, θ₂ and negatively correlated with E₁ and E₂.

  • An easy-to-use computational tool has been implemented, which is provided as Supplementary Material to the present article.

Statements

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

GT had the research idea, drafted the article, and contributed to the theoretical formulation of the beam theory. SK and AC contributed to the conception and design of the work, and the theoretical analysis of the regression techniques. The manuscript was written through the contribution of all authors. All authors discussed the results, reviewed, and approved the final version of the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fbuil.2022.855112/full#supplementary-material

References

  • 1. Ali, M. (2020). PyCaret: An Open-Source, Low-Code Machine Learning Library in Python. Available at: https://www.pycaret.org

  • 2. Altman, N. S. (1992). An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. The American Statistician 46, 175–185. doi: 10.1080/00031305.1992.10475879

  • 3. Bickford, W. B. (1982). A Consistent Higher Order Beam Theory. Dev. Theor. Appl. Mech. 11, 137–150.

  • 4. Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. New York, NY: Routledge. doi: 10.1201/9781315139470

  • 5. Breiman, L. (2001). Random Forests. Mach. Learn. 45, 5–32. doi: 10.1023/A:1010933404324

  • 6. Civalek, Ö., and Kiracioglu, O. (2010). Free Vibration Analysis of Timoshenko Beams by DSC Method. Int. J. Numer. Meth. Biomed. Engng. 26, 1890–1898. doi: 10.1002/CNM.1279

  • 7. Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., and Singer, Y. (2006). Online Passive-Aggressive Algorithms. J. Mach. Learn. Res. 7, 551–585.

  • 8. Demšar, J. (2006). Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 7, 1–30.

  • 9. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R., Ishwaran, H., Knight, K., et al. (2004). Least Angle Regression. Ann. Statist. 32, 407–499. doi: 10.1214/009053604000000067

  • 10. Eisenberger, M. (2003). An Exact High Order Beam Element. Comput. Structures 81, 147–152. doi: 10.1016/S0045-7949(02)00438-8

  • 11. Endo, M. (2016). An Alternative First-Order Shear Deformation Concept and its Application to Beam, Plate and Cylindrical Shell Models. Compos. Structures 146, 50–61. doi: 10.1016/J.COMPSTRUCT.2016.03.002

  • 12. Freund, Y., and Schapire, R. E. (1997). A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 55, 119–139. doi: 10.1006/JCSS.1997.1504

  • 13. Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Soft. 33, 1. doi: 10.18637/jss.v033.i01

  • 14. Friedman, J. H. (2001). Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 29, 1189–1232. doi: 10.1214/aos/1013203451

  • 15. Geurts, P., Ernst, D., and Wehenkel, L. (2006). Extremely Randomized Trees. Mach. Learn. 63, 3–42. doi: 10.1007/S10994-006-6226-1

  • 16. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. (2009). The WEKA Data Mining Software. SIGKDD Explor. Newsl. 11, 10–18. doi: 10.1145/1656274.1656278

  • 17. Heyliger, P. R., and Reddy, J. N. (1988). A Higher Order Beam Finite Element for Bending and Vibration Problems. J. Sound Vibration 126, 309–326. doi: 10.1016/0022-460X(88)90244-1

  • 18. Hoerl, A. E., Kennard, R. W., and Hoerl, R. W. (1985). Practical Use of Ridge Regression: A Challenge Met. Appl. Stat. 34, 114–120. doi: 10.2307/2347363

  • 19. Katsikadelis, J. T., and Tsiatas, G. C. (2003). Large Deflection Analysis of Beams with Variable Stiffness. Acta Mechanica 164, 1–13. doi: 10.1007/S00707-003-0015-8

  • 20. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., et al. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. Adv. Neural Inf. Process. Syst. 30. Available at: https://github.com/Microsoft/LightGBM (Accessed December 14, 2021).

  • 21. Khdeir, A. A., and Reddy, J. N. (1997). An Exact Solution for the Bending of Thin and Thick Cross-Ply Laminated Beams. Compos. Structures 37, 195–203. doi: 10.1016/S0263-8223(97)80012-8

  • 22. Lambert-Lacroix, S., and Zwald, L. (2011). Robust Regression through the Huber's Criterion and Adaptive Lasso Penalty. Electron. J. Statist. 5, 1015–1053. doi: 10.1214/11-EJS635

  • 23. Liew, K. M., Pan, Z. Z., and Zhang, L. W. (2019). An Overview of Layerwise Theories for Composite Laminates and Structures: Development, Numerical Implementation and Application. Compos. Structures 216, 240–259. doi: 10.1016/J.COMPSTRUCT.2019.02.074

  • 24. Lin, X., and Zhang, Y. X. (2011). A Novel One-Dimensional Two-Node Shear-Flexible Layered Composite Beam Element. Finite Elem. Anal. Des. 47, 676–682. doi: 10.1016/J.FINEL.2011.01.010

  • 25. Louppe, G., Wehenkel, L., Sutera, A., and Geurts, P. (2013). Understanding Variable Importances in Forests of Randomized Trees. Adv. Neural Inf. Process. Syst. 26, 431–439.

  • 26. Murthy, M. V. V. S., Roy Mahapatra, D., Badarinarayana, K., and Gopalakrishnan, S. (2005). A Refined Higher Order Finite Element for Asymmetric Composite Beams. Compos. Structures 67, 27–35. doi: 10.1016/J.COMPSTRUCT.2004.01.005

  • 27. Narula, S. C., and Wellington, J. F. (1982). The Minimum Sum of Absolute Errors Regression: A State of the Art Survey. Int. Stat. Rev./Revue Internationale de Statistique 50, 317. doi: 10.2307/1402501

  • 28. Nguyen, T.-K., Nguyen, N.-D., Vo, T. P., and Thai, H.-T. (2017). Trigonometric-Series Solution for Analysis of Laminated Composite Beams. Compos. Structures 160, 142–151. doi: 10.1016/J.COMPSTRUCT.2016.10.033

  • 29. Pati, Y. C., Rezaiifar, R., and Krishnaprasad, P. S. (1993). "Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition," in Conf. Rec. Asilomar Conf. Signals, Syst. Comput., Pacific Grove, CA, November 1–3, 1993, 1, 40–44. doi: 10.1109/ACSSC.1993.342465

  • 30. Pawar, E. G., Banerjee, S., and Desai, Y. M. (2015). Stress Analysis of Laminated Composite and Sandwich Beams Using a Novel Shear and Normal Deformation Theory. Lat. Am. J. Sol. Struct. 12, 1340–1361. doi: 10.1590/1679-78251470

  • 31. Reddy, J. N. (1984). A Simple Higher-Order Theory for Laminated Composite Plates. J. Appl. Mech. 51, 745–752. doi: 10.1115/1.3167719

  • 32. Srinivasan, R., Dattaguru, B., and Singh, G. (2019). Exact Solutions for Laminated Composite Beams Using a Unified State Space Formulation. Int. J. Comput. Methods Eng. Sci. Mech. 20, 319–334. doi: 10.1080/15502287.2019.1644394

  • 33. Tipping, M. E. (2001). Sparse Bayesian Learning and the Relevance Vector Machine. J. Mach. Learn. Res. 1, 211–244.

  • 34. Tsiatas, G. C., Siokas, A. G., and Sapountzakis, E. J. (2018). A Layered Boundary Element Nonlinear Analysis of Beams. Front. Built Environ. 4, 52. doi: 10.3389/FBUIL.2018.00052

  • 35. Vo, T. P., and Thai, H.-T. (2012). Static Behavior of Composite Beams Using Various Refined Shear Deformation Theories. Compos. Structures 94, 2513–2522. doi: 10.1016/J.COMPSTRUCT.2012.02.010

  • 36. Wang, C. M., Reddy, J. N., and Lee, K. H. (2000). Shear Deformable Beams and Plates: Relationships with Classical Solutions. Elsevier.

  • 37. Zou, H., Hastie, T., and Tibshirani, R. (2007). On the "Degrees of Freedom" of the Lasso. Ann. Stat. 35, 2173–2192. doi: 10.1214/009053607000000127

Summary

Keywords

machine learning, regression models, composite beams, orthotropic material model, higher-order beam theories

Citation

Tsiatas GC, Kotsiantis S and Charalampakis AE (2022) Predicting the Response of Laminated Composite Beams: A Comparison of Machine Learning Algorithms. Front. Built Environ. 8:855112. doi: 10.3389/fbuil.2022.855112

Received

14 January 2022

Accepted

31 January 2022

Published

21 February 2022

Volume

8 - 2022

Edited by

Makoto Ohsaki, Kyoto University, Japan

Reviewed by

Ömer Civalek, Akdeniz University, Turkey

Ahmad N. Tarawneh, Hashemite University, Jordan


Copyright

*Correspondence: George C. Tsiatas,

This article was submitted to Computational Methods in Structural Engineering, a section of the journal Frontiers in Built Environment

