
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 30 August 2021
Sec. Mathematics of Computation and Data Science
This article is part of the Research Topic 2021 Editor’s Pick: Applied Mathematics and Statistics.

A Function Approximation Approach to the Prediction of Blood Glucose Levels

H. N. Mhaskar1*, S. V. Pereverzyev2 and M. D. van der Walt3

  • 1Institute of Mathematical Sciences, Claremont Graduate University, Claremont, CA, United States
  • 2The Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Linz, Austria
  • 3Department of Mathematics and Computer Science, Westmont College, Santa Barbara, CA, United States

The problem of real time prediction of blood glucose (BG) levels based on the readings from a continuous glucose monitoring (CGM) device is of great importance in diabetes care, and has therefore attracted a lot of research in recent years, especially based on machine learning. An accurate prediction with a 30, 60, or 90 min prediction horizon has the potential of saving millions of dollars in emergency care costs. In this paper, we treat the problem as one of function approximation, where the value of the BG level at time $t + h$ (where $h$ is the prediction horizon) is considered to be an unknown function of $d$ readings prior to time $t$. This unknown function may be supported in particular on some unknown submanifold of the $d$-dimensional Euclidean space. While manifold learning is classically done in a semi-supervised setting, where the entire data has to be known in advance, we use recent ideas to achieve an accurate function approximation in a supervised setting; i.e., we construct a model for the target function. We use the state-of-the-art clinically relevant PRED-EGA grid to evaluate our results, and demonstrate that for a real life dataset, our method performs better than a standard deep network, especially in the hypoglycemic and hyperglycemic regimes. One noteworthy aspect of this work is that the training data and test data may come from different distributions.

1 Introduction

Diabetes is a major disease affecting humanity. According to the 2020 National Diabetes Statistics Report [3], more than 34 million people had diabetes in the United States alone in 2018, contributing to nearly 270,000 deaths and costing nearly 327 billion dollars in 2017. There is so far no cure for diabetes, and one of the important ingredients in keeping it in control is to monitor blood glucose levels regularly. Continuous glucose monitoring (CGM) devices are used increasingly for this purpose. These devices typically record blood sugar readings at 5 min intervals, resulting in a time series. This paper aims at predicting the level and rate of change of the sugar levels at a medium range prediction horizon; i.e., looking ahead 30–60 min. A clinically reliable prediction of this nature is extremely useful in conjunction with other communication devices in avoiding unnecessary costs and dangers. For example, hypoglycemia may lead to loss of consciousness, confusion or seizures [9]. However, since the onset of a hypoglycemic period is often silent (without indicating symptoms), it can be very hard for a human to predict in advance. If these predictions are communicated to a hospital by some wearable communication device linked with the CGM device, then a nurse could alert the patient when the sugar levels are expected to reach dangerous levels, so that the patient could drive to the hospital if necessary, avoiding a dangerous situation, not to mention an expensive call for an ambulance.

Naturally, there are many papers dealing with prediction of blood glucose (BG) based not just on CGM recordings but also other factors such as meals, exercise, etc. In particular, machine learning has been used extensively for this purpose. We refer to [9] for a recent review of BG prediction methods based on machine learning, with specific focus on predicting and detecting hypoglycemia.

The fundamental problem of machine learning is the following. We have (training) data of the form $\{(\mathbf{x}_j, f(\mathbf{x}_j) + \epsilon_j)\}$, where, for some integer $d \ge 1$, the $\mathbf{x}_j \in \mathbb{R}^d$ are realizations of a random variable, $f$ is an unknown real valued function, and the $\epsilon_j$'s are realizations of a mean zero random variable. The distributions of both random variables are unknown. The problem is to approximate $f$ with a suitable model, especially at points which are not in the training data. For example, the prediction of a time series $x_0, x_1, \ldots$ with a prediction horizon of $h$ based on $d$ past readings can be (and is in this paper) viewed as a problem of approximating an unknown function $f$ such that $x_{j+h} = f(x_{j-d+1}, \ldots, x_j)$ for each $j$. Thus, $\mathbf{x}_j = (x_{j-d+1}, \ldots, x_j)$ and $y_j = x_{j+h}$. For instance, with $d = 3$ and $h = 2$, the reading $x_7$ is modeled as $f(x_3, x_4, x_5)$.

Such problems have been studied in approximation theory for more than a century. Unfortunately, classical approximation theory techniques do not apply directly in this context, mainly because the data does not necessarily become dense on any known domain such as a cube, sphere, etc., and we cannot control the locations of the points $\mathbf{x}_j$. This is already clear from the example given above: it is impossible to require that the BG readings be at a pre-chosen set of points.

Manifold learning tries to ameliorate this problem by assuming that the data lies on some unknown manifold, and developing techniques to learn various quantities related to the manifold, in particular, the eigen-decomposition of the Laplace-Beltrami operator. Since these objects must be learnt from the data, these techniques are applicable in the semi-supervised setting, in which all the data points $\mathbf{x}_j$ are available in advance. In [8], we have used deep manifold learning to predict BG levels using CGM data. Being in the semi-supervised setting, this approach cannot be used directly for prediction based on data that was not included in the original data set we worked with.

Recently, we discovered [7] a more direct method to approximate functions on unknown manifolds that does not try to learn anything about the manifold itself, and that yields an approximation with theoretically guaranteed errors on the entire manifold, not just at the points in the original data set. The purpose of this paper is to use this new tool for BG prediction based on CGM readings, so that the resulting model can be applied to patients outside the original data set, as well as to readings of patients in the data set that are not included among the training data.

In this context, the numerical accuracy of the prediction alone is not clinically relevant. The Clarke Error-Grid Analysis (C-EGA) [2] is designed to assess whether a prediction is reasonably accurate, erroneous without serious consequences (clinically uncritical), likely to result in unnecessary treatment (overcorrection), or dangerous (either because it fails to identify a rise or drop in BG or because it confuses a rise in BG for a drop or vice versa). The Prediction Error-Grid Analysis (PRED-EGA), on the other hand, categorizes a prediction as accurate, erroneous with benign (not serious) consequences, or erroneous with potentially dangerous consequences in the hypoglycemic (low BG), euglycemic (normal BG), and hyperglycemic (high BG) ranges separately. This stratification is of great importance because the consequences of a prediction error in the hypoglycemic range are very different from those in the euglycemic or hyperglycemic ranges. In addition, the categorization is based not only on reference BG estimates paired with the BG estimates predicted for the same moments (as in C-EGA), but also on reference BG directions and rates of change paired with the corresponding predicted estimates. It is argued in [14] that this is a clinically much more meaningful assessment tool than C-EGA, and it is therefore the assessment tool we use in this paper.

Since different papers on the subject use different assessment criteria, it is impossible for us to compare our results with those in most of these papers. Besides, a major objective of this paper is to illustrate the use and utility of the theory in [7] from the point of view of machine learning. So, we will demonstrate the superiority of our results in the hypoglycemic and hyperglycemic regimes over those obtained by a blind training of a deep network using the MATLAB Deep Learning Toolbox. We will also compare our results with those obtained by the techniques used in [6, 8, 10]. However, we note that these methods are not directly comparable to ours. In [10], the authors use the data of only one patient as training data, which leads to a meta-learning model that transfers to other patients, although a little further training is still required for each patient before the model can be applied. In contrast, we require more extensive “training data,” but there is no training in the classical sense, and the application to new patients consists of a simple matrix-vector multiplication. The method in [6] is a linear extrapolation method, where the coefficients are learnt separately for each patient. It does not result in a model that can be trained once and for all and applied to other patients in a simple manner. As pointed out earlier, the method in [8] requires that the entire data set be available even before the method can start to work. So, unlike our method, it cannot be trained on a part of the data set and applied to a completely different data set that was not available at the beginning.

Prior work on this problem is reviewed briefly in Section 2. We explain the problem and the evaluation criterion in Section 3. The methodology and algorithm used in this paper are described in Section 4. The results are discussed in Section 5. The mathematical background behind the methods described in Section 4 is summarized in Appendix A.

2 Prior Work

Given the importance of the problem, many researchers have worked on it in several directions. Below we highlight the main trends relevant to our work.

The majority of machine learning models (30% of those studied in the survey paper [9]) are based on artificial neural networks, such as convolutional neural networks, recurrent neural networks and deep learning techniques. For example, in [17], the authors develop a deep learning model based on a dilated recurrent neural network to achieve 30-min blood glucose prediction. The authors assess the accuracy of their method by calculating the root mean squared error (RMSE) and the mean absolute relative difference (MARD). While they achieve good results, a limitation of the method is its reliance on various input parameters, such as meal intake and insulin dose, which are often recorded manually and might therefore be inaccurate. In [11], on the other hand, a feed-forward neural network is designed with eleven neurons in the input layer (corresponding to variables such as CGM data, the rate of change of glucose levels, meal intake and insulin dosage), and nine neurons with a hyperbolic tangent transfer function in the hidden layer. The network was trained on data from 17 patients, tested on data from 10 other patients for a 75-min prediction, and evaluated using C-EGA. Although good results are achieved in the C-EGA grid, a limitation of the method is again the large amount of additional input information needed to design the model, as described above.

A second class of BG prediction methods employs decision trees. The authors of [13], for example, use random forests to perform a 30-min prediction specifically to detect postprandial hypoglycemia, that is, hypoglycemia occurring after a meal. They use statistical measures such as sensitivity and specificity to evaluate their method. In [16], prediction is again achieved by random forests, incorporating a wide variety of additional input features such as physiological and physical activity parameters. The Mean Absolute Percentage Error (MAPE) is used as the assessment methodology. The authors of [5] use a classification and regression tree (CART) to perform a 15-min prediction using BG data only, again using sensitivity and specificity to test the performance of their method.

A third class of methods employs kernel-based regularization techniques (for example [10], and references therein), where Tikhonov regularization is used to find the best least squares fit to the data, assuming the minimizer belongs to a reproducing kernel Hilbert space (RKHS). Of course, these methods are quite sensitive to the choice of the kernel and regularization parameters. Therefore, the authors in [10] develop methods to choose both the kernel and the regularization parameter adaptively, or through meta-learning (“learning to learn”) approaches. C-EGA and PRED-EGA are used as performance metrics.

Time series techniques are also employed in the literature. In [12], a tenth-order auto-regression (AR) model is developed, where the AR coefficients are determined through a regularized least square method. The model is trained patient-by-patient, typically using the first 30% of the patient’s BG measurements, for a 30-min or 60-min prediction. The method is tested on a time series containing glucose values measured every minute, and evaluation is again done through the C-EGA grid. The authors in [15] develop a first-order AR model, patient-by-patient, with time-varying AR coefficients determined through weighted least squares. Their method is tested on a time series containing glucose values measured every 3 min, and quantified using statistical metrics such as RMSE. As noted in [10], these methods seem to be sensitive to gaps in the input data.

A summary of the methods described above is given in Table 1.


TABLE 1. A summary of related work on BG prediction.

It should be noted that the majority of BG prediction models (63.64% of those analyzed in [9]), including many of the references described above, depend on additional input features (such as meal intake, insulin dosage and physical activity) beyond historical BG data. Since such data can be inaccurate and hard to obtain, we have intentionally made the design decision to base our method on historical BG data only.

3 The Set-up

We use two different clinical data sets provided by the DirecNet Central Laboratory [1], which list BG levels of different patients taken at 5-min intervals with a CGM device; i.e., for each patient $p$ in the patient set $P$, we are given a time series $\{s_p(t_j)\}$, where $s_p(t_j)$ denotes the BG level at time $t_j$. The relevant details of the two data sets are described in Table 2.


TABLE 2. Summary statistics for datasets D and J.

Our goal is to predict, for each $j$, the level $s_p(t_{j+h})$, given the readings $s_p(t_j), \ldots, s_p(t_{j-d+1})$, for appropriate values of $h$ and $d$. We took $d = 7$ (a sampling horizon $t_j - t_{j-d+1}$ of 30 min has been suggested as the optimal one for BG prediction in [4, 6]). We tested prediction horizons of 30 min ($h = 6$; the most common prediction horizon according to the analysis in [9]), 60 min ($h = 12$) and 90 min ($h = 18$).

4 Methodology in the Current Paper

4.1 Data Reformulation

For each patient $p$ in the set $P$ of patients in the data sets, the data is in the form of a time series $\{s_p(t_j)\}_{j=1}^{N_p}$ of BG levels at times $t_j$, where $t_j - t_{j-1} = 5$ minutes. We re-organize the data in the form

$$\mathcal{P}^* = \Big\{ \big( (s_p(t_{j-d+1}), \ldots, s_p(t_j)),\ s_p(t_{j+h}) \big) : j = d, \ldots, N_p - h,\ p \in P \Big\}.$$

For any patient pP, we will abbreviate

$$\mathbf{x}_{p,j} = (s_p(t_{j-d+1}), \ldots, s_p(t_j)), \qquad y_{p,j} = s_p(t_{j+h}),$$

and write

$$\mathcal{P} := \big\{ \mathbf{x}_{p,j} = (s_p(t_{j-d+1}), \ldots, s_p(t_j)) : j = d, \ldots, N_p - h,\ p \in P \big\}.$$

The problem of BG prediction is then seen as the classical problem of machine learning; i.e., to find a functional relationship $f$ such that $f(\mathbf{x}_{p,j}) \approx y_{p,j}$ for all $p$ and $j = d, \ldots, N_p - h$.
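To make the reformulation concrete, here is a minimal Python sketch (our illustration, not code from the paper; the function name and the use of NumPy are our own choices) that builds these pairs from a single patient's series:

```python
import numpy as np

def reformulate(series, d=7, h=6):
    """Build the pairs (x_j, y_j) from one patient's CGM time series:
    x_j = (s(t_{j-d+1}), ..., s(t_j)) and y_j = s(t_{j+h}).
    Defaults: d = 7 readings (30 min of history), h = 6 steps (30 min ahead)."""
    s = np.asarray(series, dtype=float)
    X, y = [], []
    for j in range(d - 1, len(s) - h):   # j indexes the latest reading in x_j
        X.append(s[j - d + 1 : j + 1])   # d consecutive readings ending at t_j
        y.append(s[j + h])               # target: the reading h steps later
    return np.array(X), np.array(y)
```

For a series of $N_p$ readings this yields $N_p - h - d + 1$ pairs, matching the index range $j = d, \ldots, N_p - h$ above.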

4.2 Training Data Selection

To obtain the training data, we form the training patient set $C$ by randomly selecting (according to a uniform probability distribution) $c\%$ of the patients in $P$. The training data then consist of all the data of each patient in $C$; that is,

$$\mathcal{C}^* := \Big\{ (\mathbf{x}_{p,j}, y_{p,j}) = \big( (s_p(t_{j-d+1}), \ldots, s_p(t_j)),\ s_p(t_{j+h}) \big) : j = d, \ldots, N_p - h,\ p \in C \Big\}.$$

Since we define the training and test data in terms of patients, placing the entire data of a patient in the training (respectively, test) data, we may now simplify the notation by writing $(\mathbf{x}_j, y_j)$ rather than $(\mathbf{x}_{p,j}, y_{p,j})$, with the understanding that if $\mathbf{x}_j$ represents a data point for a patient $p$, then $y_j$ is the corresponding value for the same patient $p$. For future reference, we also define

$$\mathcal{C} := \big\{ \mathbf{x}_j = (s_p(t_{j-d+1}), \ldots, s_p(t_j)) : p \in C \big\}$$

and the test patient set $Q = P \setminus C$, with

$$\mathcal{Q} := \mathcal{P} \setminus \mathcal{C} = \big\{ \mathbf{x}_j = (s_p(t_{j-d+1}), \ldots, s_p(t_j)) : p \in Q \big\}.$$
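The patient-level split can be sketched as follows (again a minimal Python sketch of our own; the paper specifies only that $c\%$ of the patients are drawn uniformly at random):

```python
import random

def split_patients(patient_ids, c=50, seed=None):
    """Draw c% of the patients, uniformly at random, as the training set C;
    the remaining patients form the test set Q. Splitting by patient (rather
    than by sample) guarantees that no patient's data straddles both sets."""
    rng = random.Random(seed)
    ids = sorted(patient_ids)            # fix an order for reproducibility
    n_train = round(len(ids) * c / 100)
    train = set(rng.sample(ids, n_train))
    return train, set(ids) - train
```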

4.3 BG Range Classification

To get accurate results in each BG range, we want to be able to adapt the training data and parameters used for predictions in each range; as noted previously, the consequences of prediction errors in the separate BG ranges are very different. To this end, we divide the measurements in $\mathcal{C}$ into three clusters $\mathcal{C}_o$, $\mathcal{C}_e$ and $\mathcal{C}_r$, based on $y_j$, the BG value at time $t_{j+h}$, for each $\mathbf{x}_j$:

$$\mathcal{C}_o = \{ \mathbf{x}_j \in \mathcal{C} : 0 \le y_j \le 70 \} \quad \text{(hypoglycemia)};$$
$$\mathcal{C}_e = \{ \mathbf{x}_j \in \mathcal{C} : 70 < y_j \le 180 \} \quad \text{(euglycemia)};$$
$$\mathcal{C}_r = \{ \mathbf{x}_j \in \mathcal{C} : 180 < y_j \le 450 \} \quad \text{(hyperglycemia)},$$

with

$$\mathcal{C}^*_\ell = \{ (\mathbf{x}_j, y_j) : \mathbf{x}_j \in \mathcal{C}_\ell \}, \qquad \ell \in \{o, e, r\}.$$

In addition, we also classify each $\mathbf{x}_j \in \mathcal{Q}$ as follows:

$$\mathcal{Q}_o = \{ \mathbf{x}_j \in \mathcal{Q} : 0 \le x_{j,d} \le 70 \} \quad \text{(hypoglycemia)};$$
$$\mathcal{Q}_e = \{ \mathbf{x}_j \in \mathcal{Q} : 70 < x_{j,d} \le 180 \} \quad \text{(euglycemia)};$$
$$\mathcal{Q}_r = \{ \mathbf{x}_j \in \mathcal{Q} : 180 < x_{j,d} \le 450 \} \quad \text{(hyperglycemia)}.$$

Ideally, the classification of each $\mathbf{x}_j \in \mathcal{Q}$ should also be done based on the value of $y_j$ (as was done for the classification of the training data $\mathcal{C}$). However, since the values $y_j$ are only available for the training data and not for the test data, we use the next best available measurement for each $\mathbf{x}_j$, namely $x_{j,d}$, the BG value at time $t_j$.
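In code, the two classification rules differ only in which value is thresholded: the target $y_j$ for training windows, the latest reading $x_{j,d}$ for test windows (a sketch with our own names; thresholds in mg/dL as in the definitions above):

```python
def bg_range(value):
    """Map a BG value (mg/dL) to its range: hypoglycemic (<= 70),
    euglycemic ((70, 180]) or hyperglycemic ((180, 450])."""
    if value <= 70:
        return "hypo"
    return "eu" if value <= 180 else "hyper"

def classify_training(X, y):
    """Cluster training windows by the range of the target y_j (BG at t_{j+h})."""
    groups = {"hypo": [], "eu": [], "hyper": []}
    for xj, yj in zip(X, y):
        groups[bg_range(yj)].append((xj, yj))
    return groups

def classify_test(X):
    """Cluster test windows by the last available reading x_{j,d} (BG at t_j)."""
    groups = {"hypo": [], "eu": [], "hyper": []}
    for xj in X:
        groups[bg_range(xj[-1])].append(xj)
    return groups
```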

4.4 Prediction

Before making the BG predictions, it is necessary to scale the input readings so that the components of each $\mathbf{x}_j \in \mathcal{P}$ lie in $[-1/2, 1/2]$. This is done through the transformation

$$\mathbf{x}_j \mapsto \frac{2\mathbf{x}_j - (M + m)}{2(M - m)},$$

where $M := \max\{ x_{j,k} : \mathbf{x}_j \in \mathcal{C} \}$ and $m := \min\{ x_{j,k} : \mathbf{x}_j \in \mathcal{C} \}$.

With $\widehat{F}_{n,\alpha} = \widehat{F}_{n,\alpha}(\mathcal{Y}, \mathbf{x}_j)$ defined as in Appendix A, used with training data $\mathcal{Y}$ and evaluated at the point $\mathbf{x}_j$, we are finally ready to compute the BG prediction for each $\mathbf{x}_j \in \mathcal{Q}$ by

$$f(\mathbf{x}_j) := \begin{cases} \widehat{F}_{n_o,\alpha_o}(\mathcal{C}^*_o, \mathbf{x}_j), & \mathbf{x}_j \in \mathcal{Q}_o, \\ \widehat{F}_{n_e,\alpha_e}(\mathcal{C}^*_e, \mathbf{x}_j), & \mathbf{x}_j \in \mathcal{Q}_e, \\ \widehat{F}_{n_r,\alpha_r}(\mathcal{C}^*_r, \mathbf{x}_j), & \mathbf{x}_j \in \mathcal{Q}_r. \end{cases}$$

The construction of the estimator $\widehat{F}_{n,\alpha}$ involves several technical details regarding the classical Hermite polynomials. For ease of reading, the details of this construction are deferred to Appendix A.
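The scaling and the range-wise dispatch can be sketched together as follows (our illustration, reusing bg_range from the sketch above; estimators is a hypothetical dictionary holding the three fitted maps $\widehat{F}_{n_\ell,\alpha_\ell}(\mathcal{C}^*_\ell, \cdot)$, $\ell \in \{o, e, r\}$):

```python
def make_scaler(train_X):
    """Affine map x -> (2x - (M + m)) / (2(M - m)) sending every training
    component into [-1/2, 1/2]; M and m come from the training set only."""
    M, m = float(train_X.max()), float(train_X.min())
    return lambda x: (2.0 * x - (M + m)) / (2.0 * (M - m))

def predict(xj, scale, estimators):
    """Route a test window to the estimator built from the training cluster
    matching its BG range, judged from the latest reading x_{j,d}."""
    return estimators[bg_range(xj[-1])](scale(xj))
```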

4.5 Evaluation

To evaluate the performance of the final output $f(\mathbf{x}_j)$, $\mathbf{x}_j \in \mathcal{Q}$, we use the PRED-EGA described in Section 1. Specifically, the PRED-EGA grid is constructed by comparing $f(\mathbf{x}_j)$ with the reference value $y_j$. This involves comparing

$f(\mathbf{x}_j)$ with $y_j$,

as well as the rates of change

$$\frac{f(\mathbf{x}_{j+1}) - f(\mathbf{x}_{j-1})}{2\,(t_{j+1} - t_{j-1})} \quad \text{with} \quad \frac{y_{j+1} - y_{j-1}}{2\,(t_{j+1} - t_{j-1})},$$

for all $\mathbf{x}_j \in \mathcal{Q}$. Based on these comparisons, PRED-EGA classifies $f(\mathbf{x}_j)$ as Accurate, Benign or Erroneous.
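The rate-of-change comparison amounts to a central difference over the neighboring time points, applied identically to the predicted and the reference sequences (a sketch of ours, matching the formula above; NumPy imported as np as before):

```python
def rates_of_change(values, times):
    """Rates of change used in the PRED-EGA comparison: the difference of the
    two neighboring values divided by 2(t_{j+1} - t_{j-1})."""
    v = np.asarray(values, dtype=float)
    t = np.asarray(times, dtype=float)
    return (v[2:] - v[:-2]) / (2.0 * (t[2:] - t[:-2]))
```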

We repeat the entire process described in Subsections 4.2–4.5 for a fixed number of trials, after which we report the average of the PRED-EGA grid placements, over all $\mathbf{x}_j \in \mathcal{Q}$ and over all trials, as the final evaluation.

A summary of the method is given in Algorithm 1.

ALGORITHM 1. Summary of the method described in Subsections 4.1–4.5.

5 Results and Discussion

As mentioned in Section 3, we apply our method to data provided by the DirecNet Central Laboratory. We consider the time series of the 25 patients in data set D and the 25 patients in data set J that contain the largest number of BG measurements. These specific data sets were obtained to study the performance of CGM devices in children with Type 1 diabetes; as such, all of the patients are less than 18 years old. Summary statistics of the two data sets are provided in Table 2. Our method is a general purpose algorithm, and these details do not play any significant role, except in affecting the outcome of the experiments.

We provide results obtained by implementing our method in MATLAB, as described in Algorithm 1. For our implementation, we employ a sampling horizon $t_j - t_{j-d+1}$ of 30 min ($d = 7$), 50% training data ($c = 50$; comparable to approaches followed in, for example, [12]) and a total of 100 trials ($T = 100$). For all the experiments, the function approximation parameters $n_o$, $n_e$ and $n_r$ referenced in Algorithm 1 were chosen in the range $\{3, \ldots, 7\}$, with $\alpha_o = \alpha_e = \alpha_r = 1$. We provide results for prediction horizons $t_{j+h} - t_j$ of 30 min ($h = 6$), 60 min ($h = 12$) and 90 min ($h = 18$) (again comparable to prediction horizons in, for example, [12, 13, 17]). After testing our method on all 25 patients in each data set separately, the average PRED-EGA scores (in percent) are displayed in Tables 3, 4.


TABLE 3. Average PRED-EGA scores (in percent) for different prediction horizons on dataset D.


TABLE 4. Average PRED-EGA scores (in percent) for different prediction horizons on dataset J.

For comparison, Tables 3, 4 also display predictions on the same data sets using four other methods:

i) MATLAB’s built-in Deep Learning Toolbox. We used a network architecture consisting of three fully connected, convolutional and output layers, together with appropriate input, normalization, activation and averaging layers. As before, we used 50% training data and averaged the experiment over 100 trials, for prediction windows of 30, 60 and 90 min on data sets D and J.

ii) The diffusion geometry approach, based on the eigen-decomposition of the Laplace-Beltrami operator, that we followed in our 2017 paper [8]. As mentioned in Section 1, this approach can be classified as semi-supervised learning, where all the data points $\mathbf{x}_j$ must be available in advance. Because of this requirement, the approach cannot be used for prediction based on data that was not included in the original data set; indeed, this is one of the main motivations for our current method, which is not subject to this requirement. Nevertheless, we include a comparison of the diffusion geometry approach with our current method for greater justification of the accuracy of the latter. Again, we used 50% training data and averaged the experiment over 100 trials, for prediction windows of 30, 60 and 90 min on data sets D and J.

iii) The Legendre polynomial prediction method followed in our 2013 paper [6]. In this context, the main mathematical problem can be summarized as that of estimating the derivative of a function at the end point of an interval, based on measurements of the function in the past. An important difference between our current method and the Legendre polynomial approach is that with the latter, learning is based on the data for each patient separately (each patient forms a data set in itself, as explained in Section 1). The method is therefore philosophically different from ours. We nevertheless include a comparison for prediction windows of 30, 60 and 90 min on data sets D and J.

iv) The Fully Adaptive Regularized Learning (FARL) approach in the 2012 paper [10], which uses meta-learning to choose the kernel and regularization parameters adaptively (briefly described in Section 2). As explained in [10], a strong suit of FARL is that it is expected to be portable from patient to patient without any readjustment. Therefore, the results reported below were obtained by implementing the code and parameters used in [10] directly on data sets D and J for prediction windows of 30, 60 and 90 min, with no adjustments to the kernel and regularization parameters. As explained earlier, this method is also not directly comparable to our theoretically well founded method.

The percentage of accurate predictions and predictions with benign consequences in all three BG ranges, as obtained using all five methods for 30, 60 and 90 min prediction windows, is displayed visually in Figures 1, 2.


FIGURE 1. Percentage accurate predictions and predictions with benign consequences in all three BG ranges (blue: hypoglycemia; orange: euglycemia; grey: hyperglycemia) for dataset D, for all five methods for different prediction windows (30, 60, and 90 min).


FIGURE 2. Percentage accurate predictions and predictions with benign consequences in all three BG ranges (blue: hypoglycemia; orange: euglycemia; grey: hyperglycemia) for dataset J, for all five methods for different prediction windows (30, 60, and 90 min).

By comparing the percentages in Tables 3, 4 and the bar heights in Figures 1, 2, it is clear that our method far outperforms the competitors in the hypoglycemic range, except for the 30 min prediction on data set D, where the diffusion geometry approach is slightly more accurate (but it is important to remember that the diffusion geometry approach has all the data points available from the outset). The starkest difference is between our method and MATLAB’s Deep Learning Toolbox: the latter achieves less than 1.5% accurate and benign consequence predictions regardless of the size of the prediction window or data set. There is also a marked difference between our method and the Legendre polynomial approach, especially for longer prediction horizons. Our method achieves comparable or higher accuracy in the hyperglycemic range, except for the 60 and 90 min predictions on data set J, where the diffusion geometry and FARL approaches achieve more accurate predictions. The methods display comparable accuracy in the euglycemic range, except for the 60 and 90 min predictions on data set J, where MATLAB’s toolbox, the diffusion geometry approach and the FARL method achieve higher accuracy.

Tables 5, 6 display the results when testing those methods that are dependent on training data selection (that is, our current method, the Deep Learning Toolbox and the diffusion geometry approach in our 2017 paper) on different training set sizes (namely, 30, 50, 70 and 90%). For these experiments, we used a fixed prediction window of 30 min, and training data was drawn randomly according to a uniform probability distribution in each case. While the Deep Learning Toolbox and diffusion geometry approach sometimes perform slightly better than our method in the euglycemic and hypoglycemic ranges, respectively, our method consistently outperforms both of these in the other two blood glucose ranges. The results are displayed visually as well in Figures 3, 4.


TABLE 5. Percentage accurate predictions for different training set sizes and a 30 min prediction horizon on dataset D.


TABLE 6. Percentage accurate predictions for different training set sizes and a 30 min prediction horizon on dataset J.


FIGURE 3. Percentage accurate predictions and predictions with benign consequences in all three BG ranges (blue: hypoglycemia; orange: euglycemia; grey: hyperglycemia) with a 30 min prediction window on dataset D, for different training set sizes (30, 50, 70 and 90%).


FIGURE 4. Percentage accurate predictions and predictions with benign consequences in all three BG ranges (blue: hypoglycemia; orange: euglycemia; grey: hyperglycemia) with a 30 min prediction window on dataset J, for different training set sizes (30, 50, 70 and 90%).

As mentioned in Section 1, a feature of our method is that it does not require any information about the data manifold other than its dimension. So, in principle, one can “train” on one data set and apply the resulting model to another data set taken from a different manifold. We illustrate this by constructing our model based on one of the data sets D or J and testing it on the other. We also compare the accuracy of our method in this setting with that of MATLAB’s Deep Learning Toolbox (the only other method considered here whose philosophy is directly comparable to ours) when trained on data set D and applied to data set J, and vice versa. These results are reported in Tables 7, 8. It is clear that we are successful in our goal of training on one data set and predicting on a different data set. The percentages of accurate predictions and predictions with benign consequences when training and testing on different data sets are comparable to those obtained when training and testing on the same data set; in fact, we obtain even better results in the hypoglycemic BG range when training and testing across different data sets.


TABLE 7. Average PRED-EGA scores (in percent) for different prediction horizons on dataset D, with training data selected from dataset J.


TABLE 8. Average PRED-EGA scores (in percent) for different prediction horizons on dataset J, with training data selected from dataset D.

Figures 5, 6 display box plots for the 100 trials for the methods dependent on training data selection (that is, our current method, the Deep Learning Toolbox and the 2017 diffusion geometry approach) and prediction windows using data sets D and J, respectively. Our method performs fairly consistently over different trials.


FIGURE 5. Boxplot for the 100 experiments conducted with 50% training data for each prediction method (O = our method, M = MATLAB Deep Learning Toolbox, D = diffusion geometry approach) with a 30 min (top), 60 min (middle) and 90 min (bottom) prediction horizon on dataset D. Each of the graphs shows the percentage of accurate predictions in the hypoglycemic range (left), euglycemic range (middle) and hyperglycemic range (right).


FIGURE 6. Boxplot for the 100 experiments conducted with 50% training data for each prediction method (O = our method, M = MATLAB Deep Learning Toolbox, D = diffusion geometry approach) with a 30 min (top), 60 min (middle) and 90 min (bottom) prediction horizon on dataset J. Each of the graphs shows the percentage of accurate predictions in the hypoglycemic range (left), euglycemic range (middle) and hyperglycemic range (right).

6 Conclusion

In this paper, we demonstrate a direct method to approximate functions on unknown manifolds [7] in the context of BG prediction for diabetes management. The results are evaluated using the state-of-the-art PRED-EGA grid [14]. Unlike classical manifold learning, our method yields a model for BG prediction on data not seen during training, even for test data drawn from a different distribution than the training data. Our method outperforms other methods in the literature in 30, 60 and 90 min predictions in the hypoglycemic and hyperglycemic BG ranges, and is especially useful for BG prediction for patients who are not included in the training set.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: http://direcnet.jaeb.org/Studies.aspx.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

The research of HM is supported in part by NSF grant DMS 2012355 and ARO grant W911NF2110218. The research by SP has been partly supported by BMK, BMDW, and the Province of Upper Austria in the frame of the COMET Programme managed by FFG in the COMET Module S3AI.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors would like to thank Professor Sampath Sivananthan at the Indian Institute of Technology, Delhi, India for providing us with the code used in [10] for the FARL approach.

References

1.DirecNet Central Laboratory (2005). Available at: http://direcnet.jaeb.org/Studies.aspx (Accessed January 1, 2021).

2. Clarke, WL, Cox, D, Gonder-Frederick, LA, Carter, W, and Pohl, SL. Evaluating Clinical Accuracy of Systems for Self-Monitoring of Blood Glucose. Diabetes Care (1987) 10(5):622–8. doi:10.2337/diacare.10.5.622


3. Centers for Disease Control and Prevention. National Diabetes Statistics Report, 2020. Atlanta, GA: Centers for Disease Control and Prevention, US Department of Health and Human Services (2020). p. 12–15.

4. Hayes, AC, Mastrototaro, JJ, Moberg, SB, Mueller, JC, Clark, HB, Tolle, MCV, et al. Algorithm Sensor Augmented Bolus Estimator for Semi-closed Loop Infusion System. US Patent 7,547,281 (2009). June 16. Available at: https://patents.google.com/patent/US20060173406A1/en


5. Jung, M, Lee, Y-B, Jin, S-M, and Park, S-M. Prediction of Daytime Hypoglycemic Events Using Continuous Glucose Monitoring Data and Classification Technique (2017). arXiv preprint arXiv:1704.08769.


6. Mhaskar, HN, Naumova, V, and Pereverzyev, SV. Filtered Legendre Expansion Method for Numerical Differentiation at the Boundary Point with Application to Blood Glucose Predictions. Appl Math Comput (2013) 224:835–47. doi:10.1016/j.amc.2013.09.015


7. Mhaskar, HN. Deep Gaussian Networks for Function Approximation on Data Defined Manifolds (2019). arXiv preprint arXiv:1908.00156.

8. Mhaskar, HN, Pereverzyev, SV, and van der Walt, MD. A Deep Learning Approach to Diabetic Blood Glucose Prediction. Front Appl Math Stat (2017) 3:14. doi:10.3389/fams.2017.00014


9. Mujahid, O, Contreras, I, and Vehi, J. Machine Learning Techniques for Hypoglycemia Prediction: Trends and Challenges. Sensors (2021) 21(2):546. doi:10.3390/s21020546


10. Naumova, V, Pereverzyev, SV, and Sivananthan, S. A Meta-Learning Approach to the Regularized Learning - Case Study: Blood Glucose Prediction. Neural Networks (2012) 33:181–93. doi:10.1016/j.neunet.2012.05.004


11. Pappada, SM, Cameron, BD, Rosman, PM, Bourey, RE, Papadimos, TJ, Olorunto, W, et al. Neural Network-Based Real-Time Prediction of Glucose in Patients with Insulin-dependent Diabetes. Diabetes Tech Ther (2011) 13(2):135–41. doi:10.1089/dia.2010.0104


12. Reifman, J, Rajaraman, S, Gribok, A, and Ward, WK. Predictive Monitoring for Improved Management of Glucose Levels. J Diabetes Sci Technol (2007) 1(4):478–86. doi:10.1177/193229680700100405


13. Seo, W, Lee, YB, Lee, S, Jin, SM, and Park, SM. A Machine-Learning Approach to Predict Postprandial Hypoglycemia. BMC Med Inform Decis Mak (2019) 19(1):210–3. doi:10.1186/s12911-019-0943-4


14. Sivananthan, S, Naumova, V, Man, CD, Facchinetti, A, Renard, E, Cobelli, C, et al. Assessment of Blood Glucose Predictors: the Prediction-Error Grid Analysis. Diabetes Tech Ther (2011) 13(8):787–96. doi:10.1089/dia.2011.0033


15. Sparacino, G, Zanderigo, F, Corazza, S, Maran, A, Facchinetti, A, and Cobelli, C. Glucose Concentration Can Be Predicted Ahead in Time from Continuous Glucose Monitoring Sensor Time-Series. IEEE Trans Biomed Eng (2007) 54(5):931–7. doi:10.1109/tbme.2006.889774


16. Vahedi, MR, MacBride, KB, Woo, W, Kim, Y, Fong, C, Padilla, AJ, et al. “Predicting Glucose Levels in Patients with Type 1 Diabetes Based on Physiological and Activity Data,” In: Proceedings of the 8th ACM MobiHoc 2018 Workshop on Pervasive Wireless Healthcare Workshop (2018). p. 1–5. doi:10.1145/3220127.3220133


17. Zhu, T, Li, K, Chen, J, Herrero, P, and Georgiou, P. Dilated Recurrent Neural Networks for Glucose Forecasting in Type 1 Diabetes. J Healthc Inform Res (2020) 4(3):308–24. doi:10.1007/s41666-020-00068-2


Appendix

A Theoretical aspects

In this section, we describe the theoretical background behind the estimator $\widehat{F}_{n,\alpha}$ used in this paper to predict the BG level. We focus only on the essential definitions, referring to [7] for details.

Let $d \ge q \ge 1$ be integers, and let $\mathbb{X}$ be a $q$-dimensional, compact, connected submanifold of $\mathbb{R}^d$ (without boundary), with geodesic distance $\rho$ and volume measure $\mu^*$, normalized so that $\mu^*(\mathbb{X}) = 1$. The operator $\widehat{F}_{n,\alpha}$ does not require any knowledge of the manifold other than $q$. Our construction is based on the classical Hermite polynomials, which are best introduced by the recurrence relations (A.1). The orthonormalized Hermite polynomial $h_k$ of degree $k$ is defined recursively by

$$h_k(x) := \sqrt{\frac{2}{k}}\, x\, h_{k-1}(x) - \sqrt{\frac{k-1}{k}}\, h_{k-2}(x), \quad k = 2, 3, \ldots, \tag{A.1}$$

$$h_0(x) := \pi^{-1/4}, \qquad h_1(x) := \sqrt{2}\, \pi^{-1/4}\, x.$$

We write

$$\psi_k(x) := h_k(x)\, \exp(-x^2/2), \qquad x \in \mathbb{R},\ k \in \mathbb{Z}_+.$$

The functions $\{\psi_k\}_{k=0}^{\infty}$ form an orthonormal set with respect to the Lebesgue measure:

$$\int_{\mathbb{R}} \psi_k(x)\, \psi_j(x)\, dx = \begin{cases} 1, & \text{if } k = j, \\ 0, & \text{otherwise.} \end{cases}$$
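Numerically, one evaluates the functions $\psi_k$ directly rather than the polynomials $h_k$; multiplying (A.1) by $\exp(-x^2/2)$ shows that the $\psi_k$ satisfy the identical recurrence, which is numerically stable. A minimal NumPy sketch (ours, not code from the paper):

```python
import numpy as np

def hermite_functions(x, n_max):
    """Evaluate psi_k(x) = h_k(x) exp(-x^2 / 2) for k = 0, ..., n_max,
    using the three-term recurrence (A.1) applied directly to the psi_k."""
    x = np.asarray(x, dtype=float)
    psi = np.empty((n_max + 1,) + x.shape)
    psi[0] = np.pi ** (-0.25) * np.exp(-x * x / 2.0)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for k in range(2, n_max + 1):
        psi[k] = (np.sqrt(2.0 / k) * x * psi[k - 1]
                  - np.sqrt((k - 1.0) / k) * psi[k - 2])
    return psi
```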

In the sequel, we fix an infinitely differentiable function $H : [0, \infty) \to \mathbb{R}$ such that $H(t) = 1$ if $0 \le t \le 1/2$ and $H(t) = 0$ if $t \ge 1$. We define, for $x \in \mathbb{R}$ and $m \in \mathbb{Z}_+$:

$$P_{m,q}(x) := \begin{cases} \pi^{1/4}\, \dfrac{(-1)^m \sqrt{(2m)!}}{2^m\, m!}\, \psi_{2m}(x), & \text{if } q = 1, \\[10pt] \dfrac{1}{\pi^{(2q-1)/4}\, \Gamma((q-1)/2)} \displaystyle\sum_{\ell=0}^{m} \dfrac{(-1)^\ell\, \Gamma((q-1)/2 + m - \ell)}{(m-\ell)!}\, \dfrac{\sqrt{(2\ell)!}}{2^\ell\, \ell!}\, \psi_{2\ell}(x), & \text{if } q \ge 2, \end{cases}$$

and the kernel $\tilde{\Phi}_{n,q}$, for $x \in \mathbb{R}$ and $n \in \mathbb{Z}_+$, by

$$\tilde{\Phi}_{n,q}(x) := \sum_{m=0}^{\lfloor n^2/2 \rfloor} H\!\left(\frac{\sqrt{2m}}{n}\right) P_{m,q}(x).$$

We assume here noisy data of the form $(\mathbf{y}, \epsilon)$, with a joint probability distribution $\tau$, and assume further that the marginal distribution of $\mathbf{y}$ with respect to $\tau$ has the form $d\nu^* = f_0\, d\mu^*$ for some $f_0 \in C(\mathbb{X})$. In place of $f(\mathbf{y})$, we consider a noisy variant $\mathcal{F}(\mathbf{y}, \epsilon)$, and denote

$$f(\mathbf{y}) := \mathbb{E}_\tau\big(\mathcal{F}(\mathbf{y}, \epsilon)\,\big|\,\mathbf{y}\big). \tag{A.2}$$

Remark A.1

In practice, the data may not lie exactly on a manifold, but it is reasonable to assume that it lies in a tubular neighborhood of the manifold. Our notation accommodates this: if $\mathbf{z}$ is a point in a neighborhood of $\mathbb{X}$, we may view it as a perturbation of a point $\mathbf{y} \in \mathbb{X}$, so that the noisy value of the target function is $\mathcal{F}(\mathbf{y}, \epsilon)$, where $\epsilon$ encapsulates the noise in both the $\mathbf{y}$ variable and the value of the target function.

Our approximation process is simple; it is given by

$$\widehat{F}_{n,\alpha}(\mathcal{Y}; \mathbf{x}) := \frac{n^{q(1-\alpha)}}{M} \sum_{j=1}^{M} \mathcal{F}(\mathbf{y}_j, \epsilon_j)\, \tilde{\Phi}_{n,q}\big(n^{1-\alpha}\, |\mathbf{x} - \mathbf{y}_j|_{2,d}\big), \qquad \mathbf{x} \in \mathbb{R}^d, \tag{A.3}$$

where $0 < \alpha \le 1$ and $|\cdot|_{2,d}$ denotes the Euclidean norm on $\mathbb{R}^d$.
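The estimator is thus a single weighted sum over the data, with no iterative training. The following sketch (ours; restricted to the simplest case $q = 1$, and with one admissible choice of the cutoff $H$, any smooth $H$ with the stated properties being equally valid) puts the pieces together, reusing hermite_functions from the sketch above:

```python
import math
import numpy as np
from scipy.special import gammaln

def smooth_cutoff(t):
    """One C-infinity choice of H: 1 on [0, 1/2], 0 on [1, inf),
    with a standard bump-function transition in between."""
    t = float(t)
    if t <= 0.5:
        return 1.0
    if t >= 1.0:
        return 0.0
    s = 2.0 * (t - 0.5)                  # rescale (1/2, 1) onto (0, 1)
    a = math.exp(-1.0 / s)
    b = math.exp(-1.0 / (1.0 - s))
    return b / (a + b)

def kernel(x, n, q=1):
    """The localized kernel Phi_tilde_{n,q}(x) for q = 1, where
    P_{m,1}(x) = pi^{1/4} (-1)^m sqrt((2m)!) / (2^m m!) psi_{2m}(x)."""
    m_max = n * n // 2
    psi = hermite_functions(x, 2 * m_max)
    total = np.zeros_like(np.asarray(x, dtype=float))
    for m in range(m_max + 1):
        h = smooth_cutoff(math.sqrt(2.0 * m) / n)
        if h == 0.0:
            continue
        # sqrt((2m)!) / (2^m m!) computed in log space for numerical stability
        coef = math.pi ** 0.25 * (-1.0) ** m * math.exp(
            0.5 * gammaln(2 * m + 1) - m * math.log(2.0) - gammaln(m + 1))
        total += h * coef * psi[2 * m]
    return total

def f_hat(train_X, train_F, x, n, alpha, q=1):
    """The estimator F_hat_{n,alpha}(Y; x) of (A.3): a weighted sum of the
    noisy values F(y_j, eps_j) against the kernel at the scaled distances."""
    M = len(train_X)
    dists = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    return (n ** (q * (1.0 - alpha)) / M) * np.sum(
        np.asarray(train_F) * kernel(n ** (1.0 - alpha) * dists, n, q))
```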

The theoretical underpinning of our method is described by Theorem A.1 and, in particular, by Corollary A.1, which describe the convergence properties of the estimator. In order to state these results, we need a smoothness class $W_\gamma(\mathbb{X})$, representing the assumptions necessary to guarantee the rate of convergence of our estimator to the target function. It is beyond the scope of this paper to describe the details of this smoothness class; the interested reader will find them in [7]. We note that the definition of the estimator in (A.3) is universal and does not require any assumptions; the assumptions are needed only to guarantee the right rates of convergence.

Theorem A.1

Let $\gamma > 0$ and let $\tau$ be a probability distribution on $\mathbb{X} \times \Omega$ for some sample space $\Omega$, such that the marginal distribution of $\tau$ restricted to $\mathbb{X}$ is absolutely continuous with respect to $\mu^*$ with density $f_0 \in W_\gamma(\mathbb{X})$. We assume that

$$\sup_{\mathbf{x} \in \mathbb{X},\, r > 0} \frac{\mu^*(B(\mathbf{x}, r))}{r^q} \le c.$$

Let $\mathcal{F} : \mathbb{X} \times \Omega \to \mathbb{R}$ be a bounded function, and let $f$, defined by (A.2), be in $W_\gamma(\mathbb{X})$. Let $M \ge 1$ and let $\mathcal{Y} = \{(\mathbf{y}_1, \epsilon_1), \ldots, (\mathbf{y}_M, \epsilon_M)\}$ be a set of random samples chosen i.i.d. from $\tau$. If

$$0 < \alpha < \frac{4}{2 + \gamma}, \qquad \alpha \le 1,$$

then for every $n \ge 1$, $0 < \delta < 1$ and $M \gtrsim n^{q(2 - \alpha) + 2\alpha\gamma} \log(n/\delta)$, we have with $\tau$-probability $\ge 1 - \delta$:

$$\left\| \widehat{F}_{n,\alpha}(\mathcal{Y}) - f f_0 \right\|_{\mathbb{X}} \le c_1\, \frac{\| f_0 \|_{\mathbb{X}}\, \| \mathcal{F} \|_{\mathbb{X} \times \Omega} + \| f f_0 \|_{W_\gamma(\mathbb{X})}}{n^{\alpha\gamma}}.$$

Corollary A.1

With the set-up as in Theorem A.1, let $f_0 \equiv 1$ (i.e., the marginal distribution of $\mathbf{y}$ with respect to $\tau$ is $\mu^*$). Then we have with $\tau$-probability $\ge 1 - \delta$:

$$\left\| \widehat{F}_{n,\alpha}(\mathcal{Y}) - f \right\|_{\mathbb{X}} \le c_1\, \frac{\| \mathcal{F} \|_{\mathbb{X} \times \Omega} + \| f \|_{W_\gamma(\mathbb{X})}}{n^{\alpha\gamma}}.$$

Keywords: Hermite polynomial, continuous glucose monitoring, blood glucose prediction, supervised learning, prediction error-grid analysis

Citation: Mhaskar HN, Pereverzyev SV and van der Walt MD (2021) A Function Approximation Approach to the Prediction of Blood Glucose Levels. Front. Appl. Math. Stat. 7:707884. doi: 10.3389/fams.2021.707884

Received: 10 May 2021; Accepted: 09 July 2021;
Published: 30 August 2021.

Edited by:

Hau-Tieng Wu, Duke University, United States

Reviewed by:

Valeriya Naumova, Simula Research Laboratory, Norway
Xin Guo, Hong Kong Polytechnic University, China

Copyright © 2021 Mhaskar, Pereverzyev and van der Walt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: H. N. Mhaskar, hrushikesh.mhaskar@cgu.edu
