
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 15 January 2020
Sec. Mathematical Finance
This article is part of the Research Topic Long-Memory Models in Mathematical Finance.

On Modeling a Class of Weakly Stationary Processes

  • 1Department of Information and Service Management, Aalto University School of Business, Helsinki, Finland
  • 2Department of Mathematics and Systems Analysis, Aalto University School of Science, Helsinki, Finland

In this article, we show that a general class of weakly stationary time series can be modeled by applying Gaussian subordinated processes. We show that, for any given weakly stationary time series (zt)t∈ℕ with equal one-dimensional marginal distributions, one can always construct a function f and a Gaussian process (Xt)t∈ℕ such that (f(Xt))t∈ℕ has the same marginal distributions and, asymptotically, the same autocovariance function as (zt)t∈ℕ. Consequently, we obtain asymptotic distributions for the mean and autocovariance estimators by using the rich theory of limit theorems for Gaussian subordinated processes. This highlights the role of Gaussian subordinated processes in modeling general weakly stationary time series. We compare our approach to standard linear models, and show that our model is more flexible and requires weaker assumptions.

1. Introduction

Time series models are of great significance in numerous areas of application, e.g., finance, climatology, and signal processing, to name just a few. Central limit theorems play an important role in statistical inference. However, due to dependencies, it is challenging to obtain central limit theorems under general time series models. Moreover, from a practical point of view, obtaining a central limit theorem is not enough. It is also important to study how fast the convergence takes place, i.e., how far one is from the limiting distribution.

A simple generalization of the classical central limit theorem is the central limit theorem for M-dependent sequences of random variables, that is, sequences whose elements are independent whenever their indices are far enough apart. For general time series with an arbitrary dependence structure the problem becomes more subtle: it may happen that the limiting distribution is not Gaussian and/or that one has to use a different scaling than the classical √T, where T is the sample size. Thus, a natural approach to the problem is to study limiting distributions of properly scaled averages of stationary processes with a given autocovariance structure. What happens in the limit is dictated by the dependence structure of the time series. If the dependence is weak enough, then a central limit theorem is obtained. See the recent book [1] for a comprehensive introduction to the topic and [2] for a functional central limit theorem. Another option is to impose mixing conditions. Limit theorems for strong mixing processes are studied, e.g., in [3–5]. However, specific mixing conditions are often difficult to verify.

If we consider stationary time series models, two general classes, linear processes and Gaussian subordinated processes1, are applied extensively in different fields. The class of univariate linear processes consists of stationary processes (zt)t∈ℕ of the form

\[ z_t = \sum_{j=-\infty}^{\infty} \phi_j \xi_{t-j}, \]

where the coefficients ϕj satisfy some specific assumptions and (ξj)j∈ℤ is a sequence of independent and identically distributed random variables. For example, this class covers stationary ARMA models with i.i.d. errors. For the theory of such processes together with central limit theorems, we refer to Brockwell and Davis [6], as well as to the more recent articles [7, 8] studying limit theorems of linear processes. Finally, we mention [9], where Berry-Esseen type bounds are derived for linear processes, and [10, 11], where estimation of the mean and the autocovariances is studied in the case of long-memory and heavy-tailed linear processes.

The class of univariate Gaussian subordinated processes consists of stationary processes (zt)t∈ℕ of the form zt = f(Xt), where (Xt)t∈ℕ is a d-variate stationary Gaussian process and f is a given function. It is usually assumed that f(X0) ∈ L². Central limit theorems for such time series date back to Breuer and Major [12], and the topic has been studied extensively. Indeed, for Gaussian subordinated processes, central and non-central limit theorems have been studied at least in Arcones [13], Avram and Taqqu [14], Bai and Taqqu [15, 16], Dobrushin and Major [17], and Giraitis and Surgailis [18]. Motivated by real-life applications, the non-central limit theorems have been studied mostly in the case of long-memory processes. In this case one has to use a stronger normalization, and the limiting distribution is Gaussian only if the so-called Hermite rank of the function f is 1. More generally, in this case, the properly scaled average of zt converges toward a Hermite process of order k, where k is the Hermite rank of f. These central and non-central limit theorems have been considered in statistical applications for long-memory processes at least in Dehling and Taqqu [19] (empirical processes and U-statistics), Dehling et al. [20] (change point tests), Lévy-Leduc et al. [21] (estimation of scale and autocovariances in the Gaussian setup), and Giraitis and Taqqu [22] (Whittle estimator).

In addition to the study of the long-memory case and non-central limit theorems, central limit theorems for Gaussian subordinated stationary processes have re-emerged at the center of the mathematical community's interest. The reason is the observation that Stein's method and Malliavin calculus fit together admirably well, straightforwardly giving new tools to study central limit theorems for Gaussian subordinated processes. For recent developments on the topic, we refer to the articles [23, 24] and to the monograph [25]. Also, a stronger version of the Breuer-Major theorem was proven in Nourdin et al. [26]: in addition to convergence in distribution, the convergence toward a normal random variable holds even in stronger topologies, such as the Kolmogorov or Wasserstein distance. Moreover, the authors also provided Berry-Esseen type bounds in these metrics. Finally, we mention [27], where the result was generalized to cover non-stationary Gaussian fields.

In this article, we consider a general class of weakly stationary time series (zt)t∈ℕ. We study the asymptotic behavior of the traditional mean and autocovariance estimators under the assumption of equal one-dimensional marginal distributions2 of (zt)t∈ℕ. Our main contribution is to show that for any such weakly stationary time series (zt)t∈ℕ with some given autocovariance structure and with some given equal one-dimensional marginal distributions, one can always construct a univariate Gaussian process (Xt)t∈ℕ and a function f such that (f(Xt))t∈ℕ has, asymptotically, the same autocovariance structure and the same one-dimensional marginal distributions as (zt)t∈ℕ. Relying on that, we complement the above mentioned works on limit theorems in the case of Gaussian subordination. There exists a rich literature on the topic, and we propose to model time series directly with (f(Xt))t∈ℕ. In comparison to the above mentioned literature, where the model is assumed to be (f(Xt))t∈ℕ, we start with a given weakly stationary time series with equal one-dimensional marginals, and we construct a function f and a Gaussian process (Xt)t∈ℕ such that (f(Xt))t∈ℕ is a suitable model for (zt)t∈ℕ. We obtain limiting normal distributions for the traditional mean and autocovariance estimators for any time series within our model that has absolutely summable autocovariance function. This corresponds to the case with short memory. In addition, we show that within our model, as desired, the function f does have Hermite rank equal to 1. Indeed, Hermite rank equal to 1 ensures that even in the long-memory case, the limiting distribution is normal. We also show that if the one-dimensional marginal distribution is symmetric, then the corresponding Hermite ranks for variance and autocovariance estimators are (essentially) equal to 2. As such, our model is particularly suitable for modeling long memory, in which case the exact knowledge on the Hermite ranks is crucially important. 
We compare our approach and results to the existing literature, including a comparison to the theory of linear processes, which are covered by our model. Note that our model is not limited to, but covers, e.g., stationary ARMA models. We observe that the assumptions usually posed in the literature for obtaining a limiting normal distribution are clearly stronger than the assumptions we require. For example, in the short memory case our assumption of a summable covariance function is rather intuitive, as well as easily verified, compared with, e.g., complicated assumptions on the coefficients ϕj of linear processes. These results highlight the applicability of Gaussian subordinated processes in modeling weakly stationary time series.

The rest of the article is organized as follows. In section 2 we recall some basic definitions and preliminaries on Gaussian subordination. In section 3 we introduce and discuss our model. Section 4 is devoted to the study of the standard mean, variance, and autocovariance estimators in the framework of our model. In section 5 we give some concluding remarks and compare our approach to the existing literature.

2. Preliminaries

In this section we review some basic definitions and fundamental results that are later applied in section 3. We start by recalling the definition of weak stationarity.

Definition 2.1. Let (zt)t∈ℕ be a stochastic process. Then (zt)t∈ℕ is weakly stationary if for all t, s ∈ ℕ,

1. Ezt = μ < ∞,

2. Ezt² = σ² < ∞, and

3. Cov(zt, zs) = r(t − s) for some function r.

Definition 2.2. We denote g(j) ~ f(j) as j → a ∈ [−∞, ∞], if lim_{j→a} g(j)/f(j) = C for some constant C ∈ (−∞, ∞).

Remark 2.1. Note that sometimes in the literature the notation g ~ f means lim_{j→a} g(j)/f(j) = 1. For our purposes, however, we are only interested in the asymptotics up to a multiplicative constant, and for notational simplicity we allow an arbitrary (finite) constant in the limit lim_{j→a} g(j)/f(j).

Definition 2.3. Let (zt)t∈ℕ be stationary with autocovariance function r.

1. The process z is called short-range dependent, if

\[ \sum_{j=1}^{\infty} |r(j)| < \infty. \]

2. The process z is called long-range dependent, if, as |j| → ∞, we have

\[ r(j) \sim |j|^{2H-2} \tag{2.1} \]

for some H ∈ (1/2, 1).

Remark 2.2. The definition of long-range dependence varies in the literature, and different generally accepted definitions are not equivalent. Here we have adopted the definition given in Samorodnitsky [28] (Equation 5.15). Note also that Samorodnitsky [28] (Equation 5.15) involves an additional slowly varying function L(j) on the asymptotic behavior (2.1) of r(j). For the sake of simplicity of the presentation, we have omitted this factor in our definition. However, it is straightforward to check that all our results remain valid in the general case as well. For alternative definitions of long-range dependence, we refer to Samorodnitsky [28], especially the discussions on page 197.

We now recall Hermite polynomials and the Hermite ranks of functions.

The Hermite polynomials Hk are defined recursively as follows:

\[ H_0(x) = 1, \quad H_1(x) = x, \quad \text{and} \quad H_{k+1}(x) = x H_k(x) - k H_{k-1}(x). \]

The kth Hermite polynomial Hk is clearly a polynomial of degree k. Moreover, it is well-known that Hermite polynomials form an orthogonal basis of the Hilbert space of functions f satisfying

\[ \int_{-\infty}^{\infty} [f(x)]^2 e^{-\frac{x^2}{2}}\, dx < \infty, \]

or equivalently, E[f(X)]2 < ∞, where X ~ N(0, 1). Every f that belongs to that Hilbert space has a Hermite decomposition

\[ f(x) = \sum_{k=0}^{\infty} \alpha_k H_k(x), \tag{2.2} \]

and for jointly Gaussian X ~ N(0, 1), Y ~ N(0, 1), we have that

\[ \mathrm{E}[f(X)f(Y)] = \sum_{k=0}^{\infty} k!\,\alpha_k^2 \left[\mathrm{Cov}(X,Y)\right]^k. \tag{2.3} \]

Definition 2.4 (Hermite rank). Let (Xt)t∈ℕ, Xt = (Xt(1), Xt(2), …, Xt(d)), be a d-dimensional stationary Gaussian process. Let f: ℝd → ℝ be such that f(Xt) ∈ L². The function f is said to have Hermite rank q with respect to Xt, if E[(f(Xt) − Ef(Xt))pm(Xt)] = 0 for all polynomials pm: ℝd → ℝ of degree m ≤ q − 1, and if there exists a polynomial pq of degree q such that E[(f(Xt) − Ef(Xt))pq(Xt)] ≠ 0.

Remark 2.3. Note that the Hermite rank of a function f is the smallest number q ≥ 1 such that αq ≠ 0 in decomposition (2.2).
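As a quick sanity check of the decomposition (2.2) and the covariance identity (2.3), the following sketch (our own illustration, not part of the original text) evaluates both sides of (2.3) by Gauss-Hermite quadrature for a function with Hermite coefficients α1 = α2 = 1:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def hermite(k, x):
    """Probabilists' Hermite polynomial H_k via the recursion
    H_{k+1}(x) = x*H_k(x) - k*H_{k-1}(x)."""
    h_prev, h = np.ones_like(x), x
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h = h, x * h - j * h_prev
    return h

# hermegauss integrates against e^{-x^2/2}; dividing by sqrt(2*pi)
# turns the rule into an expectation with respect to N(0, 1).
nodes, weights = He.hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)

def gauss_expect2(g, rho):
    """E[g(X, Y)] for standard bivariate normal (X, Y) with correlation rho,
    using the representation Y = rho*X + sqrt(1 - rho^2)*Z."""
    X = nodes[:, None]
    Z = nodes[None, :]
    Y = rho * X + np.sqrt(1 - rho**2) * Z
    W = weights[:, None] * weights[None, :]
    return np.sum(W * g(X, Y))

# f = H_1 + H_2, i.e., alpha_1 = alpha_2 = 1 (our hypothetical example).
f = lambda x: hermite(1, x) + hermite(2, x)
rho = 0.6
lhs = gauss_expect2(lambda x, y: f(x) * f(y), rho)
rhs = 1 * rho + 2 * rho**2      # sum_k k! alpha_k^2 rho^k from (2.3)
assert abs(lhs - rhs) < 1e-8
```

Since the integrand is a polynomial, the quadrature evaluates both sides of (2.3) exactly up to rounding.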

Processes of form f(Xt) are called Gaussian subordinated processes, and there exists a rich theory on the statistical inference for subordinated processes. It turns out that the Hermite rank plays a crucial role. This fact is already visible in the following Breuer-Major theorem [12].

Theorem 2.1. [12, Theorem 1] Let (Xt)t∈ℕ, Xt = (Xt(1), Xt(2), …, Xt(d)), be a d-dimensional stationary Gaussian process. Assume that f: ℝd → ℝ satisfies f(Xt) ∈ L² and has Hermite rank q ≥ 1. Denote

\[ r_X^{k,i}(\tau) = \mathrm{E}\left[X_\tau^{(k)} X_0^{(i)}\right]. \]

If

\[ \sum_{\tau=0}^{\infty} \left|r_X^{k,i}(\tau)\right|^q < \infty, \qquad k, i = 1, 2, \ldots, d, \]

then σ² = Var[f(X0)] + 2∑_{t=1}^{∞} Cov[f(X0), f(Xt)] is well-defined and

\[ \frac{1}{\sqrt{T}} \sum_{t=1}^{T} \left[f(X_t) - \mathrm{E}f(X_t)\right] \xrightarrow{\;d\;} N(0, \sigma^2), \]

as T → ∞.

A stronger version of Theorem 2.1 was proven in a recent article [26]. It was shown that the convergence holds even in stronger topologies than convergence in distribution, e.g., in the Wasserstein and Kolmogorov distances. Furthermore, applying Theorem 2.1 of Nourdin et al. [26], it is possible to study the rate of convergence. Obviously, one could apply these results in our setting as well, but for a general function f the bounds are rather complicated; we refer the interested reader to Nourdin et al. [26]. It is also known [29] that, under the additional assumption that f(Xt) ∈ L^{2+ε} for some ε > 0, a functional version of Theorem 2.1 holds, i.e.,

\[ \frac{1}{\sqrt{T}} \sum_{t=1}^{\lfloor nT \rfloor} \left[f(X_t) - \mathrm{E}f(X_t)\right], \qquad n \in [0, 1], \]

converges weakly toward σ times a Brownian motion in the Skorokhod space.
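Theorem 2.1 can be illustrated with a short Monte Carlo sketch (our own illustration; the AR(1) process and the choice f = H2 are assumptions made here, not taken from the text). The sample variance of the normalized sums should be close to the limiting variance σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
a, T, reps = 0.5, 2000, 1000

# For f(x) = H_2(x) = x^2 - 1 (Hermite rank q = 2) and r_X(t) = a^|t|,
# Cov[f(X_0), f(X_t)] = 2 a^(2t) by (2.3), so the limiting variance is
# sigma^2 = Var f(X_0) + 2 sum_{t>=1} Cov[f(X_0), f(X_t)] = 2 + 4a^2/(1 - a^2).
sigma2 = 2 + 4 * a**2 / (1 - a**2)

# Simulate `reps` independent stationary Gaussian AR(1) paths.
X = np.empty((reps, T))
X[:, 0] = rng.standard_normal(reps)          # stationary initial value
innovations = np.sqrt(1 - a**2) * rng.standard_normal((reps, T))
for t in range(1, T):
    X[:, t] = a * X[:, t - 1] + innovations[:, t]

# Normalized sums (1/sqrt(T)) * sum_t [f(X_t) - E f(X_t)].
S = (X**2 - 1).sum(axis=1) / np.sqrt(T)
print(S.var(), sigma2)    # sample variance should be close to sigma^2
```

Note that the summability condition of Theorem 2.1 holds here since ∑_t |a^t|² < ∞.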

The following result provides a generalization into the long memory case, where the summability condition does not hold. For details, we refer to Bai and Taqqu [30] and the references therein.

Theorem 2.2. Assume that f: ℝ → ℝ satisfies f(Xt) ∈ L² and has Hermite rank q ≥ 1, and let X be a stationary Gaussian process such that, as t → ∞,

\[ r_X(t)^q \sim |t|^{2H-2} \]

for some H ∈ (1/2, 1). Then

\[ \frac{1}{T^{H}} \sum_{t=1}^{T} \left[f(X_t) - \mathrm{E}f(X_t)\right] \xrightarrow{\;d\;} Z_q, \]

as T → ∞, where Zq is the so-called Hermite random variable of order q, multiplied by a constant.

Remark 2.4. The normalization in Theorem 2.2 stems from the fact that

\[ \mathrm{Var}\left(\frac{1}{T} \sum_{t=1}^{T} \left[f(X_t) - \mathrm{E}f(X_t)\right]\right) \sim \frac{1}{T} \sum_{t=1}^{T} r_X(t)^q. \]

Remark 2.5. We stress that Z1 is just a normal random variable, and consequently the only difference compared to Theorem 2.1 is the normalization. However, in the corresponding functional version, the limiting Gaussian process is the fractional Brownian motion instead of the standard Brownian motion.

3. On Modeling Weakly Stationary Time Series

Let (zt)t∈ℕ be a given weakly stationary univariate time series with expected value μ = E[zt] and a given autocovariance function rz(τ) = E[zτz0] − μ². Without loss of generality, and in order to simplify the presentation, we assume that μ = 0 and Var(zt) = 1. Assume that the one-dimensional marginals of (zt)t∈ℕ are all equal, i.e., the distribution of zt is the same for all time indices t. The corresponding one-dimensional variable is denoted by z, and its cumulative distribution function is denoted by Fz.

We begin with the following result stating that Gaussian subordinated processes can have arbitrary one-dimensional marginals. The claim is based on inverse sampling, and is rather widely accepted folklore in the Gaussian subordination literature. However, since in many textbooks the claim is stated only in the case of continuous distributions Fz, for the sake of clarity we present the proof. We stress that the proof is standard, and we do not claim originality here.

Proposition 3.1. Let (zt)t∈ℕ be an arbitrary process with equal square integrable one-dimensional marginals Fz. Then there exists a function f and a standardized ℝ-valued Gaussian process (Xt)t∈ℕ such that f(Xt) ∈ L² has the same one-dimensional marginal distributions as the process (zt)t∈ℕ. In particular, f has a Hermite decomposition

\[ f(x) = \sum_{j=0}^{\infty} \alpha_j H_j(x). \]

Proof. For y ∈ (0, 1), denote by

\[ F_z^{-1}(y) = \inf\{x : F_z(x) \geq y\} \]

the quantile function of Fz. It is well-known that if U is a uniformly distributed random variable on [0, 1], then Fz⁻¹(U) is distributed as z. Let Φ denote the distribution function of the standard normal distribution. Then Φ(X) is uniformly distributed, from which it follows that Fz⁻¹(Φ(X)) is distributed as z, and hence we may set f(·) = Fz⁻¹(Φ(·)). Furthermore, since z ∈ L², we also have f(X) ∈ L². From this it also follows that f has a Hermite decomposition. This concludes the proof.
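The inverse-sampling construction of the proof can be sketched in a few lines; the exponential marginal below is our own choice for illustration, not part of the original text:

```python
import numpy as np
from math import erf

# Standard normal distribution function Phi via the error function.
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))

def f(x):
    """f = F_z^{-1} o Phi for the standard exponential marginal,
    whose quantile function is F_z^{-1}(u) = -log(1 - u)."""
    return -np.log1p(-Phi(x))      # log1p for numerical stability near u = 1

rng = np.random.default_rng(42)
X = rng.standard_normal(200_000)   # any standardized Gaussian works here
z = f(X)

# Exponential(1) has mean 1, variance 1, and median log 2.
print(z.mean(), z.var(), np.median(z))
```

As Remark 3.1 notes, only the one-dimensional marginals are matched by this map; the joint distributions of (f(Xt))t∈ℕ depend on the covariance of X.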

Remark 3.1. We emphasize that we are only claiming that the one-dimensional distributions of (Fz⁻¹(Φ(Xt)))t∈ℕ are equal to the one-dimensional distributions of (zt)t∈ℕ. The multidimensional distributions are not necessarily the same.

Remark 3.2. In general, Fz⁻¹ is only a generalized inverse of Fz. In particular, Fz⁻¹[Fz(y)] = y is not necessarily true, and thus we cannot recover X from the transformation X = Φ⁻¹(Fz(z)). On the other hand, if Fz is continuous and strictly increasing, then Fz⁻¹ is a proper inverse function and X = Φ⁻¹(Fz(z)).

By Proposition 3.1, for any stationary process z (with equal one-dimensional marginals) one can always choose f such that f(X) has the correct one-dimensional distributions. As the analysis of weakly stationary processes boils down to the analysis of the covariance, one would like to construct a Gaussian process X such that, for a given sequence of coefficients αk, the process

\[ Z_t = \sum_{k=1}^{\infty} \alpha_k H_k(X_t) \]

also has the same covariance structure as (zt)t∈ℕ. As Gaussian processes can have arbitrary covariance structures, this question can be rephrased as whether each covariance function rz has a representation

\[ r_z(\tau) = \sum_{k=1}^{\infty} k!\,\alpha_k^2\, r_X(\tau)^k, \tag{3.1} \]

where rX(τ) is an arbitrary covariance function. Unfortunately, this is not the case, as the following example shows.

Example 3.1. Let Zt = (1/√6)H3(Xt). In order for (3.1) to hold for an arbitrary covariance function, every positive semidefinite matrix RZ would have to admit a representation

\[ R_Z = R_X \circ R_X \circ R_X, \tag{3.2} \]

where RX is positive semidefinite as well and ◦ denotes the Hadamard, i.e., element-wise, product of matrices. This clearly does not hold for general matrices, and it is straightforward to construct counterexamples. For instance,

\[ R_Z = \begin{pmatrix} 1 & b^3 & 0 \\ b^3 & 1 & b^3 \\ 0 & b^3 & 1 \end{pmatrix} \]

with 1/√2 < b < (1/√2)^{1/3} is positive definite (its eigenvalues are 1 and 1 ± √2 b³), and leads to

\[ R_X = \begin{pmatrix} 1 & b & 0 \\ b & 1 & b \\ 0 & b & 1 \end{pmatrix} \]

which is not positive definite, as its smallest eigenvalue 1 − √2 b is negative.
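The counterexample can be verified numerically; the concrete value b = 0.8 below is our own choice of a point in the admissible range:

```python
import numpy as np

# Hadamard-cube counterexample: R_Z = R_X o R_X o R_X (element-wise cube)
# can be positive definite while R_X itself is not.
b = 0.8                               # 1/sqrt(2) < 0.8 < (1/sqrt(2))**(1/3)
R_X = np.array([[1.0, b, 0.0],
                [b, 1.0, b],
                [0.0, b, 1.0]])
R_Z = R_X ** 3                        # element-wise (Hadamard) cube

print(np.linalg.eigvalsh(R_X).min())  # negative: R_X not positive definite
print(np.linalg.eigvalsh(R_Z).min())  # positive: R_Z positive definite
```

The eigenvalues of this tridiagonal matrix are 1 and 1 ± √2·a for off-diagonal entry a, which is where the threshold 1/√2 comes from.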

This example reveals that, given the marginal distribution Fz and the covariance rz of z, it may happen that Fz⁻¹(Φ(Xt)) does not have the same covariance as z. On the other hand, in many applications one is only interested in modeling large-scale behavior such as long-range dependence. Luckily, it turns out that for this purpose Fz⁻¹(Φ(Xt)) provides a good model.

Proposition 3.2. Suppose that (zt)t∈ℕ is a long-range dependent stationary process with equal one-dimensional marginals. Then there exists a Gaussian process (Xt)t∈ℕ and a function f such that the process Zt = f(Xt) has the same one-dimensional marginals and

\[ r_Z(t) \sim r_z(t) \]

as t → ∞.

Proof. Again, we set f(·) = Fz⁻¹(Φ(·)). Then the marginals of Z are given by Fz. Moreover, using (2.3) we obtain

\[ r_Z(\tau) = \sum_{k=q}^{\infty} k!\,\alpha_k^2\, r_X(\tau)^k, \]

where q is the Hermite rank of Fz⁻¹(Φ(x)). Since rz(τ) ~ |τ|^{2H−2}, it remains to take any stationary Gaussian process that satisfies rX(τ) ~ |τ|^{(2H−2)/q}. Indeed, it is clear that then, in view of Definition 2.2, we have

\[ r_Z(\tau) \sim r_X(\tau)^q \]

whenever rX(τ) > 0 and rX(τ) → 0, since then the term k = q dominates the series. Such a Gaussian process clearly exists.

Remark 3.3. We can easily extend the result beyond the long memory case, provided that the decay of rz is of a certain type. Indeed, we always have rZ(τ) ~ rX(τ)^q or, equivalently, [rZ(τ)]^{1/q} ~ rX(τ) as τ → ∞. While Example 3.1 shows that we cannot construct such an X for an arbitrary covariance function rz(τ), we note that our construction is possible for a wide range of covariance functions rz, including short-range dependent processes. For example, this is possible if rz has exponential decay, already covering many interesting short-range dependent examples.

Remark 3.4. It is well-known that, given the asymptotics of the autocovariance rX(τ), the term |rX(τ)|^q determines the asymptotics of the autocovariance of (f(Xt))t∈ℕ [31, p. 223]. We stress that here we do the opposite; given the autocovariance rz(τ), we construct (Xt)t∈ℕ such that (f(Xt))t∈ℕ has the autocovariance function rz(τ) ~ |rX(τ)|^q.

4. On Model Calibration

In this section we suppose that the process (zt)t∈ℕ is given by

\[ z_t = f(X_t). \tag{4.1} \]

In particular, motivated by Proposition 3.1 and Proposition 3.2, we consider the case f(x) = F−1(Φ(x)).

We are interested in the mean and the autocovariance estimators given by

\[ m_z = \frac{1}{T} \sum_{t=1}^{T} z_t, \]

and

\[ \hat{r}_z(\tau) = \frac{1}{T} \sum_{t=1}^{T-\tau} \left[z_t - m_z\right]\left[z_{t+\tau} - m_z\right]. \]

For simplicity, we divide by T instead of T − τ. Consequently, the estimators r̂z(τ) are biased, although the bias vanishes asymptotically. On the other hand, with this choice the sample autocovariance function preserves the desired property of positive semidefiniteness. Obviously, the asymptotic behavior of r̂z(τ) is the same as that of

\[ \tilde{r}_z(\tau) = \frac{1}{T-\tau} \sum_{t=1}^{T-\tau} \left[z_t - m_z\right]\left[z_{t+\tau} - m_z\right]. \]

Finally, for the case μ = 0, a simpler version

\[ \bar{r}_z(\tau) = \frac{1}{T} \sum_{t=1}^{T-\tau} z_t z_{t+\tau} \]

is often used. If one is only interested in the consistency of the estimator, the use of r̄z(τ) is justified by the following simple lemma, which states that the difference between r̂z(τ) and r̄z(τ) is asymptotically negligible.

Lemma 4.1. Assume that mz = mz(T) → μ in probability, as T → ∞. Then

\[ \hat{r}_z(\tau) = \bar{r}_z(\tau) - \left[m_z(T)\right]^2 + O_P(T^{-1}). \]

Proof. We have that

\[ \begin{aligned} \hat{r}_z(\tau) &= \frac{1}{T}\sum_{t=1}^{T-\tau} z_t z_{t+\tau} - m_z\, \frac{1}{T}\sum_{t=1}^{T-\tau} z_t - m_z\, \frac{1}{T}\sum_{t=1}^{T-\tau} z_{t+\tau} + m_z^2\, \frac{T-\tau}{T} \\ &= \bar{r}_z(\tau) - m_z^2 + R_T, \end{aligned} \]

where

\[ R_T = \frac{m_z}{T} \sum_{t=T-\tau+1}^{T} z_t + \frac{m_z}{T} \sum_{t=1}^{\tau} z_t - \frac{\tau}{T}\, m_z^2 \]

for τ ≥ 1, and RT = 0 for τ = 0. Now, since (zt)t∈ℕ has finite second moments, both sums in RT are bounded in probability. Similarly, the last term is OP(T⁻¹), as T → ∞.
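The algebraic identity in the proof can be checked numerically; the i.i.d. Gaussian sample below is our own choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, tau = 10_000, 3
z = rng.standard_normal(T)          # i.i.d. N(0, 1), so mu = 0

m = z.mean()
# The two autocovariance estimators of Section 4 (both divide by T).
r_hat = ((z[:T - tau] - m) * (z[tau:] - m)).sum() / T
r_bar = (z[:T - tau] * z[tau:]).sum() / T

# The exact remainder R_T from the proof of Lemma 4.1.
R_T = (m / T) * z[T - tau:].sum() + (m / T) * z[:tau].sum() - (tau / T) * m**2
assert np.isclose(r_hat, r_bar - m**2 + R_T)
print(r_hat - r_bar)    # of order m^2 + O(1/T): negligible here
```

The identity holds to machine precision, confirming the decomposition r̂ = r̄ − m² + R_T.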

The problem with using r̄z(τ) instead of r̂z(τ) lies in the rate of convergence, which can play a crucial role under long memory. In order to study the rate of convergence (and the possible limiting distributions) of the autocovariance estimators, one needs to study the Hermite rank of g(Xt, Xt+τ) = f(Xt)f(Xt+τ), which, in general, can be larger or smaller than the rank of f. This fact is illustrated by the following simple examples.

Example 4.1. Let f(x) = x. Then f has Hermite rank 1, while [f(x)]² = x² has Hermite rank 2.

Example 4.2. Let f(x) = H2(x). Then f has Hermite rank 2, as does [f(x)]² = x⁴ − 2x² + 1.

Example 4.3. Let f(x) = H3(x) + H2(x). Then f has Hermite rank 2, while [f(x)]² = x⁶ + 2x⁵ − 5x⁴ − 8x³ + 7x² + 6x + 1 has Hermite rank 1.
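The ranks in Examples 4.1–4.3 can be computed numerically from the coefficients αk = E[f(X)Hk(X)]/k!; the following sketch (our own illustration) uses Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

nodes, weights = He.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)   # quadrature for E[g(X)], X ~ N(0, 1)

def hermite_rank(f, kmax=8, tol=1e-8):
    """Smallest k >= 1 with E[f(X) H_k(X)] != 0 (numerically)."""
    fx = f(nodes)
    for k in range(1, kmax + 1):
        Hk = He.hermeval(nodes, [0] * k + [1])   # probabilists' H_k at nodes
        if abs(np.sum(weights * fx * Hk)) > tol:
            return k
    return None

H2 = lambda x: x**2 - 1
H3 = lambda x: x**3 - 3 * x

assert hermite_rank(lambda x: x) == 1                  # Example 4.1
assert hermite_rank(lambda x: x**2) == 2
assert hermite_rank(H2) == 2                           # Example 4.2
assert hermite_rank(lambda x: H2(x)**2) == 2
assert hermite_rank(lambda x: H3(x) + H2(x)) == 2      # Example 4.3
assert hermite_rank(lambda x: (H3(x) + H2(x))**2) == 1
```

Since all the test functions are polynomials, the 80-point rule computes the coefficients exactly up to rounding.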

More generally, for an arbitrary pair (q, p) ∈ ℕ², it is straightforward to construct examples of f where f has rank q and f² has rank p. In view of Remark 2.4, this means that the mean estimator mz is of order

\[ m_z = O_P\left(\sqrt{\frac{1}{T} \sum_{t=1}^{T} r_X(t)^q}\,\right) \]

while the variance estimator is of order

\[ \bar{r}_z(0) - r_z(0) = O_P\left(\sqrt{\frac{1}{T} \sum_{t=1}^{T} r_X(t)^p}\,\right). \]

Thus, the asymptotic properties of the estimators r̂z(0) and r̄z(0) can be very different. Similarly, one can construct examples of f where the rank of f² is q and the (two-dimensional) rank of f(Xτ)f(X0), for fixed τ, is p. Thus, even the asymptotic properties and the rate of convergence of the variance estimator r̄z(0) and the autocovariance estimator r̄z(τ) can be different, and it is crucially important to have knowledge of the exact ranks of f(X0) and f(Xτ)f(X0). This is problematic, since in practice the function f is usually not known. On the other hand, in our case we have f(x) = F⁻¹(Φ(x)), where the quantile function F⁻¹ can be estimated from the observations. In this case it turns out that the Hermite rank is known as well.

Proposition 4.1. Let F be an arbitrary distribution function with finite variance. Then

\[ f(\cdot) = F^{-1}(\Phi(\cdot)) \]

has Hermite rank 1.

Proof. In order to prove the claim we have to show that

\[ \mathrm{E}[f(X)X] \neq 0 \]

for X ~ N(0, 1). We have

\[ \begin{aligned} \int_{-\infty}^{\infty} F^{-1}(\Phi(x))\, x e^{-\frac{x^2}{2}}\, dx &= \int_{-\infty}^{0} F^{-1}(\Phi(x))\, x e^{-\frac{x^2}{2}}\, dx + \int_{0}^{\infty} F^{-1}(\Phi(x))\, x e^{-\frac{x^2}{2}}\, dx \\ &= -\int_{0}^{\infty} F^{-1}(\Phi(-x))\, x e^{-\frac{x^2}{2}}\, dx + \int_{0}^{\infty} F^{-1}(\Phi(x))\, x e^{-\frac{x^2}{2}}\, dx \\ &= \int_{0}^{\infty} \left[F^{-1}(\Phi(x)) - F^{-1}(\Phi(-x))\right] x e^{-\frac{x^2}{2}}\, dx. \end{aligned} \]

Since F is non-decreasing, F⁻¹ is non-decreasing as well, and hence

\[ F^{-1}(\Phi(x)) - F^{-1}(\Phi(-x)) \geq 0 \]

for all x ≥ 0. Furthermore, the inequality is strict for large enough x, giving

\[ \mathrm{E}[f(X)X] = \int_{0}^{\infty} \left[F^{-1}(\Phi(x)) - F^{-1}(\Phi(-x))\right] x e^{-\frac{x^2}{2}}\, dx > 0. \]

 

Remark 4.1. Hermite rank 1 makes the mean and the autocovariance estimators stable, and one usually obtains Gaussian limits with suitable normalizations. For detailed discussion on the stability in the case of Hermite rank 1, we refer to Bai and Taqqu [30].
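Proposition 4.1 can be checked numerically; the exponential and uniform marginals below are our own choices for illustration:

```python
import numpy as np
from math import erfc
from numpy.polynomial import hermite_e as He

nodes, weights = He.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)   # quadrature for E[g(X)], X ~ N(0, 1)

# Exponential marginal: F^{-1}(u) = -log(1 - u), so
# f(x) = F^{-1}(Phi(x)) = -log(Q(x)) with the normal tail Q(x) = 1 - Phi(x);
# erfc keeps Q(x) accurate in the far right tail.
Q = np.array([0.5 * erfc(x / np.sqrt(2)) for x in nodes])
alpha1_exp = np.sum(weights * (-np.log(Q)) * nodes)

# Uniform marginal on [0, 1]: F^{-1}(u) = u, so f(x) = Phi(x),
# and E[Phi(X) X] = 1 / (2 sqrt(pi)) in closed form.
alpha1_uni = np.sum(weights * (1 - Q) * nodes)

print(alpha1_exp, alpha1_uni)   # both strictly positive, as the proposition asserts
```

In both cases the first Hermite coefficient E[f(X)X] comes out strictly positive, so the Hermite rank is 1.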

Remark 4.2. We stress again that while z = F⁻¹(Φ(X)) has distribution F, in general the pair (F⁻¹(Φ(X)), X) is not distributed as (z, X). For example, if z = g(X) for a suitable g, the distribution of (g(X), X) is not the same as the distribution of (F_{g(X)}⁻¹(Φ(X)), X). A simple example of such a case is the χ²(1) distribution, where g(x) = x² but F_{X²}⁻¹(Φ(x)) ≠ x². Clearly, g(X) has Hermite rank 2 while F_{X²}⁻¹(Φ(X)) has Hermite rank 1. This fact supports our proposal to model z directly by z = F⁻¹(Φ(X)). It is also worth noting that if g is bijective, then the distributions of (F_{g(X)}⁻¹(Φ(X)), X) and (g(X), X) are equal.

Proposition 4.1 allows us to study the asymptotic properties of the mean estimator mz. Moreover, we also obtain asymptotic properties for the variance and autocovariance estimators in the case of short memory processes, which, in view of Remark 3.3, are also of interest in our model.

Theorem 4.1. Let (zt)t∈ℕ be given by

\[ z_t = F_z^{-1}(\Phi(X_t)) \]

and Ezt⁴ = c < ∞. Assume further that

\[ \sum_{\tau=1}^{\infty} |r_z(\tau)| < \infty. \tag{4.2} \]

Then

\[ \sqrt{T}\left[m_z - \mu\right] \to N(0, \sigma^2) \tag{4.3} \]

with σ² = Var(z0) + 2∑_{τ=1}^{∞} rz(τ), and for any k ≥ 0

\[ \sqrt{T}\left[\hat{r}_z(0) - r_z(0),\, \hat{r}_z(1) - r_z(1),\, \ldots,\, \hat{r}_z(k) - r_z(k)\right] \to N(0, \Sigma), \tag{4.4} \]

where Σ = (Σij), i, j = 0, 1, …, k is given by

\[ (\Sigma)_{ij} = \mathrm{Cov}(z_0 z_i, z_0 z_j) + 2\sum_{\tau=1}^{\infty} \mathrm{Cov}(z_\tau z_{i+\tau}, z_0 z_j). \]

Proof. The convergence (4.3) follows directly from Theorem 2.1 together with the fact that, by Proposition 4.1, we have rz(t) ~ rX(t). For the convergence (4.4), first note that, without loss of generality and for the sake of simplicity, we may and will assume that μ = 0 and use the estimators r̄z(k) instead. Indeed, the general case then follows easily from (4.3), Lemma 4.1, and Slutsky's theorem. In order to prove (4.4), we have to show that, for any n ≥ 1 and any (α0, α1, …, αn) ∈ ℝ^{n+1}, the linear combination

\[ \sqrt{T} \sum_{k=0}^{n} \alpha_k \left[\bar{r}_z(k) - r_z(k)\right], \tag{4.5} \]

converges toward a Gaussian random variable. We define an (n+1)-dimensional stationary Gaussian process X̄t = (Xt, Xt+1, …, Xt+n) and a function

\[ G(\bar{X}_t) = \sum_{k=0}^{n} \alpha_k \left[f(X_t) f(X_{t+k}) - r_z(k)\right], \]

where f(·) = Fz⁻¹(Φ(·)). With this notation we have

\[ \sqrt{T} \sum_{k=0}^{n} \alpha_k \left[\bar{r}_z(k) - r_z(k)\right] = \frac{1}{\sqrt{T}} \sum_{t=1}^{T} G(\bar{X}_t) + R(T), \]

where

\[ R(T) = -\frac{1}{\sqrt{T}} \sum_{k=0}^{n} \alpha_k \sum_{t=T-k+1}^{T} z_t z_{t+k}. \]

Since Ezt⁴ = c < ∞, it follows from the Cauchy-Schwarz inequality that G(X̄) ∈ L². Thus, assumption (4.2) together with Theorem 2.1 implies that

\[ \frac{1}{\sqrt{T}} \sum_{t=1}^{T} G(\bar{X}_t) \to N(0, \sigma^2). \]

For the term R(T), we observe that the sum

\[ \sum_{k=0}^{n} \alpha_k \sum_{t=T-k+1}^{T} z_t z_{t+k} \]

is bounded in L2, and hence R(T) → 0 in probability. Thus, the convergence of any linear combination of the form (4.5) toward a normal random variable follows directly from Slutsky's theorem. Finally, the covariance matrix Σ is derived by considering convergence of

\[ \sqrt{T}\left[\hat{r}_z(i) - r_z(i) + \hat{r}_z(j) - r_z(j)\right] \]

together with Theorem 2.1 and by direct computations. 

In the presence of long memory, one also needs to compute the ranks of [Fz⁻¹(Φ(·))]² (for variance estimation) and Fz⁻¹(Φ(Xτ))Fz⁻¹(Φ(X0)) (for autocovariance estimation). Unfortunately, for a general Fz these can again be arbitrary. It turns out, however, that if the distribution Fz is symmetric (around 0), then we can always compute the corresponding ranks.

Recall that a distribution F is symmetric if F(x) = 1 − F(−x) for all x ∈ ℝ. This translates into

\[ F^{-1}(y) = -F^{-1}(1-y), \qquad y \in [0, 1]. \]

In view of the symmetry of the normal distribution, this further implies

\[ F^{-1}(\Phi(x)) = -F^{-1}(\Phi(-x)). \tag{4.6} \]

Proposition 4.2. Let X ~ N(0, 1) and let F be an arbitrary symmetric distribution function with finite variance. Then:

1. For odd numbers k ≥ 1 we have

\[ \mathrm{E}\left[F^{-1}(\Phi(X))\, X^k\right] > 0. \]

2. For even numbers k ≥ 0 we have

\[ \mathrm{E}\left[F^{-1}(\Phi(X))\, X^k\right] = 0. \]

In particular,

\[ f(\cdot) = F^{-1}(\Phi(\cdot)) \]

has Hermite rank 1 and a decomposition

\[ F^{-1}(\Phi(X_t)) = \sum_{k \geq 1} \alpha_k H_k(X_t), \tag{4.7} \]

where, for j = 0, 1, 2, …, we have α2j = 0.

Proof. Let k be fixed. Computing as in the proof of Proposition 4.1, we get

\[ \begin{aligned} \int_{-\infty}^{\infty} F^{-1}(\Phi(x))\, x^k e^{-\frac{x^2}{2}}\, dx &= \int_{-\infty}^{0} F^{-1}(\Phi(x))\, x^k e^{-\frac{x^2}{2}}\, dx + \int_{0}^{\infty} F^{-1}(\Phi(x))\, x^k e^{-\frac{x^2}{2}}\, dx \\ &= (-1)^k \int_{0}^{\infty} F^{-1}(\Phi(-x))\, x^k e^{-\frac{x^2}{2}}\, dx + \int_{0}^{\infty} F^{-1}(\Phi(x))\, x^k e^{-\frac{x^2}{2}}\, dx \\ &= \int_{0}^{\infty} \left[F^{-1}(\Phi(x)) + (-1)^k F^{-1}(\Phi(-x))\right] x^k e^{-\frac{x^2}{2}}\, dx. \end{aligned} \]

As in the proof of Proposition 4.1, this shows the claim for odd numbers k. Similarly, the claim for even k follows from (4.6). 
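Proposition 4.2 can be checked numerically; the symmetric uniform marginal on [−1, 1] below is our own choice for illustration:

```python
import numpy as np
from math import erf
from numpy.polynomial import hermite_e as He

nodes, weights = He.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)   # quadrature for E[g(X)], X ~ N(0, 1)

# Symmetric uniform marginal on [-1, 1]: F^{-1}(u) = 2u - 1, so
# f(x) = 2*Phi(x) - 1, which satisfies the antisymmetry f(-x) = -f(x) of (4.6).
f = 2 * np.array([0.5 * (1 + erf(x / np.sqrt(2))) for x in nodes]) - 1

# E[f(X) X^k] for k = 0, ..., 4.
moments = [np.sum(weights * f * nodes**k) for k in range(5)]
print(moments)   # odd entries strictly positive, even entries (numerically) zero
```

As the proposition predicts, the odd moments come out strictly positive while the even ones vanish, so all even Hermite coefficients in (4.7) are zero.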

Proposition 4.3. Let Fz be symmetric and let τ ∈ ℤ be fixed. Then the Hermite rank of Fz⁻¹(Φ(Xτ))Fz⁻¹(Φ(X0)) is at least 2. Moreover, if r(k) = rX(k) is non-degenerate, i.e., if for every j ∈ ℕ we have r(m) = r(j) for at most finitely many m ∈ ℕ, then the set

\[ S = \left\{\tau : F_z^{-1}(\Phi(X_\tau))\, F_z^{-1}(\Phi(X_0)) \text{ has Hermite rank strictly greater than two}\right\} \]

is finite. In particular, if r(k) → 0 as k → ∞, then the set S is finite.

Proof. From

\[ H_k(X)\, X = H_{k+1}(X) + k H_{k-1}(X) \]

we obtain

\[ f(X_0) X_0 = \sum_{k \geq 1} \left[\alpha_k H_{k+1}(X_0) + \alpha_k k H_{k-1}(X_0)\right] = \alpha_1 + \sum_{k \geq 2} \left[\alpha_{k-1} + (k+1)\alpha_{k+1}\right] H_k(X_0). \]

Here we have only even terms H2k while f(Xt) consists of odd terms H2k+1, giving

\[ \mathrm{E}\left[f(X_t) f(X_0) X_0\right] = 0. \]

Thus, q > 1 meaning that the rank is at least two. Let us next prove that the set S is finite. We first note that now

\[ \mathrm{E}\left[f(X_t) f(X_0) X_t X_0\right] = \alpha_1^2 + \sum_{k \geq 2} \left[\alpha_{k-1} + (k+1)\alpha_{k+1}\right]^2 k!\, r(t)^k. \]

We argue by contradiction and suppose that S is not finite. Since every bounded sequence has a convergent subsequence, we can then find a sequence τn ∈ S such that τn → ∞ and r(τn) → r for some r ∈ [−1, 1]. Moreover, by the non-degeneracy of r(·), we may assume, by passing to a further subsequence if necessary, that r(τn) ≠ r for all n. Since τn ∈ S, the Hermite rank of f(Xτn)f(X0), with E(XτnX0) = r(τn), is q > 2 for all n. This means that

\[ \mathrm{E}\left[f(X_{\tau_n}) f(X_0) X_{\tau_n} X_0\right] = \mathrm{E}\left[f(X_{\tau_n}) f(X_0)\right] r(\tau_n) = \sum_{k \geq 1} \alpha_k^2 k!\, r(\tau_n)^{k+1}. \]

However, we can regard

\[ g_1(r) = \alpha_1^2 + \sum_{k \geq 2} \left[\alpha_{k-1} + (k+1)\alpha_{k+1}\right]^2 k!\, r^k \]

and

\[ g_2(r) = \sum_{k \geq 1} \alpha_k^2 k!\, r^{k+1} \]

as real-analytic functions. Consequently, since they coincide at the points r(τn) converging to r, the identity theorem implies that they are equal everywhere. In particular, this gives us

\[ g_1(0) = \alpha_1^2 = g_2(0) = 0, \]

which leads to a contradiction since, by Proposition 4.1, we have α1 ≠ 0. This concludes the proof. 

Remark 4.3. Note that if for some N we have r(j) = 0 for all j ≥ N, the statement is still valid, even though our assumption on the non-degeneracy of r is then violated.

Now it is straightforward to obtain the following result on the long memory case, analogous to Theorem 4.1.

Theorem 4.2. Let (zt)t∈ℕ be given by

\[ z_t = F_z^{-1}(\Phi(X_t)), \]

where Fz is symmetric and Ezt⁴ = c < ∞. Assume further that z is long-range dependent with H ∈ (1/2, 3/4). Then there exists a constant σ² > 0 and a positive semidefinite matrix Σ such that

\[ T^{1-H}\left[m_z - \mu\right] \to N(0, \sigma^2) \tag{4.8} \]

and, for any k ≥ 0,

\[ \sqrt{T}\left[\hat{r}_z(0) - r_z(0),\, \hat{r}_z(1) - r_z(1),\, \ldots,\, \hat{r}_z(k) - r_z(k)\right] \to N(0, \Sigma). \tag{4.9} \]

Proof. The convergence (4.8) follows from Theorem 2.2 and Proposition 4.1, and the convergence (4.9) can be proved by following the proof of Theorem 4.1 and exploiting the facts that, by Proposition 4.3, the rank is at least two, and that

\[ \sum_{t=1}^{\infty} r_X(t)^2 < \infty \]

for H < 3/4. The details are left to the reader.

Remark 4.4. We remark that here we have used the convention that the zero vector can be viewed as an N(0, Σ)-distributed random variable with zero variance. This corresponds to the case where the ranks of r̂z(j) − rz(j) are above two for all j ≤ k. Note also that, by Proposition 4.3, we always obtain a non-trivial limiting distribution by choosing k large enough.

5. Discussion

In this article, we argued why it is advantageous to model weakly stationary time series with equal one-dimensional marginals by using Gaussian subordinated processes, especially in the case of long memory. Under our model, we are able to provide limit theorems for the standard mean and autocovariance estimators. Furthermore, even functional versions of the central limit theorems and Berry-Esseen type bounds in different metrics are available. In our modeling approach (zt)t∈ℕ = (f(Xt))t∈ℕ, the Hermite rank of the function f is equal to 1. This is especially useful in the case of long memory processes, as the limiting distribution is normal if and only if the Hermite rank of f is equal to 1. For the variance and autocovariance estimators, we also proved that the corresponding Hermite ranks are (essentially) two, provided that the distribution is symmetric. While in general one can always symmetrize the distribution, one might lose essential information on the transformation. This can be viewed as the price to pay for gaining more knowledge of the Hermite ranks, which allows us to obtain precise asymptotic results for different estimators.

We end this paper by comparing our approach to the existing literature. Linear processes of the form

zt=j = 0ϕjξt-j,

where (ξt)t∈ℤ is an independent and identically distributed sequence, are widely applied models for stationary time series. To obtain central limit theorems for the mean and the autocovariance estimators, conditions on the coefficients (ϕj)j∈ℤ are required. A sufficient condition for obtaining central limit theorems is

j = 0|ϕj|<    (5.1)

together with Eξt4< [see Theorem 7.1.2. and Theorem 7.2.1. in [6]]. As the sequence (ξt)t∈ℤ is independent and identically distributed, it follows that the one-dimensional marginals of the process are equal. Moreover, it is customary to pose assumptions for (ϕj)j∈ℤ giving exponential decay for the covariance. Consequently, such linear processes are covered by our modeling approach. Moreover, it is easy to see that Eξt4< implies Ezt4<, and (5.1) is strictly stronger than the assumption of absolutely summable autocovariance function. Thus, our modeling approach is more flexible and requires weaker assumptions.

Data Availability Statement

All datasets generated for this study are included in the article/supplementary material.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^Here we adopt the terminology Gaussian subordination from the Gaussian literature meaning Y = f(X). This should not be confused with a Levý process that is a subordinator (with a different meaning).

2. ^By one-dimensional marginal distributions we refer to the distributions of zt for fixed time indices t.

References

1. Rio E. Asymptotic Theory of Weakly Dependent Random Processes. Berlin; Heidelberg: Springer-Verlag (2017).

Google Scholar

2. Herrndorf N. A functional central limit theorem for weakly dependent sequences of random variables. Ann Probab. (1984) 28:141–53. doi: 10.1214/aop/1176993379

CrossRef Full Text | Google Scholar

3. Dedecker J, Rio E. On mean central limit theorems for stationary sequences. Ann Inst Henri Poincaré. (2008) 44:693–726. doi: 10.1214/07-AIHP117

CrossRef Full Text | Google Scholar

4. Doukhan P, Massart P, Rio E. The functional central limit theorem for strongly mixing processes. Ann Inst Henri Poincaré. (1994) 30:63–82.

Google Scholar

5. Merleváde F, Peligrad M. The functional central limit theorem under the strong mixing condition. Ann Probab. (2000) 28:1336–52. doi: 10.1214/aop/1019160337

CrossRef Full Text | Google Scholar

6. Brockwell PJ, Davis RA. Time Series: Theory and Methods. Vol. 2. New York, NY: Springer-Verlag (1991).

Google Scholar

7. Ho H, Hsing T. Limit theorems for functionals of moving averages. Ann Probab. (1997) 25:1636–69. doi: 10.1214/aop/1023481106

CrossRef Full Text | Google Scholar

8. Wu WB. Central limit theorems for functionals of linear processes and their applications. Stat Sin. (2002) 12:635–50.

Google Scholar

9. Cheng T, Ho H. On Berry–Esseen bounds for non-instantaneous filters of linear processes. Bernoulli. (2008) 14:301–21. doi: 10.3150/07-BEJ112

CrossRef Full Text | Google Scholar

10. Jach A, McElroy T. Subsampling inference for the autocovariances and autocorrelations of long-memory heavy-tailed linear time series. J Time Ser Anal. (2012) 33:935–53. doi: 10.1111/j.1467-9892.2012.00808.x

CrossRef Full Text | Google Scholar

11. Jach A, McElroy T, Politis DN. Subsampling inference for the mean of heavy-tailed long-memory time series. J Time Ser Anal. (2012) 33:96–111.

Google Scholar

12. Breuer P, Major P. Central limit theorems for nonlinear functionals of Gaussian fields. J Multivar Anal. (1983) 13:425–41. doi: 10.1016/0047-259X(83)90019-2

CrossRef Full Text | Google Scholar

13. Arcones MA. Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Ann Probab. (1994) 22:2242–74. doi: 10.1214/aop/1176988503

CrossRef Full Text | Google Scholar

14. Avram F, Taqqu M. Noncentral limit theorems and Appell polynomials. Ann Probab. (1987) 15:767–75. doi: 10.1214/aop/1176992170

CrossRef Full Text | Google Scholar

15. Bai S, Taqqu M. Multivariate limit theorems in the context of long-range dependence. J Time Ser Anal. (2013) 34:717–43. doi: 10.1111/jtsa.12046

CrossRef Full Text | Google Scholar

16. Bai S, Taqqu M. How the instability of ranks under long memory affects large-sample inference. Statist. Sci. (2018) 33:96–116. doi: 10.1214/17-STS633

CrossRef Full Text | Google Scholar

17. Dobrushin RL, Major P. Non-central limit theorems for non-linear functional of Gaussian fields. Z Wahrsch Verw Gebiete. (1979) 50:27–52. doi: 10.1007/BF00535673

CrossRef Full Text | Google Scholar

18. Giraitis L, Surgailis D. CLT and other limit theorems for functionals of Gaussian processes. Z Wahrsch Verw Gebiete. (1985) 70:191–212. doi: 10.1007/BF02451428

CrossRef Full Text | Google Scholar

19. Dehling H, Taqqu M. The empirical process of some long-range dependent sequences with an application to U-statistics. Ann Stat. (1989) 17:1767–83. doi: 10.1214/aos/1176347394

CrossRef Full Text | Google Scholar

20. Dehling H, Rooch A, Taqqu M. Non-parametric change-point tests for long-range dependent data. Scand J Stat. (2013) 40:153–73. doi: 10.1111/j.1467-9469.2012.00799.x

CrossRef Full Text | Google Scholar

21. Lévy-Leduc L, Boistard H, Moulines E, Reisen VA, Taqqu M. Robust estimation of the scale and of the autocovariance function of Gaussian short and long-range dependent processes. J Time Ser Anal. (2011) 32:135–56. doi: 10.1111/j.1467-9892.2010.00688.x

CrossRef Full Text | Google Scholar

22. Giraitis L, Taqqu M. Whittle estimator for finite-variance non-Gaussian time series with long memory. Ann Stat. (1999) 27:178–203. doi: 10.1214/aos/1018031107

CrossRef Full Text | Google Scholar

23. Nourdin I, Peccati G. Stein's method on Wiener chaos. Probab Theory Relat Fields. (2009) 145:75–118. doi: 10.1007/s00440-008-0162-x

CrossRef Full Text | Google Scholar

24. Nourdin I, Peccati G. Stein's method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann Probab. (2010) 37:2231–61. doi: 10.1214/09-AOP461

CrossRef Full Text | Google Scholar

25. Nourdin I, Peccati G. Normal Approximations Using Malliavin Calculus: From Stein's Method to Universality. Cambridge, UK: Cambridge University Press (2012).

Google Scholar

26. Nourdin I, Peccati G, Podolskij M. Quantitative Breuer–major theorems. Stoc Proc Appl. (2011) 121:793–812. doi: 10.1016/j.spa.2010.12.006

CrossRef Full Text | Google Scholar

27. Barder J, Surgailis D. Moment bounds and central limit theorems for Gaussian subordinated arrays. J Multivar Anal. (2013) 114:457–73. doi: 10.1016/j.jmva.2012.08.002

CrossRef Full Text | Google Scholar

28. Samorodnitsky G. Long range dependence. Found Trends Stochast Syst. (2007) 1:163–257. doi: 10.1561/0900000004

CrossRef Full Text | Google Scholar

29. Nourdin I, Nualart D. The functional Breuer–Major theorem. Probab Theory Relat Fields. (2019). doi: 10.1007/s00440-019-00917-1. [Epub ahead of print].

CrossRef Full Text | Google Scholar

30. Bai S, Taqqu M. Generalized Hermite processes, discrete chaos and limit theorems. Stochast Process Appl. (2014) 124:1710–39. doi: 10.1016/j.spa.2013.12.011

CrossRef Full Text | Google Scholar

31. Beran J, Feng Y, Ghosh S, Kulik R. Long-Memory Processes: Probabilistic Properties and Statistical Methods. Berlin; Heidelberg: Springer-Verlag (2013).

Google Scholar

Keywords: weak stationarity, autocovariance function, Gaussian subordinated processes, estimation, central limit theorem

Citation: Viitasaari L and Ilmonen P (2020) On Modeling a Class of Weakly Stationary Processes. Front. Appl. Math. Stat. 5:68. doi: 10.3389/fams.2019.00068

Received: 01 November 2019; Accepted: 18 December 2019;
Published: 15 January 2020.

Edited by:

Elisa Alos, Pompeu Fabra University, Spain

Reviewed by:

Yuliia Mishura, Taras Shevchenko National University of Kyiv, Ukraine
Nicolas Marie, Special School of Mechanics and Electricity, France

Copyright © 2020 Viitasaari and Ilmonen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lauri Viitasaari, lauri.viitasaari@aalto.fi

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.