ORIGINAL RESEARCH article

Front. Phys., 15 October 2020
Sec. Statistical and Computational Physics

New Estimates for the Jensen Gap Using s-Convexity With Applications

  • 1Department of Mathematics, University of Peshawar, Peshawar, Pakistan
  • 2Department of Mathematics, Huzhou University, Huzhou, China
  • 3School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha, China

In this article, we use s-convex functions together with Green functions to obtain a bound for the Jensen gap in discrete form and a bound for the Jensen gap in integral form. We present two numerical examples to verify the main results and to examine the tightness of the bounds. Then, as an application of the discrete result, we derive a converse of the Hölder inequality. Based on the integral result, we obtain a bound for the Hermite-Hadamard gap and present a converse of the Hölder inequality in its integral form. Also, we obtain bounds for the Csiszár and Rényi divergences as applications of the discrete result. Finally, we utilize the bound obtained for the Csiszár divergence to deduce new estimates for some other divergences in information theory.

1. Introduction

Convex functions and their generalizations play a significant role in scientific observation and the calculation of various parameters in modern analysis, especially in the theory of optimization. Moreover, convex functions have some nice properties, such as differentiability, monotonicity, and continuity, which are useful in applications [1–5]. Interest in mathematical inequalities for convex and generalized convex functions has been growing exponentially, and research in this respect has had a significant impact on modern analysis [6–20]. Several mathematical inequalities have been established for s-convex functions in particular [21–28], one of the most important being the Jensen inequality. In this paper, we study the Jensen inequality in a more standard framework for s-convex functions.

Definition 1.1 (s-convexity [29]). For s > 0 and a convex subset B of a real linear space S, a function Γ : B → ℝ is said to be s-convex if the inequality

$$\Gamma(\kappa_1\varepsilon_1+\kappa_2\varepsilon_2)\le\kappa_1^{s}\,\Gamma(\varepsilon_1)+\kappa_2^{s}\,\Gamma(\varepsilon_2)\qquad(1.1)$$

holds for all ε1, ε2 ∈ B and κ1, κ2 ≥ 0 with κ1 + κ2 = 1.

The function Γ is said to be s-concave if inequality (1.1) holds in the reverse direction. Obviously, for s = 1 an s-convex function is simply a convex function, so s-convexity generalizes ordinary convexity.

Lemma 1.2 ([29]). Let B be a convex subset of a real linear space S and let Γ : B → ℝ be a convex function. Then the following two statements hold:

(a) Γ is s-convex for 0 < s ≤ 1 if Γ is non-negative;

(b) Γ is s-convex for 1 ≤ s < ∞ if Γ is non-positive.
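To make Definition 1.1 and Lemma 1.2 concrete, the following short script (a sketch with illustrative functions of our choosing, not taken from the paper) tests inequality (1.1) on a grid: Γ(x) = x² is convex and non-negative, so Lemma 1.2(a) predicts s-convexity for every s ∈ (0, 1], while Γ(x) = √x is s-convex with s = 1/2 but fails the test for s = 1, since it is not convex.

```python
import numpy as np

def is_s_convex(gamma, s, grid, tol=1e-12):
    """Numerically test inequality (1.1) for all grid pairs and weights."""
    for e1 in grid:
        for e2 in grid:
            for k1 in np.linspace(0.0, 1.0, 21):
                k2 = 1.0 - k1
                lhs = gamma(k1 * e1 + k2 * e2)
                rhs = k1**s * gamma(e1) + k2**s * gamma(e2)
                if lhs > rhs + tol:
                    return False   # found a violating pair
    return True

grid = np.linspace(0.0, 1.0, 41)
print(is_s_convex(lambda x: x**2, 0.5, grid))  # True, as Lemma 1.2(a) predicts
print(is_s_convex(np.sqrt, 0.5, grid))         # True: sqrt is 1/2-convex
print(is_s_convex(np.sqrt, 1.0, grid))         # False: sqrt is not convex
```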

The Green function [30]

$$G_1(t,x)=\begin{cases}\alpha_1-x,&\alpha_1\le x\le t,\\[2pt]\alpha_1-t,&t\le x\le\alpha_2,\end{cases}\qquad(1.2)$$

defined on [α1, α2] × [α1, α2] and the integral identity

$$\Gamma(t)=\Gamma(\alpha_1)+(t-\alpha_1)\Gamma'(\alpha_2)+\int_{\alpha_1}^{\alpha_2}G_1(t,x)\,\Gamma''(x)\,dx\qquad(1.3)$$

for the function Γ ∈ C²[α1, α2] will be used to obtain the main results. Note that G1 is convex and continuous with respect to both variables.
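The identity (1.3) can be checked numerically before it is put to work; the sketch below (with an arbitrary C² test function of our choosing) reconstructs Γ(t) from Γ(α1), Γ′(α2), and the Green function G1.

```python
import numpy as np
from scipy.integrate import quad

a1, a2 = 0.0, 2.0

def G1(t, x):
    """The Green function (1.2) on [a1, a2] x [a1, a2]."""
    return a1 - x if x <= t else a1 - t

Gamma, dGamma, d2Gamma = np.cosh, np.sinh, np.cosh  # test function in C^2

for t in (0.3, 1.0, 1.7):
    integral, _ = quad(lambda x: G1(t, x) * d2Gamma(x), a1, a2, points=[t])
    rhs = Gamma(a1) + (t - a1) * dGamma(a2) + integral
    print(Gamma(t), rhs)   # the two columns agree to quadrature accuracy
```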

This paper is organized as follows. In section 2 we give a bound for the Jensen gap in discrete form, which pertains to functions for which the absolute value of the second derivative is s-convex. We also derive a bound for the integral version of the Jensen gap. Then we conduct two numerical experiments that provide evidence for the tightness of the bound in the main result. We deduce a converse of the Hölder inequality from the discrete result and a bound for the Hermite-Hadamard gap from the integral result. Moreover, as a consequence of the integral result we obtain a converse of the Hölder inequality in its corresponding integral version. At the beginning of section 3 we present bounds for the Csiszár and Rényi divergences in the discrete case. Finally, we give estimates for the Shannon entropy, Kullback-Leibler divergence, χ2 divergence, Bhattacharyya coefficient, Hellinger distance, and triangular discrimination as applications of the bound obtained for the Csiszár divergence. Conclusions are presented in the final section.

2. Main Results

Using the concept of s-convexity, we derive a bound for the Jensen gap in discrete form, which is presented in the following theorem.

Theorem 2.1. Suppose Γ ∈ C²[α1, α2] is a function such that |Γ″| is s-convex, and let zi ∈ [α1, α2] and κi ∈ [0, ∞) for i = 1, …, n with $\sum_{i=1}^{n}\kappa_i=K>0$. Then the following inequality holds:

$$\begin{aligned}\left|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)-\Gamma\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i\Big)\right|&\le\frac{|\Gamma''(\alpha_1)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i(\alpha_2-z_i)^{s+2}-\Big(\alpha_2-\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i\Big)^{s+2}\right)\\&\quad+\frac{|\Gamma''(\alpha_2)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i(z_i-\alpha_1)^{s+2}-\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i-\alpha_1\Big)^{s+2}\right).\qquad(2.4)\end{aligned}$$

Proof: Using (1.3), we get

$$\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)=\frac{1}{K}\sum_{i=1}^{n}\kappa_i\left(\Gamma(\alpha_1)+(z_i-\alpha_1)\Gamma'(\alpha_2)+\int_{\alpha_1}^{\alpha_2}G_1(z_i,x)\,\Gamma''(x)\,dx\right)\qquad(2.5)$$

and

$$\Gamma\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i\Big)=\Gamma(\alpha_1)+\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i-\alpha_1\Big)\Gamma'(\alpha_2)+\int_{\alpha_1}^{\alpha_2}G_1\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i,\,x\Big)\Gamma''(x)\,dx.\qquad(2.6)$$

Equations (2.5) and (2.6) give

$$\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)-\Gamma\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i\Big)=\int_{\alpha_1}^{\alpha_2}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i G_1(z_i,x)-G_1\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i,\,x\Big)\right)\Gamma''(x)\,dx.\qquad(2.7)$$

Taking the absolute value of (2.7), we get

$$\begin{aligned}\left|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)-\Gamma\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i\Big)\right|&=\left|\int_{\alpha_1}^{\alpha_2}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i G_1(z_i,x)-G_1\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i,\,x\Big)\right)\Gamma''(x)\,dx\right|\\&\le\int_{\alpha_1}^{\alpha_2}\left|\frac{1}{K}\sum_{i=1}^{n}\kappa_i G_1(z_i,x)-G_1\Big(\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i,\,x\Big)\right|\,|\Gamma''(x)|\,dx.\qquad(2.8)\end{aligned}$$

By applying the change of variable x = tα1 + (1 − t)α2 for t ∈ [0, 1] and using the convexity of G1(·, x) (which, by the discrete Jensen inequality, makes the bracketed difference non-negative, so the absolute value can be dropped), the inequality (2.8) is transformed to

$$\left|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)-\Gamma(\bar z)\right|\le(\alpha_2-\alpha_1)\int_{0}^{1}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i G_1(z_i,t\alpha_1+(1-t)\alpha_2)-G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\right)\,\bigl|\Gamma''(t\alpha_1+(1-t)\alpha_2)\bigr|\,dt,\qquad(2.9)$$

where $\bar z=\frac{1}{K}\sum_{i=1}^{n}\kappa_i z_i$. The inequality (2.9) leads to the following by using the s-convexity of the function |Γ″|:

$$\begin{aligned}\left|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\Gamma(z_i)-\Gamma(\bar z)\right|&\le(\alpha_2-\alpha_1)\int_{0}^{1}\left(\frac{1}{K}\sum_{i=1}^{n}\kappa_i G_1(z_i,t\alpha_1+(1-t)\alpha_2)-G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\right)\bigl(t^s|\Gamma''(\alpha_1)|+(1-t)^s|\Gamma''(\alpha_2)|\bigr)\,dt\\&=(\alpha_2-\alpha_1)\left(|\Gamma''(\alpha_1)|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\int_{0}^{1}t^s\,G_1(z_i,t\alpha_1+(1-t)\alpha_2)\,dt+|\Gamma''(\alpha_2)|\frac{1}{K}\sum_{i=1}^{n}\kappa_i\int_{0}^{1}(1-t)^s\,G_1(z_i,t\alpha_1+(1-t)\alpha_2)\,dt\right.\\&\qquad\left.-\,|\Gamma''(\alpha_1)|\int_{0}^{1}t^s\,G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\,dt-|\Gamma''(\alpha_2)|\int_{0}^{1}(1-t)^s\,G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\,dt\right).\qquad(2.10)\end{aligned}$$

Now, by using the change of variable x = tα1 + (1 − t)α2 for t ∈ [0, 1], we obtain

$$\int_{0}^{1}t^s\,G_1(z_i,t\alpha_1+(1-t)\alpha_2)\,dt=\frac{1}{(\alpha_2-\alpha_1)^{s+1}}\left(\frac{(\alpha_2-z_i)^{s+2}}{(s+1)(s+2)}-\frac{(\alpha_2-\alpha_1)^{s+2}}{(s+1)(s+2)}\right).\qquad(2.11)$$

Upon replacing zi by z̄ in (2.11), we get

$$\int_{0}^{1}t^s\,G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\,dt=\frac{1}{(\alpha_2-\alpha_1)^{s+1}}\left(\frac{(\alpha_2-\bar z)^{s+2}}{(s+1)(s+2)}-\frac{(\alpha_2-\alpha_1)^{s+2}}{(s+1)(s+2)}\right).\qquad(2.12)$$

Also,

$$\int_{0}^{1}(1-t)^s\,G_1(z_i,t\alpha_1+(1-t)\alpha_2)\,dt=\frac{1}{(\alpha_2-\alpha_1)^{s+1}}\left(\frac{(z_i-\alpha_1)^{s+2}}{(s+1)(s+2)}-\frac{(z_i-\alpha_1)(\alpha_2-\alpha_1)^{s+1}}{s+1}\right).\qquad(2.13)$$

Upon replacing zi by z̄ in (2.13), we get

$$\int_{0}^{1}(1-t)^s\,G_1(\bar z,t\alpha_1+(1-t)\alpha_2)\,dt=\frac{1}{(\alpha_2-\alpha_1)^{s+1}}\left(\frac{(\bar z-\alpha_1)^{s+2}}{(s+1)(s+2)}-\frac{(\bar z-\alpha_1)(\alpha_2-\alpha_1)^{s+1}}{s+1}\right).\qquad(2.14)$$

The result (2.4) is then obtained by substituting the values from (2.11)–(2.14) into (2.10).

Remark 2.2. If we use the Green function G2, G3, or G4 instead of G1 in Theorem 2.1, where G2, G3, and G4 are given in [30], we obtain the same result (2.4).
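A direct numerical check of Theorem 2.1 is straightforward. The following sketch (with an illustrative Γ and randomly generated data of our choosing, not from the paper) evaluates both sides of (2.4); here Γ(x) = x⁴, so |Γ″|(x) = 12x² is convex and non-negative, hence s-convex for every s ∈ (0, 1] by Lemma 1.2.

```python
import numpy as np

a1, a2, s = 0.0, 1.0, 0.5
Gamma = lambda x: x**4
d2Gamma = lambda x: 12.0 * x**2            # |Gamma''|

rng = np.random.default_rng(0)
z = rng.uniform(a1, a2, size=10)           # z_i in [a1, a2]
kappa = rng.uniform(0.1, 1.0, size=10)     # kappa_i >= 0
K = kappa.sum()
zbar = (kappa * z).sum() / K

gap = abs((kappa * Gamma(z)).sum() / K - Gamma(zbar))

c = 1.0 / ((s + 1) * (s + 2) * (a2 - a1)**s)
bound = (d2Gamma(a1) * c * ((kappa * (a2 - z)**(s + 2)).sum() / K
                            - (a2 - zbar)**(s + 2))
         + d2Gamma(a2) * c * ((kappa * (z - a1)**(s + 2)).sum() / K
                              - (zbar - a1)**(s + 2)))
print(gap, bound)   # gap <= bound, as (2.4) asserts
```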

In the following theorem, we give a bound for the Jensen gap in integral form.

Theorem 2.3. Suppose |Γ″| is an s-convex function for Γ ∈ C²[α1, α2], and let ξ1 and ξ2 be real-valued functions defined on [c1, c2] with ξ1(y) ∈ [α1, α2] for all y ∈ [c1, c2] and such that ξ2, ξ1ξ2, and (Γ ∘ ξ1)ξ2 are all integrable functions on [c1, c2]. Then the inequality

$$\begin{aligned}&\left|\frac{1}{\xi}\int_{c_1}^{c_2}(\Gamma\circ\xi_1)(y)\,\xi_2(y)\,dy-\Gamma\Big(\frac{1}{\xi}\int_{c_1}^{c_2}\xi_1(y)\,\xi_2(y)\,dy\Big)\right|\\&\le\frac{|\Gamma''(\alpha_1)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\frac{1}{\xi}\int_{c_1}^{c_2}\xi_2(y)\bigl(\alpha_2-\xi_1(y)\bigr)^{s+2}dy-\Big(\alpha_2-\frac{1}{\xi}\int_{c_1}^{c_2}\xi_2(y)\,\xi_1(y)\,dy\Big)^{s+2}\right\}\\&\quad+\frac{|\Gamma''(\alpha_2)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\frac{1}{\xi}\int_{c_1}^{c_2}\xi_2(y)\bigl(\xi_1(y)-\alpha_1\bigr)^{s+2}dy-\Big(\frac{1}{\xi}\int_{c_1}^{c_2}\xi_2(y)\,\xi_1(y)\,dy-\alpha_1\Big)^{s+2}\right\}\qquad(2.15)\end{aligned}$$

holds, provided that ξ2(y) ∈ [0, ∞) for all y ∈ [c1, c2] and $\xi:=\int_{c_1}^{c_2}\xi_2(y)\,dy>0$.

Proof: Using the same procedure as in the proof of Theorem 2.1, (2.15) can be obtained.

Example 1. Let $\Gamma(y)=\frac{4}{15}y^{5/2}$, ξ1(y) = y², and ξ2(y) = 1 for all y ∈ [0, 1]. Then $\Gamma''(y)=y^{1/2}\ge 0$ on [0, 1], which shows that Γ is a convex function, while |Γ″| is s-convex with $s=\frac{1}{2}$. Also, ξ1(y) ∈ [0, 1] for all y ∈ [0, 1], and we have [α1, α2] = [c1, c2] = [0, 1]. Now, the left-hand side of inequality (2.15) gives $\int_0^1\Gamma(\xi_1(y))\,dy-\Gamma\bigl(\int_0^1\xi_1(y)\,dy\bigr)=0.0444-0.0171=0.0273=:E_1$, the true Jensen gap. The right-hand side of (2.15) gives 0.0274, which is very close to the true discrepancy E1. That is, from inequality (2.15) we have

0.0273<0.0274.    (2.16)

The difference 0.0274 − 0.0273 = 0.0001 between the two sides of (2.16) shows that the bound for the Jensen gap given by inequality (2.15) is very close to the true value.

Example 2. Let $\Gamma(y)=\frac{100}{231}y^{21/10}$, ξ1(y) = y, and ξ2(y) = 1 for all y ∈ [0, 1]. Then $\Gamma''(y)=y^{1/10}\ge 0$ on [0, 1], which shows that Γ is a convex function, while |Γ″| is s-convex with $s=\frac{1}{10}$. Also, ξ1(y) ∈ [0, 1] for all y ∈ [0, 1], and we have [α1, α2] = [c1, c2] = [0, 1]. Therefore, from the left-hand side of inequality (2.15) we obtain $\int_0^1\Gamma(\xi_1(y))\,dy-\Gamma\bigl(\int_0^1\xi_1(y)\,dy\bigr)=0.1396-0.1010=0.0386=:E_2$, the true Jensen gap. The right-hand side of (2.15) gives 0.0387, a value very close to E2. Finally, from inequality (2.15) we have

0.0386<0.0387.    (2.17)

The difference 0.0387 − 0.0386 = 0.0001 between the two sides of (2.17) provides further evidence of the tightness of the bound for the Jensen gap given by inequality (2.15).
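Both examples can be reproduced with a few lines of numerical quadrature. The helper jensen_gap_and_bound below is our own sketch of the two sides of (2.15) for the special case ξ2 ≡ 1 and [α1, α2] = [c1, c2] used above; it is not code from the paper.

```python
from scipy.integrate import quad

def jensen_gap_and_bound(Gamma, d2Gamma, xi1, s, c1=0.0, c2=1.0):
    """Left- and right-hand sides of (2.15) with xi2 = 1, [a1, a2] = [c1, c2]."""
    a1, a2, w = c1, c2, c2 - c1
    mean_G = quad(lambda y: Gamma(xi1(y)), c1, c2)[0] / w
    mean_x = quad(xi1, c1, c2)[0] / w
    gap = abs(mean_G - Gamma(mean_x))
    c = 1.0 / ((s + 1) * (s + 2) * (a2 - a1)**s)
    t1 = quad(lambda y: (a2 - xi1(y))**(s + 2), c1, c2)[0] / w - (a2 - mean_x)**(s + 2)
    t2 = quad(lambda y: (xi1(y) - a1)**(s + 2), c1, c2)[0] / w - (mean_x - a1)**(s + 2)
    return gap, abs(d2Gamma(a1)) * c * t1 + abs(d2Gamma(a2)) * c * t2

# Example 1: Gamma(y) = (4/15) y^(5/2), xi1(y) = y^2, s = 1/2
print(jensen_gap_and_bound(lambda y: 4 / 15 * y**2.5, lambda y: y**0.5,
                           lambda y: y**2, s=0.5))   # ~ (0.0273, 0.0274)

# Example 2: Gamma(y) = (100/231) y^(21/10), xi1(y) = y, s = 1/10
print(jensen_gap_and_bound(lambda y: 100 / 231 * y**2.1, lambda y: y**0.1,
                           lambda y: y, s=0.1))      # ~ (0.0386, 0.0387)
```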

As an application of Theorem 2.1, we derive a converse of the Hölder inequality, stated in the following proposition.

Proposition 2.4. Let q2 > 1 and q1 ∉ (2, 3) be such that $\frac{1}{q_1}+\frac{1}{q_2}=1$, and let s ∈ (0, 1]. Also, let [α1, α2] be a positive interval and let (d1, …, dn) and (b1, …, bn) be two positive n-tuples such that $\sum_{i=1}^{n}d_ib_i\big/\sum_{i=1}^{n}b_i^{q_2}\in[\alpha_1,\alpha_2]$ and $d_ib_i^{-q_2/q_1}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}&\Big(\sum_{i=1}^{n}d_i^{q_1}\Big)^{\frac{1}{q_1}}\Big(\sum_{i=1}^{n}b_i^{q_2}\Big)^{\frac{1}{q_2}}-\sum_{i=1}^{n}d_ib_i\\&\le\left[\frac{q_1(q_1-1)}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\alpha_1^{q_1-2}\left(\frac{\sum_{i=1}^{n}b_i^{q_2}\bigl(\alpha_2-d_ib_i^{-q_2/q_1}\bigr)^{s+2}}{\sum_{i=1}^{n}b_i^{q_2}}-\Big(\alpha_2-\frac{\sum_{i=1}^{n}d_ib_i}{\sum_{i=1}^{n}b_i^{q_2}}\Big)^{s+2}\right)\right.\right.\\&\qquad\left.\left.+\,\alpha_2^{q_1-2}\left(\frac{\sum_{i=1}^{n}b_i^{q_2}\bigl(d_ib_i^{-q_2/q_1}-\alpha_1\bigr)^{s+2}}{\sum_{i=1}^{n}b_i^{q_2}}-\Big(\frac{\sum_{i=1}^{n}d_ib_i}{\sum_{i=1}^{n}b_i^{q_2}}-\alpha_1\Big)^{s+2}\right)\right\}\right]^{\frac{1}{q_1}}\sum_{i=1}^{n}b_i^{q_2}.\qquad(2.18)\end{aligned}$$

Proof: Let $\Gamma(x)=x^{q_1}$ for x ∈ [α1, α2]; then $\Gamma''(x)=q_1(q_1-1)x^{q_1-2}>0$ and $|\Gamma''|''(x)=q_1(q_1-1)(q_1-2)(q_1-3)x^{q_1-4}\ge 0$ (note that q1 ∉ (2, 3) guarantees (q1 − 2)(q1 − 3) ≥ 0), which shows that Γ and |Γ″| are convex functions. The function |Γ″| is also non-negative, so by Lemma 1.2 it is an s-convex function for s ∈ (0, 1]. Thus, using (2.4) with $\Gamma(x)=x^{q_1}$, $\kappa_i=b_i^{q_2}$, and $z_i=d_ib_i^{-q_2/q_1}$, we derive

$$\begin{aligned}&\left(\Big(\sum_{i=1}^{n}d_i^{q_1}\Big)\Big(\sum_{i=1}^{n}b_i^{q_2}\Big)^{q_1-1}-\Big(\sum_{i=1}^{n}d_ib_i\Big)^{q_1}\right)^{\frac{1}{q_1}}\\&\le\left[\frac{q_1(q_1-1)}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\alpha_1^{q_1-2}\left(\frac{\sum_{i=1}^{n}b_i^{q_2}\bigl(\alpha_2-d_ib_i^{-q_2/q_1}\bigr)^{s+2}}{\sum_{i=1}^{n}b_i^{q_2}}-\Big(\alpha_2-\frac{\sum_{i=1}^{n}d_ib_i}{\sum_{i=1}^{n}b_i^{q_2}}\Big)^{s+2}\right)\right.\right.\\&\qquad\left.\left.+\,\alpha_2^{q_1-2}\left(\frac{\sum_{i=1}^{n}b_i^{q_2}\bigl(d_ib_i^{-q_2/q_1}-\alpha_1\bigr)^{s+2}}{\sum_{i=1}^{n}b_i^{q_2}}-\Big(\frac{\sum_{i=1}^{n}d_ib_i}{\sum_{i=1}^{n}b_i^{q_2}}-\alpha_1\Big)^{s+2}\right)\right\}\right]^{\frac{1}{q_1}}\sum_{i=1}^{n}b_i^{q_2}.\qquad(2.19)\end{aligned}$$

By using the inequality $x^{\gamma}-y^{\gamma}\le(x-y)^{\gamma}$ for 0 ≤ y ≤ x and γ ∈ [0, 1] with $x=\bigl(\sum_{i=1}^{n}d_i^{q_1}\bigr)\bigl(\sum_{i=1}^{n}b_i^{q_2}\bigr)^{q_1-1}$, $y=\bigl(\sum_{i=1}^{n}d_ib_i\bigr)^{q_1}$, and $\gamma=\frac{1}{q_1}$, we obtain

$$\Big(\sum_{i=1}^{n}d_i^{q_1}\Big)^{\frac{1}{q_1}}\Big(\sum_{i=1}^{n}b_i^{q_2}\Big)^{\frac{1}{q_2}}-\sum_{i=1}^{n}d_ib_i\le\left(\Big(\sum_{i=1}^{n}d_i^{q_1}\Big)\Big(\sum_{i=1}^{n}b_i^{q_2}\Big)^{q_1-1}-\Big(\sum_{i=1}^{n}d_ib_i\Big)^{q_1}\right)^{\frac{1}{q_1}}.\qquad(2.20)$$

The inequality (2.18) follows from (2.19) and (2.20).
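The converse bound (2.18) can likewise be tested numerically. The sketch below uses arbitrary positive tuples of our choosing, with q1 = 4 (so q1 ∉ (2, 3)), q2 = 4/3, and s = 1, and a positive interval [α1, α2] built around the points $z_i=d_ib_i^{-q_2/q_1}$.

```python
import numpy as np

q1, q2, s = 4.0, 4.0 / 3.0, 1.0
d = np.array([0.8, 1.1, 0.9, 1.3])
b = np.array([1.0, 0.7, 1.2, 0.9])

z = d * b**(-q2 / q1)                   # z_i = d_i b_i^(-q2/q1)
K = (b**q2).sum()
zbar = (d * b).sum() / K                # weighted mean of the z_i
a1, a2 = 0.9 * z.min(), 1.1 * z.max()   # positive interval containing all z_i

lhs = (d**q1).sum()**(1 / q1) * (b**q2).sum()**(1 / q2) - (d * b).sum()
c = q1 * (q1 - 1) / ((s + 1) * (s + 2) * (a2 - a1)**s)
brace = (a1**(q1 - 2) * ((b**q2 * (a2 - z)**(s + 2)).sum() / K
                         - (a2 - zbar)**(s + 2))
         + a2**(q1 - 2) * ((b**q2 * (z - a1)**(s + 2)).sum() / K
                           - (zbar - a1)**(s + 2)))
print(lhs, (c * brace)**(1 / q1) * K)   # lhs <= bound, as (2.18) asserts
```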

In the following proposition, we provide a converse of the Hölder inequality in integral form as an application of Theorem 2.3.

Proposition 2.5. Let q2 > 1 and q1 ∉ (2, 3) be such that $\frac{1}{q_1}+\frac{1}{q_2}=1$. Also, let ζ1, ζ2 : [c1, c2] → ℝ₊ be two functions such that $\zeta_1^{q_1}(y)$, $\zeta_2^{q_2}(y)$, and ζ1(y)ζ2(y) are integrable on [c1, c2], with $\zeta_1(y)\zeta_2^{-q_2/q_1}(y)\in[\alpha_1,\alpha_2]$ when [α1, α2] ⊂ ℝ. Then the inequality

$$\begin{aligned}&\Big(\int_{c_1}^{c_2}\zeta_1^{q_1}(y)\,dy\Big)^{\frac{1}{q_1}}\Big(\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy\Big)^{\frac{1}{q_2}}-\int_{c_1}^{c_2}\zeta_1(y)\,\zeta_2(y)\,dy\\&\le\left[\frac{q_1(q_1-1)}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\alpha_1^{q_1-2}\left(\frac{1}{\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy}\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\Bigl(\alpha_2-\zeta_1(y)\zeta_2^{-q_2/q_1}(y)\Bigr)^{s+2}dy-\Big(\alpha_2-\frac{\int_{c_1}^{c_2}\zeta_1(y)\,\zeta_2(y)\,dy}{\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy}\Big)^{s+2}\right)\right.\right.\\&\qquad\left.\left.+\,\alpha_2^{q_1-2}\left(\frac{1}{\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy}\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\Bigl(\zeta_1(y)\zeta_2^{-q_2/q_1}(y)-\alpha_1\Bigr)^{s+2}dy-\Big(\frac{\int_{c_1}^{c_2}\zeta_1(y)\,\zeta_2(y)\,dy}{\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy}-\alpha_1\Big)^{s+2}\right)\right\}\right]^{\frac{1}{q_1}}\int_{c_1}^{c_2}\zeta_2^{q_2}(y)\,dy\qquad(2.21)\end{aligned}$$

holds for s ∈ (0, 1].

Proof: Using (2.15) with $\Gamma(x)=x^{q_1}$ for x ∈ [α1, α2], $\xi_2(y)=\zeta_2^{q_2}(y)$, and $\xi_1(y)=\zeta_1(y)\zeta_2^{-q_2/q_1}(y)$, and following the procedure of Proposition 2.4, we deduce (2.21).

As an application of Theorem 2.3, in the following corollary we establish a bound for the Hermite-Hadamard gap.

Corollary 2.6. Let ψ ∈ C²[c1, c2] be a function such that |ψ″| is s-convex; then

$$\left|\frac{1}{c_2-c_1}\int_{c_1}^{c_2}\psi(y)\,dy-\psi\Big(\frac{c_1+c_2}{2}\Big)\right|\le\frac{(c_2-c_1)^2}{(s+1)(s+2)}\bigl(|\psi''(c_1)|+|\psi''(c_2)|\bigr)\left(\frac{1}{s+3}-\frac{1}{2^{s+2}}\right).\qquad(2.22)$$

Proof: The inequality (2.22) can be obtained by using (2.15) with ψ = Γ, [α1, α2] = [c1, c2], ξ2(y) = 1, and ξ1(y) = y for y ∈ [c1, c2].
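As a quick illustration of (2.22), the sketch below takes ψ(y) = e^y on [0, 1] with s = 1 (an arbitrary choice of ours; |ψ″| = e^y is convex and non-negative, hence s-convex by Lemma 1.2) and compares the Hermite-Hadamard gap with the bound.

```python
import numpy as np
from scipy.integrate import quad

c1, c2, s = 0.0, 1.0, 1.0
psi = np.exp                              # psi'' = exp as well

gap = abs(quad(psi, c1, c2)[0] / (c2 - c1) - psi((c1 + c2) / 2))
bound = ((c2 - c1)**2 / ((s + 1) * (s + 2))
         * (abs(psi(c1)) + abs(psi(c2)))  # |psi''(c1)| + |psi''(c2)|
         * (1 / (s + 3) - 1 / 2**(s + 2)))
print(gap, bound)   # ~ 0.0696 <= 0.0775
```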

3. Applications to Information Theory

Definition 3.1 (Csiszár f-divergence [31]). Let $\mathbf{t}=(t_1,\ldots,t_n)\in\mathbb{R}^n$ and $\mathbf{r}=(r_1,\ldots,r_n)\in\mathbb{R}_+^n$ with $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ (i = 1, …, n) for [α1, α2] ⊂ ℝ. For a function f : [α1, α2] → ℝ, the Csiszár f-divergence functional is defined as

$$\bar D_c(\mathbf{t},\mathbf{r})=\sum_{i=1}^{n}r_i\,f\Big(\frac{t_i}{r_i}\Big).$$

Theorem 3.2. Let f ∈ C²[α1, α2] be a function such that |f″| is s-convex. Then for $\mathbf{t}=(t_1,\ldots,t_n)\in\mathbb{R}^n$ and $\mathbf{r}=(r_1,\ldots,r_n)\in\mathbb{R}_+^n$ the inequality

$$\begin{aligned}\left|\frac{1}{\sum_{i=1}^{n}r_i}\bar D_c(\mathbf{t},\mathbf{r})-f\Big(\frac{\sum_{i=1}^{n}t_i}{\sum_{i=1}^{n}r_i}\Big)\right|&\le\frac{|f''(\alpha_1)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\frac{1}{\sum_{i=1}^{n}r_i}\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-\Big(\alpha_2-\frac{\sum_{i=1}^{n}t_i}{\sum_{i=1}^{n}r_i}\Big)^{s+2}\right\}\\&\quad+\frac{|f''(\alpha_2)|}{(s+1)(s+2)(\alpha_2-\alpha_1)^s}\left\{\frac{1}{\sum_{i=1}^{n}r_i}\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-\Big(\frac{\sum_{i=1}^{n}t_i}{\sum_{i=1}^{n}r_i}-\alpha_1\Big)^{s+2}\right\}\qquad(3.23)\end{aligned}$$

holds, provided that $\frac{\sum_{i=1}^{n}t_i}{\sum_{i=1}^{n}r_i}\in[\alpha_1,\alpha_2]$ and $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n.

Proof: The inequality (3.23) can easily be deduced from (2.4) by taking Γ = f, $z_i=\frac{t_i}{r_i}$, and $\kappa_i=\frac{r_i}{\sum_{j=1}^{n}r_j}$.
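Since every corollary below instantiates (3.23) with a particular f, it is convenient to code the bound once. The routine csiszar_gap_bound below is a sketch of ours (not code from the paper), with f, its second derivative, and a valid interval [α1, α2] supplied by the caller; the demo call reproduces the Shannon-entropy case of Corollary 3.6, where f(x) = −log x and t = (1, …, 1).

```python
import numpy as np

def csiszar_gap_bound(f, d2f, t, r, a1, a2, s):
    """Jensen gap of the Csiszar functional and the bound (3.23)."""
    t, r = np.asarray(t, float), np.asarray(r, float)
    z = t / r                              # must lie in [a1, a2]
    zbar = t.sum() / r.sum()               # must lie in [a1, a2]
    gap = abs((r * f(z)).sum() / r.sum() - f(zbar))
    c = 1.0 / ((s + 1) * (s + 2) * (a2 - a1)**s)
    bound = (abs(d2f(a1)) * c * ((r * (a2 - z)**(s + 2)).sum() / r.sum()
                                 - (a2 - zbar)**(s + 2))
             + abs(d2f(a2)) * c * ((r * (z - a1)**(s + 2)).sum() / r.sum()
                                   - (zbar - a1)**(s + 2)))
    return gap, bound

# Demo (Corollary 3.6): the gap equals log n - E_s(r), with 1/r_i in [2, 5]
r = np.array([0.2, 0.3, 0.5])
t = np.ones_like(r)
print(csiszar_gap_bound(lambda x: -np.log(x), lambda x: 1 / x**2,
                        t, r, a1=1.9, a2=5.1, s=1.0))
```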

Definition 3.3 (Rényi divergence [31]). For μ ≥ 0 with μ ≠ 1 and two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn), the Rényi divergence is defined as

$$D_{re}(\mathbf{t},\mathbf{r})=\frac{1}{\mu-1}\log\Big(\sum_{i=1}^{n}t_i^{\mu}r_i^{1-\mu}\Big).$$

Corollary 3.4. Let 0 < s ≤ 1 and [α1, α2] ⊂ ℝ₊. Then for positive probability distributions t = (t1, …, tn) and r = (r1, …, rn), the inequality

$$\begin{aligned}D_{re}(\mathbf{t},\mathbf{r})-\frac{1}{\mu-1}\sum_{i=1}^{n}t_i\log\Big(\frac{t_i}{r_i}\Big)^{\mu-1}&\le\frac{1}{(\mu-1)\alpha_1^2(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}t_i\Big(\alpha_2-\Big(\frac{t_i}{r_i}\Big)^{\mu-1}\Big)^{s+2}-\Big(\alpha_2-\sum_{i=1}^{n}t_i^{\mu}r_i^{1-\mu}\Big)^{s+2}\right\}\\&\quad+\frac{1}{(\mu-1)\alpha_2^2(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}t_i\Big(\Big(\frac{t_i}{r_i}\Big)^{\mu-1}-\alpha_1\Big)^{s+2}-\Big(\sum_{i=1}^{n}t_i^{\mu}r_i^{1-\mu}-\alpha_1\Big)^{s+2}\right\}\qquad(3.24)\end{aligned}$$

holds, provided that $\sum_{i=1}^{n}r_i\bigl(\frac{t_i}{r_i}\bigr)^{\mu}\in[\alpha_1,\alpha_2]$ and $\bigl(\frac{t_i}{r_i}\bigr)^{\mu-1}\in[\alpha_1,\alpha_2]$ for i = 1, …, n, with μ > 1.

Proof: Let $\Gamma(x)=-\frac{1}{\mu-1}\log x$ for x ∈ [α1, α2]. Then $\Gamma''(x)=\frac{1}{(\mu-1)x^2}>0$ and $|\Gamma''|''(x)=\frac{6}{(\mu-1)x^4}>0$ for μ > 1, which shows that Γ and |Γ″| are convex functions with |Γ″| ≥ 0; so by Lemma 1.2 the function |Γ″| is s-convex for s ∈ (0, 1]. Therefore, using (2.4) with $\Gamma(x)=-\frac{1}{\mu-1}\log x$, $\kappa_i=t_i$, and $z_i=\bigl(\frac{t_i}{r_i}\bigr)^{\mu-1}$, we derive (3.24).
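A numerical check of (3.24) follows the proof's substitutions κi = ti and zi = (ti/ri)^(μ−1); the sketch below uses arbitrary distributions of our choosing with μ = 2 and s = 1.

```python
import numpy as np

mu, s = 2.0, 1.0
t = np.array([0.2, 0.3, 0.5])
r = np.array([0.3, 0.4, 0.3])

z = (t / r)**(mu - 1)                   # z_i from the proof above
a1, a2 = 0.9 * z.min(), 1.1 * z.max()   # positive interval containing the z_i
zbar = (t * z).sum()                    # = sum_i t_i^mu r_i^(1-mu)

d_re = np.log((t**mu * r**(1 - mu)).sum()) / (mu - 1)
lhs = d_re - (t * np.log(z)).sum() / (mu - 1)

c = 1.0 / ((mu - 1) * (s + 1) * (s + 2) * (a2 - a1)**s)
bound = (c / a1**2 * ((t * (a2 - z)**(s + 2)).sum() - (a2 - zbar)**(s + 2))
         + c / a2**2 * ((t * (z - a1)**(s + 2)).sum() - (zbar - a1)**(s + 2)))
print(lhs, bound)   # lhs <= bound, as (3.24) asserts
```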

Definition 3.5 (Shannon entropy [31]). Let r = (r1, …, rn) be a positive probability distribution; then the Shannon entropy is defined as

$$E_s(\mathbf{r})=-\sum_{i=1}^{n}r_i\log r_i.$$

Corollary 3.6. Let [α1, α2] ⊂ ℝ₊, and let r = (r1, …, rn) be a positive probability distribution such that $\frac{1}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n, with 0 < s ≤ 1. Then

$$\begin{aligned}\log n-E_s(\mathbf{r})&\le\frac{1}{\alpha_1^2(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{1}{r_i}\Big)^{s+2}-(\alpha_2-n)^{s+2}\right\}\\&\quad+\frac{1}{\alpha_2^2(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{1}{r_i}-\alpha_1\Big)^{s+2}-(n-\alpha_1)^{s+2}\right\}.\qquad(3.25)\end{aligned}$$

Proof: Let f(x) = −log x for x ∈ [α1, α2]. Then $f''(x)=\frac{1}{x^2}>0$ and $|f''|''(x)=\frac{6}{x^4}>0$, which shows that f and |f″| are convex functions. Also, |f″| is non-negative, and so by Lemma 1.2 we conclude that it is s-convex for s ∈ (0, 1]. Therefore, using (3.23) with f(x) = −log x and (t1, …, tn) = (1, …, 1), we get (3.25).

Definition 3.7 (Kullback-Leibler divergence [31]). For two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn), the Kullback-Leibler divergence is defined as

$$D_{kl}(\mathbf{t},\mathbf{r})=\sum_{i=1}^{n}t_i\log\frac{t_i}{r_i}.$$

Corollary 3.8. Let 0 < s ≤ 1 and 0 < α1 < α2, and let t = (t1, …, tn) and r = (r1, …, rn) be positive probability distributions such that $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}D_{kl}(\mathbf{t},\mathbf{r})&\le\frac{1}{\alpha_1(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-(\alpha_2-1)^{s+2}\right\}\\&\quad+\frac{1}{\alpha_2(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-(1-\alpha_1)^{s+2}\right\}.\qquad(3.26)\end{aligned}$$

Proof: Let f(x) = x log x for x ∈ [α1, α2]. Then $f''(x)=\frac{1}{x}>0$ and $|f''|''(x)=\frac{2}{x^3}>0$, which shows that f and |f″| are convex functions. Also, |f″| ≥ 0, and so Lemma 1.2 guarantees the s-convexity of |f″| for s ∈ (0, 1]. Therefore, using (3.23) with f(x) = x log x, we get (3.26).
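A self-contained check of (3.26) (a sketch with arbitrary distributions of our choosing): here f(x) = x log x, |f″(x)| = 1/x, and [α1, α2] is chosen to contain every ti/ri together with 1.

```python
import numpy as np

t = np.array([0.2, 0.3, 0.5])
r = np.array([0.3, 0.4, 0.3])
z = t / r                                # ratios; note sum(t) = sum(r) = 1
a1, a2, s = 0.9 * z.min(), 1.1 * z.max(), 1.0

dkl = (t * np.log(t / r)).sum()          # Kullback-Leibler divergence
c = 1.0 / ((s + 1) * (s + 2) * (a2 - a1)**s)
bound = (c / a1 * ((r * (a2 - z)**(s + 2)).sum() - (a2 - 1)**(s + 2))
         + c / a2 * ((r * (z - a1)**(s + 2)).sum() - (1 - a1)**(s + 2)))
print(dkl, bound)   # D_kl(t, r) <= bound, as (3.26) asserts
```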

Definition 3.9 (χ² divergence [31]). The χ² divergence $D_{\chi^2}(\mathbf{t},\mathbf{r})$ for two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn) is defined as

$$D_{\chi^2}(\mathbf{t},\mathbf{r})=\sum_{i=1}^{n}\frac{(t_i-r_i)^2}{r_i}.$$

Corollary 3.10. Let 0 < s ≤ 1 and 0 < α1 < α2, and let t = (t1, …, tn) and r = (r1, …, rn) be positive probability distributions such that $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}D_{\chi^2}(\mathbf{t},\mathbf{r})&\le\frac{2}{(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-(\alpha_2-1)^{s+2}\right\}\\&\quad+\frac{2}{(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-(1-\alpha_1)^{s+2}\right\}.\qquad(3.27)\end{aligned}$$

Proof: Let f(x) = (x − 1)² for x ∈ [α1, α2]. Then f″(x) = 2 > 0 and |f″|″(x) = 0, which shows that f and |f″| are convex functions. Also, the function |f″| is non-negative, and so Lemma 1.2 confirms its s-convexity for s ∈ (0, 1]. Therefore, using (3.23) with f(x) = (x − 1)², we obtain (3.27).

Definition 3.11 (Bhattacharyya coefficient [31]). For two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn), the Bhattacharyya coefficient is defined as

$$C_b(\mathbf{t},\mathbf{r})=\sum_{i=1}^{n}\sqrt{t_i r_i}.$$

Corollary 3.12. Let 0 < s ≤ 1 and [α1, α2] ⊂ ℝ₊, and let t = (t1, …, tn) and r = (r1, …, rn) be two positive probability distributions such that $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}1-C_b(\mathbf{t},\mathbf{r})&\le\frac{1}{4\alpha_1^{3/2}(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-(\alpha_2-1)^{s+2}\right\}\\&\quad+\frac{1}{4\alpha_2^{3/2}(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-(1-\alpha_1)^{s+2}\right\}.\qquad(3.28)\end{aligned}$$

Proof: Let $f(x)=-\sqrt{x}$ for x ∈ [α1, α2]. Then $f''(x)=\frac{1}{4x^{3/2}}>0$ and $|f''|''(x)=\frac{15}{16x^{7/2}}>0$, which shows that f and |f″| are convex functions. Also, |f″| ≥ 0 implies its s-convexity for s ∈ (0, 1] by Lemma 1.2. Therefore, using (3.23) with $f(x)=-\sqrt{x}$, we obtain (3.28).

Definition 3.13 (Hellinger distance [31]). The Hellinger distance $D_h^2(\mathbf{t},\mathbf{r})$ between two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn) is defined as

$$D_h^2(\mathbf{t},\mathbf{r})=\frac{1}{2}\sum_{i=1}^{n}\bigl(\sqrt{t_i}-\sqrt{r_i}\bigr)^2.$$

Corollary 3.14. Let 0 < α1 < α2 and 0 < s ≤ 1, and let t = (t1, …, tn) and r = (r1, …, rn) be positive probability distributions such that $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}D_h^2(\mathbf{t},\mathbf{r})&\le\frac{1}{4\alpha_1^{3/2}(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-(\alpha_2-1)^{s+2}\right\}\\&\quad+\frac{1}{4\alpha_2^{3/2}(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-(1-\alpha_1)^{s+2}\right\}.\qquad(3.29)\end{aligned}$$

Proof: Let $f(x)=\frac{1}{2}(1-\sqrt{x})^2$ for x ∈ [α1, α2]. Then $f''(x)=\frac{1}{4x^{3/2}}>0$ and $|f''|''(x)=\frac{15}{16x^{7/2}}>0$, which shows that f and |f″| are convex functions. Also, |f″| ≥ 0, and so from Lemma 1.2 we conclude its s-convexity for s ∈ (0, 1]. Therefore, using (3.23) with $f(x)=\frac{1}{2}(1-\sqrt{x})^2$, we deduce (3.29).

Definition 3.15 (Triangular discrimination [31]). For two positive probability distributions t = (t1, …, tn) and r = (r1, …, rn), the triangular discrimination is defined as

$$D_{\Delta}(\mathbf{t},\mathbf{r})=\sum_{i=1}^{n}\frac{(t_i-r_i)^2}{t_i+r_i}.$$

Corollary 3.16. Let 0 < s ≤ 1 and 0 < α1 < α2, and let t = (t1, …, tn) and r = (r1, …, rn) be positive probability distributions such that $\frac{t_i}{r_i}\in[\alpha_1,\alpha_2]$ for i = 1, …, n. Then

$$\begin{aligned}D_{\Delta}(\mathbf{t},\mathbf{r})&\le\frac{8}{(\alpha_1+1)^3(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\alpha_2-\frac{t_i}{r_i}\Big)^{s+2}-(\alpha_2-1)^{s+2}\right\}\\&\quad+\frac{8}{(\alpha_2+1)^3(\alpha_2-\alpha_1)^s(s+1)(s+2)}\left\{\sum_{i=1}^{n}r_i\Big(\frac{t_i}{r_i}-\alpha_1\Big)^{s+2}-(1-\alpha_1)^{s+2}\right\}.\qquad(3.30)\end{aligned}$$

Proof: Let $f(x)=\frac{(x-1)^2}{x+1}$ for x ∈ [α1, α2]. Then $f''(x)=\frac{8}{(x+1)^3}>0$ and $|f''|''(x)=\frac{96}{(x+1)^5}>0$, which shows that f and |f″| are convex functions. Also, |f″| is non-negative, and thus the s-convexity of |f″| for s ∈ (0, 1] follows from Lemma 1.2. Therefore, using (3.23) with $f(x)=\frac{(x-1)^2}{x+1}$, we get (3.30).

Remark 3.17. Analogously, bounds for various divergences in integral form can be derived as applications of Theorem 2.3.

4. Conclusion

The Jensen inequality has numerous applications in engineering, economics, computer science, information theory, and coding, and it has been derived for convex and generalized convex functions. This paper presents a novel approach to bounding the Jensen gap, in which bounds are obtained via s-convex functions. Numerical experiments verify the main results and provide evidence for the tightness of the bound given in (2.15) for the Jensen gap. These experiments also show that (2.15) gives very close estimates of the Jensen gap even when |Γ″| is s-convex without being convex. The bounds are used to obtain new estimates related to the Hermite-Hadamard and Hölder inequalities. Furthermore, based on the main results, various divergences are estimated. These divergence estimates can be applied to signal processing, magnetic resonance image analysis, image segmentation, pattern recognition, and other areas. The ideas in this paper can also be used with other inequalities and for some other classes of convex functions.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary materials; further inquiries can be directed to the corresponding author/s.

Author Contributions

MA gave the main idea. MA and SK worked on the Main Results, while Y-MC worked on the Introduction. All authors carefully checked and approved the whole manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of China (Grant Nos. 61673169, 11301127, 11701176, 11626101, and 11601485).

References

1. Asplund E. Fréchet differentiability of convex functions. Acta Math. (1968) 121:31–47. doi: 10.1007/BF02391908

2. Phelps RR. Convex Functions, Monotone Operators and Differentiability, Vol. 1364. Lecture Notes in Mathematics. Berlin: Springer-Verlag. (1989).

3. Udrişte C. Continuity of convex functions on Riemannian manifolds. Bull Math Soc Sci. (1977) 21:215–8.

4. Ger R, Kuczma M. On the boundedness and continuity of convex functions and additive functions. Aequ Math. (1970) 4:157–62. doi: 10.1007/BF01817756

5. Minty GJ. On the monotonicity of the gradient of a convex function. Pac J Math. (1964) 14:243–7.

6. Khan S, Adil Khan M, Chu Y-M. Converses of the Jensen inequality derived from the Green functions with applications in information theory. Math Method Appl Sci. (2020) 43:2577–87. doi: 10.1002/mma.6066

7. Khan S, Adil Khan M, Chu Y-M. New converses of Jensen inequality via Green functions with applications. RACSAM. (2020) 114:1–14. doi: 10.1007/s13398-020-00843-1

8. Adil Khan M, Pečarić Ð, Pečarić J. New refinement of the Jensen inequality associated to certain functions with applications. J Inequal Appl. (2020) 2020:1–11. doi: 10.1186/s13660-020-02343-7

9. Bakula MK, Özdemir ME, Pečarić J. Hadamard type inequalities for m-convex and (α,m)-convex functions. J Inequal Pure Appl Math. (2008) 9:1–12.

10. Bombardelli M, Varošanec S. Properties of h-convex functions related to the Hermite-Hadamard-Fejér inequalities. Comput Math Appl. (2009) 58:1869–77. doi: 10.1016/j.camwa.2009.07.073

11. Khan J, Adil Khan M, Pečarić J. On Jensen's type inequalities via generalized majorization inequalities. Filomat. (2018) 32:5719–33. doi: 10.2298/FIL1816719K

12. Dragomir SS, Pearce CEM. Jensen's inequality for quasi-convex functions. Numer Algebra Control Opt. (2012) 2:279–91. doi: 10.3934/naco.2012.2.279

13. Wang M-K, Zhang W, Chu Y-M. Monotonicity, convexity and inequalities involving the generalized elliptic integrals. Acta Math Sci. (2019) 39B:1440–50. doi: 10.1007/s10473-019-0520-z

14. Wu S-H, Chu Y-M. Schur m-power convexity of generalized geometric Bonferroni mean involving three parameters. J Inequal Appl. (2019) 2019:1–11. doi: 10.1186/s13660-019-2013-y

15. Jain S, Mehrez K, Baleanu D, Agarwal P. Certain Hermite-Hadamard inequalities for logarithmically convex functions with applications. Mathematics. (2019) 7:1–12. doi: 10.3390/math7020163

16. Agarwal P, Jleli M, Tomar M. Certain Hermite-Hadamard type inequalities via generalized k-fractional integrals. J Inequal Appl. (2017) 2017:1–10. doi: 10.1186/s13660-017-1318-y

17. Agarwal P. Some inequalities involving Hadamard-type k-fractional integral operators. Math Method Appl Sci. (2017) 40:3882–91. doi: 10.1002/mma.4270

18. Liu Z, Yang W, Agarwal P. Certain Chebyshev type inequalities involving the generalized fractional integral operator. J Comput Anal Appl. (2017) 22:999–1014.

19. Choi J, Agarwal P. Certain inequalities involving pathway fractional integral operators. Kyungpook Math J. (2016) 56:1161–8. doi: 10.5666/KMJ.2016.56.4.1161

20. Mehrez K, Agarwal P. New Hermite-Hadamard type integral inequalities for convex functions and their applications. J Comput Appl Math. (2019) 350:274–85. doi: 10.1016/j.cam.2018.10.022

21. Chen X. New convex functions in linear spaces and Jensen's discrete inequality. J Inequal Appl. (2013) 2013:1–8. doi: 10.1186/1029-242X-2013-472

22. Set E. New inequalities of Ostrowski type for mappings whose derivatives are s-convex in the second sense via fractional integrals. Comput Math Appl. (2012) 63:1147–54. doi: 10.1016/j.camwa.2011.12.023

23. Sarikaya MZ, Set E, Özdemir ME. On new inequalities of Simpson's type for s-convex functions. Comput Math Appl. (2010) 60:2191–9. doi: 10.1016/j.camwa.2010.07.033

24. Alomari M, Darus M, Dragomir SS, Cerone P. Ostrowski type inequalities for functions whose derivatives are s-convex in the second sense. Appl Math Lett. (2010) 23:1071–6. doi: 10.1016/j.aml.2010.04.038

25. Chen J, Huang X. Some new inequalities of Simpson's type for s-convex functions via fractional integrals. Filomat. (2017) 31:4989–97. doi: 10.2298/FIL1715989C

26. Özcan S, Işcan I. Some new Hermite-Hadamard type inequalities for s-convex functions and their applications. J Inequal Appl. (2019) 2019:1–11. doi: 10.1186/s13660-019-2151-2

27. Almutairi O, Kılıçman A. Integral inequalities for s-convexity via generalized fractional integrals on fractal sets. Mathematics. (2020) 8:53. doi: 10.3390/math8010053

28. Özdemir ME, Yildiz Ç, Akdemir AO, Set E. On some inequalities for s-convex functions and applications. J Inequal Appl. (2013) 2013:1–11. doi: 10.1186/1029-242X-2013-333

29. Adil Khan M, Hanif M, Khan ZAH, Ahmad K, Chu Y-M. Association of Jensen's inequality for s-convex function with Csiszár divergence. J Inequal Appl. (2019) 2019:1–14. doi: 10.1186/s13660-019-2112-9

30. Butt SI, Mehmood N, Pečarić J. New generalizations of Popoviciu type inequalities via new green functions and Fink's identity. Trans A Razmadze Math Inst. (2017) 171:293–303. doi: 10.1016/j.trmi.2017.04.003

31. Lovričević N, Pečarić Ð, Pečarić J. Zipf-Mandelbrot law, f-divergences and the Jensen-type interpolating inequalities. J Inequal Appl. (2018) 2018:1–20. doi: 10.1186/s13660-018-1625-y

Keywords: Jensen inequality, s-convex function, Green function, Csiszár divergence, Hölder inequality

Citation: Adil Khan M, Khan S and Chu Y-M (2020) New Estimates for the Jensen Gap Using s-Convexity With Applications. Front. Phys. 8:313. doi: 10.3389/fphy.2020.00313

Received: 26 March 2020; Accepted: 09 July 2020;
Published: 15 October 2020.

Edited by:

Mustafa Inc, Firat University, Turkey

Reviewed by:

Praveen Agarwal, Anand International College of Engineering, India
Mustapha Raissouli, Taibah University, Saudi Arabia

Copyright © 2020 Adil Khan, Khan and Chu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yu-Ming Chu, chuyuming2005@126.com
