
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 04 September 2018
Sec. Mathematics of Computation and Data Science
This article is part of the Research Topic Recent Developments in Signal Approximation and Reconstruction

On the Construction of Sparse Matrices From Expander Graphs

Bubacarr Bah1,2* and Jared Tanner3
  • 1The Research Centre, African Institute for Mathematical Sciences, Muizenberg, South Africa
  • 2Department of Mathematical Sciences, University of Stellenbosch, Stellenbosch, South Africa
  • 3Mathematics Institute, University of Oxford, Oxford, United Kingdom

We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in Bah and Tanner [1]. With better bounds we derive a new, reduced sample complexity for d, the number of non-zeros per column of these matrices (or, equivalently, the left-degree of the underlying expander graph). Precisely, d = O(log(N/s)/log s), as opposed to the standard d = O(log(N/s)), where N is the number of columns of the matrix (also the cardinality of the set of left vertices of the expander graph), or the ambient dimension of the signals that can be sensed by such matrices, and s is the sparsity of those signals. This gives insight into why sensing matrices with small d perform well in numerical compressed sensing experiments. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state-of-the-art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.

1. Introduction

Sparse binary matrices, say A ∈ {0, 1}^{n×N} with n ≪ N, are widely used in applications including graph sketching [2, 3], network tomography [4, 5], data streaming [6, 7], breaking privacy of databases via aggregate queries [8], compressed imaging of intensity patterns [9], and more generally combinatorial compressed sensing [10–15], linear sketching [7], and group testing [16, 17]. In all these areas we are interested in the case where n ≪ N, in which case A is used as an efficient encoder of sparse signals x ∈ ℝ^N with sparsity s ≪ n, where such matrices are known to preserve the ℓ1 distance of sparse vectors [18]. Conditions that guarantee that a given encoder A, also referred to as a sensing matrix in compressed sensing, is suitable typically include the nullspace, coherence, and restricted isometry conditions, see [19] and references therein. The goal is for A to satisfy one or more of these conditions with the minimum possible n, the number of measurements. For uniform guarantees over all A, it has been established that n has to be Ω(s²), but that with high probability on the draw of a random A, n can be O(s log(N/s)) for A with entries drawn from a sub-gaussian distribution, see [19] for a review of such results. Matrices with entries drawn from a Bernoulli distribution fall in the sub-gaussian family, but these are dense, as opposed to the sparse binary matrices considered here. For computational advantages, such as faster application and smaller storage, it is advantageous to use a sparse A in applications [1, 14, 18].

Herein we consider the n achievable when A is the adjacency matrix of an expander graph [18]; expander graphs will be defined in the next section. The construction of such matrices can be construed either as a linear algebra problem or, equivalently, as a graph theory one (in this manuscript we will focus more on the linear algebra discourse). There has been significant research on expander graphs in pure mathematics and theoretical computer science, see [20] and references therein. Both deterministic and probabilistic constructions of expander graphs have been suggested. The best known deterministic constructions achieve n = O(s^{1+α}) for some α > 0 [21]. On the other hand, random constructions, first proven in Bassalygo and Pinsker [22], achieve the optimal n = O(s log(N/s)), precisely n = O(sd) with d = O(log(N/s)), where d is the left degree of the expander graph and also the number of ones in each column of A, to be defined in the next section. However, to the best of our knowledge, it was [1] that proposed a probabilistic construction that is not only optimal but also more suitable for making quantitative statements where such matrices are applied.

This work follows the probabilistic construction proposed in Bah and Tanner [1] but, with more careful computation of the bounds, is able to achieve n = O(s log(N/s)) with d = O(log(N/s)/log s). We retain the complexity of n but obtain a smaller complexity for d, which is novel. Related results with a similar d were derived in Indyk and Razenshteyn [23] and Bah et al. [24], but for structured sparse signals in the framework of model-based compressed sensing or sketching. In that framework, one has second order information about x beyond simple sparsity, which is first order information about x. It is thus expected, and established, that it is possible to get a smaller n and hence a smaller d. Arguably, such a small complexity for d justifies in hindsight fixing d to a small number in simulations with such A, as in Bah and Tanner [1], Bah et al. [24], and Mendoza-Smith and Tanner [14], just to mention a few.

The results derived here are asymptotic, though finite dimensional bounds follow directly. We focus on the ratios of the problem dimensions (s, n, N) for which these results hold. There is an almost standard way of interrogating such a question, namely phase transitions, probably introduced to the compressed sensing literature by Donoho and Tanner [25]. In other words, we derive sampling theorems, numerically depicted by phase transition plots, describing the problem size spaces for which our construction holds. This is similar to what was done in Bah and Tanner [1], but for comparison purposes we include phase transition plots from the probabilistic constructions by Buhrman et al. [26] and Berinde [27]. The plots show improvement over these earlier works. Furthermore, we show implications of our results for compressed sensing by using our results within the phase transition framework to compare the performance of selected combinatorial compressed sensing algorithms, as is done in Bah and Tanner [1] and Mendoza-Smith and Tanner [14].

The manuscript is organized as follows. Section 1 gives the introduction, while section 2 sets the notation and defines some useful terms. The main results are stated in section 3 and the details of the construction are given in section 4. This is followed by a discussion in section 5 of our results, comparing them to existing results and using them to compare the performance of some combinatorial compressed sensing algorithms. In section 6 we state the remaining proofs of theorems, lemmas, corollaries, and propositions used in this manuscript. After this section is the conclusion in section 7. We include an Appendix in the Supplementary Materials, where we summarize key relevant materials from Bah and Tanner [1] and show the derivation of some bounds used in the proofs.

2. Preliminaries

2.1. Notation

Scalars will be denoted by lowercase letters (e.g., k), vectors by lowercase boldface letters (e.g., x), sets by uppercase calligraphic letters (e.g., S) and matrices by uppercase boldface letters (e.g., A). The cardinality of a set S is denoted by |S| and [N] := {1, …, N}. Given S ⊆ [N], its complement is denoted by S^c := [N]\S, and x_S is the restriction of x ∈ ℝ^N to S, i.e., (x_S)_i = x_i if i ∈ S and 0 otherwise. For a matrix A, the restriction of A to the columns indexed by S is denoted by A_S. For a graph, Γ(S) denotes the set of neighbors of S, that is, the nodes that are connected to the nodes in S, and e_{ij} = (x_i, y_j) represents an edge connecting node x_i to node y_j. The ℓ_p norm of a vector x ∈ ℝ^N is defined as $\|x\|_p := \left(\sum_{i=1}^N |x_i|^p\right)^{1/p}$.

2.2. Definitions

Below we give formal definitions that will be used in this manuscript.

Definition 2.1 (ℓp-norm restricted isometry property). A matrix A satisfies the ℓp-norm restricted isometry property (RIP-p) of order s and constant δs < 1 if it satisfies the following inequality.

$(1-\delta_s)\|x\|_p^p \le \|Ax\|_p^p \le (1+\delta_s)\|x\|_p^p, \quad \text{for all } s\text{-sparse } x.$    (1)

The most popular case is RIP-2, which was first proposed in Candès et al. [28]. Typically, when RIP is mentioned without qualification, it means RIP-2. In the discourse of this work, though, RIP-1 is the most relevant. The RIP says that A is a near-isometry, and it is a sufficient condition to guarantee exact sparse recovery in the noiseless setting (i.e., y = Ax), or recovery up to some error bound, also referred to as an optimality condition, in the noisy setting (i.e., y = Ax + e, where e is the bounded noise vector). We define the optimality condition more precisely below.

Definition 2.2 (Optimality condition). Given y = Ax + e and $\hat{x} = \Delta(Ax + e)$ for a reconstruction algorithm Δ, the optimal error guarantee is

$\|\hat{x} - x\|_p \le C_1\,\sigma_s(x)_q + C_2\,\|e\|_p,$    (2)

where C1, C2 > 0 depend only on the RIP constant (RIC), i.e., δs, and not on the problem size, 1 ≤ q ≤ p ≤ 2, and σs(x)q denotes the error of the best s-term approximation in the ℓq-norm, that is

$\sigma_s(x)_q := \min_{s\text{-sparse } z} \|z - x\|_q.$    (3)

Equation (2) is also referred to as the ℓp/ℓq optimality condition (or error guarantee). Ideally, we would like ℓ2/ℓ2, but the best provable is ℓ2/ℓ1 [28]; weaker than this is ℓ1/ℓ1 [18], which is what is possible with the A considered in this work.
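As a concrete illustration of (3), the minimizer keeps the s largest-magnitude entries of x, so σs(x)q is just the ℓq norm of the remaining entries. A minimal sketch (the function name best_s_term_error is ours, purely illustrative):

```python
import numpy as np

def best_s_term_error(x, s, q=1.0):
    """sigma_s(x)_q: l_q error of the best s-term approximation of x.

    The minimizer keeps the s largest-magnitude entries of x, so the
    error is the l_q norm of the remaining (smallest) entries.
    """
    idx = np.argsort(np.abs(x))          # ascending by magnitude
    tail = x[idx[:-s]] if s > 0 else x   # entries outside the best support
    return np.sum(np.abs(tail) ** q) ** (1.0 / q)

# Example: a vector that is nearly 3-sparse has small sigma_3(x)_1.
x = np.array([5.0, -4.0, 3.0, 0.01, -0.02, 0.005])
print(best_s_term_error(x, s=3, q=1.0))  # ~0.035
```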

To aid translation between the terminology of graph theory and that of linear algebra, we define the set of neighbors in both notations.

Definition 2.3 (Definition 1.4 in Bah and Tanner [1]). Consider a bipartite graph G = ([N], [n], E), where E is the set of edges and e_{ij} = (x_i, y_j) is the edge that connects vertex x_i to vertex y_j. For a given set of left vertices S ⊆ [N], its set of neighbors is Γ(S) = {y_j | x_i ∈ S and e_{ij} ∈ E}. In terms of the adjacency matrix A of G, the set of neighbors of A_S for |S| = s, denoted by A_s, is the set of rows of A_S with at least one non-zero.

Definition 2.4 (Expander graph). Let G = ([N], [n], E) be a left-regular bipartite graph with N left vertices, n right vertices, a set of edges E and left degree d. If, for any ϵ ∈ (0, 1/2) and any S ⊆ [N] of size |S| ≤ s, we have that |Γ(S)| ≥ (1 − ϵ)d|S|, then G is referred to as an (s, d, ϵ)-expander graph.

The ϵ is referred to as the expansion coefficient of the graph. An (s, d, ϵ)-expander graph, also called an unbalanced expander graph [18] or a lossless expander graph [29], is a highly connected bipartite graph. We denote the ensemble of n × N binary matrices with d ones per column by B(N, n; d), or just B to simplify notation. We will also denote the ensemble of n × N adjacency matrices of (s, d, ϵ)-expander graphs by 𝔼(N, n; s, d, ϵ), or simply 𝔼.
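In matrix terms, Γ(S) is the set of rows of A_S containing at least one non-zero (Definition 2.3), so both notations can be checked in a few lines. A minimal sketch (the helper name neighbours is ours, purely illustrative):

```python
import numpy as np

def neighbours(A, S):
    """Gamma(S): rows of A_S (columns of A indexed by S) with a non-zero.

    Returns the index set of right vertices adjacent to the left
    vertices in S, matching Definition 2.3.
    """
    AS = A[:, list(S)]                     # restriction A_S
    return set(np.flatnonzero(AS.any(axis=1)))

# Toy adjacency matrix of a bipartite graph with d = 2 ones per column.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])
S = {0, 2}
print(sorted(neighbours(A, S)))  # [0, 1, 3] -> |Gamma(S)| = 3
```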

3. Results

Before stating the main results of the paper, we summarize here how we improve upon the existing results in Bah and Tanner [1]; the same considerations apply to standard random constructions of such matrices. Essentially, in the random constructions we want one minus the probability in (5) to go to zero as the problem size (s, n, N) → ∞. In Theorem 4.1 (which was Theorem 1.6 in Bah and Tanner [1]), for this to happen we require d = O(log(N/s)), based on the bound (7). In this work we derive a tighter bound than (7), namely (12), which requires d to be smaller than the standard bound by a factor of log(s).

The main result of this work is formalized in Theorem 3.1, which is an asymptotic result, where the dimensions grow while their ratios remain bounded. This is also referred to as the proportional growth asymptotics [30, 31]. We point out that it has been observed in practice that applying these matrices (including in the simulations we do in this manuscript), which necessitates having finite dimensions, produces results that agree with the theory. However, the theoretical analysis used in this work cannot be applied to the finite setting.

Theorem 3.1. Consider ϵ ∈ (0, 1/2) and let d, s, n, N ∈ ℕ. Draw an n × N matrix A at random from B, i.e., for each column of A uniformly assign ones in d out of n positions. Then, as (s, n, N) → ∞ while s/n ∈ (0, 1) and n/N ∈ (0, 1), with probability approaching 1 exponentially, the matrix A ∈ 𝔼 with

$d = O\left(\frac{\log(N/s)}{\epsilon \log s}\right), \quad \text{and} \quad n = O\left(\frac{s \log(N/s)}{\epsilon^2}\right).$    (4)

The proof of this theorem is found in section 6.1. It is worth emphasizing that the complexity of d is novel and it is the main contribution of this work.
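To get a feel for the scalings in (4), the following sketch evaluates the standard degree log(N/s)/ϵ against the reduced degree log(N/s)/(ϵ log s) for one illustrative problem size; all constants are set to 1, which is our simplification, not a prescription from the theorem.

```python
import numpy as np

s, N, eps = 100, 10**6, 1/6

d_standard = np.log(N / s) / eps                 # d = O(log(N/s)/eps)
d_reduced  = np.log(N / s) / (eps * np.log(s))   # d = O(log(N/s)/(eps*log s))
n_scaling  = s * np.log(N / s) / eps**2          # n = O(s*log(N/s)/eps^2)

print(f"standard d ~ {d_standard:.1f}, reduced d ~ {d_reduced:.1f}")
print(f"n ~ {n_scaling:.0f}")  # the reduced d is smaller by log(s) ~ 4.6
```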

Furthermore, in the proportional growth asymptotics, i.e., as (s, n, N) → ∞ while s/n → ρ and n/N → δ with ρ, δ ∈ (0, 1), for completeness we derive a phase transition function (curve) in δρ-space below which Theorem 3.1 holds, and above which it fails to hold. This is formalized in the following lemma.

Lemma 3.1. Fix ϵ ∈ (0, 1/2) and let d, s, n, N ∈ ℕ. As (s, n, N) → ∞ while s/n → ρ ∈ (0, 1) and n/N → δ ∈ (0, 1), then for ρ < (1−γ)ρBT(δ; d, ϵ) and any γ > 0, a random draw of A from B implies A ∈ 𝔼 with probability approaching 1 exponentially.

The proof of this lemma is given in section 6.2. The phase transition function ρBT(δ; d, ϵ) turns out to be significantly higher than those derived from existing probabilistic constructions, hence our results are a significant improvement over earlier works. This will be graphically demonstrated with some numerical simulations in section 5.

4. Construction

The standard probabilistic construction is, for each column of A, to uniformly assign ones in d out of n positions; while the standard approach to derive the probability bounds is to randomly select s columns of A indexed by S and compute the probability that |A_s| < (1 − ϵ)ds, then take a union bound over all sets S of size s. Our work in Bah and Tanner [1] computed smaller bounds than previous works, based on a dyadic splitting of S, and derived the following bound. We have changed the notation and format of Theorem 1.6 in Bah and Tanner [1] slightly, to be consistent with the notation and format of this manuscript.
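This construction is also straightforward to simulate: draw A from B by placing d ones uniformly at random in each column, pick random supports S, and count the neighbors |A_s| = |Γ(S)|. A rough Monte Carlo sketch (all parameter values and the names draw_from_B and expansion_failure_rate are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_from_B(n, N, d):
    """Draw A from B(N, n; d): d ones placed uniformly in each column."""
    A = np.zeros((n, N), dtype=np.uint8)
    for j in range(N):
        A[rng.choice(n, size=d, replace=False), j] = 1
    return A

def expansion_failure_rate(A, s, eps, trials=2000):
    """Monte Carlo estimate of Prob(|A_s| < (1-eps)*d*s) over random S."""
    n, N = A.shape
    d = int(A[:, 0].sum())
    fails = 0
    for _ in range(trials):
        S = rng.choice(N, size=s, replace=False)
        if A[:, S].any(axis=1).sum() < (1 - eps) * d * s:
            fails += 1
    return fails / trials

A = draw_from_B(n=400, N=2000, d=8)
print(expansion_failure_rate(A, s=10, eps=1/6))  # typically 0 at these sizes
```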

Theorem 4.1 (Theorem 1.6, Bah and Tanner [1]). Consider d, s, n, N ∈ ℕ and fix S with |S| ≤ s. Let an n × N matrix A be drawn from B; then

$\mathrm{Prob}(|A_s| \le a_s) < p_n(s, d)\cdot \exp\left[n\cdot\Psi_n(a_s, \ldots, a_1)\right]$    (5)

with a1: = d, and the functions defined as

$p_n(s,d) = \frac{2^2 5^2}{2\pi}\, s^3 d^3, \quad \text{and}$    (6)
$\Psi_n(a_s, \ldots, a_1) = \frac{1}{n}\left[3s\log(5d) + \sum_{i\in\Omega}\frac{s}{2i}\,\psi_i\right], \quad \Omega = \{2^j\}_{j=0}^{\log_2(s)-1},$    (7)

where ψi is given by the following expression

$(n-a_i)\cdot H\!\left(\frac{a_{2i}-a_i}{n-a_i}\right) + a_i\cdot H\!\left(\frac{a_{2i}-a_i}{a_i}\right) - n\cdot H\!\left(\frac{a_i}{n}\right)$    (8)

and H(·) is the Shannon entropy in base e logarithms, and the index set is $\Omega = \{2^j\}_{j=0}^{\log_2(s)-1}$.

a) If no restriction is imposed on as then the ai for i > 1 take on their expected value âi given by

$\hat{a}_{2i} = \hat{a}_i\left(2 - \frac{\hat{a}_i}{n}\right), \quad \text{for } i \in \{2^j\}_{j=0}^{\log_2(s)-1}.$    (9)

b) If as is restricted to be less than âs, then the ai for i > 1 are the unique solutions to the following polynomial system

$a_{2i}^3 - 2a_i a_{2i}^2 + 2a_i^2 a_{2i} - a_i^2 a_{4i} = 0, \quad \text{for } i \in \{2^j\}_{j=0}^{\log_2(s)-2},$    (10)

with a2i ≥ ai for each i.
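For case (a), the expected cardinalities are obtained by simply iterating (9) upward from a_1 = d through the dyadic levels; loosely, each step is the expected size of the union of two independent random a_i-subsets of [n]. A minimal sketch (the function name expected_neighbours is ours; s is assumed a power of two for simplicity):

```python
def expected_neighbours(d, s, n):
    """Iterate (9): a_{2i} = a_i * (2 - a_i/n), starting from a_1 = d.

    Returns the expected |A_i| at the dyadic sizes i = 1, 2, 4, ..., s.
    """
    a, out = float(d), {1: float(d)}
    i = 1
    while i < s:
        a = a * (2.0 - a / n)   # expected size of the union of two halves
        i *= 2
        out[i] = a
    return out

# With n = 1000 and d = 8, the expected neighbour counts grow nearly
# like d*s while d*s << n, then saturate towards n as collisions mount.
print(expected_neighbours(d=8, s=64, n=1000))
```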

In this work, based on the same approach as in Bah and Tanner [1], we derive new expressions for the pn(s, d) and Ψn(as, …, a1) in Theorem 4.1, i.e., (6) and (7) respectively, and provide a simpler bound for the improved expression of Ψn(as, …, a1).

Lemma 4.1. Theorem 4.1 holds with the functions

$p_n(s,d) = 2^{-3} s^{9/2} e^{1/4},$    (11)
$\Psi_n(a_s,\ldots,a_1) = \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s + \sum_{i\in\Omega}\frac{s}{2i}\,\psi_i\right], \quad \text{for } \Omega = \{2^j\}_{j=0}^{\log_2(s)-1}.$    (12)

The proof of the lemma is given in section 6.3. Asymptotically, the argument of the exponential term in the probability bound (5) of Theorem 4.1, i.e., Ψn(as, …, a1) in (12), is more important than the polynomial pn(s, d) in (11), since the exponential factor dominates the polynomial factor. The significance of the lemma is that Ψn(as, …, a1) in (12) is smaller than Ψn(as, …, a1) in (7), since $\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s$ in (12) is asymptotically smaller than $3s\log(5d)$ in (7): the former is O(polylog s) while the latter is O(s), since we consider a_s = O(s).

Recall that we are interested in computing Prob(|As| ≤ as) when as = (1 − ϵ)ds. This means having to solve the polynomial system (10) in order to compute as small a bound on Ψn((1 − ϵ)ds, …, d) as possible. We derive an asymptotic solution to (10) for as = (1 − ϵ)ds and use that solution to get the following bounds.

Theorem 4.2. Consider d, s, n, N ∈ ℕ and fix S with |S| ≤ s. For η > 0, β ≥ 1, and ϵ ∈ (0, 1/2), let an n × N matrix A be drawn from B; then

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) < p_n(s,d,\epsilon)\cdot \exp\left[n\cdot\Psi_n(s,d,\epsilon)\right]$    (13)

where

$p_n(s,d,\epsilon) = \frac{e^{1/4}\cdot s^{\log_2(1-\epsilon)+3}}{\sqrt{2^6(1-\epsilon)^3 d^3}},$    (14)
$\Psi_n(s,d,\epsilon) \le \frac{1}{2n}\left[-\eta\beta^{-1}(\beta-1)(1-\epsilon)ds\log_2(s/2) + (5\log 2)\log_2^2 s + 2\log d\log_2 s\right].$    (15)

The proof of this theorem is also found in section 6.5. Since Theorem 4.2 holds for a fixed S of size at most s, if we want it to hold for all S of size at most s, we take a union bound over all such S. This leads to the following probability bound.

Theorem 4.3. Consider d, s, n, N ∈ ℕ and all S with |S| ≤ s. For τ > 0 and ϵ ∈ (0, 1/2), let an n × N matrix A be drawn from B; then

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) < p_N(s,d,\epsilon)\cdot \exp\left[N\cdot\Psi_N(s,d,\epsilon)\right],$    (16)

where

$p_N(s,d,\epsilon) = \frac{5\cdot e^{1/4}\cdot s^{\log_2(1-\epsilon)+5/2}}{\sqrt{2^{10}(1-\epsilon)^3 d^3\,\pi(1-s/N)}},$    (17)
$\Psi_N(s,d,\epsilon) \le -\frac{s}{N}\log\left(\frac{s}{N}\right) + \frac{s}{N} - \frac{\tau(1-\epsilon)d}{2\log 2}\cdot\frac{s}{N}\log\left(\frac{s}{2}\right) + o(N).$    (18)

Proof: Applying the union bound over all S of size at most s to (13) leads to the following.

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) < \binom{N}{s}\, p_n(s,d,\epsilon)\, \exp\left[n\cdot\Psi_n(s,d,\epsilon)\right].$    (19)

Then we use the upper bound (143) (in the Appendix) to bound the combinatorial term $\binom{N}{s}$ in (19). After some algebraic manipulation, we separate the polynomial term, given in (17), from the exponential term, whose exponent is

$\Psi_N(s,d,\epsilon) := H\!\left(\frac{s}{N}\right) + \frac{n}{N}\Psi_n(s,d,\epsilon).$    (20)

We obtain the upper bound on ΨN(s, d, ϵ) in (18) by upper bounding $H\!\left(\frac{s}{N}\right)$ with $-\frac{s}{N}\log\left(\frac{s}{N}\right) + \frac{s}{N}$ and using the upper bound on Ψn(s, d, ϵ) in (15). The o(N) term decays to zero as N grows; it results from dividing the polylogarithmic terms of s in (15) by n, and τ = ηβ^{-1}(β − 1) comes from (15). This concludes the proof.

The next corollary follows easily from Theorem 4.3 and is equivalent to Theorem 3.1. Its statement is that, if the conditions therein hold, then the probability that the cardinality of the set of neighbors of any S with |S| ≤ s is less than (1 − ϵ)ds goes to zero as the dimensions of A grow; correspondingly, the probability that the cardinality of the set of neighbors of any such S is greater than (1 − ϵ)ds goes to one, implying that A is the adjacency matrix of an (s, d, ϵ)-expander graph.

Corollary 4.1. Given d, s, n, N ∈ ℕ and ϵ ∈ (0, 1/2), let d ≥ c_d·log(N/s)/(ϵ log s) and n ≥ c_n·s log(N/s)/ϵ², with c_d, c_n > 0. Let an n × N matrix A be drawn from B; then, in the proportional growth asymptotics,

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) \to 0.$    (21)

Proof: It suffices to focus on the exponent of (16), more precisely on the bound of ΨN(s, d, ϵ) in (18), i.e.,

$-\frac{s}{N}\log\left(\frac{s}{N}\right) + \frac{s}{N} - \frac{\tau(1-\epsilon)d}{2\log 2}\cdot\frac{s}{N}\log\left(\frac{s}{2}\right) + o(N).$    (22)

We can ignore the o(N) term as this goes to zero as N grows, and show that the remaining sum is negative. The remaining sum is

$-\frac{s}{N}\log\left(\frac{s}{N}\right) + \frac{s}{N} - \frac{\tau(1-\epsilon)d}{2\log 2}\cdot\frac{s}{N}\log\left(\frac{s}{2}\right) = \frac{s}{N}\left[-\log\left(\frac{s}{N}\right) + 1 - \frac{\tau(1-\epsilon)d}{2\log 2}\log\left(\frac{s}{2}\right)\right].$    (23)

Hence, we can further focus on the sum in the square brackets, and find conditions on d that will make it negative. We require

$-\log\left(\frac{s}{N}\right) + 1 - \frac{\tau(1-\epsilon)d}{2\log 2}\log\left(\frac{s}{2}\right) < 0 \quad \Longleftrightarrow \quad d > \frac{2\log 2\,\left(\log(N/s) + 1\right)}{\tau(1-\epsilon)\log(s/2)}$    (24)
$\Longleftarrow \quad d \ge c_d\,\frac{\log(N/s)}{\epsilon\log s}, \quad c_d > 0.$    (25)

Recall that τ = ηβ^{-1}(β − 1) and β is a function of ϵ, with β(ϵ) ≈ 1 + ϵ. Therefore, τ is a function of ϵ with τ(ϵ) = O(ϵ); hence there exists a c_d > 0 for (25) to hold.

With regards to the complexity of n, we go back to the right hand side (RHS) of (24) and substitute d = C_d·log(N/s)/(ϵ log(s/2)), with C_d > 0, into the RHS of (24) to get the following.

$-\log\left(\frac{s}{N}\right) + 1 - \frac{C_d\,\tau(1-\epsilon)\log(N/s)}{2\epsilon\log 2} < 0.$    (26)

Now we assume n = C_n·s log(N/s)/ϵ², with C_n > 0, and substitute this into (26) to get the following.

$-\log\left(\frac{s}{N}\right) + 1 - \frac{C_d\,\tau(1-\epsilon)\,\epsilon\, n}{2C_n s\log 2} < 0 \quad \Longleftrightarrow \quad n > \frac{2C_n s\log 2\,\left(\log(N/s) + 1\right)}{C_d\,\tau\,\epsilon(1-\epsilon)}$    (27)
$\Longleftarrow \quad n \ge c_n\,\frac{s\log(N/s)}{\epsilon^2}, \quad c_n > 0.$    (28)

Again, since τ(ϵ) = O(ϵ), there exists a c_n > 0 for (28) to hold. The bound on n in (28) agrees with our earlier assumption, thus concluding the proof.

5. Discussion

5.1. Comparison to Other Constructions

In addition to being the first probabilistic construction of adjacency matrices of expander graphs with such a small degree, quantitatively our results compare favorably to existing probabilistic constructions. We use the standard tool of phase transitions to compare our construction to the construction proposed in Berinde [27] and those proposed in Buhrman et al. [26]. The phase transition curve ρBT(δ; d, ϵ) we derived in Lemma 3.1 is the ρ that solves the following equation.

$-\rho\log(\delta\rho) + \rho - \frac{\tau(1-\epsilon)d\,\rho\log(\delta\rho)}{2\log 2} - \frac{\tau\epsilon(1-\epsilon)d}{2c_n\log 2} + \frac{\tau(1-\epsilon)d\,\rho}{2} = 0,$    (29)

where c_n > 0 is as in (28). Equation (29) comes from taking the limit, in the proportional growth asymptotics, of the bound in (18), setting that limit to zero, and simplifying. Similarly, for any S with |S| ≤ s, Berinde [27] derived the following bound on the set of neighbors of S, i.e., A_s.

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) < \binom{N}{s}\binom{ds}{\epsilon ds}\left(\frac{ds}{n}\right)^{\epsilon ds}.$    (30)

We then express the bound in (30) as the product of a polynomial term and an exponential term as we did for the right hand side of (16). A bound of the exponent is carefully derived as in the derivations above. We set the limit, in the proportional growth asymptotics, of this bound to zero and simplify to get the following.

$(\epsilon d - 1)\log\rho - \log\delta + (1 + \epsilon d) - \epsilon d\log(\epsilon/d) = 0.$    (31)

We refer to the ρ that solves (31) as the phase transition for the construction proposed by Berinde [27] and denote this ρ (the phase transition function) by ρBI(δ; d, ϵ). Another probabilistic construction was proposed by Buhrman et al. [26]. In conformity with the notation used in this manuscript, their bound is equivalent to the following, also stated in a similar form by Indyk and Razenshteyn [23].

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) < \binom{N}{s}\left(\frac{\nu\,\epsilon n}{ds}\right)^{-\epsilon ds},$    (32)

where ν > 0. We again express the bound in (32) as the product of a polynomial term and an exponential term. A bound of the exponent is carefully derived as in the derivations above. We set the limit, in the proportional growth asymptotics, of this bound to zero and simplify to get the following.

$(\epsilon d - 1)\log\rho - \log\delta + 1 - \epsilon d\log(\nu\epsilon/d) = 0.$    (33)

Similarly, we refer to the ρ that solves (33) as the phase transition for the construction proposed by Buhrman et al. [26] and denote this ρ by ρBM(δ; d, ϵ). We compute numerical solutions to (29), (31), and (33) to derive the phase transitions, for the existence (probabilistic construction) of expanders, ρBT(δ; d, ϵ), ρBI(δ; d, ϵ), and ρBM(δ; d, ϵ), respectively. These are plotted in the left panel of Figure 1. It is clear that our construction has a much higher phase transition than the others, as expected from our previous results, see Figure 9 in Bah and Tanner [1]. Recall that the phase transition curves in these plots depict the construction of adjacency matrices of (s, d, ϵ)-expanders with high probability for ratios of s, n, and N (since ρ := s/n and δ := n/N) below the curve, and the failure to construct adjacency matrices of (s, d, ϵ)-expanders with high probability for ratios above the curve. Essentially, the larger the area under the curve, the better.
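Since (31) and (33) are linear in log ρ, their solutions have closed forms that are immediate to evaluate over a grid of δ values; (29), in contrast, also contains ρ inside log(δρ) and is better handled by a scalar root-finder. A sketch of the first two (the choice ν = e^{−1} is ours, purely illustrative; cf. Remark 5.1 below):

```python
import numpy as np

def rho_BI(delta, d, eps):
    """Solve (31) for rho: (eps*d - 1)*log(rho) - log(delta)
       + (1 + eps*d) - eps*d*log(eps/d) = 0."""
    log_rho = (np.log(delta) - (1 + eps * d)
               + eps * d * np.log(eps / d)) / (eps * d - 1)
    return np.exp(log_rho)

def rho_BM(delta, d, eps, nu=1/np.e):
    """Solve (33) for rho: (eps*d - 1)*log(rho) - log(delta)
       + 1 - eps*d*log(nu*eps/d) = 0."""
    log_rho = (np.log(delta) - 1
               + eps * d * np.log(nu * eps / d)) / (eps * d - 1)
    return np.exp(log_rho)

delta = np.logspace(-6, 0, 100)      # the grid used in Figure 1
d, eps = 25, 1/6
print(rho_BI(delta, d, eps)[:3])
print(rho_BM(delta, d, eps)[:3])     # with nu = 1/e the two curves coincide
```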


Figure 1. Plots of phase transitions for probabilistic construction of expanders, with fixed d = 25, ϵ = 1/6, and δ ∈ [10−6, 1] on a logarithmically spaced grid of 100 points. (Left) A comparison of ρBT(δ; d, ϵ), ρBI(δ; d, ϵ), and ρBM(δ; d, ϵ), where cn = 2. (Right) A comparison of ρBT(δ; d, ϵ) denoted as ρBT2(δ; d, ϵ) to our previous ρBT(δ; d, ϵ) denoted as ρBT1(δ; d, ϵ) in Bah and Tanner [1] with different values of N, (i.e., 210, 212, and 220) and cn = 2/3.

Remark 5.1. It is easy to see that ρBI(δ; d, ϵ) is a special case of ρBM(δ; d, ϵ): the two phase transitions coincide, or equivalently (31) and (33) become the same, when ν = e^{−1}. One could argue that Berinde's derivation in [27] suffers from over-counting.

Given that this work improves on our work in Bah and Tanner [1] in terms of the simplicity of computing ρBT(δ; d, ϵ), for completeness we compare our new phase transition, denoted ρBT2(δ; d, ϵ), to our previous one from Bah and Tanner [1], denoted ρBT1(δ; d, ϵ), in the right panel of Figure 1. Each computation of ρBT1(δ; d, ϵ) requires the specification of N, which is not needed in the computation of ρBT2(δ; d, ϵ), hence the simplification. However, the simplification leads to a lower phase transition, as expected, which is confirmed by the plots in the right panel of Figure 1.

Remark 5.2. These simulations also inform us about the size of c_n. It can be seen from the plots of ρBT(δ; d, ϵ) and ρBT2(δ; d, ϵ) that the smaller the value of c_n, the higher the phase transition; but since ρBT2(δ; d, ϵ) has to be a lower bound on ρBT1(δ; d, ϵ), for values of c_n much smaller than 2/3 the lower bound will fail to hold. This informed the choice of c_n = 2 in the plot of ρBT(δ; d, ϵ) in the left panel of Figure 1.

5.2. Implications for Combinatorial Compressed Sensing

When the sensing matrices are restricted to the sparse binary matrices considered in this manuscript, compressed sensing is usually referred to as combinatorial compressed sensing, a term introduced in Berinde et al. [18] and used extensively in Mendoza-Smith and Tanner [14] and Mendoza-Smith et al. [15]. In this setting, compressed sensing is more-or-less equivalent to linear sketching. The implications of our results for combinatorial compressed sensing are two-fold. One is for the ℓ1-norm RIP, which we denote as RIP-1; the second is in the comparison of the performance of recovery algorithms for combinatorial compressed sensing.

5.2.1. RIP-1

As can be seen from (2), the recovery errors in compressed sensing depend on the RIC, i.e., δs. The following lemma, deduced from Theorem 1 of Berinde et al. [18], shows that a scaled A drawn from 𝔼 has RIP-1 with δs = 2ϵ.

Lemma 5.1. Consider ϵ ∈ (0, 1/2) and let A be drawn from 𝔼; then Φ = A/d satisfies the following RIP-1 condition:

$(1-2\epsilon)\|x\|_1 \le \|\Phi x\|_1 \le \|x\|_1, \quad \text{for all } s\text{-sparse } x.$    (34)

The interested reader is referred to the proof of Theorem 1 in Berinde et al. [18] for the proof of this lemma. Key to Lemma 5.1 holding is the existence of (s, d, ϵ)-expander graphs, hence one can draw corollaries from our results on this.
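Lemma 5.1 is also easy to probe numerically: for Φ = A/d the upper bound ‖Φx‖1 ≤ ‖x‖1 always holds by the triangle inequality, while the lower bound reflects the expansion of the support of x. A rough sketch on random s-sparse vectors (all parameter values are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

n, N, d, s, eps = 400, 2000, 8, 10, 1/6
A = np.zeros((n, N))
for j in range(N):                       # draw A from B(N, n; d)
    A[rng.choice(n, size=d, replace=False), j] = 1
Phi = A / d

for _ in range(5):
    x = np.zeros(N)
    S = rng.choice(N, size=s, replace=False)
    x[S] = rng.standard_normal(s)        # random s-sparse vector
    ratio = np.abs(Phi @ x).sum() / np.abs(x).sum()
    # Lemma 5.1 predicts (1 - 2*eps) <= ratio <= 1 when A is an expander.
    print(f"{ratio:.3f}  (lower bound {1 - 2*eps:.3f})")
```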

Corollary 5.1. Consider ϵ(0,12) and let d, s, n, N ∈ ℕ. In the proportional growth asymptotics with a random draw of an n × N matrix A from B, the matrix Φ: = A/d has RIP-1 with probability approaching 1 exponentially, if

$d = O\left(\frac{\log(N/s)}{\epsilon\log s}\right), \quad \text{and} \quad n = O\left(\frac{s\log(N/s)}{\epsilon^2}\right).$    (35)

Proof: Note that the upper bound in (34) holds trivially for any Φ = A/d where A has d ones per column, i.e., A ∈ B. But for the lower bound in (34) to hold for any Φ = A/d, we need A to be an (s, d, ϵ)-expander matrix, i.e., A ∈ 𝔼. Note that, for a fixed S with |S| ≤ s, the event |A_s| ≥ (1 − ϵ)ds is equal to the event ‖A_S x‖1 ≥ (1 − 2ϵ)d‖x‖1, which is equivalent to ‖Φ_S x‖1 ≥ (1 − 2ϵ)‖x‖1. For A to be in 𝔼, we need expansion for all sets S with |S| ≤ s. The key thing to remember is that

$\mathrm{Prob}\left(\|\Phi_{S}x\|_1 \ge (1-2\epsilon)\|x\|_1\right) = \mathrm{Prob}\left(|A_s| \ge (1-\epsilon)ds\right), \quad \text{for all } \{S : |S| \le s\}.$    (36)

The probability in (36) approaching 1 exponentially in the proportional growth asymptotics, i.e., the existence of A ∈ 𝔼 with parameters as given in (35), is what is stated in Theorem 3.1. Therefore, the rest of the proof follows from the proof of Theorem 3.1, hence concluding the proof of the corollary.

Notably, Lemma 5.1 holds with Φ having a much smaller number of non-zeros per column due to our construction. Moreover, we can derive sampling theorems for which Lemma 5.1 holds, as follows.

Corollary 5.2. Fix ϵ(0,12) and let d, s, n, N ∈ ℕ. In the proportional growth asymptotics, for any ρ < (1−γ)ρBT(δ; d, ϵ) and γ > 0, a random draw of A from B implies Φ: = A/d has RIP-1 with probability approaching 1 exponentially.

Proof. The proof of this corollary follows from the proof of Corollary 5.1 above, and it relates to the proof of Lemma 3.1 as the proof of Corollary 5.1 does to the proof of Theorem 3.1. The details of the proof are thus skipped.

5.2.2. Performance of Algorithms

We wish to compare the performance of selected combinatorial compressed sensing algorithms in terms of the possible problem sizes (s, n, N) for which these algorithms can reconstruct sparse/compressible signals up to their respective error guarantees. The comparison is typically done in the framework of phase transitions, which depict a boundary curve where ratios of problem sizes above the curve are recovered with probability approaching 0 exponentially, while problem sizes below the curve are recovered with probability approaching 1 exponentially. The list of combinatorial compressed sensing algorithms includes Expander Matching Pursuit (EMP) [32], Sparse Matching Pursuit (SMP) [33], Sequential Sparse Matching Pursuit (SSMP) [13], Left Degree Dependent Signal Recovery (LDDSR) [11], Expander Recovery (ER) [12], Expander Iterative Hard-Thresholding (EIHT) [19, Section 13.4], and Expander ℓ0-decoding (ELD), with both serial and parallel versions [14]. For reasons similar to those given in Bah and Tanner [1] and Mendoza-Smith and Tanner [14], we selected four algorithms from this list: (i) SSMP, (ii) ER, (iii) EIHT, (iv) ELD. Descriptions of these algorithms are skipped here, but the interested reader is referred to the original papers or their summarized details in Bah and Tanner [1] and Mendoza-Smith and Tanner [14]. We were also curious as to how the performance of ℓ1-minimization (ℓ1-min) compares to these selected combinatorial compressed sensing algorithms, since ℓ1-minimization can be used to solve the combinatorial problem solved by these algorithms, see [18, Theorem 3].

The phase transitions are based on conditions on the RIC of the sensing matrices used. Consequent to Lemma 5.1, these become conditions on the expansion coefficient (i.e., ϵ) of the underlying (s, d, ϵ)-expander graphs of the sparse sensing matrices used. Where this condition on ϵ is not explicitly given, it is easily deducible from the recovery guarantees given for each algorithm. The conditions are summarized in Table 1.

The theoretical values are those found in the references given in the table, while the computational values are what we used in our numerical experiments to compute the phase transition curves of the algorithms. The value of e was set to 10^{−15}, to make the ϵ_k as large as possible under the given condition. With these values we computed the phase transitions in Figure 2.


Figure 2. Phase transitions plots of algorithms with fixed d = 25, ϵ as in the fourth column of Table 1 with e = 10−15, cn = 2 and δ ∈ [10−6, 1] on a logarithmically spaced grid of 100 points. (Left) k = 3s for ρSSMP(δ; d, ϵ). (Right) k = ⌈(2+e)s⌉ for ρSSMP(δ; d, ϵ).


Table 1. Recovery conditions on ϵ.

The two panels are the same except for the different sparsity value used. The performance of the algorithms in this framework is thus ranked as follows: ELD, ER, ℓ1-min, EIHT, and SSMP.

Remark 5.3. We point out that there are many ways to compare the performance of algorithms; this is just one way. For instance, one can compare runtime complexities or actual computational runtimes as in Mendoza-Smith and Tanner [14]; phase transitions of different probabilities (here the probability of recovery is 1, but this could be set to something else, like 1/2 in the simulations in Mendoza-Smith and Tanner [14]); one could also compare the number of iterations and the iteration cost, as was also done in Mendoza-Smith and Tanner [14]. We further point out that such comparisons could be conducted with real-world data simulations, as was done in Mendoza-Smith and Tanner [14].

6. Proofs of Theorems, Lemmas, and Propositions

6.1. Theorem 3.1

The proof of the theorem follows directly from Corollary 4.1. Based on (21) of Corollary 4.1, we deduce that A ∈ 𝔼 with probability approaching 1 exponentially when d ≥ c_d·log(N/s)/(ϵ log s) and n ≥ c_n·s log(N/s)/ϵ², with c_d, c_n > 0, hence concluding the proof.

6.2. Lemma 3.1

The phase transition curve ρBT(δ; d, ϵ) is based on the bound of the exponent of (16), which is

$-\frac{s}{N}\log\left(\frac{s}{N}\right) + \frac{s}{N} - \frac{\tau(1-\epsilon)d}{2\log 2}\cdot\frac{s}{N}\log\left(\frac{s}{2}\right) + o(N).$    (37)

In the proportional growth asymptotics, (s, n, N) → ∞ while s/n → ρ ∈ (0, 1) and n/N → δ ∈ (0, 1). This implies that o(N) → 0 and (37) becomes

$-\rho\log(\delta\rho) + \rho - \frac{\tau(1-\epsilon)d\,\rho\log(\delta\rho)}{2\log 2} - \frac{\tau\epsilon(1-\epsilon)d}{2c_n\log 2} + \frac{\tau(1-\epsilon)d\,\rho}{2},$    (38)

where cn > 0 is as in (28). If (38) is negative then as the problem size grows we have

$\mathrm{Prob}(|A_s| \le (1-\epsilon)ds) \to 0.$    (39)

Therefore, setting (38) to zero and solving for ρ gives a critical ρ, below which (38) is negative and above which it is positive. This critical ρ is the phase transition, i.e., ρBT(δ; d, ϵ); being strictly below ρBT(δ; d, ϵ) is parameterized by the γ in the lemma. This concludes the proof.

6.3. Lemma 4.1

By the dyadic splitting proposed in Bah and Tanner [1], we let $A_s = A_{s/2}^1 \cup A_{s/2}^2$, so that $|A_s| = |A_{s/2}^1 \cup A_{s/2}^2|$ and therefore

$\mathrm{Prob}(|A_s| \le a_s) = \mathrm{Prob}\left(|A_{s/2}^1 \cup A_{s/2}^2| \le a_s\right)$    (40)
$= \sum_{l_s = d}^{a_s} \mathrm{Prob}\left(|A_{s/2}^1 \cup A_{s/2}^2| = l_s\right)$    (41)

In (41) we sum over all possible events, i.e., all possible sizes l_s. In line with the splitting technique, we reduce the probability to the product of the probabilities of the cardinalities of $|A_{s/2}^1|$ and $|A_{s/2}^2|$ and of their intersection. Using the definition of $P_n(\cdot)$ in Lemma 8.2 (Appendix), this leads to the following.

$\mathrm{Prob}(|A_s| \le a_s) = \sum_{l_s=2d}^{a_s}\ \sum_{l_{s/2}^1=d}^{a_{s/2}}\ \sum_{l_{s/2}^2=d}^{a_{s/2}} P_n\left(l_s, l_{s/2}^1, l_{s/2}^2\right) \times \mathrm{Prob}\left(|A_{s/2}^1| = l_{s/2}^1\right)\mathrm{Prob}\left(|A_{s/2}^2| = l_{s/2}^2\right).$    (42)

In a slight abuse of notation we write $\sum_{l_j},\ j \in [m]$, to denote applying the sum m times. We also drop the limits of the summation indices henceforth. Now we use Lemma 8.1 in the Appendix to simplify (42) as follows.

$\sum_{l_{Q_0}^{j_1},\, j_1\in[q_0]}\ \sum_{l_{Q_1}^{j_2},\, j_2\in[q_1]}\ \sum_{l_{R_1}^{j_3},\, j_3\in[r_1]} P_n\left(l_{Q_0}^{j_1},\, l_{Q_0/2}^{2j_1-1},\, l_{Q_0/2}^{2j_1}\right) \times \prod_{j_2=1}^{q_1}\mathrm{Prob}\left(|A_{Q_1}^{j_2}| = l_{Q_1}^{j_2}\right)\prod_{j_3=q_1+1}^{q_1+r_1}\mathrm{Prob}\left(|A_{R_1}^{j_3}| = l_{R_1}^{j_3}\right)$    (43)

Now we proceed with the splitting; note that (43) stopped at the first level only. At the next level, the second, we will have q2 sets with Q2 columns and r2 sets with R2 columns, which leads to the following expression.

$\sum_{l_{Q_0}^{j_1},\, j_1\in[q_0]}\ \sum_{l_{Q_1}^{j_2},\, j_2\in[q_1]}\ \sum_{l_{R_1}^{j_3},\, j_3\in[r_1]} P_n\left(l_{Q_0}^{j_1},\, l_{Q_0/2}^{2j_1-1},\, l_{Q_0/2}^{2j_1}\right)\left[\sum_{l_{Q_2}^{j_4},\, j_4\in[q_2]}\ \sum_{l_{R_2}^{j_5},\, j_5\in[r_2]} P_n\left(l_{Q_1}^{j_2},\, l_{Q_1/2}^{2j_2-1},\, l_{Q_1/2}^{2j_2}\right) \times P_n\left(l_{R_1}^{j_3},\, l_{R_1/2}^{2j_3-1},\, l_{R_1/2}^{2j_3}\right) \prod_{j_4=1}^{q_2}\mathrm{Prob}\left(|A_{Q_2}^{j_4}| = l_{Q_2}^{j_4}\right)\prod_{j_5=q_2+1}^{q_2+r_2}\mathrm{Prob}\left(|A_{R_2}^{j_5}| = l_{R_2}^{j_5}\right)\right].$    (44)

We continue this splitting of each instance of Prob(·) for ⌈log₂ s⌉ − 1 levels, until reaching sets with single columns where, by construction, the probability that a single column has d non-zeros is one. Note that at this point we drop the indices j_i, as they are no longer needed. This process gives a complicated product of nested sums of $P_n(\cdot)$, which we express as

$\sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} P_n\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} P_n\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right) P_n\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)\left[\cdots\left[\sum_{l_{Q_{\lceil\log_2 s\rceil-1}}} P_n(l_4, l_2, l_2)\, P_n(l_3, l_2, d)\, P_n(l_2, d, d)\right]\right]\right].$    (45)

Using the expression for $P_n(\cdot)$ in (133) of Lemma 8.2 (Appendix), we bound (45) by bounding each $P_n(\cdot)$ as in (134) by a product of a polynomial, π(·), and an exponential with exponent $\psi_n(\cdot)$.

$\sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} \pi\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)e^{\psi_n\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)}\times\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} \pi\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right)e^{\psi_n\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right)}\,\pi\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)e^{\psi_n\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)}\times\left[\cdots\left[\sum_{l_{Q_{\lceil\log_2 s\rceil-1}}} \pi(l_4, l_2, l_2)e^{\psi_n(l_4, l_2, l_2)}\,\pi(l_3, l_2, d)e^{\psi_n(l_3, l_2, d)}\,\pi(l_2, d, d)e^{\psi_n(l_2, d, d)}\right]\right]\right].$    (46)

Using Lemma 8.4 in the Appendix we maximize the $\psi_n(\cdot)$, and hence the exponentials, in (46). We maximize each $\psi_n(\cdot)$ by choosing $l_{(\cdot)}$ to be $a_{(\cdot)}$. Then (46) is upper bounded by the following.

$\sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} \pi\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)e^{\psi_n\left(a_{Q_0}, a_{Q_0/2}, a_{Q_0/2}\right)}\times\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} \pi\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right)e^{\psi_n\left(a_{Q_1}, a_{Q_1/2}, a_{Q_1/2}\right)}\,\pi\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)e^{\psi_n\left(a_{R_1}, a_{R_1/2}, a_{R_1/2}\right)}\left[\cdots\left[\sum_{l_{Q_{\lceil\log_2 s\rceil-1}}} \pi(l_4, l_2, l_2)e^{\psi_n(a_4, a_2, a_2)}\,\pi(l_3, l_2, d)e^{\psi_n(a_3, a_2, d)}\,\pi(l_2, d, d)e^{\psi_n(a_2, d, d)}\right]\right]\right].$    (47)

We then factor out the product of exponentials. This product becomes a single exponential whose exponent is the sum of the $\psi_n(\cdot)$; we denote this exponent by $\tilde{\Psi}_n(a_s, \ldots, a_2, d)$. Then (47) simplifies to the following.

$e^{\tilde{\Psi}_n(a_s, \ldots, a_2, d)}\cdot\sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} \pi\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)\times\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} \pi\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right)\times\pi\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)\times\left[\cdots\left[\sum_{l_{Q_{\lceil\log_2 s\rceil-1}}} \pi(l_4, l_2, l_2)\,\pi(l_3, l_2, d)\,\pi(l_2, d, d)\right]\right]\right]$    (48)

We denote the factor multiplying the exponential term by Π(ls, …, l2, d); therefore we have the following bound.

$\mathrm{Prob}(|A_s| \le a_s) \le \Pi(l_s, \ldots, l_2, d)\cdot \exp\left(\tilde{\Psi}_n(a_s, \ldots, a_2, d)\right),$    (49)

where $\tilde{\Psi}_n(a_s, \ldots, a_2, d)$ is exactly Ψn(as, …, a2, d) given by (78) in [1, Proof of Theorem 1.6]. Consequently, we state the bound on $\tilde{\Psi}_n(a_s, \ldots, a_2, d)$ and skip its proof; the bound is as follows.

$\tilde{\Psi}_n(a_s, \ldots, a_2, d) \le \sum_{i\in\Omega}\frac{s}{2i}\,\psi_i, \quad \text{for } \Omega = \{2^j\}_{j=0}^{\log_2(s)-1},$    (50)

where ψi is given by (8). The upper bound of Π(ls, …, l2, d) is given by the following proposition.

Proposition 6.1. Given $l_s \le a_s$, $l_{\lceil s/2\rceil} \le a_{\lceil s/2\rceil}$, ⋯, $l_2 \le a_2$, we have

$\Pi(l_s, \ldots, l_2, d) \le 2^{-3}\cdot e^{\frac{1}{4}}\cdot s^{\frac{3}{2}\log_2 s + \frac{9}{2}}\cdot (a_s)^{\log_2 s - \frac{3}{2}}.$    (51)

The proof of the proposition is found in section 6.4. Taking the log of the right hand side of (51) and then exponentiating the result yields

$\Pi(l_s, \ldots, l_2, d) \le \exp\left[\frac{1}{4} - 3\log 2 + \left(\frac{9}{2} + \frac{3}{2}\log_2 s\right)\log s + \left(\log_2 s - \frac{3}{2}\right)\log a_s\right]$    (52)
$= 2^{-3} s^{9/2} e^{1/4}\cdot \exp\left[\frac{3}{2}\log_2 s\,\log s + \left(\log_2 s - \frac{3}{2}\right)\log a_s\right]$    (53)
$= 2^{-3} s^{9/2} e^{1/4}\cdot \exp\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s\right].$    (54)

Combining (54) and (50) gives the following bound for (49)

$\mathrm{Prob}(|A_s| \le a_s) \le 2^{-3} s^{9/2} e^{1/4}\cdot \exp\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s + \tilde{\Psi}_n(a_s, \ldots, d)\right].$    (55)

It follows therefore that $p_n(s,d) = 2^{-3} s^{9/2} e^{1/4}$, as in (11), and the exponent in (55) is n · Ψn(as, …, a1), which implies (12). This concludes the proof of the lemma.

6.4. Proposition 6.1

By definition, from (48), we have

$\Pi(l_s, \ldots, l_2, d) := \sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} \pi\left(l_{Q_0}, l_{Q_0/2}, l_{Q_0/2}\right)\times\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} \pi\left(l_{Q_1}, l_{Q_1/2}, l_{Q_1/2}\right)\times\pi\left(l_{R_1}, l_{R_1/2}, l_{R_1/2}\right)\cdot\left[\cdots\left[\sum_{l_{Q_{\lceil\log_2 s\rceil-1}}} \pi(l_4, l_2, l_2)\,\pi(l_3, l_2, d)\,\pi(l_2, d, d)\right]\right]\right].$    (56)

From (138) we see that π(·) is maximized when all three arguments are the same, and using Corollary 8.1 we take the largest possible equal arguments in the range of the summation. Before we write out the resulting bound for Π(ls, …, l2, d), we simplify notation by denoting π(x, x, x) as π(x), and noting that $Q_{\lceil\log_2 s\rceil - 1} = 2$. Therefore, the bound becomes the following.

$\Pi(l_s, \ldots, l_2, d) \le \sum_{l_{Q_0}}\sum_{l_{Q_1}}\sum_{l_{R_1}} \pi(l_{Q_0})\left[\sum_{l_{Q_2}}\sum_{l_{R_2}} \pi(l_{Q_1})\,\pi(l_{R_1})\left[\cdots\left[\sum_{l_2} \pi(l_4)\,\pi(l_3)\,\pi(l_2)\right]\right]\right].$    (57)

Properly aligning the π(·) with their relevant summations simplifies the right hand side (RHS) of (57) to the following.

$\left[\sum_{l_{Q_0}}\pi(l_{Q_0})\right]\left[\sum_{l_{Q_1}}\pi(l_{Q_1})\right]\cdot\left[\sum_{l_{R_1}}\pi(l_{R_1})\right]\times\cdots\times\left[\sum_{l_4}\pi(l_4)\right]\cdot\left[\sum_{l_3}\pi(l_3)\right]\cdot\left[\sum_{l_2}\pi(l_2)\right].$    (58)

From (138) we have π (y) defined as

$\pi(y, y, y) = \left(\frac{5}{4}\right)^2\sqrt{\frac{2}{2\pi}\cdot\frac{y(n-y)}{n}} < \left(\frac{5}{4}\right)^2\sqrt{\frac{2y}{2\pi}}.$    (59)

We use the RHS of (59) to upper bound each term in (58), leading to the following bound.

$\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_{Q_0}}\sqrt{l_{Q_0}}\right]\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_{Q_1}}\sqrt{l_{Q_1}}\right]\cdot\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_{R_1}}\sqrt{l_{R_1}}\right]\times\cdots\times\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_4}\sqrt{l_4}\right]\times\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_3}\sqrt{l_3}\right]\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\sum_{l_2}\sqrt{l_2}\right].$    (60)

There is a pair of such factors for each Qi and Ri, i = 1, …, ⌈log₂ s⌉ − 2, which means we have ⌈log₂ s⌉ − 2 pairs plus one Q0; hence (60) simplifies to the following.

$\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2\lceil\log_2 s\rceil - 3}\left[\sum_{l_{Q_0}}\sqrt{l_{Q_0}}\right]\cdot\left[\sum_{l_{Q_1}}\sqrt{l_{Q_1}}\right]\times\cdots\times\left[\sum_{l_3}\sqrt{l_3}\right]\cdot\left[\sum_{l_2}\sqrt{l_2}\right]$    (61)
$\le \left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2\lceil\log_2 s\rceil - 3}\cdot\left(q_0\sqrt{a_{Q_0}}\right)\cdot\left(q_1\sqrt{a_{Q_1}}\right)\times\cdots\times\left(q_{\lceil\log_2 s\rceil-2}\sqrt{a_3}\right)\cdot\left(r_{\lceil\log_2 s\rceil-2}\sqrt{a_2}\right)$    (62)
$= \left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2\lceil\log_2 s\rceil - 3}\cdot\left(q_0 q_1 r_1\cdots q_{\lceil\log_2 s\rceil-2}\, r_{\lceil\log_2 s\rceil-2}\right)\cdot\left(a_{Q_0} a_{Q_1} a_{R_1}\cdots a_3 a_2\right)^{1/2}.$    (63)

From (61) to (62) we upper bound each sum by taking the largest possible value of $l_{(\cdot)}$, which is $a_{(\cdot)}$, multiplied by the total number of terms in the summation, given by Lemma 8.1 in the Appendix. We also upper bound the following two terms of (63).

$q_0 q_1 r_1 q_2 r_2 q_3 r_3\cdots q_{\lceil\log_2 s\rceil-2}\, r_{\lceil\log_2 s\rceil-2} \le s^{\log_2 s - 1},$    (64)
$\left(a_{Q_0} a_{Q_1} a_{R_1} a_{Q_2} a_{R_2} a_{Q_3} a_{R_3}\cdots a_3 a_2\right)^{1/2} \le 2^{-1}\cdot e^{\frac{1}{4}}\cdot (a_s)^{\log_2 s - \frac{3}{2}}\times s^{\frac{1}{2}\log_2 s + \frac{3}{2}}.$    (65)

Details of the derivation of the bounds (64) and (65) are in the Appendix. Using these bounds in (63), we have the following upper bound for Π(ls, …, l2, d).

$\Pi(l_s, \ldots, l_2, d) \le \left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2\lceil\log_2 s\rceil - 3}\cdot\left(s^{\log_2 s - 1}\right)\times\left(2^{-1}\cdot e^{\frac{1}{4}}\cdot (a_s)^{\log_2 s - \frac{3}{2}}\cdot s^{\frac{1}{2}\log_2 s + \frac{3}{2}}\right)$    (66)
$\le 2^{-3}\cdot e^{\frac{1}{4}}\cdot (a_s)^{\log_2 s - \frac{3}{2}}\cdot s^{\frac{3}{2}\log_2 s + \frac{9}{2}}.$    (67)

From (66) to (67) we used the following upper bound.

$\left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2\lceil\log_2 s\rceil - 3} \le \left[\sqrt{\frac{2}{2\pi}}\left(\frac{5}{4}\right)^2\right]^{2(\log_2 s + 1) - 3} \le 4^{2\log_2 s - 1} = 2^{4\log_2 s - 2} = 2^{-2} s^4.$    (68)

The bound (67) coincides with (51), hence concluding the proof.

6.5. Theorem 4.2

The following lemma is a key input in this proof.

Lemma 6.1. Let 0 < α ≤ 1, and let ε_n > 0 be such that ε_n → 0 as n → ∞. Then, for a_s < â_s,

$a_{2i} = 2a_i + ca_i^2, \quad \text{for } c = -\beta n^{-1},$    (69)

where

$\beta = \frac{1 + \sqrt{1 - 4(1-\varepsilon_n)^2(1 - e^{-\alpha d})e^{-\alpha d}}}{2(1-\varepsilon_n)(1 - e^{-\alpha d})}.$    (70)

The proof of the lemma is found in section 6.6. Recall from Theorem 4.1 that

$\Psi_n(a_s,\ldots,a_1) = \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s + \sum_{i\in\Omega}\frac{s}{2i}\,\psi_i\right], \quad \text{for } \Omega = \{2^j\}_{j=0}^{\log_2(s)-1},$    (71)

where ψi is given by the following expression

$(n-a_i)\cdot H\!\left(\frac{a_{2i}-a_i}{n-a_i}\right) + a_i\cdot H\!\left(\frac{a_{2i}-a_i}{a_i}\right) - n\cdot H\!\left(\frac{a_i}{n}\right)$    (72)

We use Lemma 6.1 to upper bound ψi in (72) away from zero from above as n → ∞. We formalize this bound in the following proposition.

Proposition 6.2. Let η > 0 and β > 1 as defined in Lemma 6.1. Then

$\psi_i \le -a_i\,\eta(\beta - 1)\beta^{-1}\left(1 - \frac{\beta a_i}{n}\right)^{-1}.$    (73)

The proof of Proposition 6.2 is found in section 6.7. Using the bound on ψi in Proposition 6.2, we upper bound Ψ(as, …, a1) as follows.

$\Psi(a_s, \ldots, a_1) \le \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s - \sum_{i\in\Omega}\frac{s}{2i}\cdot\frac{a_i\,\eta(\beta-1)}{\beta\left(1 - \frac{\beta a_i}{n}\right)}\right]$    (74)
$\le -\frac{\eta(\beta-1)}{\beta}\sum_{i\in\Omega}\left(\frac{s}{2i}\cdot\frac{a_i}{n}\right) + \frac{1}{n}\left[\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log a_s\right].$    (75)

Then, setting as = (1 − ϵ)ds and substituting into (75), the factor multiplying $\frac{1}{n}$ becomes

$\frac{3\log 2}{2}\log_2^2 s + \left(\log_2 s - \frac{3}{2}\right)\log[(1-\epsilon)ds]$    (76)
$= \frac{3\log 2}{2}\log_2^2 s + \log(1-\epsilon)\log_2 s + \log d\,\log_2 s + \log_2 s\,\log s - \frac{3}{2}\log[(1-\epsilon)d] - \frac{3}{2}\log s$    (77)
$= \frac{5\log 2}{2}\log_2^2 s + \left(\log_2(1-\epsilon) - 3/2\right)\log s + \log d\,\log_2 s + \log\left[(1-\epsilon)^{-3/2} d^{-3/2}\right]$    (78)
$= \frac{5\log 2}{2}\log_2^2 s + \log d\,\log_2 s + \log\left[(1-\epsilon)^{-3/2} d^{-3/2}\right] + \log\left[s^{\log_2(1-\epsilon) - 3/2}\right].$    (79)

The last two terms of (79) become polynomial in s, d and ϵ when exponentiated, hence they are incorporated into pn(s, d, ϵ) in (14), which means

$p_n(s,d,\epsilon) = p_n(s,d)\exp\left[\log\left[(1-\epsilon)^{-3/2} d^{-3/2}\right] + \log\left[s^{\log_2(1-\epsilon)-3/2}\right]\right]$    (80)
$= 2^{-3} s^{9/2} e^{1/4}\cdot(1-\epsilon)^{-3/2} d^{-3/2}\, s^{\log_2(1-\epsilon)-3/2}$    (81)
$= \frac{e^{1/4}\cdot s^{\log_2(1-\epsilon)+3}}{\sqrt{2^6(1-\epsilon)^3 d^3}},$    (82)

which is (14). The first two terms of (79) grow faster than a polynomial in s when exponentiated, hence they replace the factor multiplying $\frac{1}{n}$ in (75). Therefore, (75) is modified as follows:

$-\frac{\eta(\beta-1)}{\beta}\sum_{i\in\Omega}\left(\frac{s}{2i}\cdot\frac{a_i}{n}\right) + \frac{1}{n}\left[\frac{5\log 2}{2}\log_2^2 s + \log d\,\log_2 s\right] =: \Psi_n(s,d,\epsilon).$    (83)

The factor $\sum_{i\in\Omega}\left(\frac{s}{2i}\cdot\frac{a_i}{n}\right)$ in (83) is lower bounded as follows (see the proof in section 6.8):

$\sum_{i\in\Omega}\left(\frac{s}{2i}\cdot\frac{a_i}{n}\right) \ge \frac{\log_2(s/2)}{2n}(1-\epsilon)ds.$    (84)

Using this bound in (83) gives (15), thus concluding the proof.

6.6. Lemma 6.1

Recall that we have a formula for the expected values of the ai as

$\hat{a}_{2i} = \hat{a}_i\left(2 - \frac{\hat{a}_i}{n}\right) \quad \text{for } i \in \{2^j\}_{j=0}^{\log_2(s)-1},$    (85)

which follows a relatively simple formula, and then the coupled system of cubics

$a_{2i}^3 - 2a_i a_{2i}^2 + 2a_i^2 a_{2i} - a_i^2 a_{4i} = 0 \quad \text{for } i \in \{2^j\}_{j=0}^{\log_2(s)-2},$    (86)

for when the final as is constrained to be less than âs. To simplify the indexing in (86), observe that if i = 2^j for a fixed j, then 2i = 2^{j+1} and 4i = 2^{j+2}. Therefore, it suffices to use the indices a_j, a_{j+1}, and a_{j+2} rather than a_i, a_{2i}, and a_{4i}. Moving the last two terms in (86) to the right and dividing by the quadratic multiples, we get the relation

$\frac{a_{j+2} - 2a_{j+1}}{a_{j+1}^2} = \frac{a_{j+1} - 2a_j}{a_j^2},$    (87)

which is the same expression on the right and on the left, but with j increased by one on the left. This implies that the fraction is independent of j, so

$\frac{a_{j+1} - 2a_j}{a_j^2} = c \quad \Longrightarrow \quad a_{j+1} = a_j(2 + ca_j),$    (88)

for some constant c independent of j (though not necessarily of n). This is in fact the relation (85) if we set c equal to −1/n. One can then wonder how c behaves if we fix the final as. Moreover, (88) is equivalent to

$ca_{j+1} + 1 = (ca_j + 1)^2,$    (89)

which inductively leads to

$ca_l + 1 = (ca_0 + 1)^{2^l}, \quad l > 0,$    (90)

so that one has a relation for the lth stage in terms of the first stage. Note that this does not require as to be fixed; (90) is simply how one computes all al for l > 0 once one has a0 and c. The point is that, to match the prescribed as, one has to select c appropriately. So the way we calculate c is by knowing a0 and as, and then solving (90) at the final stage. Unfortunately there is no easy way to solve for c in (90), so we resort to asymptotic approximation. We assume that al is close to âl and do an asymptotic expansion in terms of the difference from âl.

To simplify things a bit, let us insert a0 = d (since a0 is a1 in our standard notation) and then insert what we know for âl. For âl we have c = −n^{-1}, see (85). We then have from (90) that

$a_l = c^{-1}(cd + 1)^{2^l} - c^{-1} \quad \text{and} \quad \hat{a}_l = -n(1 - d/n)^{2^l} + n.$    (91)

So we write al = (1 − ε_n)âl and consider the case of ε_n → 0 as n → ∞. The point of this is that instead of working with âl we can now work in terms of ε_n. Setting al = (1 − ε_n)âl gives

$c^{-1}(cd + 1)^{2^l} - c^{-1} = -n(1 - \varepsilon_n)\left[(1 - d/n)^{2^l} - 1\right].$    (92)

We now solve for c as a function of ε_n and d. As ε_n goes to zero we should have c converging to −n^{-1}.

Let αn = 2^l for 0 < α ≤ 1, and c = −β(ε_n, d)/n; then, dropping the arguments of β(·, ·), (92) becomes

$\frac{n}{\beta}\left(1 - \frac{\beta d}{n}\right)^{\alpha n} - \frac{n}{\beta} = n(1-\varepsilon_n)\left(1 - \frac{d}{n}\right)^{\alpha n} - n(1-\varepsilon_n).$    (93)

Multiplying through by β/n and performing a change of variables of k = αn, (93) becomes

$\left(1 - \frac{\alpha\beta d}{k}\right)^k - 1 = \beta(1-\varepsilon_n)\left(1 - \frac{\alpha d}{k}\right)^k - \beta(1-\varepsilon_n).$    (94)

The left hand side of (94) simplifies to

$e^{-\alpha\beta d} - 1 - e^{-\alpha\beta d}\,\frac{\alpha^2\beta^2 d^2}{2k} + O(k^{-2}).$    (95)

The right hand side of (94) simplifies to

$\beta(1-\varepsilon_n)e^{-\alpha d} - \beta(1-\varepsilon_n) - \beta(1-\varepsilon_n)e^{-\alpha d}\,\frac{\alpha^2 d^2}{2k} + O(k^{-2}).$    (96)

Matching powers of k in (95) and (96) for $k^0$ and $k^{-1}$ yields the following.

$e^{-\alpha\beta d} - 1 = \beta(1-\varepsilon_n)e^{-\alpha d} - \beta(1-\varepsilon_n), \quad \text{and}$    (97)
$\alpha^2\beta^2 d^2 e^{-\alpha\beta d} = \beta(1-\varepsilon_n)\alpha^2 d^2 e^{-\alpha d}.$    (98)

These respectively simplify to the following.

$e^{-\alpha\beta d} - 1 = \beta(1-\varepsilon_n)(e^{-\alpha d} - 1), \quad \text{and}$    (99)
$\beta e^{-\alpha\beta d} = (1-\varepsilon_n)e^{-\alpha d}.$    (100)

Multiply (99) by β and subtract the two equations, (99) and (100), to get

$(1-\varepsilon_n)(1 - e^{-\alpha d})\beta^2 - \beta + (1-\varepsilon_n)e^{-\alpha d} = 0.$    (101)

This yields

$\beta = \frac{1 \pm \sqrt{1 - 4(1-\varepsilon_n)^2(1 - e^{-\alpha d})e^{-\alpha d}}}{2(1-\varepsilon_n)(1 - e^{-\alpha d})}.$    (102)

To be consistent with what c ought to be as εn → 0, we choose

$\beta(\varepsilon_n, d) = \frac{1 + \sqrt{1 - 4(1-\varepsilon_n)^2(1 - e^{-\alpha d})e^{-\alpha d}}}{2(1-\varepsilon_n)(1 - e^{-\alpha d})},$    (103)

as required – concluding the proof.
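As a quick numerical sanity check on this derivation, the root (103) should satisfy the quadratic (101) exactly, and tend to 1 as ε_n → 0 so that c = −β/n recovers the expected-value case c = −1/n. A small sketch (parameter values are ours, purely illustrative):

```python
import numpy as np

def beta(eps_n, d, alpha=1.0):
    """The root (103) of the quadratic (101)."""
    q = np.exp(-alpha * d)
    disc = 1 - 4 * (1 - eps_n) ** 2 * (1 - q) * q
    return (1 + np.sqrt(disc)) / (2 * (1 - eps_n) * (1 - q))

for eps_n in [0.2, 0.1, 0.01, 1e-6]:
    b = beta(eps_n, d=8)
    q = np.exp(-8.0)
    # Residual of (101): should be ~0 up to floating-point error.
    residual = (1 - eps_n) * (1 - q) * b**2 - b + (1 - eps_n) * q
    print(f"eps_n={eps_n:.0e}: beta={b:.6f}, residual={residual:.1e}")
```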

6.7. Proposition 6.2

We use Lemma 6.1 to express ψi in (72) as follows

$\psi_i = -n\left[H\!\left(\frac{a_i}{n}\right) - H\!\left(\frac{a_i + ca_i^2}{n - a_i}\right)\right] + a_i\left[H\!\left(\frac{a_i + ca_i^2}{a_i}\right) - H\!\left(\frac{a_i + ca_i^2}{n - a_i}\right)\right]$    (104)
$= -n\left[H\!\left(\frac{a_i}{n}\right) - H\!\left(\frac{a_i}{n}\cdot\frac{1 + ca_i}{1 - \frac{a_i}{n}}\right)\right] + a_i\left[H(1 + ca_i) - H\!\left(\frac{a_i}{n}\cdot\frac{1 + ca_i}{1 - \frac{a_i}{n}}\right)\right].$    (105)

Note that, for the regimes of small s/n considered,

$-\frac{\beta}{n} = c \le -\frac{1}{n}, \quad ca_i \le -\frac{a_i}{n}, \quad \text{and} \quad 1 + ca_i \le 1 - \frac{a_i}{n}.$    (106)

We need the following expressions for the Shannon entropy and its first and second derivatives:

$H(z) = -z\log z - (1-z)\log(1-z),$    (107)
$H'(z) = \log\left(\frac{1-z}{z}\right), \quad \text{and}$    (108)
$H''(z) = -\frac{1}{z(1-z)}.$    (109)

Note also that H(z) = H(1 − z), due to the symmetry about z = 1/2. Similarly, H″(z) is symmetric about z = 1/2, while H′(z) is anti-symmetric, i.e., H′(z) = −H′(1 − z). Using the symmetry of H(z), we rewrite ψi in (105) as follows.

$\psi_i = -n\left[H\!\left(\frac{a_i}{n}\right) - H\!\left(\frac{a_i}{n}\cdot\frac{1 + ca_i}{1 - \frac{a_i}{n}}\right)\right] + a_i\left[H(-ca_i) - H\!\left(\frac{a_i}{n}\cdot\frac{1 + ca_i}{1 - \frac{a_i}{n}}\right)\right].$    (110)

From (106), we deduce the following ordering

$\frac{a_i}{n}\cdot\frac{1 + ca_i}{1 - \frac{a_i}{n}} \le \frac{a_i}{n} \le -ca_i \le 1/2.$    (111)

The last inequality in (111) follows from our assumption that 2βa_i ≤ n, where 1 < β < 2 but very close to 1. This is a valid constraint on the cardinality of the supports a_i. To simplify notation, let $x_1 = \frac{a_i}{n}\cdot\frac{1+ca_i}{1-\frac{a_i}{n}}$, $x_2 = \frac{a_i}{n}$, and $x_3 = -ca_i$, which implies that $x_1 \le x_2 \le x_3 \le 1/2$. Therefore, from (110), we have

$\psi_i = -n[H(x_2) - H(x_1)] + a_i[H(x_3) - H(x_1)]$    (112)
$= -n[H(x_2) - H(x_1)] + a_i[H(x_3) - H(x_2) + H(x_2) - H(x_1)]$    (113)
$= -(n - a_i)[H(x_2) - H(x_1)] + a_i[H(x_3) - H(x_2)]$    (114)
$\le -(n - a_i)(x_2 - x_1)H'(x_2) + a_i(x_3 - x_2)H'(x_2)$    (115)
$= [a_i(x_3 - x_2) - (n - a_i)(x_2 - x_1)]H'(x_2).$    (116)

Observe that the expression in the square brackets on the right hand side of (116) is zero, which implies that

$a_i(x_3 - x_2) = (n - a_i)(x_2 - x_1).$    (117)

This is easy to check by substituting the values of x1, x2, and x3. So, instead of the bound (115), we alternatively upper bound (114) as follows:

$\psi_i \le -(n - a_i)(x_2 - x_1)H'(\xi_{21}) + a_i(x_3 - x_2)H'(\xi_{32}),$    (118)

where ξ21 ∈ (x1, x2), and ξ32 ∈ (x2, x3), which implies

$\xi_{21} < \xi_{32}, \quad \text{and} \quad H'(\xi_{21}) > H'(\xi_{32}).$    (119)

Using relation (117), bound (118) simplifies to the following.

$\psi_i \le -a_i(x_3 - x_2)H'(\xi_{21}) + a_i(x_3 - x_2)H'(\xi_{32})$    (120)
$= -a_i(x_3 - x_2)[H'(\xi_{21}) - H'(\xi_{32})]$    (121)
$= -a_i(x_3 - x_2)(\xi_{21} - \xi_{32})H''(\xi_{31}),$    (122)

for some ξ31 ∈ (x1, x3). Since ξ21 < ξ32, we rewrite the bound (122) as follows:

$\psi_i \le a_i(x_3 - x_2)(\xi_{32} - \xi_{21})H''(\xi_{31})$    (123)
$\le a_i\,\eta\,(x_3 - x_2)H''(x_3),$    (124)

where η = ξ32 − ξ21 > 0, and the last bound is due to the fact that x3 > ξ31.

Going back to our normal notation, we rewrite bound (124) as follows.

$\psi_i \le a_i\,\eta\left(-ca_i - \frac{a_i}{n}\right)H''(-ca_i)$    (125)
$= a_i\,\eta\,\frac{a_i}{n}(\beta - 1)\cdot\frac{-1}{\frac{\beta a_i}{n}\left(1 - \frac{\beta a_i}{n}\right)}$    (126)
$= -\frac{a_i\,\eta(\beta - 1)}{\beta\left(1 - \frac{\beta a_i}{n}\right)}.$    (127)

This concludes the proof.

6.8. Inequality 84

The series bound (84) is derived as follows.

$\sum_{i\in\Omega}\left(\frac{s}{2i}\cdot\frac{a_i}{n}\right) = \left(\frac{s}{2}\cdot\frac{a_1}{n}\right) + \left(\frac{s}{4}\cdot\frac{a_2}{n}\right) + \cdots + \left(\frac{s}{s}\cdot\frac{a_{s/2}}{n}\right)$    (128)
$\ge \left(\frac{s}{2n}\cdot\frac{(1-\epsilon)ds}{2^{\log_2 s}}\right) + \left(\frac{s}{4n}\cdot\frac{(1-\epsilon)ds}{2^{\log_2(s/2)}}\right) + \cdots + \left(\frac{s}{sn}\cdot\frac{(1-\epsilon)ds}{2}\right)$    (129)
$= \left[\left(\frac{s}{2n}\cdot\frac{1}{s}\right) + \left(\frac{s}{4n}\cdot\frac{2}{s}\right) + \cdots + \left(\frac{s}{sn}\cdot\frac{1}{2}\right)\right](1-\epsilon)ds$    (130)
$= \left(\frac{1}{2n} + \frac{1}{2n} + \cdots + \frac{1}{2n}\right)(1-\epsilon)ds = \frac{\log_2(s/2)}{2n}(1-\epsilon)ds.$    (131)

This is the required bound, hence concluding the proof.

7. Conclusion

We considered the construction of sparse matrices that are invaluable for dimensionality reduction, with applications in diverse fields. These sparse matrices are computationally more efficient than their dense counterparts, which are also used for dimensionality reduction. Our construction is probabilistic, based on the dyadic splitting method we introduced in Bah and Tanner [1]. By a better approximation of the bounds, we achieve a novel result: a reduced complexity for the sparsity per column of these matrices. Precisely, a complexity that is the state-of-the-art divided by log s, where s is the intrinsic dimension of the problem.

Our approach is one of the few that gives quantitative sampling theorems for the existence of such sparse matrices. Moreover, using the phase transition framework for comparison, our construction is better than existing probabilistic constructions. We are also able to compare the performance of combinatorial compressed sensing algorithms by comparing their phase transition curves. This is one perspective on algorithm comparison, amongst others such as runtime and iteration complexities.

Evidently, our results hold true for the construction of expander graphs, which is a graph theory problem of interest to communities in theoretical computer science and pure mathematics.

Author Contributions

JT contributed to conception of the idea. Both JT and BB worked on the derivation of the initial results, which was substantially improved by BB. BB wrote the first draft of the manuscript. JT did some revisions and gave feedback.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

BB acknowledges the support from the funding by the German Federal Ministry of Education and Research (BMBF) for the German Research Chair at AIMS South Africa, funding for which is administered by Alexander von Humboldt Foundation (AvH). JT acknowledges support from The Alan Turing Institute under the EPSRC grant EP/N510129/1.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fams.2018.00039/full#supplementary-material

References

1. Bah B, Tanner J. Vanishingly sparse matrices and expander graphs, with application to compressed sensing. IEEE Trans Inform Theor. (2013) 59:7491–508. doi: 10.1109/TIT.2013.2274267


2. Ahn KJ, Guha S, McGregor A. Graph sketches: sparsification, spanners, and subgraphs. In: Proceedings of the 31st Symposium on Principles of Database Systems. Scottsdale, AZ: ACM (2012). p. 5–14.


3. Gilbert AC, Levchenko K. Compressing network graphs. In: Proceedings of the LinkKDD Workshop at the 10th ACM Conference on KDD. Seatle, WA (2004).


4. Vardi Y. Network tomography: estimating source-destination traffic intensities from link data. J Am Stat Assoc. (1996) 91:365–77.


5. Castro R, Coates M, Liang G, Nowak R, Yu B. Network tomography: recent developments. Stat Sci. (2004) 19:499–517. doi: 10.1214/088342304000000422


6. Muthukrishnan S. Data Streams: Algorithms and Applications. Now Publishers Inc (2005).


7. Indyk P. Sketching, Streaming and Sublinear-Space Algorithms. Graduate course notes (2007).


8. Dwork C, McSherry F, Talwar K. The price of privacy and the limits of LP decoding. In: Proceedings of the Thirty-ninth Annual ACM Symposium on Theory of Computing. San Diego, CA: ACM (2007). p. 85–94.


9. Donoho DL, Johnstone IM, Hoch JC, Stern AS. Maximum entropy and the nearly black object. J R Stat Soc Ser B (1992) 54:41–81.


10. Donoho DL. Compressed sensing. IEEE Trans Inform Theor. (2006) 52:1289–306. doi: 10.1109/TIT.2006.871582


11. Xu W, Hassibi B. Efficient compressive sensing with deterministic guarantees using expander graphs. In: Information Theory Workshop, 2007. ITW'07. Tahoe City, CA: IEEE (2007). p. 414–9.


12. Jafarpour S, Xu W, Hassibi B, Calderbank R. Efficient and robust compressed sensing using optimized expander graphs. IEEE Trans Inform Theor. (2009) 55:4299–308. doi: 10.1109/TIT.2009.2025528


13. Berinde R, Indyk P. Sequential sparse matching pursuit. In: 47th Annual Allerton Conference on Communication, Control, and Computing. IEEE (2009). p. 36–43.


14. Mendoza-Smith R, Tanner J. Expander ℓ0-decoding. Appl. Comput. Harmon. Anal. (2017) 45:642–67. doi: 10.1016/j.acha.2017.03.001


15. Mendoza-Smith R, Tanner J, Wechsung F. A robust parallel algorithm for combinatorial compressed sensing. arXiv [preprint] arXiv:170409012 (2017).


16. Du DZ, Hwang FK. Combinatorial Group Testing and Its Applications. World Scientific (2000).

17. Gilbert AC, Iwen MA, Strauss MJ. Group testing and sparse signal recovery. In: Signals, Systems and Computers, 2008 42nd Asilomar Conference on. Pacific Grove, CA: IEEE (2008). p. 1059–63.


18. Berinde R, Gilbert AC, Indyk P, Karloff H, Strauss MJ. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In: Communication, Control, and Computing, 2008 46th Annual Allerton Conference on. Urbana-Champaign, IL: IEEE (2008). p. 798–805.


19. Foucart S, Rauhut H. A Mathematical Introduction to Compressive Sensing. New York, NY: Springer (2013).


20. Hoory S, Linial N, Wigderson A. Expander graphs and their applications. Bull Am Math Soc. (2006) 43:439–562. doi: 10.1090/S0273-0979-06-01126-8


21. Guruswami V, Umans C, Vadhan S. Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes. J ACM (2009) 56:20. doi: 10.1145/1538902.1538904


22. Bassalygo LA, Pinsker MS. Complexity of an optimum nonblocking switching network without reconnections. Problemy Peredachi Informatsii (1973) 9:84–87.


23. Indyk P, Razenshteyn I. On model-based RIP-1 matrices. In: International Colloquium on Automata, Languages, and Programming. Berlin; Heidelberg: Springer (2013). p. 564–75.


24. Bah B, Baldassarre L, Cevher V. Model-based sketching and recovery with expanders. In: SODA. SIAM (2014). p. 1529–43.


25. Donoho DL, Tanner J. Thresholds for the recovery of sparse solutions via l1 minimization. In: Information Sciences and Systems, 2006 40th Annual Conference on. IEEE (2006). p. 202–6.


26. Buhrman H, Miltersen PB, Radhakrishnan J, Venkatesh S. Are bitvectors optimal? SIAM J Comput. (2002) 31:1723–44. doi: 10.1137/S0097539702405292


27. Berinde R. Advances in Sparse Signal Recovery Methods. Massachusetts Institute of Technology (2009).

28. Candès EJ, Romberg J, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. (2006) 59:1207–23. doi: 10.1002/cpa.20124


29. Capalbo M, Reingold O, Vadhan S, Wigderson A. Randomness conductors and constant-degree lossless expanders. In: Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing. New York, NY: ACM (2002). p. 659–68.


30. Blanchard JD, Cartis C, Tanner J. Compressed sensing: how sharp is the restricted isometry property? SIAM Rev. (2011) 53:105–25. doi: 10.1137/090748160


31. Bah B, Tanner J. Improved bounds on restricted isometry constants for Gaussian matrices. SIAM J Matrix Anal Appl. (2010) 31:2882–98. doi: 10.1137/100788884


32. Indyk P, Ruzic M. Near-optimal sparse recovery in the ℓ1-norm. In: Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on. Philadelphia, PA: IEEE (2008). p. 199–207.


33. Berinde R, Indyk P, Ruzic M. Practical near-optimal sparse recovery in the ℓ1-norm. In: 46th Annual Allerton Conference on Communication, Control, and Computing. Piscataway, NJ: IEEE (2008). p. 198–205.


Keywords: compressed sensing (CS), expander graphs, probabilistic construction, sample complexity, sampling theorems, combinatorial compressed sensing, linear sketching, sparse recovery algorithms

Citation: Bah B and Tanner J (2018) On the Construction of Sparse Matrices From Expander Graphs. Front. Appl. Math. Stat. 4:39. doi: 10.3389/fams.2018.00039

Received: 27 June 2018; Accepted: 10 August 2018;
Published: 04 September 2018.

Edited by:

Haizhao Yang, National University of Singapore, Singapore

Reviewed by:

Alex Jung, Aalto University, Finland
Xiuyuan Cheng, Duke University, United States
Zhihui Zhu, Johns Hopkins University, United States

Copyright © 2018 Bah and Tanner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Bubacarr Bah, bubacarr@aims.ac.za
