- 1 African Institute for Mathematical Sciences (AIMS), Stellenbosch University, Muizenberg, South Africa
- 2 School of Mathematics and Statistics, Beijing Institute of Technology, Beijing, China
Editorial on the Research Topic
Recent Developments in Signal Approximation and Reconstruction
Since the seminal works of Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho on compressed sensing, the field has emerged as one of the most fruitful research areas in the computational and applied mathematics communities: the number of papers published per year whose title contains the keyword compressed sensing grew from about a dozen in 2007 to over 1,500 in 2018, while those containing sparse approximation grew from 162 to close to 700 over the same period. It combines ideas from probability theory (in particular, but not limited to, concentration inequalities and random matrix theory), optimization (convex, non-convex, and numerical), numerical linear algebra (including perturbation theory and the estimation of singular values), and computational harmonic analysis (most notably wavelet and Fourier analysis), among others. Research in compressed sensing has also influenced other areas of mathematics and signal processing, such as, but not limited to:
• Low-rank matrix approximation: By using the nuclear norm, one may apply sparse approximation techniques to the sequence of singular values of a matrix (see the sketch after this list).
• Optimization: In the quest to solve the sparse signal recovery problem, a number of methods have been developed and analyzed, such as proximal methods, primal-dual methods, and various greedy and iterative methods.1
• Random matrices and probability: Compressed sensing usually enjoys recovery guarantees based on estimates of the singular values of random matrices, structured to varying degrees.
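To make the first point concrete, we recall the prototypical sparse recovery problem and its low-rank analogue (standard formulations given here only for orientation): one seeks the sparsest vector consistent with linear measurements, a task typically relaxed to the convex program
\[
\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad Ax = y,
\]
and its matrix counterpart replaces the ℓ1 norm by the nuclear norm, i.e., the ℓ1 norm of the vector of singular values,
\[
\min_{X \in \mathbb{R}^{n_1 \times n_2}} \|X\|_* := \sum_{i} \sigma_i(X) \quad \text{subject to} \quad \mathcal{A}(X) = y,
\]
where \(\mathcal{A}\) is a linear measurement map.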
This surge of research has been motivated, we believe, by the many potential applications of sparse approximation, for instance in medical imaging (tomography, MRI, etc.), uncertainty quantification, (meta)genomics, image processing, and many more.
The present Research Topic aims to provide a snapshot of today's ongoing research in the areas of sparse approximation, low-rank approximation, and related applications. It comprises five original papers, each representing a distinctive and novel research direction.
The first paper, On the construction of sparse matrices from expander graphs, by Bah and Tanner, revisits the use of sparse matrices from expander graphs for compressed sensing. The results presented allow one to use very sparse matrices, which are useful for efficient computations, while accommodating a wide range of sparsity levels for the sought-after signal. The sampling bounds derived in this work allow for improved performance comparisons of compressed sensing algorithms.
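For readers less familiar with this line of work, the underlying notion is the standard one of a bipartite expander: a left d-regular bipartite graph with left vertex set [n] and right vertex set [m] is an (s, ε)-expander if
\[
|\Gamma(S)| \ge (1 - \varepsilon)\, d\, |S| \quad \text{for all } S \subseteq [n] \text{ with } |S| \le s,
\]
where \(\Gamma(S)\) denotes the neighborhood of S. The measurement matrices in question are (rescaled) adjacency matrices of such graphs and hence contain only d nonzero entries per column.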
In the second paper, CUR Decompositions, similarity matrices, and subspace clustering by Aldroubi et al., the authors investigate the use of CUR decompositions for the problem of clustering. CUR decompositions are known in diverse communities and may carry the name skeleton decomposition in the PDE literature or CUR approximation in the low-rank approximation community. Here, the concept is investigated for its use in unsupervised clustering and applied to motion segmentation, outperforming methods known to date.
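In its generic form (the precise variant used for clustering is detailed in the paper), a CUR decomposition approximates a matrix A by a subset C of its columns and a subset R of its rows,
\[
A \approx C\, U\, R, \qquad \text{for instance with } U = C^{\dagger} A R^{\dagger},
\]
where \(\dagger\) denotes the Moore–Penrose pseudoinverse; when A, C, and R all have the same rank, this factorization is exact.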
The third paper, A new nonconvex sparse recovery method for compressive sensing by Zhou and Yu, continues the line of research in traditional compressed sensing. In their article, the authors introduce a new nonconvex measure of sparsity based on the difference between an ℓr norm and an ℓ1 norm. Moreover, they adapt the well-known iteratively reweighted least squares algorithm to solve the resulting minimization problem efficiently. The guarantees derived here are related to the constrained maximum singular value, yet another interesting topic in linear algebra.
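To fix ideas, the best-known member of this family of nonconvex sparsity measures is the ℓ1 − ℓ2 penalty
\[
\mathcal{R}(x) = \|x\|_1 - \|x\|_2,
\]
which is nonnegative and vanishes exactly on vectors with at most one nonzero entry; the measure introduced by Zhou and Yu involves an ℓr norm instead, with the precise definition and the admissible range of r given in their paper.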
The fourth paper, Simultaneous Structures in Convex Signal Recovery—Revisiting the Convex Combination of Norms by Kliesch et al., reviews the problem of recovering signals that are highly structured and known to lie in the intersection of several models, for instance matrices that are both low-rank and sparse. While it is known that simply using convex combinations of the convex envelopes of individual regularizers does not, in general, improve recovery guarantees, the authors study weighted sums of norms (or maxima of weighted norms) and the optimality of such representations in terms of sample complexity. To this end, they analyze the statistical dimension of descent cones.
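The relevant yardstick, due to Amelunxen, Lotz, McCoy, and Tropp, is that recovery of a signal x0 by minimizing a convex regularizer f under Gaussian measurements undergoes a phase transition at roughly
\[
m \approx \delta\big(\mathcal{D}(f, x_0)\big),
\]
where \(\mathcal{D}(f, x_0)\) denotes the descent cone of f at x0 and \(\delta\) the statistical dimension; it is against this quantity that the weighted combinations of norms are assessed.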
Finally, the fifth paper, Derandomizing compressed sensing with combinatorial design by Jung et al., goes one step further by derandomizing compressed sensing. Indeed, a recurring issue in compressed sensing is the reliance on random matrices to obtain recovery guarantees. In their paper, the authors emulate the traditional random vectors used in compressed sensing via combinatorial designs. This novel construction allows them to drastically reduce the amount of randomness involved in compressed sensing recovery results. Their main results state that combinatorial designs allow the recovery of sparse signals from a number of measurements that is linear in the sparsity (up to the traditional logarithmic factors).
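The classical benchmark, achieved by fully random (e.g., Gaussian) matrices, is stable recovery of s-sparse vectors in \(\mathbb{R}^n\) from
\[
m \gtrsim s \log(n/s)
\]
measurements; the combinatorial constructions discussed here aim at the same linear dependence on the sparsity, up to logarithmic factors, while using far fewer random bits.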
Altogether, the present Research Topic tackles some of the most important current problems in signal approximation and reconstruction, with applications to low-rank matrix and tensor recovery. Given the tremendous amount of effort contributed by the referees, authors, and editorial teams, we wish to express our gratitude to everyone who took part in the writing, editing, and reviewing of this Research Topic. Their collaborative work is what made this snapshot of the field possible, and a great one at that.
Author Contributions
BB and J-LB contributed to inviting contributors, organized the refereeing process, and summarized the Research Topic in this editorial contribution.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
BB acknowledges funding from the German Federal Ministry of Education and Research (BMBF) for the German Research Chair at AIMS South Africa, administered by the Alexander von Humboldt Foundation (AvH).
Footnote
1. We do not claim that these methods were invented for the purpose of compressed sensing, but simply that their development has been influenced by research in compressed sensing.
Keywords: compressed sensing, sparse approximation, signal reconstruction, signal approximation, low-rank and sparse matrix decomposition
Citation: Bah B and Bouchot J-L (2020) Editorial: Recent Developments in Signal Approximation and Reconstruction. Front. Appl. Math. Stat. 6:6. doi: 10.3389/fams.2020.00006
Received: 08 January 2020; Accepted: 24 February 2020;
Published: 17 March 2020.
Edited by:
Daniel Potts, Technische Universität Chemnitz, Germany
Reviewed by:
Mark Iwen, Michigan State University, United States
Copyright © 2020 Bah and Bouchot. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jean-Luc Bouchot, jlbouchot@gmail.com