ORIGINAL RESEARCH article

Front. Phys., 15 May 2024
Sec. Optics and Photonics

Low-sampling high-quality Hadamard and Fourier single-pixel imaging through automated optimization neural network

Guozhong Lei1, Wenchang Lai1, Qi Meng1, Wenda Cui1,2, Hao Liu1,2, Yan Wang1,2* and Kai Han1,2*
  • 1College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha, China
  • 2Nanhu Laser Laboratory, National University of Defense Technology, Changsha, China

In this manuscript, an automated optimization neural network, termed AO-Net, is applied to Hadamard single-pixel imaging (H-SPI) and Fourier single-pixel imaging (F-SPI) to improve imaging quality at low sampling ratios. Hadamard or Fourier basis illumination light fields are projected onto the object, and a single-pixel detector collects the reflected light intensities. The one-dimensional detection values are fed into the designed AO-Net, which optimizes itself automatically: high-quality images are output after multiple iterations, without pre-training or datasets. Numerical simulations and experiments demonstrate that AO-Net outperforms other widespread methods for both binary and grayscale images at low sampling ratios. Specifically, the Structural Similarity Index Measure value of reconstructed binary images can exceed 0.95 at sampling ratios below 3%. AO-Net therefore holds great potential for applications in complex-environment imaging and moving-object imaging.

1 Introduction

With the rapid development of computer hardware and optoelectronic devices, computational imaging (CI) has gained increasing attention. As a novel CI technique, single-pixel imaging (SPI) is characterized by using a single-pixel detector (SPD) without spatial resolution to reconstruct images. The SPD, such as an avalanche photodiode or a photomultiplier tube, can be made of germanium, silicon and other materials with broad working wavebands and low cost. Therefore, SPI can be widely applied to imaging in non-visible wavebands, such as infrared imaging [1], X-ray imaging [2] and terahertz imaging [3, 4]. Additionally, the SPD has the advantages of high quantum efficiency and detection sensitivity, making SPI widely used in remote sensing [5], 3D imaging [6], weak-light detection [7] and other areas.

In SPI, the object is illuminated by modulated light fields generated by a variety of devices, including rotating ground-glass plates [8, 9], digital micromirror devices (DMD) [6, 10, 11], liquid crystal spatial light modulators (LC-SLM) [12–14], LED-based arrays [15–17], multimode fibers (MMF) [18], silicon-based optical phased arrays (OPA) [19], fiber laser arrays [20] and so on. The transmitted or reflected light intensities from the object are measured by the SPD. Combining the illumination light fields and the light intensities, the image can be reconstructed by a variety of algorithms [21–24]. Therefore, researchers improve the imaging quality and efficiency of SPI by designing light fields with specific distributions and by optimizing reconstruction algorithms.

The earliest light field used in SPI was random speckle [25]. It often requires a large number of samples to reconstruct an image, resulting in very low efficiency. Subsequently, orthogonal basis patterns were introduced into SPI as illumination light fields to improve sampling efficiency, such as Hadamard basis patterns [10, 26], Fourier basis patterns [10, 11, 27], discrete cosine basis patterns [28] and Zernike basis patterns [29, 30]. Among them, Hadamard single-pixel imaging (H-SPI) and Fourier single-pixel imaging (F-SPI) are two typical SPI techniques [10]. They obtain spectral information of the object through the corresponding orthogonal basis transformation and efficiently reconstruct the target image by the inverse transformation [31]. It has been proven that both H-SPI and F-SPI can achieve theoretically perfect reconstruction at full sampling in the absence of noise or other disturbances. Moreover, because natural images have sparse representations in the Hadamard and Fourier domains, both methods can capture a large amount of low-frequency information and achieve clear imaging under under-sampling conditions. However, it has also been shown that when the sampling ratio is too low, both H-SPI and F-SPI introduce observable noise and artifacts that damage image quality: H-SPI introduces mosaic artifacts, while F-SPI introduces ringing artifacts [10]. These artifacts need to be eliminated in practical applications of H-SPI and F-SPI. There are also theoretical differences between the two methods. H-SPI acquires the spatial information of objects in the Hadamard domain via the Hadamard transform and reconstructs the image by the inverse Hadamard transform, whereas F-SPI extracts the image information in the Fourier domain. Reference [10] gives a detailed description; its analysis indicates that F-SPI is more efficient than H-SPI, while H-SPI is more noise-robust than F-SPI. In practice, the difference between the binary Hadamard basis and the grayscale Fourier basis also affects the sampling efficiency.

With the advancement of deep learning (DL), numerous studies have demonstrated its effectiveness in enhancing the image quality of SPI [9, 32–36]. In 2017, deep-learning ghost imaging (GIDL) was first proposed by Lyu et al. [33]. They trained a deep neural network (DNN) on reconstructed images from a traditional computational ghost imaging algorithm together with ground truths, which was time-consuming. Another approach is the end-to-end deep-learning method based on a convolutional neural network (CNN) presented by Wang et al. [34]: it takes the single-pixel detection signal sequence as the input and directly outputs the reconstructed image, significantly improving imaging efficiency. Recently, Ulyanov et al. introduced the concept of deep image prior (DIP) for image processing, which requires neither advance training nor large datasets [35]. They demonstrated that a randomly initialized neural network has an implicit bias toward natural images and can be used to solve image inverse problems. Inspired by DIP, Liu et al. proposed a computational ghost imaging method based on an untrained neural network [36]: they combined differential ghost imaging (DGI) with a DNN to obtain high-quality images without requiring datasets. In 2022, Wang et al. improved upon this algorithm with a deep-neural-network-constraint method (GIDC) that achieved far-field super-resolution ghost imaging [9]. This advancement offers a new perspective for applying deep learning in SPI systems.

Inspired by these developments in DL, we introduce an automated optimization neural network (AO-Net) into H-SPI and F-SPI to achieve high imaging quality at low sampling ratios. Firstly, we employ the Hadamard or Fourier inverse transformation to obtain rough images that suffer from significant artifacts and noise due to the low sampling ratios. These rough images are then fed into the AO-Net for automated iterative optimization, yielding high-quality images. Numerical simulations and experiments demonstrate that AO-Net effectively eliminates the introduced artifacts and noise while preserving image details, outperforming other widespread methods. It holds great potential for applications in complex-environment imaging and moving-object imaging.

2 Model and theory

Initially, a mathematical model is developed based on the principle of SPI. The object $f(x,y)$ is illuminated by a series of modulated light fields $P_n(x,y)$, and the corresponding reflected light intensities $I_n$ from the object are measured by a single-pixel detector. The intensity $I_n$ can be represented as

$$I_n = \iint f(x,y)\,P_n(x,y)\,\mathrm{d}x\,\mathrm{d}y \tag{1}$$

where $(x,y)$ are the pixel coordinates in the object plane and the subscript $n$ runs from 1 to $N$, denoting the $n$th field and intensity. It is evident that SPI constitutes a classic inverse problem in image reconstruction.
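As a minimal numerical sketch of Eq. 1, the single-pixel measurement can be simulated as an inner product between the object and each illumination pattern. The function name and the random patterns below are illustrative stand-ins, not from the paper:

```python
import numpy as np

def spi_measurements(obj, patterns):
    """Simulate single-pixel detection (Eq. 1): each intensity I_n is the
    inner product of the object f(x, y) with the pattern P_n(x, y)."""
    # patterns: (N, H, W), obj: (H, W) -> (N,) vector of detector readings
    return np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

rng = np.random.default_rng(0)
f = rng.random((128, 128))        # hypothetical object
P = rng.random((64, 128, 128))    # 64 illumination patterns (stand-ins)
I = spi_measurements(f, P)
print(I.shape)                    # (64,)
```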

2.1 Basic model of H-SPI and F-SPI

H-SPI is an efficient single-pixel imaging technique that utilizes the Hadamard transform [10]. In this approach, following Eq. 1, the Hadamard basis patterns $P_H(x,y)$ are projected onto the object $f(x,y)$. The Hadamard coefficient $H(u,v)$ is mathematically equivalent to the intensity $I_H$ measured by the single-pixel detector. A collection of independent coefficients forms the Hadamard spectrum, and the image can be reconstructed using the inverse Hadamard transform [26]. The Hadamard basis pattern $P_H(x,y)$ is essentially a binary orthogonal matrix consisting of only +1 and −1 elements. It can be obtained by performing the inverse Hadamard transform on a Dirac delta function $\delta_H(u,v)$:

$$P_H(x,y) = \frac{1}{2}\left[H^{-1}\{\delta_H(u,v)\} + 1\right] \tag{2}$$

where $(u,v)$ are the coordinates in the Hadamard domain, $H^{-1}$ denotes the inverse Hadamard transform and $\delta_H(u,v)$ is expressed as Eq. 3:

$$\delta_H(u,v) = \begin{cases} 1, & u = u_0,\ v = v_0 \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

Eq. 2 reveals that the presence of −1 elements in the Hadamard matrix prevents its direct loading onto the SLM in an SPI system. To maintain the orthogonality of the Hadamard matrix, a differential H-SPI approach is employed to obtain the Hadamard coefficients. As depicted in Figure 1, the pattern $P_H(x,y)$ is divided into $P_H^{+}(x,y)$ and $P_H^{-}(x,y)$, which contain only +1 and 0 elements, represented as Eq. 4:

$$P_H^{+}(x,y) = \frac{1}{2}\left[1 + P_H(x,y)\right], \qquad P_H^{-}(x,y) = 1 - P_H^{+}(x,y) \tag{4}$$

Figure 1. The method of differential H-SPI.

The corresponding detection values are $I_H^{+}$ and $I_H^{-}$, respectively, and the Hadamard coefficient is derived as Eq. 5:

$$H(u,v) = I_H^{+} - I_H^{-} \tag{5}$$

Therefore, to reconstruct an image with N pixels, 2N measurements are required. In addition, H-SPI can employ a specific sampling order to enhance sampling efficiency by prioritizing important coefficients: ordering schemes such as zigzag [10], Russian Doll [26] and Cake Cutting [37] rank the more important coefficients first, so that most of the information of the target image can be obtained in real time. A code sketch of the differential scheme follows below.
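The sketch below illustrates one common way to realize Eqs. 2–5 in code. It assumes (this construction is not stated in the paper) that the 2D Hadamard basis pattern for frequency $(u_0, v_0)$ is built as the outer product of two rows of the Hadamard matrix, which is proportional to the inverse Hadamard transform of a delta; the function names are hypothetical:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_pattern(n, u0, v0):
    """2D Hadamard basis pattern with elements in {+1, -1}, proportional
    to the inverse Hadamard transform of a delta at (u0, v0) (Eqs. 2-3)."""
    H = hadamard(n)                    # n must be a power of 2
    return np.outer(H[u0], H[v0])

def differential_coefficient(obj, pattern):
    """Differential H-SPI (Eqs. 4-5): split the pattern into complementary
    binary masks, measure each, and subtract the two detections."""
    p_plus = (1 + pattern) / 2         # entries in {0, 1}, loadable on a DMD
    p_minus = 1 - p_plus
    i_plus = np.sum(obj * p_plus)      # detector reading under P_H^+
    i_minus = np.sum(obj * p_minus)    # detector reading under P_H^-
    return i_plus - i_minus            # Hadamard coefficient H(u0, v0)
```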

F-SPI is another efficient method, based on the Fourier transform [10]. Similarly, this method obtains the Fourier spectrum of the object and reconstructs the image using the inverse Fourier transform [11]. The Fourier basis patterns $P_F(x,y)$ are generated by phase shifting. After the object is illuminated, the measured detection intensities are equivalent to the Fourier coefficients, which form the Fourier spectrum. The Fourier basis pattern $P_F(x,y)$ is a grayscale orthogonal basis that can be obtained by applying the inverse Fourier transform to the delta function $\delta_F(u,v,\phi)$, expressed as Eqs 6 and 7:

$$P_F(x,y) = \frac{1}{2}\left[\mathrm{real}\!\left(F^{-1}\{\delta_F(u,v,\phi)\}\right) + 1\right] \tag{6}$$
$$\delta_F(u,v,\phi) = \begin{cases} e^{j\phi}, & u = u_0,\ v = v_0 \\ 0, & \text{otherwise} \end{cases} \tag{7}$$

where $\mathrm{real}$ indicates the real part, $F^{-1}$ denotes the inverse Fourier transform and $\phi$ is the phase. Specifically, in order to obtain the Fourier coefficients, different phase values must be set at the same spatial frequency to solve for the spectrum. Depending on the number of equidistant phases used from 0 to $2\pi$, F-SPI can be implemented with differential measurement schemes based on 4-step or 3-step phase shifting [11]. In this paper, we adopt the 4-step phase shift, illustrated in detail in Figure 2. This method requires four Fourier basis patterns $P_F(x,y,0)$, $P_F(x,y,\pi/2)$, $P_F(x,y,\pi)$ and $P_F(x,y,3\pi/2)$ with different phases and the same spatial frequency to modulate the object. These patterns correspond to the single-pixel detection values $I_F(x,y,0)$, $I_F(x,y,\pi/2)$, $I_F(x,y,\pi)$ and $I_F(x,y,3\pi/2)$, and the Fourier coefficient is expressed as Eq. 8:

$$F(u,v) = \left[I_F(x,y,\pi) - I_F(x,y,0)\right] + j\left[I_F\!\left(x,y,\tfrac{3\pi}{2}\right) - I_F\!\left(x,y,\tfrac{\pi}{2}\right)\right] \tag{8}$$

Figure 2. The method of four-step phase-shift F-SPI.

Due to the conjugate symmetry of the Fourier spectrum of real-valued images, $P_F(x,y,0)$ is the inverse of $P_F(x,y,\pi)$ and $P_F(x,y,\pi/2)$ is the inverse of $P_F(x,y,3\pi/2)$. Therefore, $2 \times M \times N$ measurements are required to reconstruct an image containing $M \times N$ pixels. The 3-step phase shift method adopts a similar idea, described in Ref. [31]; its performance is not as good as that of the 4-step phase shift because of its asymmetry. Besides, for the Fourier basis, the frequency distribution of natural images can be used as prior knowledge to reduce the number of samples: since the majority of the energy in a natural image is concentrated in the low-frequency region, the sampling ratio can be significantly reduced by sampling only the low-frequency coefficients and ignoring the high-frequency ones [31].
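A minimal sketch of the 4-step scheme (Eqs. 6–8) is given below; the rescaling that maps the pattern into [0, 1] and the helper names are our own assumptions, not the authors' code:

```python
import numpy as np

def fourier_pattern(h, w, u0, v0, phi):
    """Fourier basis pattern (Eqs. 6-7): real part of the inverse FFT of a
    phase-shifted delta at (u0, v0), rescaled into the range [0, 1]."""
    delta = np.zeros((h, w), dtype=complex)
    delta[u0, v0] = np.exp(1j * phi)
    # the h*w factor undoes ifft2's built-in 1/(h*w) normalization
    return 0.5 * (np.real(np.fft.ifft2(delta)) * h * w + 1)

def four_step_coefficient(obj, u0, v0):
    """4-step phase shifting (Eq. 8): four detections at phases 0, pi/2,
    pi and 3*pi/2 combine into one complex Fourier coefficient."""
    phases = (0, np.pi / 2, np.pi, 3 * np.pi / 2)
    I = [np.sum(obj * fourier_pattern(*obj.shape, u0, v0, p)) for p in phases]
    return (I[2] - I[0]) + 1j * (I[3] - I[1])
```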

2.2 The process of AO-Net

Based on the above model, we introduce into H-SPI and F-SPI an automated optimization neural network, called AO-Net. It combines the powerful feature-extraction capability of a DNN with the SPI physical model to obtain high-quality images at low sampling ratios. The reconstruction process is illustrated in Figure 3 and proceeds as follows:

[1] Reconstruct the rough image R by applying the inverse transformation of H-SPI ($H^{-1}$) or F-SPI ($F^{-1}$) to the patterns $P_n(x,y)$ and the real detection values $I_r$ at a sampling ratio below 10%, as shown in Eq. 9.

$$R = H^{-1}(I_r) \quad \text{or} \quad R = F^{-1}(I_r) \tag{9}$$

[2] Load the rough image R into the randomly initialized automated optimization neural network $U_\theta$ and obtain the output image $O(x,y)$, as shown in Eq. 10.

$$O(x,y) = U_\theta(R) \tag{10}$$

[3] Calculate the estimated value sequence $I_i$ (Eq. 11, where $i$ is the iteration number) from the network output $O(x,y)$ and the basis patterns $P_n(x,y)$ according to Eq. 1.

$$I_i = \iint O(x,y)\,P_n(x,y)\,\mathrm{d}x\,\mathrm{d}y \tag{11}$$

[4] Evaluate the root-mean-square error (RMSE) between $I_i$ and $I_r$ as the loss function to automatically guide the optimization of the network parameters $\theta$, aiming to obtain the optimal AO-Net model $U_{\theta^*}$ and the best image $O^*(x,y)$, as shown in Eqs 12 and 13.

$$U_{\theta^*} = \arg\min_{\theta} \left\| I_i - I_r \right\|^2 \tag{12}$$
$$O^*(x,y) = U_{\theta^*}(R) \tag{13}$$
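The four steps above amount to a physics-informed optimization of an untrained network. A compact sketch in TensorFlow/Keras (our framework choice; the paper states only that Python and the Adam optimizer are used) might look like this, where `network` is any image-to-image model such as the U-net described below, `rough` is R with shape (1, H, W, 1), `patterns` are the $P_n(x,y)$ and `I_real` are the measured intensities $I_r$:

```python
import tensorflow as tf

def ao_net_reconstruct(network, rough, patterns, I_real, steps=100):
    """Untrained optimization loop (Eqs. 9-13): the network maps the rough
    image R to an estimate O, the physical model projects O back onto the
    patterns (Eq. 11), and the RMSE against the measured intensities I_r
    (Eq. 12) drives the parameter update. No dataset or pre-training is used."""
    opt = tf.keras.optimizers.Adam(learning_rate=0.01,
                                   beta_1=0.5, beta_2=0.9, epsilon=1e-8)
    P = tf.reshape(patterns, (patterns.shape[0], -1))      # (N, H*W)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            O = network(rough)                             # (1, H, W, 1)
            I_est = tf.linalg.matvec(P, tf.reshape(O, (-1,)))       # Eq. 11
            loss = tf.sqrt(tf.reduce_mean(tf.square(I_est - I_real)))  # RMSE
        grads = tape.gradient(loss, network.trainable_variables)
        opt.apply_gradients(zip(grads, network.trainable_variables))
    return network(rough)                                  # O*(x, y), Eq. 13
```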

Figure 3. The basic process of AO-Net. $U_\theta$ is the network and $\theta$ denotes its parameters; the inner product of the network output with the basis patterns yields the estimated intensities.

Moreover, the network $U_\theta$ is based on the U-net deep neural network architecture, which consists of an encoder, a decoder and skip connections [38], as depicted in Figure 4. The input image has a resolution of 128 × 128 pixels. The structure comprises four downsampling layers, one double convolutional layer and four upsampling layers. Each downsampling layer involves two convolutional layers (Conv2D) with 3 × 3 filter kernels to extract image features, one max-pooling layer to reduce dimensions and remove redundant information, batch normalization, and the LeakyReLU activation function with alpha = 0.2 to prevent the vanishing-gradient problem. Each upsampling layer contains one transposed convolutional layer to restore the image resolution, two convolutional layers, batch normalization and the LeakyReLU activation. Additionally, the skip connections link the downsampling-path features with the corresponding upsampling layers to address the boundary-pixel-loss issue. Furthermore, the Adam optimizer [39] is used to optimize the neural network parameters, with the initial settings beta1 = 0.5, beta2 = 0.9 and epsilon = 1e-8. We also use a dynamic learning rate so that the algorithm converges quickly; the initial learning rate is set to 0.01. Ultimately, the output is a high-quality image with a resolution of 128 × 128 pixels. The processes run in a Python environment and are accelerated by an NVIDIA GeForce GTX 4060 GPU.
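As a sketch of the described building blocks in Keras (matching the Conv2D, LeakyReLU and pooling settings listed above; the filter counts per stage are not specified in the paper and are left as a parameter here):

```python
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, filters):
    """One downsampling stage: two 3x3 convolutions with batch norm and
    LeakyReLU (alpha = 0.2), followed by 2x2 max pooling."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(alpha=0.2)(x)
    return layers.MaxPooling2D(2)(x), x      # pooled output + skip feature

def up_block(x, skip, filters):
    """One upsampling stage: transposed convolution to restore resolution,
    concatenation with the skip feature, then two convolutions."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])      # skip connection
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(alpha=0.2)(x)
    return x
```

Chaining four `down_block` stages, one double-convolution bottleneck and four `up_block` stages reproduces the 128 × 128 → 128 × 128 mapping described above.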

Figure 4. The architecture of the U-net. It contains the encoder, decoder and skip connections. The input is a rough image and the output is a high-quality image.

3 Numerical simulation

In this section, without loss of generality, we consider two binary images (a number symbol and a Chinese character) and the typical grayscale image "Peppers" (128 × 128 pixels) as objects for analysis. If an N-pixel image is reconstructed from M measurements, β = M/N is defined as the sampling ratio. Firstly, rough images of the objects are reconstructed by H-SPI and F-SPI at various sampling ratios (1%, 3%, 5%, 8% and 10%). In this process we adopt the "zigzag" sampling strategy in H-SPI and the "circular" sampling path in F-SPI to improve sampling efficiency [10]. The rough images and the corresponding basis patterns serve as prior information for AO-Net, and the one-dimensional detection values obtained from the inner product of the basis patterns with the object drive the optimization. AO-Net outputs high-quality images through iterative optimization. Figure 5 shows the reconstructed images of H-SPI and AO-Net (H-SPI); Figure 6 depicts the F-SPI images and the corresponding AO-Net (F-SPI) images. As the sampling ratio increases from 1% to 10%, the reconstructed images of all methods exhibit clearer details and improved quality. However, H-SPI introduces numerous mosaic artifacts and F-SPI introduces obvious ringing artifacts and noise, which compromise image quality and reduce resolution. In contrast, AO-Net effectively eliminates these artifacts and noise, and its reconstructions approximate the original image with enhanced resolution. When the sampling ratio is around 10%, the image quality obtained by the two approaches is similar and the advantage of AO-Net is not obvious. However, when the sampling ratio is below 3%, the AO-Net images have clearer details, higher resolution and better quality than those of the traditional methods, showing an obvious advantage. Therefore, AO-Net significantly improves the quality of H-SPI and F-SPI images at extremely low sampling ratios.

Figure 5. Simulation results with Hadamard patterns for binary and grayscale objects reconstructed by different SPI methods at low sampling ratios. The resolution of the images is 128 × 128 pixels and the number of AO-Net iterations is 100.

Figure 6. Simulation results with Fourier patterns for binary and grayscale objects reconstructed by different SPI methods at low sampling ratios. The resolution of the images is 128 × 128 pixels and the number of AO-Net iterations is 100.

To further quantify the advantages of AO-Net over traditional H-SPI and F-SPI, we employ the Structural Similarity Index Measure (SSIM) as the evaluation metric: a larger SSIM value indicates that the reconstructed image is closer to the original and has better quality. The SSIM values of the grayscale image "Peppers" are analyzed and compared. Figures 7A,B show the trends of the SSIM values of the H-SPI, F-SPI and AO-Net reconstructions as the sampling ratio increases; the black and blue lines represent H-SPI and F-SPI respectively, and the red lines represent AO-Net. Generally, the SSIM values increase with the sampling ratio. Moreover, all AO-Net images exhibit higher SSIM values than the corresponding H-SPI and F-SPI images at the same sampling ratio, indicating better image quality. This demonstrates the effectiveness of AO-Net and a significant improvement in reconstruction efficiency. These results illustrate that AO-Net obtains high-quality, clear images at low sampling ratios, outperforming existing traditional methods. In addition, F-SPI yields better image quality than H-SPI, and the AO-Net results based on Fourier patterns have the highest SSIM values and the best image quality at each sampling ratio.
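For reference, an SSIM comparison of this kind can be computed with scikit-image's `structural_similarity`; the images below are random stand-ins, not the paper's data:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))                 # stand-in for "Peppers"
reconstructed = ground_truth + 0.05 * rng.standard_normal((128, 128))
# data_range is the span of pixel values (1.0 for images scaled to [0, 1])
score = ssim(ground_truth, reconstructed, data_range=1.0)
print(f"SSIM = {score:.3f}")                          # closer to 1 = better
```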

Figure 7. The SSIM values of simulation results with different SPI methods. (A) H-SPI; (B) F-SPI.

On the other hand, we briefly compare AO-Net with other deep learning algorithms. Compared with traditional training-based deep learning methods, this algorithm in principle has stronger generalization and adaptability because it requires neither large datasets nor pre-training, as discussed in the introduction; this gives it unique advantages in applicability. Additionally, we compare simulation results based on Hadamard patterns with those of a typical untrained reconstruction algorithm, GIDC, proposed by Wang et al. [9]. Figure 8 shows the comparison. The number of iterations of both algorithms is 100, the object is the grayscale image "Peppers", and the reconstruction algorithms are GIDC and AO-Net, respectively. We also calculate the SSIM value of each image; the values marked in red indicate the larger SSIM under the same conditions, i.e., the better image quality. Visually, the AO-Net images have clearer details, less noise and fewer artifacts. The results show that the proposed AO-Net has greater potential than GIDC to solve the above problems. Therefore, we only verify the performance of AO-Net in the subsequent experimental comparison, and we will carry out more in-depth research and comparison in the future.

Figure 8. Comparison of simulation results based on Hadamard patterns by GIDC and AO-Net.

4 Experimental results

In order to further validate the feasibility of the aforementioned method, an SPI system was assembled as depicted in Figure 9. A solid-state laser (LSR-532NL) emitted light at a wavelength of 532 nm. The laser was then collimated and expanded by a beam expander (BE), resulting in a spot size ten times larger than the original. The expanded beam was directed onto the DMD 1 screen, and its reflected light was projected onto DMD 2 through a projection lens (PL) with a focal length of 200 mm. Both DMDs (Texas Instruments DLP V-650L) featured a 1280 × 800 micromirror array for loading modulation patterns. DMD 1 was utilized to load the generated basis pattern sequence (Hadamard or Fourier basis patterns), while DMD 2 was employed to load the object (binary and grayscale images). Furthermore, DMD 1 and DMD 2 needed to be positioned at conjugate planes of the PL to obtain a clear image; based on the focal length of the PL and the Gaussian imaging formula, the distances from DMD 1 to the PL and from the PL to DMD 2 were both set to 400 mm. The reflected light from DMD 2 was collected by the single-pixel detector (SPD, Thorlabs PDA-10A2) after passing through a collecting lens (CL). The light intensities were recorded by a data acquisition card (DAC, ART USB-2872D) connected to a computer. The entire process was coordinated by self-developed data-synchronization acquisition software (LabVIEW).

Figure 9. The diagram of the experimental setup. BE, beam expander; PL, projection lens; CL, collecting lens; SPD, single-pixel detector; DAC, data acquisition card.

In this experiment, basis patterns with a resolution of 128 × 128 pixels were sequentially loaded onto DMD 1 to implement SPI. When loading the binary Hadamard basis patterns, the refresh rate of the DMD could reach up to 22.4 kHz; the projection interval was therefore set to 2 ms, with each frame projected for 1 ms to accommodate the response rates of the detector and acquisition card. When the grayscale Fourier basis patterns were loaded, the DMD refresh rate was only 258 Hz, so the projection interval was set to 20 ms and the projection duration of each frame was 10 ms. Additionally, DMD 2 was loaded with binary images of a simple "drone" and the letter combination "NUDT", and the grayscale image "Peppers", as imaging objects, all of 128 × 128 pixels. Moreover, to achieve optimal modulation, the 128 × 128 pixel images were enlarged to 512 × 512 pixels occupying the central portion of the DMD, by combining each set of 4 × 4 micromirrors into a single resolution cell (a code sketch of this enlargement follows below). The images were reconstructed by H-SPI, F-SPI and AO-Net, and the SSIM was employed for quantitative comparative analysis. The sampling ratios were also set to 1%, 3%, 5%, 8% and 10% to align with the simulations.
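The 4 × 4 macro-pixel enlargement maps each logical pixel to a block of DMD micromirrors; a short sketch using NumPy's Kronecker product (an assumed implementation, not the authors' code):

```python
import numpy as np

pattern = np.random.randint(0, 2, (128, 128))       # a 128 x 128 binary pattern
# Map each logical pixel to a 4 x 4 block of DMD micromirrors, giving a
# 512 x 512 pattern for the central portion of the 1280 x 800 array.
dmd_pattern = np.kron(pattern, np.ones((4, 4), dtype=pattern.dtype))
assert dmd_pattern.shape == (512, 512)
```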

Figures 10, 11 show the experimental results of H-SPI, F-SPI and AO-Net at different sampling ratios. As the sampling ratio increased, the details of the reconstructed images became more discernible. However, the images generated by H-SPI exhibited numerous noise points and mosaic artifacts, and there were also obvious ringing artifacts and noise in the F-SPI reconstructions. In contrast, AO-Net eliminated these disturbances, producing higher-quality images closer to the originals than those of the traditional methods. For binary images, the advantage of AO-Net was particularly pronounced, enabling clear imaging at sampling ratios below 3%. The AO-Net results based on Fourier patterns were the best among these images.

Figure 10. Experimental results with Hadamard patterns for binary and grayscale objects reconstructed by different SPI methods at low sampling ratios. The number of AO-Net iterations is 300.

Figure 11. Experimental results with Fourier patterns for binary and grayscale objects reconstructed by different SPI methods at low sampling ratios. The number of AO-Net iterations is 300.

To further illustrate the advantages of AO-Net, we analyzed the SSIM values of the grayscale reconstructed images. Figures 12A,B depict the SSIM values of the H-SPI, F-SPI and AO-Net reconstructions; as before, the black and blue lines represent the H-SPI and F-SPI results respectively, and the red lines represent AO-Net. The SSIM values increased with the sampling ratio for every method, indicating better image quality. The SSIM values of the AO-Net images were noticeably higher than those of the H-SPI and F-SPI images, suggesting a closer resemblance to the original image. These results show that AO-Net achieves higher-quality imaging at low sampling ratios. Therefore, combined with the simulation results, the appropriate basis patterns can be chosen for different application scenarios to ensure maximum efficiency.

Figure 12. The experimental SSIM values of different SPI methods. (A) H-SPI; (B) F-SPI.

5 Discussion and conclusion

In conclusion, we introduce into H-SPI and F-SPI an automated optimization neural network, AO-Net, to obtain high-quality reconstructed images at low sampling ratios. One-dimensional detection values obtained by the SPI process are fed into the designed AO-Net; the network parameters are automatically optimized and high-quality images are output without pre-training or datasets. Through numerical simulations and experiments, we confirm that H-SPI and F-SPI introduce unavoidable artifacts and noise into the reconstructed images at low sampling ratios, whereas AO-Net effectively eliminates these disturbances for both binary and grayscale objects. Consequently, the reconstructed images of AO-Net have better quality, enhanced contrast and clearer details. The advantage is particularly evident for binary objects, whose reconstructions retain clear details and high quality close to the original image at sampling ratios below 3%. For grayscale images, the ability of the algorithm to extract image information still needs improvement, and the synchronous data-acquisition process in the experiment requires further optimization. The above results indicate that the proposed AO-Net has the potential to address these problems. By leveraging the high detection efficiency of the SPD and the fast modulation speed of the DMD, AO-Net can find applications in moving-object imaging, recognition and tracking.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.

Author contributions

GL: Formal Analysis, Investigation, Methodology, Validation, Writing–original draft. WL: Software, Validation, Writing–review and editing. QM: Investigation, Validation, Writing–review and editing. WC: Supervision, Writing–review and editing. HL: Supervision, Writing–review and editing. YW: Methodology, Supervision, Writing–review and editing. KH: Conceptualization, Methodology, Supervision, Writing–review and editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Edgar MP, Gibson GM, Bowman RW, Sun B, Radwell N, Mitchell KJ, et al. Simultaneous real-time visible and infrared video with single-pixel detectors. Sci Rep (2015) 5(1):10669. doi:10.1038/srep10669

2. Klein Y, Schori A, Dolbnya IP, Sawhney K, Shwartz S. X-ray computational ghost imaging with single-pixel detector. Opt Express (2019) 27(3):3284. doi:10.1364/oe.27.003284

3. Chan WL, Charan K, Takhar D, Kelly KF, Baraniuk RG, Mittleman DM. A single-pixel terahertz imaging system based on compressed sensing. Appl Phys Lett (2008) 93(12). doi:10.1063/1.2989126

4. Lu Y, Wang X-K, Sun W-F, Feng S-F, Ye J-S, Han P, et al. Reflective single-pixel terahertz imaging based on compressed sensing. IEEE Trans Terahertz Sci Technol (2020) 10(5):495–501. doi:10.1109/tthz.2020.2982350

5. Ma J. A single-pixel imaging system for remote sensing by two-step iterative curvelet thresholding. IEEE Geosci Remote Sens Lett (2009) 6(4):676–80. doi:10.1109/lgrs.2009.2023249

6. Jiang W, Yin Y, Jiao J, Zhao X, Sun B. 2,000,000 fps 2D and 3D imaging of periodic or reproducible scenes with single-pixel detectors. Photon Res (2022) 10(9):2157. doi:10.1364/prj.461064

7. Morris PA, Aspden RS, Bell JEC, Boyd RW, Padgett MJ. Imaging with a small number of photons. Nat Commun (2015) 6(1):5913. doi:10.1038/ncomms6913

8. Gong W, Zhao C, Yu H, Chen M, Xu W, Han S. Three-dimensional ghost imaging lidar via sparsity constraint. Sci Rep (2016) 6(1):26133. doi:10.1038/srep26133

9. Wang F, Wang C, Chen M, Gong W, Zhang Y, Han S, et al. Far-field super-resolution ghost imaging with a deep neural network constraint. Light Sci Appl (2022) 11(1):1. doi:10.1038/s41377-021-00680-w

10. Zhang Z, Wang X, Zheng G, Zhong J. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt Express (2017) 25(16):19619. doi:10.1364/oe.25.019619

11. Zhang Z, Ma X, Zhong J. Single-pixel imaging by means of Fourier spectrum acquisition. Nat Commun (2015) 6(1):6225. doi:10.1038/ncomms7225

12. Bromberg Y, Katz O, Silberberg Y. Ghost imaging with a single detector. Phys Rev A (2009) 79(5):053840. doi:10.1103/PhysRevA.79.053840

13. Huang J, Shi D. Multispectral computational ghost imaging with multiplexed illumination. J Opt (2017) 19(7):075701. doi:10.1088/2040-8986/aa72ff

14. Sun S, Liu W-T, Lin H-Z, Zhang E-F, Liu J-Y, Li Q, et al. Multi-scale adaptive computational ghost imaging. Sci Rep (2016) 6(1):37013. doi:10.1038/srep37013

15. Xu Z-H, Chen W, Penuelas J, Padgett M, Sun M-J. 1000 fps computational ghost imaging using LED-based structured illumination. Opt Express (2018) 26(3):2427. doi:10.1364/oe.26.002427

16. Salvador-Balaguer E, Latorre-Carmona P, Chabert C, Pla F, Lancis J, Tajahuerce E. Low-cost single-pixel 3D imaging by using an LED array. Opt Express (2018) 26(12):15623–31. doi:10.1364/oe.26.015623

17. Zhao W, Chen H, Yuan Y, Zheng H, Liu J, Xu Z, et al. Ultrahigh-speed color imaging with single-pixel detectors at low light level. Phys Rev Appl (2019) 12(3):034049. doi:10.1103/PhysRevApplied.12.034049

18. Fukui T, Kohno Y, Tang R, Nakano Y, Tanemura T. Single-pixel imaging using multimode fiber and silicon photonic phased array. J Lightwave Technol (2021) 39(3):839–44. doi:10.1109/JLT.2020.3008968

19. Kohno Y, Komatsu K, Tang R, Ozeki Y, Nakano Y, Tanemura T. Ghost imaging using a large-scale silicon photonic phased array chip. Opt Express (2019) 27(3):3817–23. doi:10.1364/oe.27.003817

20. Lai W, Lei G, Meng Q-S, Wang Y, Ma Y, Liu H, et al. Efficient single-pixel imaging based on a compact fiber laser array and untrained neural network. Front Optoelectron (2024) 17:9. doi:10.1007/s12200-024-00112-8

21. Bian L, Suo J, Dai Q, Chen F. Experimental comparison of single-pixel imaging algorithms. J Opt Soc Am A (2017) 35(1):78. doi:10.1364/josaa.35.000078

22. Ferri F, Magatti D, Lugiato LA, Gatti A. Differential ghost imaging. Phys Rev Lett (2010) 104(25):253603. doi:10.1103/PhysRevLett.104.253603

23. Sun B, Welsh SS, Edgar MP, Shapiro JH, Padgett MJ. Normalized ghost imaging. Opt Express (2012) 20(15):16892. doi:10.1364/oe.20.016892

24. Duarte MF, Davenport MA, Takhar D, Laska JN, Sun T, Kelly KF, et al. Single-pixel imaging via compressive sampling. IEEE Signal Process Mag (2008) 25(2):83–91. doi:10.1109/msp.2007.914730

25. Chen L. Resolution to the quantum-classical dilemma in thermal ghost imaging (2016).

26. Sun M-J, Meng L-T, Edgar MP, Padgett MJ, Radwell N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci Rep (2017) 7(1):3464. doi:10.1038/s41598-017-03725-6

27. Zhang Z, Wang X, Zheng G, Zhong J. Fast Fourier single-pixel imaging via binary illumination. Sci Rep (2017) 7(1):12029. doi:10.1038/s41598-017-12228-3

28. Liu B-L, Yang Z-H, Liu X, Wu L-A. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform. J Mod Opt (2016) 64(3):259–64. doi:10.1080/09500340.2016.1229507

29. Lei G, Lai W, Meng Q, Liu H, Shi D, Cui W, et al. Efficient and noise-resistant single-pixel imaging based on pseudo-Zernike moments. Opt Express (2023) 31(24):39893. doi:10.1364/oe.506062

30. Lai W, Lei G, Meng Q, Shi D, Cui W, Ma P, et al. Single-pixel imaging using discrete Zernike moments. Opt Express (2022) 30(26):47761. doi:10.1364/oe.473912

31. Lu T, Qiu Z, Zhang Z, Zhong J. Comprehensive comparison of single-pixel imaging methods. Opt Lasers Eng (2020) 134:106301. doi:10.1016/j.optlaseng.2020.106301

32. Song K, Bian Y, Wu K, Liu H, Han S, Li J, et al. Single-pixel imaging based on deep learning (2023).

33. Lyu M, Wang W, Wang H, Wang H, Li G, Chen N, et al. Deep-learning-based ghost imaging. Sci Rep (2017) 7(1):17865. doi:10.1038/s41598-017-18171-7

34. Wang F, Wang H, Wang H, Li G, Situ G. Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging. Opt Express (2019) 27(18):25560. doi:10.1364/oe.27.025560

35. Ulyanov D, Vedaldi A, Lempitsky V. Deep image prior. Int J Comput Vis (2020) 128(7):1867–88. doi:10.1007/s11263-020-01303-4

36. Liu S, Meng X, Yin Y, Wu H, Jiang W. Computational ghost imaging based on an untrained neural network. Opt Lasers Eng (2021) 147:106744. doi:10.1016/j.optlaseng.2021.106744

37. Yu W-K. Super sub-Nyquist single-pixel imaging by means of cake-cutting Hadamard basis sort. Sensors (2019) 19(19):4122. doi:10.3390/s19194122

38. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science (2015). p. 234–41. doi:10.1007/978-3-319-24574-4_28

39. Yi D, Ahn J, Ji S. An effective optimization method for machine learning based on ADAM. Appl Sci (2020) 10(3):1073. doi:10.3390/app10031073

Keywords: Hadamard single-pixel imaging, Fourier single-pixel imaging, low sampling, high imaging quality, deep neural network, automated optimization

Citation: Lei G, Lai W, Meng Q, Cui W, Liu H, Wang Y and Han K (2024) Low-sampling high-quality Hadamard and Fourier single-pixel imaging through automated optimization neural network. Front. Phys. 12:1391608. doi: 10.3389/fphy.2024.1391608

Received: 26 February 2024; Accepted: 29 April 2024;
Published: 15 May 2024.

Edited by:

Mario Alan Quiroz-Juarez, National Autonomous University of Mexico, Mexico

Reviewed by:

Lu Rong, Beijing University of Technology, China
Armando Perez-Leija, University of Central Florida, United States
Alfred U’Ren, National Autonomous University of Mexico, Mexico

Copyright © 2024 Lei, Lai, Meng, Cui, Liu, Wang and Han. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yan Wang, wangyan101712@163.com; Kai Han, hankai0071@nudt.edu.cn
