Imaging objects hidden behind turbid media is of great scientific importance and practical value and has attracted much attention recently. However, most scattering imaging methods rely on light of narrow linewidth, limiting their application. The mixture of scattered light from different spectral components blurs the detected speckle pattern, complicating phase retrieval. Image reconstruction becomes much worse for dynamic objects because of the short exposure times. Here we investigate non-invasive recovery of images of dynamic objects under white-light irradiation with the multi-frame OTF retrieval engine (MORE). By exploiting redundant information from multiple measurements, MORE recovers the phases of the optical transfer function (OTF) instead of recovering a single image of an object. Furthermore, we introduce the number of non-zero pixels (NNP) into MORE, which improves the recovered images. An experimental proof is performed for dynamic objects at a frame rate of 20 Hz under white-light irradiation with more than 300 nm bandwidth.

- Chinese Optics Letters
- Vol. 22, Issue 6, 060007 (2024)



1. Introduction

In traditional imaging methods, including lens imaging and coherent diffraction imaging, the information about light transmission is determinable; e.g., the point spread function (PSF) of the system can be resolved. However, when light is scattered by turbid media, the propagation information is scrambled and cannot be resolved directly or with a simple formula, which makes imaging difficult. Unfortunately, such scattering occurs frequently in everyday life, e.g., atmospheric disturbance in astronomical imaging, biological tissue in medical imaging, and foggy weather^{[1–3]}. With technological development there is a growing demand for research on how to overcome the limitations of traditional imaging methods and image objects hidden in scattering media, so that the morphological structure and other appearance information of a target can be observed even when it cannot be seen directly. This not only has important scientific research value but also has great practical potential in industry and daily life.

During the past decades, many methods and techniques have been proposed, for example, the wavefront modulation technique^{[4–9]}, the optical transmission matrix measurement technique^{[10–14]}, the scattering holography technique^{[15–18]}, and the speckle correlation imaging technique^{[19–22]}. The principle of wavefront modulation is to achieve focusing and imaging through the scattering medium by precisely controlling a spatial light modulator (SLM). However, this usually requires an auxiliary guide star or another known object as a reference in the target plane. The optical transmission matrix measurement technique treats the scattering medium as a linear system characterized by a two-dimensional transmission matrix: using a spatial light modulator and full-field phase-shift interferometry, the transmission matrix of the complex scattering medium can be measured and then used to reconstruct the image of the hidden object. This, however, requires very high accuracy in the transmission matrix determination.

The speckle correlation method (originating from speckle interferometry) obtains the Fourier magnitude of an object based on the memory effect and reconstructs the image with a phase retrieval operation^{[23–29]}. This method is non-invasive, simple, and computationally light, and has attracted much attention during the last two decades^{[19,20]}. However, phase retrieval algorithms such as hybrid input-output (HIO) and error reduction (ER) are quite vulnerable to noise (from the environment or the detection system)^{[30]}, so speckle correlation struggles in low signal-to-noise ratio (SNR) situations. The multi-frame OTF retrieval engine (MORE) was proposed for non-invasive imaging under low SNR^{[31]}; it not only introduces an optical transfer function (OTF) constraint into the iteration process but also recovers the OTF and multiple sub-objects simultaneously, bringing high stability to phase retrieval.


When light transmits through a turbid medium, interference among the scatterers forms a random-like diffraction pattern (a so-called speckle pattern). Different wavelengths produce different speckle patterns. Therefore, the PSF under broad spectral illumination is a superposition of different patterns, which blurs it: not only is the size of the speckle grains (which determines the resolution) broadened, but the background of the PSF is also raised. This causes a low SNR in the detectable speckle patterns, making imaging under a broad spectrum much more difficult than under a narrow bandwidth. Since applications with broad spectra of light are inevitable, much research has been conducted on this topic during the last several years^{[32–37]}. Wu *et al.* introduced the R-autocorrelation approach to increase the contrast of a PSF by randomly selecting and averaging different sub-regions of the speckle patterns^{[32]}. In the work of Sun *et al.*, by acquiring and processing speckles with polarization information, the contrast of the speckles was improved to achieve scattering imaging under broadband illumination^{[33]}. Lu *et al.* introduced the OTF constraint into scattering imaging under broadband illumination and successfully achieved correct results^{[35]}. Deep learning has also been applied to scattering imaging under white-light illumination, which however requires a large amount of sample data for end-to-end learning^{[34]}. Furthermore, MORE (including the OTF constraint and multi-frame reconstruction) was employed for imaging under very broad spectra, as well as multiple spectra^{[37]}.

Imaging dynamic objects is also unavoidable in realistic applications. Many studies have addressed imaging of dynamic objects in scattering media, such as digital holography^{[15]}, the “shower curtain effect”^{[22]}, deep learning^{[38,39]}, and the MORE technique^{[31]}. Nevertheless, these studies were performed under narrow spectra. Since the short exposure time required for dynamic capture further reduces the SNR, imaging dynamic objects in white light can be regarded as an extreme low-SNR case, where MORE alone might fail. In this paper, we introduce a constraint on the number of non-zero pixels (NNP)^{[40]} into MORE and extend it to these more severe low-SNR cases. The experimental results and relevant simulations show that MORE can faithfully image dynamic objects under broad spectral illumination (more than 300 nm bandwidth), converging within just a few iterations. MORE does not require any calibration or preprocessing: it uses several captured scattering patterns to reconstruct the phase of the OTF (PTF) and then directly computes all images with the obtained PTF. Since MORE retains the relative position and orientation of the moving object at different moments, we can simply put the recovered images together to create a video without worrying about image misalignment.

2. Theory

When the PSF of an imaging system is shift invariant, the intensity distribution on the detection plane $I\left(x,y;\lambda \right)$ can be described as a convolution of the PSF $S\left(x-\xi ,y-\eta ;\lambda \right)$ and an object function $O\left(\xi ,\eta \right)$, which is assumed to be spectrally insensitive,
$$I\left(x,y;\lambda \right)=\iint O\left(\xi ,\eta \right)S\left(x-\xi ,y-\eta ;\lambda \right)\mathrm{d}\xi \,\mathrm{d}\eta .$$

In a traditional lens imaging system, $S\left(x-\xi ,y-\eta ;\lambda \right)$ is a single-peak function, and $I\left(x,y;\lambda \right)$ directly exhibits the image of the object. In a scattering scenario, $S\left(x-\xi ,y-\eta ;\lambda \right)$ has multiple peaks (speckle-like), causing $I\left(x,y;\lambda \right)$ to be a random mixture of multiple images, which requires algorithms to decode the image. Furthermore, a different wavelength results in a different PSF, as well as a different convolution. For a broad spectrum, the overall intensity distribution is a summation over all spectral components, and so is the overall PSF,
$$\mathrm{\Gamma}\left(x,y\right)=\int I\left(x,y;\lambda \right)\mathrm{d}\lambda =\iint O\left(\xi ,\eta \right)S\left(x-\xi ,y-\eta \right)\mathrm{d}\xi \,\mathrm{d}\eta ,\qquad S\left(x,y\right)=\int S\left(x,y;\lambda \right)\mathrm{d}\lambda .$$
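As a minimal numerical sketch of this superposition model (an illustrative toy, not the authors' code), the snippet below builds a broadband PSF by incoherently summing monochromatic speckle PSFs, here mimicked by independent random phase screens, and then forms the detected pattern as a convolution of the object with that PSF via FFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

def mono_psf(rng, n):
    """Toy monochromatic speckle PSF: far-field intensity of a random phase screen."""
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    field = np.fft.fft2(np.exp(1j * phase))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# Broadband PSF: incoherent sum of many monochromatic PSFs.
# (Each wavelength is mimicked here by an independent phase screen.)
psf_total = sum(mono_psf(rng, N) for _ in range(100))
psf_total /= psf_total.sum()

# Measurement model: speckle pattern = object (*) PSF, via circular convolution.
obj = np.zeros((N, N))
obj[100:140, 120:130] = 1.0  # a simple bar object
speckle = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf_total)))
```

The summed PSF has a much higher background than any single `mono_psf`, which is exactly the SNR penalty of broadband illumination discussed above.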

The tilde symbol denotes a Fourier transform, and $\left(u,v\right)$ are the coordinates of the Fourier domain. The total OTF is $H\left(u,v\right)=\tilde{S}\left(u,v\right)=\left|H\left(u,v\right)\right|{e}^{i\theta \left(u,v\right)}$, whose phase $\theta \left(u,v\right)$ is the PTF^{[31]}. Thus, the diffraction-limited image can be calculated via the PTF,
$$O_{\mathrm{rec}}\left(x,y\right)=\mathcal{F}^{-1}\left\{\left|\tilde{\mathrm{\Gamma}}\left(u,v\right)\right|{e}^{i\left[\mathrm{arg}\,\tilde{\mathrm{\Gamma}}\left(u,v\right)-\theta \left(u,v\right)\right]}\right\}.$$
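This PTF-based recovery amounts to removing the OTF phase from the spectrum of the captured pattern while leaving |OTF| in place as a low-pass filter. A hedged one-function sketch (the helper name `image_from_ptf` is ours, not from the paper):

```python
import numpy as np

def image_from_ptf(pattern, ptf):
    """Remove the OTF phase (PTF) from the spectrum of a captured pattern.

    The OTF magnitude is left untouched, so it acts as a low-pass filter and
    the result is a diffraction-limited image. A sketch, not the authors' code.
    """
    spec = np.fft.fft2(pattern)
    corrected = np.abs(spec) * np.exp(1j * (np.angle(spec) - ptf))
    return np.real(np.fft.ifft2(corrected))
```

With a zero PTF (an ideal, phase-free OTF) the function returns the pattern unchanged, which is a convenient sanity check.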

Instead of recovering an image from a single captured $\tilde{\mathrm{\Gamma}}\left(u,v\right)$, we reconstruct the PTF from multiple captured frames of different sub-objects or different states of a dynamic object, denoted as $\{\tilde{\mathrm{\Gamma}}_{f}\left(u,v\right)\}$ with $f$ indexing the frames. This method is named MORE, and it has been proven capable of non-invasive imaging at low signal-to-noise ratios^{[31]} and under broad spectra^{[37]}.

On the other hand, since $\mathrm{\Gamma}\left(x,y\right)$ has a very high background, the signal above the background is relatively weak. A camera has a limited dynamic range, making the measurement of this signal inaccurate. Thus, a broad spectrum results in a low detection SNR, which deteriorates the reconstruction. We therefore introduce the NNP constraint to improve the reconstructed image quality. The procedure of MORE is described below.

In step (5), for each pixel of ${M}_{f}\left(x,y\right)$ the real part is kept and the imaginary part is discarded. Meanwhile, the real part must not be negative; otherwise, the pixel is set to zero. $\mathrm{\Omega}$ is an estimated support area of the object, and everything outside $\mathrm{\Omega}$ is set to zero.

In step (6), since the reconstructed images in the very first iterations are far from the correct ones, the NNP constraint would have an adverse effect there. The NNP constraint is therefore only activated after the ${P}_{s}$th iteration.
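Putting the object-domain constraints of steps (5) and (6) together, a hedged sketch of the MORE loop might look like the following; the function name, the PTF update rule, and the top-k thresholding are our assumptions, not the authors' exact implementation:

```python
import numpy as np

def more_with_nnp(frames, iters=100, p_s=60, nnp=None, rng=None):
    """Sketch of MORE with an NNP constraint.

    `frames` are captured patterns sharing one PTF (different object states);
    `nnp` is the allowed number of non-zero pixels, activated after iteration
    `p_s`. Details are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    specs = [np.fft.fft2(f) for f in frames]
    ptf = rng.uniform(-np.pi, np.pi, frames[0].shape)  # random initial PTF
    for it in range(iters):
        new_ptfs = []
        for spec in specs:
            # back to the object domain with the current PTF removed
            m = np.fft.ifft2(np.abs(spec) * np.exp(1j * (np.angle(spec) - ptf)))
            # step (5): keep the real part, force non-negativity
            m = np.clip(np.real(m), 0, None)
            # step (6): NNP constraint, only after the P_s-th iteration —
            # keep the `nnp` largest pixels and zero the rest
            if nnp is not None and it >= p_s:
                thresh = np.sort(m, axis=None)[-nnp]
                m[m < thresh] = 0.0
            # re-estimate the PTF from this constrained frame
            new_ptfs.append(np.angle(spec) - np.angle(np.fft.fft2(m)))
        # combine per-frame estimates with a circular mean
        ptf = np.angle(np.mean(np.exp(1j * np.array(new_ptfs)), axis=0))
    return ptf
```

The circular mean across frames is the multi-frame redundancy at work: each constrained sub-object votes for one PTF estimate, and the votes are averaged on the unit circle.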

3. Experiments

The experimental setup is shown in Fig. 1. A projector projects a picture onto an object plane, simulating a self-emitting object. Light from the object plane propagates to a diffuser (a 220-grit ground glass) and is scattered towards a CCD. Right behind the diffuser is a circular aperture with a diameter of 5 mm. The resolution of the CCD is $5496\times 3672$, with a pixel size of 2.4 µm. The distance from the target plane to the diffuser is $u=110\text{\hspace{0.17em}}\mathrm{cm}$, and the distance from the CCD to the diffuser is $v=10\text{\hspace{0.17em}}\mathrm{cm}$, so the effective magnification of the scattering lens is $M=v/u=1/11$. The exposure time of the camera is set to 50–300 ms for recording the scattering patterns.

Figure 1.Schematic diagram of the experimental setup.

3.1. Experiment for dynamic objects under white light irradiation

The projector plays a movie to simulate a dynamic target. The object size is $1\text{\hspace{0.17em}}\mathrm{cm}\times 1\text{\hspace{0.17em}}\mathrm{cm}$. The CCD captures a sequence of speckle-like patterns at a 20 Hz frame rate. Five frames are randomly selected and fed into the MORE algorithm, which reconstructs the PTF; the PTF is then used to recover the images from all captured frames. Note that the NNP constraint is turned off here. We test two kinds of objects: (1) five letters successively passing through a small aperture on the object plane, where the aperture size is $1\text{\hspace{0.17em}}\mathrm{cm}\times 1\text{\hspace{0.17em}}\mathrm{cm}$, and (2) a rotating letter “E.” Figure 2 exhibits the results.

Figure 2.Samples of the reconstructed video with MORE. (a) Video samples for the five translating letters at 50 ms exposure time. See Visualization 1. (b) Video samples for the rotating letter “E” at 50 ms exposure time. See Visualization 2.

Traditional phase retrieval algorithms perform an independent retrieval for each scattering pattern, resulting in an uncertain position and orientation for each state of the object. In contrast, the MORE algorithm deconvolves all states of the object with the same PTF, preserving the relative position and orientation of the moving object at different moments. During dynamic imaging we can therefore simply stack the recovered images together to create a video, without any image-to-image registration and without worrying about misalignment between frames.

3.2. Experiments on the recovery of objects under white light irradiation by MORE with NNP constraint

We capture the scattering patterns of the objects “A” to “E” separately. Figures 3(b) and 3(c) show the recovery results using MORE without the NNP constraint at 50 ms and 300 ms exposure time, respectively. The contrast of the captured speckle patterns at 50 ms exposure time is $\sim 4.5\%$. Note that contrast = std/mean, where std stands for standard deviation.
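The contrast metric just defined is a one-liner; the helper below is a trivial illustration of it (the function name is ours):

```python
import numpy as np

def speckle_contrast(pattern):
    """Speckle contrast = std/mean of the recorded intensity pattern."""
    pattern = np.asarray(pattern, dtype=float)
    return pattern.std() / pattern.mean()
```

A fully developed monochromatic speckle pattern would have a contrast near 1; the measured ~4.5% reflects how strongly the broadband background washes the speckle out.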

Figure 3.Recovery of static objects under white-light illumination (experiment 1). (a) The objects “A” to “E”; (b) recovered images with MORE at 50 ms exposure time; (c) recovered images with MORE at 300 ms exposure time.

Figure 4.(a) The objects “A” to “E” projected by the projector; (b) scattering patterns of the corresponding objects recorded by the camera at 300 ms exposure time; (c) recovery of (b) by the MORE algorithm with real and non-negative constraints; (d) recovery of (b) by the MORE algorithm with the addition of the NNP constraint.

We then turn on the NNP constraint and investigate how much it improves the reconstruction. The NNP can be estimated from the autocorrelation of the object^{[26]}. Theoretically, the non-zero pixel count of the autocorrelation of an object is up to four times that of the object itself. However, under very noisy circumstances, the autocorrelation computed from the measured data might differ considerably from the ideal one, so the estimated NNP should be larger than that of the original object. In the following, we enlarge the NNP by a factor (denoted as $\alpha $) and examine how this factor affects the recovered results.
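One possible way to carry out this estimate is sketched below; the relative threshold, the division by four, and the function name are our assumptions rather than the paper's procedure:

```python
import numpy as np

def estimate_nnp(pattern, alpha=1.8, rel_thresh=0.1):
    """Estimate the object's NNP from its autocorrelation (a hedged sketch).

    The autocorrelation support is counted above a relative threshold, divided
    by 4 (the autocorrelation covers up to four times the object's pixel
    count), then enlarged by the safety factor `alpha`.
    """
    spec = np.fft.fft2(pattern - pattern.mean())  # subtract the DC background
    ac = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(spec) ** 2)))
    n_ac = np.count_nonzero(ac > rel_thresh * ac.max())
    return int(alpha * n_ac / 4)
```

In practice `alpha` plays the role of the enlargement factor studied in Fig. 5: too tight an estimate fails under noise, while a factor near 2 still constrains the retrieval usefully.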

As shown in Fig. 5, at 300 ms exposure time (a less noisy situation), the tightest NNP ($\alpha =1$) leads to the best reconstruction, and the NNP constraint clearly improves it. Nevertheless, at 50 ms exposure time (a noisier situation), the tightest NNP makes the reconstruction fail, which is why the plot shows no data points from $\alpha =1$ to $\alpha =1.4$; the curve instead peaks at $\alpha =1.8$. This suggests that an NNP around twice the original one still effectively improves the reconstruction.

Figure 5.SSIM versus the magnification factor of the original NNP.

We next investigate the optimal iteration (${P}_{s}$) at which the NNP constraint is turned on. According to Fig. 6, the later the NNP constraint is activated, the better the reconstruction.

Figure 6.SSIM of the recovered images with or without adding NNP constraint at different exposure times.

To measure the spatial resolution of the imaging system, a USAF1951 resolution plate was used as the object. The reconstructed images are shown in Fig. 7.

Figure 7.(a) USAF1951 resolution board. The red rectangle indicates the part used as the object. (b) Recovered image of elements 3 and 4 of group 3 at 300 ms exposure time. (c) Recovered image of element 4 of group 3 at 300 ms exposure time.

Element 4 of group 3 indicates a resolution of 22 µm, in accordance with the spatial resolution formula of the system.

4. Simulation

A simulation platform was built to mimic the experiment and to investigate how the NNP affects the performance of MORE. The diffuser is simulated with random phases. The light propagating from the diffuser forms an interference pattern in the Fresnel zone, computed with the Rayleigh–Sommerfeld solution. In this way, we calculated 100 sub-PSFs at wavelengths between 400 nm and 700 nm in increments of 3 nm. Adding the 100 sub-PSFs together produces the total PSF of the system. The speckle pattern of an object is then the convolution of the total PSF and the object function. By adding random noise to the PSF, different SNR situations can be simulated.
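This simulation pipeline can be sketched as below. For brevity we use the angular-spectrum propagator rather than the Rayleigh–Sommerfeld integral of the text, and the grid size, pixel pitch, propagation distance, and dispersion model of the phase screen are illustrative assumptions:

```python
import numpy as np

def band_psf(n=128, wavelengths=None, z=0.1, dx=2.4e-6, seed=0):
    """Broadband PSF from one random phase screen (a hedged sketch).

    The screen is propagated a distance z with the angular-spectrum method for
    each wavelength, and the intensities are summed incoherently.
    """
    rng = np.random.default_rng(seed)
    screen = rng.uniform(0, 2 * np.pi, (n, n))  # diffuser phase at lam0
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    lam0 = 550e-9
    if wavelengths is None:
        wavelengths = np.linspace(400e-9, 700e-9, 100)
    psf = np.zeros((n, n))
    for lam in wavelengths:
        field = np.exp(1j * screen * lam0 / lam)  # screen dispersion model
        arg = 1.0 - (lam * FX) ** 2 - (lam * FY) ** 2
        H = np.exp(1j * 2 * np.pi / lam * z * np.sqrt(np.maximum(arg, 0.0)))
        H[arg < 0] = 0.0  # drop evanescent components
        out = np.fft.ifft2(np.fft.fft2(field) * H)
        psf += np.abs(out) ** 2  # incoherent (intensity) summation
    return psf / psf.sum()
```

Convolving this total PSF with an object function and adding random noise then reproduces the different SNR situations described above.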

As shown in Fig. 8, at a certain noise level, MORE without the NNP constraint cannot recover images of acceptable quality. As soon as the NNP is turned on, correct images are reconstructed. Figures 8(b)–8(h) show the recovery results of MORE with NNP under white light, with NNP amplification factors of 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, and 2.2, respectively. The best amplification factor is around 1.4–1.8; when it exceeds 2.0, the reconstruction gradually deteriorates. This is consistent with the experimental results.

Figure 8.Simulation results with different magnification factors of NNP under white light irradiation.

5. Discussion

Phase retrieval can be viewed as solving a system of equations: it recovers the phase of an object from magnitude measurements, with the unknown phases as the solution. If the number of unknown phases exceeds the number of independent equations, the problem is ill posed. Unfortunately, noise deteriorates the accuracy of the equations, effectively decreasing the number of independent ones, which makes phase retrieval vulnerable to low SNR. A support constraint greatly reduces the number of unknown variables, increasing the reliability of phase retrieval, and a tight support is especially effective^{[29]}. An NNP constraint eliminates even more unknowns than the corresponding support, bringing more reliability than the support constraint alone. Moreover, MORE simultaneously recovers the phase of the OTF and five sub-objects, which mutually reinforce each other toward faithful convergence: a correct recovery of one sub-object leads to a correct reconstruction of the OTF phase as well as of the other sub-objects, and vice versa. Therefore, the five NNP constraints together drive the phase retrieval to a fast and reliable convergence to the global minimum.

As shown in the experimental results and simulation, a broad bandwidth causes a low SNR of the detected light pattern through a turbid medium. The SNR is even lower when a dynamic object is dealt with, since the exposure time for each frame is very short. MORE with NNP not only can quickly converge to the correct images but also can improve the quality of recovered images. This work also inspires further research on imaging for grayscale objects or reflected targets^{[41]} with spectral difference using MORE plus NNP.

References

[1] I. S. McLean, Electronic Imaging in Astronomy: Detectors and Instrumentation (2008).

[2] V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis (2015).

[8] I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express 23, 12189 (2015).

[23] A. Labeyrie, “Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6, 85 (1970).

[27] J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758 (1982).

[28] J. R. Fienup, “Phase retrieval with continuous version of hybrid input-output,” Frontiers in Optics, OSA Technical Digest, ThI3 (2003).

[32] T. Wu, C. Guo, X. Shao, “Non-invasive imaging through thin scattering layers with broadband illumination” (2018).
