Research describing quantum-inspired computational imaging earns impact award
Research describing imaging techniques that could one day allow doctors to view tumors deep inside the body more safely, capture images of objects that are out of sight, far away, or in extremely low light, and dramatically improve automobile safety for both drivers and pedestrians has been recognized with a new international paper award.
The publication was the result of a collaboration between six researchers from the U.S. and U.K. who had been working on advanced digital imaging techniques from unique perspectives. With the goal of advancing the technology, they published their overview in the 2018 Science article, "Quantum-inspired computational imaging."
The impact on the scientific community was unmistakable, and five years later it was recognized publicly with the inaugural Best Paper Award in Theoretical Computer and Information Sciences. Funded by the Beijing government, the award was announced at the first International Congress of Basic Science (ICBS) and included a $25,000 monetary prize.
The award is shared by the international team of Yoann Altmann and Stephen McLaughlin (Heriot-Watt University, Scotland), Miles Padgett (University of Glasgow, Scotland), Vivek Goyal (Boston University, US), Alfred Hero (University of Michigan, US), and Daniele Faccio (University of Glasgow).
We discussed the research with Alfred Hero, John H. Holland Distinguished University Professor of EECS; the conversation is summarized in the following Q&A.
What’s the paper about?
This is the first published work that describes the promise of using quantum-inspired single photon detection coupled with computation for imaging in situations that were not possible in the past. The crux of the method is to capture light in flight dynamically as each photon falls on the detector. The method is designed to work when the target is illuminated by a laser light source at very low intensities.
What are the applications for this technology?
Some promising applications are in security, urban safety, space, and medicine.
The technology allows us to take images of objects when there is no line of sight between the illumination source and the object, allowing us, for example, to see around corners or to perform 3D imaging from a single source. It is also possible to reconstruct objects that are very far away. A recent article described an experiment in which researchers were able to image a small object at a distance of 50 km (~30 miles). This is extremely low light imaging. These capabilities open up potential applications in security and safety. As one example, the technology can be incorporated into automobiles to quickly detect unsafe situations, such as obstacles with low light reflectivity or a vehicle around the corner that is about to run a red light.
In space, the technology is already being used for satellite telemetry and high resolution imaging of celestial objects, e.g., detailed topographical imaging of the moon and planets in the solar system.
Another emerging application is medical imaging using picosecond-resolved, low-intensity single photon laser scanning to avoid damaging tissue. For example, single- and two-photon laser imaging technologies have recently been proposed for non-invasive, cellular-resolution retinal imaging and skin cancer screening.
Key advantages of this technology are its compact size, low energy requirements, high resolution, and non-line-of-sight detection capabilities, all enabled by the computational techniques that are employed.
Where are we in terms of products on the market that use this technology?
Single photon imaging is a very exciting new technology that’s just starting to gain traction. For instance, there is much current interest in using it to develop a new generation of single photon lidar systems that can image through opaque media like fog and attain 3D sub-millimeter depth resolution.
Right now, the bottleneck is not the computation; we have that largely figured out. The main issue now is with the SPAD [single-photon avalanche diode] technology, which is used for single photon counting and timing. These devices have existed since about 2005, but they need to get much better in terms of energy consumption, photon collection efficiency, noise rejection, and timing resolution.
In particular, to improve timing resolution, the devices need to be able to recover more quickly after receiving and registering a single photon. Timing resolution has improved from milliseconds of recovery time to tens of picoseconds, but that's still not fast enough to resolve smaller details, e.g., fine features of a face imaged at large distances or sub-cellular structures on the surface of the skin. To attain this level, the timing resolution would need to improve tenfold or more.
Some systems are starting to appear, like single photon lidar systems for autonomous driving, where you need split-second timing accuracy and fast 3D image acquisition. I think we're now within 5 years of seeing some pretty amazing new imaging technologies result from this quantum-inspired marriage of photo-detection and computation.
Can you describe these imaging devices?
To take these types of images, you need a camera that can achieve single photon counting with very precise timing information. This allows the computational camera to separate the different paths that individual photons take when they leave the laser, bounce off an object, and then return.
The cameras contain a solid-state photodetector with a single-photon avalanche diode. They are able to differentiate between two photons whose times of flight differ by just 10 to 20 picoseconds, which gives you sub-centimeter accuracy. A picosecond is one trillionth of a second, so these are extremely timing-sensitive photodetectors.
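As a back-of-the-envelope check on those numbers (an illustrative sketch, not part of the paper): a detected photon travels to the target and back, so a timing resolution of Δt corresponds to a depth resolution of c·Δt/2, which for 10 to 20 ps is indeed in the millimeter range.

```python
# Convert a SPAD's photon-timing resolution into depth (range) resolution.
# Round trip: the photon travels to the target and back, so depth = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_resolution_m(timing_resolution_s: float) -> float:
    """Smallest resolvable depth difference for a given timing resolution."""
    return C * timing_resolution_s / 2.0

for ps in (10, 20):
    dt = ps * 1e-12
    print(f"{ps} ps timing -> {depth_resolution_m(dt) * 1000:.1f} mm depth resolution")
```

Ten picoseconds gives about 1.5 mm, and 20 ps about 3 mm, consistent with the sub-centimeter accuracy described above.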
How does this differ from traditional imaging methods?
In standard, or what we call classical, imaging methods, you collect all the photons for a given period of time onto a plate or piece of film, or onto the CCD array of a digital camera, and form the image.
Here, we use a quantum-inspired methodology that adds an additional dimension to the camera. It records when each photon arrives at the detector, which allows you to tease apart the direct-path light and the indirect-path light. Photons from the direct path are ignored, while the indirect-path light is computationally inverted in order to come up with the final image.
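The separation step can be sketched as a toy example (synthetic arrival times and a guard interval of my choosing, not the authors' actual pipeline): histogram the photon arrival times, locate the dominant direct-path peak, and keep only the later, indirect-path photons for reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic photon arrival times in picoseconds (illustrative values only):
# direct-path photons cluster near 1000 ps; indirect, longer paths arrive later.
direct = rng.normal(1000, 15, size=5000)
indirect = rng.normal(1400, 40, size=800)
arrivals = np.concatenate([direct, indirect])

# Histogram the arrival times and find the dominant (direct-path) peak.
counts, edges = np.histogram(arrivals, bins=200)
i = np.argmax(counts)
peak_time = 0.5 * (edges[i] + edges[i + 1])

# Gate out the direct peak: keep only photons arriving well after it.
GUARD_PS = 100  # assumed guard interval past the direct peak
indirect_only = arrivals[arrivals > peak_time + GUARD_PS]

print(f"direct peak near {peak_time:.0f} ps; kept {indirect_only.size} indirect photons")
```

In a real system the retained indirect-path photons would then feed the computational inversion that reconstructs the hidden scene.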
Why did you all decide to work together on this paper?
We were all at a Royal Academy of Science meeting held outside London a decade ago, and we talked about the tremendous potential of combining computational techniques with very low light, single photon imaging applications. We thought it was a shame that we were all publishing in our own somewhat siloed venues, and there was no overarching survey of these technologies, and what they’re capable of in the imaging world. So we decided to do something about that.
We came up with the term quantum-inspired imaging because we’re dealing with quanta of photons. Others have looked at techniques exploiting non-classical properties of light, e.g., exploiting quantum entanglement, which are distinct from the methods described in the paper.
I had been collaborating with first author Yoann Altmann for many years, and he spent time in my group as a visiting scholar about ten years ago. With Yoann, we developed mathematical models for imaging with time-resolved multispectral low light photo-detectors. These mathematical models used Bayesian statistics to do robust image formation using sparse models to induce resilience to noise, and were subsequently applied to materials science and nuclear particle detection.
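To give a flavor of why Bayesian modeling helps at very low photon counts, here is a minimal, hypothetical illustration using a textbook Gamma-Poisson conjugate model (not the sparse models used in the actual work): with few detected photons, the posterior estimate of a pixel's photon rate is pulled toward the prior, making it robust to shot noise.

```python
# Toy Bayesian photon-rate estimation (illustrative only, not the paper's model).
# Prior on a pixel's photon rate: Gamma(alpha, beta). Observing k photons over
# n time bins (Poisson likelihood) gives posterior Gamma(alpha + k, beta + n).
def posterior_mean_rate(k: int, n: int, alpha: float = 2.0, beta: float = 1.0) -> float:
    """Posterior mean photon rate per bin; shrinks toward the prior when data are scarce."""
    return (alpha + k) / (beta + n)

# With only 3 photons in 2 bins, the raw estimate (1.5 per bin) is noisy;
# the posterior mean blends it with the prior mean (2.0 per bin).
print(posterior_mean_rate(k=3, n=2))      # (2 + 3) / (1 + 2) = 5/3 ~ 1.67
print(posterior_mean_rate(k=300, n=200))  # with ample data the estimate ~ 1.5
```

The sparse priors mentioned above play an analogous regularizing role, but over whole images rather than single pixels.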
Vivek Goyal from Boston University is a pioneer in first photon imaging, which enables high 3D resolution imaging with a very limited number of photons. More recently he has been developing these methods for single photon lidar, which has many applications including cameras for autonomous vehicles, and imaging through random media, allowing the camera to see through fog, smoke, or foliage, for example.
When Yoann Altmann returned to Heriot-Watt from U-M, our co-authors Miles Padgett and Steve McLaughlin worked with him to develop time-resolved computational photo-detection methods for imaging small objects at large distances (several kilometers) using co-located laser illumination and SPAD photo-detectors.
To round out the team of co-authors, corresponding author Daniele Faccio from University of Glasgow pioneered methods known as ghost imaging, also sometimes called compressive imaging. This method images an object through randomly coded masks, using correlation between incident photons to reconstruct the object.
We had to build a common language to talk about all of these methods and applications, while emphasizing the fundamental principles of the underlying quantum-inspired computational imaging and associated technology.
The paper has been widely cited by researchers in physics, engineering, industry, medicine, and other fields. That wouldn't have happened if we had simply continued on our own trajectories, and it shows the value of this kind of collaborative endeavor to write an expository, high-level paper like our 2018 Science article.