DARPA-Funded Work May Help Troops See Around Corners

A U.S. Air Force loadmaster uses night vision goggles during a night flight over Bulgaria, July 15, 2018. U.S. Air Force

By setting up multiple sensors, researchers have learned to “see” what’s out of sight.

Think back a moment to the Bin Laden raid and the dangerous business of invading an enemy headquarters, with adversaries lurking around every corner. In the future, that might be a solved problem. Researchers from the University of Central Florida, with funding from the U.S. military, have developed a method for seeing around corners by analyzing light waves that emanate from a source, reflect off objects like people, and then scatter off walls.

It works even when the light is invisible to the human eye.

“We have shown that information about a non-line-of-sight object can be obtained completely passively without using mirrors and without any access to the source of natural light,” they write in their paper, published this week in the journal Nature Communications.

“So far, we have demonstrated that we can identify the shape and assess the size and location of a spatially incoherent source,” meaning an object, like a person, or anything else that reflects natural, ambient light, said UCF researcher Aristide Dogariu.

Unlike earlier efforts to see around corners using sound vibrations and shadows, the new technique uses light-wave effects directly, potentially providing more details in even less light.

When light waves encounter an object like a wall, they scatter and become less “coherent.” But not all of the information they carry disappears.

If you can see an object, your eye is picking up enough information to recover an image; if a wall separates you from the object, the light wave that reaches you is incoherent. But incoherent doesn’t mean nonexistent. What Dogariu and his colleagues discovered is that it’s possible to piece together information about the wave, and about the object that reflected it, the same way you can piece together an image from the shards of a broken mirror. All you really need are points of reference. It’s a bit like the way multiple radars can pinpoint the location of an aircraft, or the way two people’s descriptions of a stranger yield a better picture of that person’s appearance.
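The radar analogy can be made concrete with a toy calculation. This is a hypothetical illustration of why multiple reference points matter, not the paper’s method: two sensors at known positions each measure only a bearing (an angle) to a target, and intersecting the two lines of sight is enough to pinpoint it.

```python
import numpy as np

def locate(p1, theta1, p2, theta2):
    """Intersect two bearing lines.

    p1, p2: sensor positions (x, y); theta1, theta2: bearings in radians.
    Solves p1 + t1*d1 == p2 + t2*d2 for the intersection point.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])  # line of sight, sensor 1
    d2 = np.array([np.cos(theta2), np.sin(theta2)])  # line of sight, sensor 2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

# One sensor alone gives a direction but not a distance; two recover the point.
target = np.array([3.0, 4.0])
s1, s2 = np.array([0.0, 0.0]), np.array([6.0, 0.0])
b1 = np.arctan2(target[1] - s1[1], target[0] - s1[0])  # bearing from sensor 1
b2 = np.arctan2(target[1] - s2[1], target[0] - s2[0])  # bearing from sensor 2
print(locate(s1, b1, s2, b2))  # recovers approximately [3. 4.]
```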

The method uses wavefront shearing interferometry at multiple points to collect information about the less coherent light waves and piece together a statistical picture of the hidden object’s size, shape, and distance.
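The physical principle behind measurements like this is that the residual correlation between the fields at two separated detector points encodes the angular size of a spatially incoherent source (the van Cittert–Zernike theorem). The numpy sketch below is a toy simulation of that effect, not the team’s actual shearing interferometer: a wider incoherent source produces a lower degree of coherence between two fixed detectors, so measuring that correlation reveals something about the source’s size.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength, z = 0.5e-6, 1.0          # 500 nm light, 1 m from source to detectors
k = 2 * np.pi / wavelength

def degree_of_coherence(source_width, dx, n_points=200, n_real=2000):
    """Estimate |gamma| between two detectors separated by dx, for a 1-D
    spatially incoherent source of the given width (far-field approximation)."""
    xs = np.linspace(-source_width / 2, source_width / 2, n_points)
    # Spatial incoherence: every source point gets an independent random
    # phase, redrawn for each realization.
    phases = rng.uniform(0, 2 * np.pi, size=(n_real, n_points))
    # Far-field propagation phase from source point x to detector at +/- dx/2.
    E1 = np.exp(1j * (phases + k * xs * (+dx / 2) / z)).sum(axis=1)
    E2 = np.exp(1j * (phases + k * xs * (-dx / 2) / z)).sum(axis=1)
    num = np.mean(E1 * np.conj(E2))
    return abs(num) / np.sqrt(np.mean(abs(E1) ** 2) * np.mean(abs(E2) ** 2))

# A narrow source stays coherent across the detectors; a wide one does not.
for w in (1e-4, 5e-4):  # 0.1 mm vs 0.5 mm source, detectors 1 mm apart
    print(f"source width {w * 1e3:.1f} mm -> |gamma| ~ "
          f"{degree_of_coherence(w, 1e-3):.2f}")
```

Inverting that relationship, going from measured coherence back to source size, is the statistical reconstruction step the researchers describe.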

The work was funded, in part, by the Defense Advanced Research Projects Agency.