The Increasingly Automated Hunt for Mobile Missile Launchers

Medium-range Nodong ballistic missiles are paraded in Pyongyang, North Korea, during the 70th anniversary celebrations of its ruling party's creation. Wong Maye-E/AP

The trick is training computers to filter the normal from an ocean of imagery, alert human analysts sparingly, and learn from their feedback.

The U.S. intelligence community is quietly experimenting with algorithms that might help keep tabs on mobile missile launchers like the ones North Korea has used in recent tests, affording more warning before an attack.

The goal of the $10 million project is to train computers to pick out launcher-shaped objects in the ocean of digital imagery collected by American spy satellites, manned aircraft, and drones, and do it at least well enough to alert analysts to possible hits.
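In rough terms, that triage step might resemble the sketch below — an illustrative Python fragment, not the actual system — in which a hypothetical `detect_candidates` detector scores each image and only high-confidence hits are queued for a human analyst. The alert threshold and data structures are assumptions made for illustration.

```python
# Illustrative sketch only: a triage loop that runs a (hypothetical) detector
# over incoming overhead images and queues only high-confidence hits for a
# human analyst, rather than asking analysts to scan every frame.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "TEL_candidate"
    confidence: float   # detector score in [0, 1]
    bbox_px: tuple      # (x_min, y_min, x_max, y_max) in pixels


def detect_candidates(image) -> list[Detection]:
    """Placeholder for a trained object detector (a CNN or similar)."""
    raise NotImplementedError


ALERT_THRESHOLD = 0.8   # assumed value; tuned to keep analyst workload manageable


def triage(images, alert_queue):
    for image in images:
        for det in detect_candidates(image):
            if det.label == "TEL_candidate" and det.confidence >= ALERT_THRESHOLD:
                # Only possible hits reach a human; everything else is filtered out.
                alert_queue.append((image, det))
```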

“It’s harder to separate out the needles in a haystack because of the vast amounts of data available,” a senior defense official said. “What was largely a manual process for intelligence analysts has to become an automated one.”

Such launchers are hard to spot, particularly when disguised as semi trucks or shipping containers, like one version of Russia’s Club-K ship-killing missile. Moscow also says it is building a train-launched ICBM.

The project is focused on finding transporter-erector-launchers, known as TELs, like the ones North Korea used during two recent missile tests. In a March 18 test, Pyongyang fired two Nodong missiles from a mobile launcher into the Sea of Japan. A U.S. defense official said the Pentagon had little warning of the mobile launch. Another North Korean mobile launch failed in mid-April, according to U.S. officials.

Pentagon officials declined to discuss specifics of the effort, but Deputy Defense Secretary Robert Work mentioned it in an interview last November in which he described new ways that humans and computers could work together.  

“You can teach the machine to say, ‘I am looking for a transporter that’s 15 meters long, 4.7 meters wide,’” Work said.
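Work’s description amounts to a size filter: once a candidate object’s footprint can be measured from imagery, its dimensions can be checked against the expected transporter size. The sketch below illustrates the idea using assumed values for the image’s ground sample distance (meters per pixel) and the matching tolerance; it is not drawn from any fielded system.

```python
# Sketch of the kind of size filter Work describes: convert a detection's
# pixel footprint to meters and compare against an expected transporter size.
# The GSD, tolerance, and footprint-from-bounding-box logic are assumptions
# for illustration only.

TARGET_LENGTH_M = 15.0   # transporter length cited by Work
TARGET_WIDTH_M = 4.7     # transporter width cited by Work
TOLERANCE = 0.15         # accept +/- 15 percent (assumed)


def footprint_meters(bbox_px, gsd_m_per_px):
    """Approximate object length/width from an axis-aligned pixel bounding box."""
    x_min, y_min, x_max, y_max = bbox_px
    dims = sorted(((x_max - x_min) * gsd_m_per_px,
                   (y_max - y_min) * gsd_m_per_px), reverse=True)
    return dims[0], dims[1]   # (length, width)


def matches_transporter(bbox_px, gsd_m_per_px):
    length, width = footprint_meters(bbox_px, gsd_m_per_px)
    return (abs(length - TARGET_LENGTH_M) <= TOLERANCE * TARGET_LENGTH_M and
            abs(width - TARGET_WIDTH_M) <= TOLERANCE * TARGET_WIDTH_M)


# Example: a 50 x 16 pixel detection at 0.3 m/pixel is roughly 15 m x 4.8 m.
print(matches_transporter((0, 0, 50, 16), 0.3))  # True
```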

Ultimately, the project aims to automatically alert analysts when a “facility is moving from a benign posture to a threatening posture” and to expand that capability globally to monitor thousands of sites around the world, the senior defense official said. “But if something abnormal happens,” the official added, “a yellow flag pops up and allows you to get your analysts engaged.”

Meanwhile, the algorithms learn from the analysts’ feedback, the way Apple and Facebook apps learn to recognize faces by watching users tag them in photo albums.
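That feedback loop is essentially incremental learning: each analyst confirmation or rejection becomes a new labeled example the model trains on. The sketch below shows one way this could work, using scikit-learn’s incremental `SGDClassifier`; the `extract_features` step and the labeling scheme are assumptions, not details of the government effort.

```python
# Minimal sketch of folding analyst feedback back into the model, assuming
# each detection can be reduced to a fixed-length feature vector. Uses
# scikit-learn's incremental SGDClassifier; feature extraction is hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
CLASSES = np.array([0, 1])  # 0 = false alarm, 1 = confirmed launcher


def extract_features(detection) -> np.ndarray:
    """Placeholder: length, width, detector score, surrounding context, etc."""
    raise NotImplementedError


def incorporate_feedback(reviewed):
    """`reviewed` is a list of (detection, analyst_label) pairs."""
    X = np.vstack([extract_features(d) for d, _ in reviewed])
    y = np.array([label for _, label in reviewed])
    # Incremental update: the model improves as analysts tag more detections,
    # much as photo apps refine face recognition from user tags.
    model.partial_fit(X, y, classes=CLASSES)
```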

Bill Gattle is president of Harris Corp.’s Space and Intelligence Solutions division, which makes geospatial intelligence sensors and processing equipment for the Pentagon. He says the key is teaching the machines what normal looks like.

“If Bob always goes to that restaurant every morning and eats breakfast and today he didn’t, he met a little white van off to the side of the street corner, we want to know that,” Gattle said. “We want it to trigger us that, ‘Hey, something different from the normal just happened.’”
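Gattle’s “Bob at breakfast” example is a pattern-of-life baseline: learn what normal activity looks like at a site, then flag observations that fall far outside it. The sketch below stands in for that idea with a simple per-site z-score on daily activity counts; the threshold and the choice of statistic are assumed for illustration, not drawn from any real system.

```python
# Sketch of the "learn normal, flag abnormal" idea using a per-site baseline of
# daily activity counts (e.g., vehicles observed). A z-score test is a stand-in
# for whatever the real system uses; the threshold is an assumed value.
import numpy as np

ZSCORE_THRESHOLD = 3.0  # assumed: flag observations >3 std devs from baseline


def flag_abnormal(history_counts, today_count):
    """Return True (raise a 'yellow flag') if today deviates from the baseline."""
    baseline = np.asarray(history_counts, dtype=float)
    mean, std = baseline.mean(), baseline.std()
    if std == 0:
        return today_count != mean
    return abs(today_count - mean) / std > ZSCORE_THRESHOLD


# Example: a site that normally sees 4-6 vehicles suddenly shows 25.
print(flag_abnormal([5, 4, 6, 5, 5, 4, 6], 25))  # True
```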

Work said this type of automation could prove most beneficial at the National Geospatial-Intelligence Agency, which snaps pictures from America’s spy satellites, analyzes the images, and feeds the information to the military and intelligence community.

“We’ve been trying to solve that problem since 1990, with the old Scud hunt” in Iraq, he said. “It’s a damned hard problem.”