When the view from above provides a crucial backup to GPS
Looking out the window of an airplane provides a unique perspective on the land passing below. A century ago, it was also the only way pilots could know if they were following their intended route.
In the era of GPS, the view from above has become optional for airborne navigation. But what if GPS suddenly disappeared in the middle of a flight? For military airborne missions, that loss could become a critical threat. Unlike commercial flights, military aircraft often need pinpoint positional accuracy when flying over unfamiliar territory, where they may be operating without air-traffic control support. Even a tiny navigation error could make it harder to avoid threats, find targets, and complete the mission safely. And a mission aborted for lack of reliable navigation puts at risk not only the aircraft and crew but also the ground forces relying on it, whether for intelligence or air support, along with all the associated tactical and cost impacts.
Fortunately, should an adversary find a way to block access to GPS — either by jamming the signal or blasting out a phony, spoofed version — airborne navigation accuracy doesn’t have to go with it. That’s because of a Leidos system that finds other ways to extract an accurate location fix, taking advantage of the fact that heading, airspeed, and a video feed from onboard equipment are usually already available on most military aircraft. “Our goal is to provide navigation that’s essentially equivalent to GPS, in a GPS-contested environment,” says Leidos engineer Chris Yeager, who has helped develop the system.
The system, called Assured Data Engine for Positioning and Timing (ADEPT), uses what sounds like an old-fashioned trick to get a location fix: It compares the view below to a map. Thanks to the ubiquitous availability of high-resolution cameras on airborne vehicles, along with detailed satellite maps of just about any area in the world, ADEPT can almost instantly match the view to an exact spot on a map, yielding a precise fix in real time. “It’s doing just what a pilot would do—looking down at the ground to recognize landmarks,” explains Dr. Jonathan Ryan, who manages vision-based navigation for the Position, Navigation, and Timing (PNT) programs at Leidos. “But it’s doing it much faster and more accurately, and it can do it anywhere.”
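The article doesn’t detail ADEPT’s algorithms, but the core idea, matching a downward camera frame against georeferenced satellite imagery, can be sketched with off-the-shelf tools. Below is a minimal, illustrative version using OpenCV template matching; the function and parameter names are hypothetical, and it assumes the camera frame has already been warped to the map’s orientation and scale, which is the hard part in practice.

```python
# Toy image-to-map position fixing, for illustration only.
import cv2

def visual_fix(camera_frame, reference_map, origin_lat, origin_lon, deg_per_px):
    """Slide the camera frame over a georeferenced map tile and return an
    estimated (lat, lon) for the frame's center, plus a match score."""
    result = cv2.matchTemplate(reference_map, camera_frame, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)      # best-match location
    h, w = camera_frame.shape[:2]
    center_x = top_left[0] + w / 2
    center_y = top_left[1] + h / 2
    lat = origin_lat - center_y * deg_per_px           # latitude decreases downward
    lon = origin_lon + center_x * deg_per_px
    return (lat, lon), score                           # score near 1.0 = high confidence
```

A production system would search many candidate map tiles, correct for aircraft attitude and altitude, and reject low-score matches rather than trust them.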
Fixing a position with machine learning
Thanks to machine learning and other sophisticated software techniques, ADEPT is able to recognize landmarks and terrain even when human eyes would have trouble distinguishing identifying details, including over desert, unbroken forest, or salt flats. It can also work when flying through or over clouds, as long as it can grab brief glimpses of the ground through occasional holes in the cloud cover. At night, the system can rely on infrared imaging, or use stars and other celestial objects to get a fix. Even map images that have been rendered obsolete by the addition or destruction of buildings and roads don’t keep ADEPT from recognizing the right spot. “It doesn’t need an exact match, as long as some of the details in the area remain the same,” says Ryan.
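Ryan’s point that an exact match isn’t required is exactly the property of feature-based matching, where a fix needs only a fraction of local landmarks to survive. Here is a hedged sketch using OpenCV’s ORB features and Lowe’s ratio test; this is a standard technique, not necessarily ADEPT’s actual method.

```python
# Feature matching tolerates partial scene changes: only a subset of
# keypoints (surviving landmarks) needs to agree between image and map.
import cv2

def match_despite_changes(camera_frame, map_tile, min_matches=25):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(camera_frame, None)
    kp2, des2 = orb.detectAndCompute(map_tile, None)
    if des1 is None or des2 is None:
        return None                                    # no usable features found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test discards ambiguous matches, e.g. repetitive terrain.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return good if len(good) >= min_matches else None
```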
Only extended, solid cloud cover below, or flying far out over the ocean, can keep ADEPT from a visual location fix. Even then, the system can continue to feed precise navigation information. By pulling in data from other standard sensors on the aircraft — including airspeed indicators, barometric altimeters, and inertial measurement devices — and adding in any signals it can pick up from radio and cellphone towers or communications satellites, the system can usually keep updating position with good accuracy until it gets a more precise fix from the next glimpse of the ground. In the future, radar image recognition will remove even solid cloud cover as an impediment.
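Between visual fixes, this amounts to dead reckoning: propagating the last known position using heading and speed while the uncertainty slowly grows. A simplified, flat-earth sketch of that propagation step follows; a real system would use a full inertial navigation solution rather than this toy function.

```python
# Dead reckoning between visual fixes (small-angle, flat-earth approximation).
import math

EARTH_RADIUS_M = 6371000.0

def propagate(lat_deg, lon_deg, heading_deg, ground_speed_mps, dt_s):
    """Advance a (lat, lon) estimate by dt_s seconds of straight flight.
    Heading is in degrees, with 0 = due north."""
    d = ground_speed_mps * dt_s                        # distance traveled (m)
    theta = math.radians(heading_deg)
    dlat = d * math.cos(theta) / EARTH_RADIUS_M        # northward component
    dlon = (d * math.sin(theta) /
            (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))  # eastward
    return lat_deg + math.degrees(dlat), lon_deg + math.degrees(dlon)
```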
That ability to seamlessly blend information from cameras, sensors, and external signals into the most accurate possible fix is one of ADEPT’s most sophisticated tricks, says Ryan, and it relies on a sensor fusion engine developed by Leidos. “Even though many of the individual sensors don’t give us a direct, absolute position measurement on their own, we can blend them together using sensor fusion techniques to get a good picture of where we are even if we can’t see the ground,” he explains.
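Leidos hasn’t published the internals of its fusion engine, but the textbook tool for this kind of blending is a Kalman-style filter, which weights each source by its uncertainty. A one-dimensional sketch of the measurement-update step shows the idea: a sharp visual fix pulls the estimate much harder than a drifting dead-reckoned one.

```python
# Scalar Kalman-style measurement update, the core of most fusion engines.
def fuse(estimate, est_var, measurement, meas_var):
    """Blend a prior estimate with a new measurement.
    Returns the fused value and its (smaller) variance."""
    gain = est_var / (est_var + meas_var)              # Kalman gain in [0, 1]
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1.0 - gain) * est_var                 # uncertainty shrinks
    return fused, fused_var

# Example: a dead-reckoned easting (variance 400 m^2) fused with a
# visual fix (variance 25 m^2) lands close to the visual fix.
pos, var = fuse(estimate=1520.0, est_var=400.0, measurement=1495.0, meas_var=25.0)
```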
A flexible solution for most airborne vehicles
The fact that ADEPT doesn’t rely on any specific hardware means that it can be quickly and affordably deployed on virtually any type of airborne platform. It runs on almost any available on-board computing hardware, from a flight computer to a cellphone-class processor, so it can be fielded on small drones and guided munitions as well as on larger manned and unmanned systems. “Because there are so many different combinations of airborne platforms and sensors, we’ve designed for maximum flexibility in deployment,” notes Yeager.
Of course, any one airborne vehicle is usually just a small part of a military operation. ADEPT takes advantage of that fact, too, by providing collaborative navigation — that is, allowing multiple vehicles and even individual personnel, whether in the air or on the ground, to share sensor and position information. ADEPT can then piece together the information to provide each member of the group with their location and its uncertainty. “It might even be a drone that can fly outside the GPS-contested environment,” says Yeager. “Then it can give everyone else their position and velocity.”
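The article doesn’t specify how ADEPT merges shared estimates; a common approach is inverse-variance weighting, in which each participant’s report counts in proportion to its confidence. The sketch below is illustrative only, for a single coordinate axis.

```python
# Combine position reports shared across a group, weighting each by the
# inverse of its reported variance (a standard heuristic, not necessarily
# ADEPT's method). Each report is a (position, variance) pair for one axis.
def combine_reports(reports):
    weights = [1.0 / var for _, var in reports]
    total = sum(weights)
    position = sum(w * pos for w, (pos, _) in zip(weights, reports)) / total
    variance = 1.0 / total                             # combined uncertainty
    return position, variance

# e.g. a drone with clean GPS outside the jammed zone reports a tight
# estimate, while two aircraft inside report looser dead-reckoned ones.
combine_reports([(1000.0, 4.0), (1012.0, 225.0), (996.0, 400.0)])
```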
But a glimpse of the ground remains the primary key to precise airborne navigation without GPS, adds Yeager. It may be the oldest form of navigation, but thanks to the 21st-century upgrade it’s getting from ADEPT, it’s still the most accurate and dependable.