Stakes in the Sand: Surveying in the Gulf War

In 1990, US forces arrived in the Persian Gulf with a cornucopia of navigation technologies: not just GPS but also LORAN, TACAN, TERCOM (for cruise missiles), and inertial navigation systems which used laser, electrostatic, or mechanical gyroscopes, as well as old-fashioned manual tools like maps and compasses. So why were US surveyors heading off into the Saudi desert?

The surveyors were from the 30th Engineer Battalion (Topographic), which was deployed to provide map production and distribution, surveying, and terrain analysis services to the theatre. The survey platoon’s work was being done on behalf of the Corps and divisional artillery, which had their own particular navigational needs. Unlike fighter or helicopter pilots, field artillery gunners didn’t have the opportunity to see their targets and make last-minute adjustments to their own aim. Unlike bomber crews or cruise missiles, their fire missions were not planned well in advance using specialized materials. To provide precise positioning information to the guns, each artillery battalion in the Gulf was equipped with two Position and Azimuth Determining Systems (PADS), truck-mounted inertial navigation systems that kept a running track of the unit’s position. At the heart of the PADS was the standard US Navy inertial navigation system, the AN/ASN-92 Carrier Inertial Navigation System (CAINS).

Like all inertial navigation systems, PADS had a tendency to drift over time. That meant it required regular refreshes at a pre-surveyed location, or control point. The initial specifications for PADS called for a horizontal position accuracy of 20 meters over 6 hours and 220 kilometers. Actual horizontal accuracy seems to have been far better, more like 5 meters. One reason for the high accuracy was that, unlike an airplane, the vehicle carrying the PADS could come to a complete stop, during which the system could detect and compensate for some of the errors from the accelerometers in the horizontal plane.
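
As a rough illustration of why stopping helps, here is a toy, one-dimensional Python sketch (the bias, noise level, and halt schedule are invented, and this is not the PADS/CAINS filter, just the underlying idea): integrating a biased accelerometer produces position error that grows quadratically, but a vehicle that halts at known intervals can treat any leftover velocity as sensor error and remove it, a so-called zero-velocity update.

```python
import numpy as np

# Toy 1-D model with invented numbers; not the PADS/CAINS algorithm.
rng = np.random.default_rng(0)
dt = 0.1                                  # seconds per accelerometer sample
n = 36_000                                # one hour of data
bias = 0.001                              # m/s^2 of uncorrected accelerometer bias
accel = bias + rng.normal(0, 0.005, n)    # truth: the vehicle never actually moves

def dead_reckon(accel, halt_every=None):
    """Integrate acceleration twice; optionally apply zero-velocity updates."""
    v = x = bias_est = 0.0
    for i, a in enumerate(accel, start=1):
        v += (a - bias_est) * dt
        x += v * dt
        if halt_every and i % halt_every == 0:
            # The vehicle is known to be stopped, so the accumulated velocity
            # must be sensor error: fold it into the bias estimate and zero it.
            bias_est += v / (halt_every * dt)
            v = 0.0
    return x

print(f"no stops:         {dead_reckon(accel):7.0f} m of position drift")
print(f"stop every 5 min: {dead_reckon(accel, halt_every=3_000):7.0f} m of position drift")
```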

Unfortunately, the US had exactly one control point in Saudi Arabia, at Dhahran airbase. (Army Reserve historian John Brinkerhoff says this and several other points were surveyed with “Doppler-based methods.” I assume that means using the TRANSIT satellite system, which determined location on the basis of Doppler shift.) Starting from that control point, the 30th’s surveyors extended a network of new control points northwards and westwards towards the Iraqi border. Conventional line-of-sight survey methods would have been too slow, but the surveyors had received four GPS receivers in 1989 and soon got more from the Engineer Topographic Laboratories to equip a follow-up team of surveyors. Eventually, their survey covered 10,000 square kilometers and included 95 control points. Relative GPS positioning took about two hours (according to Brinkerhoff) and offered accuracy to about 10 centimeters (compared to 17 meters for regular GPS use). Absolute positioning – done more rarely – required four hours of data collection and provided accuracy of 1–5 meters.
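
The gap between those two numbers comes from the basic logic of differential positioning: errors that are common to two receivers cancel when you work with the difference between their measurements. Here is a deliberately simplified, one-dimensional Python sketch with invented error magnitudes (real relative GPS surveying works on carrier-phase observables, not a single shared bias):

```python
import numpy as np

# Invented error magnitudes; the "common error" stands in for satellite clock,
# orbit, and atmospheric errors shared by two nearby receivers.
rng = np.random.default_rng(2)
n = 1_000                                  # measurement epochs in the session

common_error = rng.normal(0, 15)           # shared error, roughly constant over the session (m)
noise_ref = rng.normal(0, 0.05, n)         # receiver-specific noise at the control point
noise_rover = rng.normal(0, 0.05, n)       # receiver-specific noise at the new point

true_baseline = 12_345.678                 # metres from the known control point to the new point

ref_measured = 0.0 + common_error + noise_ref               # control point defined as position 0
rover_measured = true_baseline + common_error + noise_rover

absolute_estimate = rover_measured.mean()                   # shared error stays in
relative_estimate = (rover_measured - ref_measured).mean()  # shared error cancels

print(f"absolute positioning error: {abs(absolute_estimate - true_baseline):.2f} m")
print(f"relative positioning error: {abs(relative_estimate - true_baseline):.3f} m")
```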

When the ground war began on 24 February 1991, the two survey teams tried to stay ahead of the artillery, which meant driving unescorted into the desert and marking new control points with steel pickets with reflectors (for daytime) and blinking lights (for night-time). Providing location data through headquarters was too slow, so the surveyors took to handing it directly to the artillery’s own surveyors or just tacking it to the pickets. By the ceasefire on March 1 they had surveyed all the way to 30 km west of Basra. Where the artillery outran the control points they used their own GPS receivers to make a “good enough” control point and reinitialized the battalion PADS there, so all the artillery batteries would at least share a common datum. One thing PADS could do and GPS couldn’t was provide directional information (azimuth), so units that outran their PADS capabilities had to use celestial observations or magnetic compasses to determine direction.

What the 30th Battalion and the artillery’s surveyors did in the Gulf was different enough from traditional survey methods that some in the army already used a different phrase, “point positioning,” to describe it. In the 1968–1978 history of the Engineer Topographic Laboratories, which designed army surveying equipment, PADS was one of three surveying and land navigation instruments singled out as part of this new paradigm (the others were a light gyroscope theodolite with the acronym SIAGL and the Analytical Photogrammetric Positioning System).

Brinkerhoff tells the story of the 30th’s surveyors as a meeting of high and low tech, but the work really relied on a whole range of technology. Most of the GPS surveying was relative positioning anchored to previous Doppler surveying. Position and azimuth information was carried forward by inertial navigation, and the position of the firing battery was paired with target information from a forward observer – equipped with GPS, an inertial navigation system, or a paper map – or from aerial photography, which could be interpreted using the aircraft’s own navigation system or a photointerpreter’s tool like APPS. GPS surveying and navigation did not stay wrapped up with all these other navigational tools for long. The technology was flexible enough to be used in place of many of them. But in the early 1990s, GPS’s success was contingent on these other systems too.

Source Notes: The story of the 30th and its surveyors appears in John Brinkerhoff’s monograph United States Army Reserve in Operation Desert Storm. Engineer Support at Echelons Above Corps: The 416th Engineer Command (printed in 1992). Further details appear in the Army Corps of Engineers history Supporting the Troops: The U.S. Army Corps of Engineers in the Persian Gulf War (1996) by Janet A. McDonnell and “The Topographic Challenge of DESERT SHIELD and DESERT STORM” by Edward J. Wright in the March 1992 issue of Military Review. Reflections on how the artillery used PADS and GPS in the Gulf come from the October 1991 issue of Field Artillery, a special issue on “Redlegs in the Gulf.” Technical details for PADS are from the ETL History Update, 1968–1978 by Edward C. Ezell (1979).

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Three

Back to Part Two (or Part One)

Between 1990, when the first GPS-guided missiles were used in war, and 2001, when the United States began its invasion of Afghanistan, GPS guidance for weapons went from a niche technology used only by a few systems to one of the US military’s favorite techniques. The spread of GPS guidance led to a huge demand for ways of determining target positions in a way that weapons – rather than pilots – would understand. That meant three-dimensional coordinates in World Geodetic System 84 (WGS 84), rather than grid references on maps or even coordinates in other datums. One of the most important tools for establishing these coordinates was the Point Positioning Data Base (PPDB), a database of matching imagery and coordinates that had originated in the 1970s as a tool for army field artillery.

PPDB was made widely available in an analog format in the 1980s and used during the first Gulf War, but by the time of the war over Kosovo in 1999 its digitization had restricted its use mostly to computer workstations (first DEWDROP, then RainDrop) in the United States.

By the time the invasion of Afghanistan began in late 2001, RainDrop workstations had moved from analysts’ desks in the continental US to the same airbase – Prince Sultan Air Base in Saudi Arabia – as the air operations center that was commanding the air war. That shift was only the first step in the proliferation of tools and services for point mensuration to match the American and coalition demand for mensurated target coordinates. “Cursor on Target” (see Part One) began development in 2002; Northrop Grumman released RainDrop’s successor – RainStorm – in 2004; and another system, Precision Strike Suite for Special Operations Forces (PSS-SOF), was created to provide “near-mensurated” coordinates to troops in the field.

By 2009, when Noah Shachtman wrote a description of how mensuration was used to plan air strikes in Afghanistan, the process had been in regular use for almost a decade. Here’s his description of what was being done in the air operations center for Afghanistan:

An officer, I’ll call him Paul, walks me through the process. It starts with “targeteering,” figuring out where a pilot should attack. Just getting GPS coordinates or an overhead image isn’t good enough. GPS is unreliable when it comes to altitude. And landscape and weather conditions can throw satellite pictures off by as much as 500 feet. “Even with Gucci imagery, there’s always errors,” Paul says. He points to a pair of screens: On the right side is an aerial image of a building. On the left, two satellite pictures of the same place — taken from slightly different angles — flicker in a blur. Paul hands me a pair of gold-rimmed aviator glasses. I put them on, and those flickers turn into a single 3-D image. Paul compares the 2-D and 3-D images, then picks exactly where the building should be hit. Depending on elevation, adding a third dimension can shrink a 500-foot margin of error down to 15 feet.

Tying a point on the ground to a global grid precise enough to be used for air strikes anywhere in the world was now a matter of course. Fifty years after the CIA’s photo interpreters bought their first mainframe to help them analyze and map targets in the Soviet Union, calculating a target’s position in global terms has become simple – even if knowing what is at the target location is not. The technology here is also a long way from the cobbled-together equipment for which the PPDB was first created. The Analytical Photogrammetric Positioning System (APPS) combined digital and analog electrical components with old-fashioned optics and the human eye.

The transformation from APPS to RainStorm over the course of thirty years is pretty remarkable, but it’s also been hard to track. This is technology that doesn’t get a lot of praise or get singled out for attention, but that doesn’t mean it’s not interesting or important.

For one thing, APPS was a military application of commercial off-the-shelf (COTS) technology before COTS was cool. The Hewlett Packard 9810A desk calculator at its heart was not designed for military use or developed from military-sponsored research. It was just an office tool that was re-purposed for a very different office.

More importantly, APPS and PPDB are a good example of an enabling technology that was created long before its eventual requirement even existed. If there had been no PPDB, the development of GPS-guided bombs would have forced its creation. Instead, it was an Army project begun around the same time the first GPS satellites were being designed that provided the necessary service. That’s luck, not good planning.

Lastly, and equally interestingly, PPDB is a layer of complexity in modern warfare that’s easily overlooked because it sits in the middle of things. It provides the map of coordinates on which grander, more significant, moves are sketched, and which disappears into obscurity except when something goes wrong. Between cursor and target, or sensor and shooter, there are a lot of layers like this one.

Global Positioning Synecdoche: An Interim Update

The next installment about guidance and navigation during the Gulf War is a little delayed since it’s sprawled from a brief description of a cruise missile targeting aid to cover photogrammetry and mensuration tools from the late 1960s to the near-present. That’ll take some wrangling to bring back under control.

In the meantime, I’ve been thinking about some of the trends in what I’ve already written. The systems that became associated with victory in the Gulf (GPS, cruise missiles, laser-guided bombs, etc.) were all reliant on a range of less obvious technologies. TERCOM guidance, for example, required accurate satellite mapping that was itself an offshoot of satellite reconnaissance photography. Few of the systems, either the marquee technologies or the supporting ones, were designed with their eventual use in mind. Only luck or serendipity brought them together in the way that led to their success. The commercial GPS receivers that were so popular during the war existed because President Reagan had made a political point by opening the system to civilian use. Ring laser gyroscopes in aircraft navigation systems were available because Boeing’s commercial aviation division had made a big bet on the technology when the Department of Defense had balked at widespread adoption in the late 1970s. What I’m working on, the story of the Point Positioning Data Base and Analytical Photogrammetric Positioning System, is a similar case of technology created for one mission that proved far more useful, in conjunction with other equipment, for a different one.

We’ll see how long this series lasts, but I’m realizing that the list of systems that deserve some coverage is pretty long. Reading about long-range radionavigation like Loran-C made me realize that I can find almost nothing about Tactical Air Navigation (TACAN). The planning tools for each day’s airstrikes are more widely discussed, but there’s plenty to say about how they came into existence. I’ll have to figure out where to go after that.

The Pathfinders of Task Force Normandy

Though the F-117 stealth fighter, the Tomahawk cruise missile, and the laser-guided bomb probably garnered the majority of the public attention and accolades during the first Gulf War air campaign, true aficionados know the first shots fired in the war came not from any type of Air Force jet but from two quartets of Army attack helicopters in the Iraqi desert. Their attack on two early warning radar stations, code-named EAGER ANVIL, is yet another example of the apparent transformation that GPS enabled. Led by Air Force special operations helicopters, the joint team known as Task Force Normandy flew more than twenty miles into Iraq, across mostly featureless desert on a moonless night, to destroy two radars and open up a gap in the Iraqi air defense network for the Coalition’s air forces to exploit.

The mission’s origins were somewhat convoluted. What began as a planned ground assault by Army special forces infiltrated over the border on foot gradually morphed into a joint helicopter attack by Air Force special operations helicopters and AH-64 Apache attack helicopters from the Army’s 101st Airborne Division. The mission couldn’t be handed over to the Apaches alone because the AH-64’s Doppler radar navigation system was, as one aviation officer put it, only “a ball park navigator” that would drift 300–500 meters in a two-hour flight (GAO/OSI-93-4, p.55).

MH-53J Pave Low IIIE. Photo by MSgt Dave Nolan, Airman Magazine.

The Air Force’s MH-53J Pave Low III helicopters, on the other hand, had one of the most impressive navigational packages in the US military. What had begun as a series of improvements to the Air Force’s combat search and rescue (CSAR) helicopters during the Vietnam War had in the subsequent twenty years become the Air Force’s premier rotary-wing special operations capability. When the Pave Low III went into service in 1979, it had a precision navigation system that included terrain-following/terrain-avoidance radar, a forward-looking infrared (FLIR) sensor, and a combination Doppler radar and inertial navigation system connected to a projected “moving map” display. The upgrade to the Pave Low III Enhanced configuration in the late 1980s gave the Pave Low community some of the first GPS receivers in the military. Major Ed Reed, one of the program managers, recalled that:

“I went to the first GPS meeting. I was a junior major there and everybody listed all of the aircraft that were going to get GPS and . . . every fighter was covered, every bomber was covered. And then some line in the nineties, the H-53 was going to get it. Wrong answer! I said, ‘Sorry, I’ve got a FAD-1 so I move to the front of the line.’ They said, ‘You can’t do that. We’ve got this list!’ I said, “You’d better call this office and find out if that’s going to be your list at the end of the day. So later I got a nice phone call. They said, ‘You can have the first boxes off the line.’ I said, ‘That would work just fine.’” (On A Steel Horse I Ride, p.229)

Because of this, Task Force Normandy paired four Pave Low helicopters with eight Apaches. Ten miles out from the targets, crews on board the Pave Lows would drop clusters of chemical lights. The Apaches would update their Doppler navigation systems as they flew over the lights, then close in on their targets and destroy them with laser-guided Hellfire missiles, rockets, and gunfire.

Despite its challenging nature and the uncertainty surrounding it, the mission went smoothly. After four minutes, twenty-seven Hellfire missiles, 100 rockets and 4,000 rounds of 30mm ammunition, both radar sites were destroyed. No aircraft were lost.

Task Force Normandy’s mission is a great encapsulation of both the contribution GPS made to the Gulf War and the limits of that contribution. GPS made navigation on the mission more or less trivial (though it may well have been possible with the pre-GPS Pave Low navigation computer alone), but the capability was tied to those helicopters rather than a broader network. To pass navigational information to the accompanying Apaches, Task Force Normandy had to resort to a supremely low-tech solution.

The problem was that while GPS was global, it was not universal. A GPS receiver knew its location to within ten meters, but that knowledge only had meaning in conjunction with other, less precise information – like maps or the navigational systems on other aircraft. The problem was particularly apparent for search and rescue, the primary mission for the Air Force special operations force. As Lieutenant Colonel Richard Comer, the commander of the squadron which flew the Pave Low in Desert Storm, described it:

We were dealing with coordinates from somebody who was flying out there and doing acrobatics dodging missiles with an INS or Doppler, [which is why] his coordinate wasn’t going to be close to where that person was on our GPS. We didn’t know that. We thought that . . . we should just be able to fly to it and be able to hover above the guy and be able to drop the hoist down through the fog and pick him up. We didn’t know. (CSAR in Desert Storm, p.153)

Even small differences between the coordinate systems GPS worked with could make a difference. B-52 bombers that initialized their inertial navigation systems at Diego Garcia using coordinates in the WGS 72 datum dropped their bombs 400–600 feet away from targets whose coordinates were set using the WGS 84 datum. The full value of global positioning, it appeared, emerged only when friend and foe alike had commensurable global positions.

TERCOM, System and Symbol: Part Three

From Part Two

Direct Action’s bombing of the Litton factory in Rexdale and its assembly line for cruise missile guidance systems reflected discomfort on the Canadian left with the way the Cold War was heating up again after the years of détente, and with what they saw as an insufficient willingness to put distance between Canada and US foreign policy. Neither it nor the anti-cruise missile protests put much specific emphasis on the technology involved. For them, TERCOM itself was just one particular articulation of the US military-industrial complex.

In fact, TERCOM in general was a dead end in military guidance. No other weapons used the same guidance technique, and both the Tomahawk and the conventional ALCM adopted GPS as their primary guidance as soon as practical. I have yet to see any reference to a commercial spin-off from TERCOM either. The idea was more or less a one-off. It survived into the twenty-first century in only one niche, the nuclear-armed Tomahawk. Because, unlike GPS or other radionavigation systems, there were no outside signals to be jammed or spoofed, TERCOM-assisted inertial navigation remained the sole guidance system on the nuclear-armed Tomahawks even after the conventional versions switched over to GPS. The last nuclear-armed Tomahawks were only retired in 2013.

On the other hand, TERCOM did demonstrate the value and cost of good mapping, charting, and geodetic data. It wasn’t the first system to make use of it – every US strategic bomber and ballistic missile relied on mapping and geodetic information to some extent – but it was the first to demand not just knowledge about Point A and Point B but also about the terrain along the way. That requirement put the Defense Mapping Agency in a bind, forcing it both to go into overdrive and to triage its TERCOM processing work. Recognising that that sort of crash project couldn’t be repeated for every new weapon, the Deputy Secretary of Defense issued Program Decision Memorandum 85 (PDM-85) in 1985, which required each military department to “fund with its own resources the cost of unique earth data products.” Though Larson and Pelletiere wrote in the late 1980s that the rule was proving unenforceable, it was a further mark of recognition that this type of information was a critical weapon of war.

After the Cold War ended, the Defense Mapping Agency was merged with many of the intelligence community’s imagery creation and analysis offices to create the National Imagery and Mapping Agency (NIMA). In 2003, NIMA was renamed the National Geospatial-Intelligence Agency (NGA), a change that reflected the increasing conceptual consolidation of these kinds of information under the umbrella of geospatial intelligence (GEOINT). The term, as the US Geospatial Intelligence Foundation explains, was only about as old as the agency’s new name. But while The Atlantic’s Marc Ambinder could title a story about the agency in 2011 “The Little-Known Agency That Helped Kill Bin Laden,” NGA was pretty deeply embedded in the US national security establishment. More than thirty years after DMA started weaponizing its digital terrain elevation data (DTED), the idea that the military might demand not only detailed maps of its targets but also the underlying data – to transform into a three-dimensional computer model, a physical mockup, or, bringing us right back to DTED’s first use in the 1970s, a flight simulator profile – is old news.

Source Notes: US cruise missiles are pretty widely discussed, so these posts were cobbled together from many sources. Jay L. Larson and George Pelletiere’s Earth Data and New Weapons (available from DTIC here) was very useful for understanding how the DMA supported TERCOM, and is one of the few places to mention PDM-85. The explanation of how satellite stereophotogrammetry is done comes mostly from the NRO’s internal history Hexagon Mapping Camera Program and Evolution (as reprinted by the Center for the Study of National Reconnaissance). Information about Canadian protests against cruise missile testing and the Litton bombing in Toronto comes from John Clearwater’s 2006 book Just Dummies: Cruise Missile Testing in Canada. Ann Hansen, one of Direct Action’s members, published a memoir after her release from prison. Direct Action: Memoirs of an Urban Guerilla offers further, broadly similar details about the Litton bombing and reprints Direct Action’s communique.

TERCOM, System and Symbol: Part One

On the opening night of the first Gulf War in 1991, seven B-52 bombers flew a 14,000-mile round trip from Barksdale Air Force Base in Louisiana to launch the first attack in history with GPS-guided weapons. The bombers, whose operation was formally code-named SENIOR SURPRISE but quickly nicknamed “Secret Squirrel,” launched thirty-five Conventional Air-Launched Cruise Missiles (CALCM), each equipped with a single-channel GPS receiver. That made the CALCM the first and only GPS-guided weapon fired during Operation Desert Storm, though GPS navigation was used by many attack aircraft.

SENIOR SURPRISE was, it turns out, the first step towards an era of cheap and easy GPS-guided weapons. But during Operation Desert Storm those thirty-five missiles were vastly outnumbered by their close cousin, the US Navy’s BGM-109 Tomahawk cruise missile. The two missiles shared a common engine, the Williams F107 turbofan, and they had been designed with a common guidance system in mind too. The CALCMs were a variant of the US Air Force’s nuclear cruise missile: the nuclear version was designated AGM-86B, and it and the BGM-109 shared a guidance system built around Terrain Contour Matching, or TERCOM.

US Navy photo of a Block IV “Tactical Tomahawk” in flight. 021110-N-0000X-003 China Lake, Calif. Courtesy Wikipedia.

Today, we think of the first Gulf War as GPS’s coming-out party – which it was – and imagine GPS as the backbone of the many new weapons the US unleashed during the war – which it wasn’t. That raises the question of what actually was guiding US weapons during the first Gulf War, and why those systems never captured the imagination – for better or worse. After all, in 1992 two activists took axes to unlaunched GPS satellites because GPS was “military in its origins, military in its goals, [and] military in its development.”

TERCOM was the guidance system at the heart of the US cruise missile triad: an air-launched missile that extended the range and capabilities of the US bomber fleet (the ALCM); a submarine and ship-launched missile that gave US Navy ships a strategic nuclear weapon (the Tomahawk); and a ground-launched missile to be based in Europe as a counterpart to the Soviet intermediate-range nuclear arsenal (the Gryphon). Of the three, Tomahawk would have the most significant career. While the ground-launched Gryphon was fairly swiftly traded away in exchange for the Soviets demobilizing their intermediate-range missiles through the INF Treaty, the Tomahawk became the American military’s go-to weapon for punitive strikes during the 1990s. Tomahawks were fired against Iraqi military targets in 1993 and at Afghan and Sudanese targets in response to the East African embassy bombings in 1998, as well as used in conjunction with airstrikes in Bosnia (1995), Iraq (1998), and Serbia (1999). They remain, as journalist Mark Thompson put it in TIME magazine last year, “the curtain-raiser on U.S. military strikes since 1991’s Gulf War.”

(Since the Block III version was introduced in 1993, Tomahawk missiles without nuclear warheads have had GPS guidance in addition to the TERCOM system. The nuclear-armed version remains TERCOM only.)

How TERCOM Worked

Tomahawk flight diagram. From GAO/NSIAD-95-116 (1995), p. 17

Unlike GPS or other radionavigation systems, TERCOM did not provide continuous position information. Instead, it was an adjunct to the cruise missile’s inertial navigation system (INS), which used gyroscopes and accelerometers to measure the weapon’s movement. What TERCOM did was provide updates to the INS by comparing the measurements from the missile’s radar altimeter with stored information on the terrain the missile was flying over. Assuming this was a sufficiently “bumpy” area, the altimeter measurements should match only one location – providing a precise position fix. By using TERCOM to update its location three times during flight, the cruise missile had far superior accuracy to a missile using only inertial navigation. (On non-nuclear missiles, a separate system known as Digital Scene Matching Area Correlation [DSMAC] offered even more precise guidance for the final stage of the flight.)
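
In outline, the matching step is a correlation problem: slide the measured elevation profile along the stored terrain matrix and keep the best fit. The Python sketch below uses invented terrain and a simple mean-absolute-difference score rather than the actual TERCOM correlation statistics, but it captures the idea:

```python
import numpy as np

# Invented terrain and a toy matching score; the real system worked from
# prepared DTED "matrices," calibrated radar altimetry, and more robust
# correlation statistics.

def tercom_fix(stored, measured):
    """Return the (row, column) in the stored elevation matrix where the
    measured along-track profile fits best."""
    rows, cols = stored.shape
    k = measured.size
    best, best_score = None, np.inf
    for c in range(cols):
        for r in range(rows - k + 1):
            score = np.mean(np.abs(stored[r:r + k, c] - measured))
            if score < best_score:
                best, best_score = (r, c), score
    return best

rng = np.random.default_rng(1)
# A 200 x 50 grid of elevations (metres) over suitably "bumpy" ground.
terrain = np.cumsum(rng.normal(0, 5, (200, 50)), axis=0) + 300.0

# The missile flies down column 17 starting at row 60; subtracting the radar
# altimeter reading from barometric altitude yields terrain elevation samples.
true_row, true_col = 60, 17
measured = terrain[true_row:true_row + 12, true_col] + rng.normal(0, 1, 12)

print(tercom_fix(terrain, measured))   # should recover (60, 17)
```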

Making Maps for TERCOM

The terrain information the TERCOM system used (called a “matrix”) was based on a database of Digital Terrain Elevation Data (DTED), which contained the elevation above sea level of every point on a (roughly) 300-foot grid. Luckily enough, the Defense Mapping Agency (DMA) had already been collecting this information for a far more mundane purpose: use in flight simulators. TERCOM, however, required much, much more of it. The DMA had been involved in the early testing of TERCOM, but the agency wasn’t involved in the decision to put the cruise missiles using the technology into full-scale development. DMA had only been creating TERCOM matrices by hand (so to speak), and now it was looking at a requirement for more than 8,000 of them. According to Jay L. Larson and George A. Pelletiere, “it took DMA five years of ‘crash’ effort to fulfill these requirements.” Demands for TERCOM matrices had to be triaged, with three-quarters of the effort going to the ALCM and the other quarter to the submarine-launched and ground-launched cruise missile programs. If the US military had gone to war in the interim, it would have been unable to use cruise missiles except in particular areas.

To extend the Digital Terrain Elevation Data database over the entire world, and particularly over areas like the interior of the Soviet Union, the DMA used photography collected by the United States’ reconnaissance satellites. The technical principle involved, stereogrammetry, had been well understood since the First World War. Start with two photographs of the same scene taken from slightly different vantage points, such as two moments along the flight path of a plane or satellite. Look at the two images, one with each eye, and move them until they merge into a single three-dimensional image – just as the human brain normally combines the images from each eye … if your eyes were several hundred meters apart.

If you know the distance between the points from which the images were taken (that several-hundred-meter bridge of the nose) and the angles between the two viewpoints and an object in the image, you can calculate the size of that object. Add in the altitude from which the images were taken and you can calculate the elevation. Conventionally, stereogrammetry was done with a human brain doing the processing, working from illuminated transparencies or from mirrors and hardcopy prints. In the early 1960s, computerized systems like the Universal Automated Map Compilation Equipment (UNAMACE) took over, speeding up the process tremendously.
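
For the record, the arithmetic the operator (or UNAMACE) was performing reduces, in the simplest textbook case of vertical photography over a flat datum, to the standard parallax-height relationship. Here is a hedged Python sketch with invented numbers:

```python
# Textbook parallax-height relationship for vertical photography over a flat
# datum: h = H * dp / (p + dp). All numbers below are invented.

def height_from_parallax(flying_height_m, base_parallax_mm, parallax_difference_mm):
    """Height of a feature from the parallax difference between its top and base."""
    return flying_height_m * parallax_difference_mm / (
        base_parallax_mm + parallax_difference_mm
    )

# Photos taken from 6,000 m, 90 mm of absolute parallax at the base of a hill,
# and a 1.5 mm parallax difference measured at its summit:
print(f"{height_from_parallax(6_000, 90.0, 1.5):.0f} m")   # roughly 98 m
```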

Pictures of UNAMACE, from Hexagon (KH-9) Mapping Camera Program and Equipment (CSNR, 2012), p. 301.

Even once TERCOM matrices were readily available, the mission planning process remained long and complex. During the first Gulf War, it had to be performed at a specially equipped facility (one of the Navy’s two Cruise Missile Support Activities) and took between 24 and 80 (!) hours.

The US cruise missile program attracted a lot of controversy in the 1970s and 1980s, particularly from critics who argued that a new, flexible, and less detectable nuclear weapons delivery system could be destabilizing or even provocative, but TERCOM itself did not draw much of that attention. The exception was up in Canada, where a guidance system subcontractor found itself the focus of a blast of “direct action” against the cruise missile.

Forward to Part Two

Global Positioning Synecdoche: Prelude, Part One

It’s been several months since I read Ingrid Burrington’s Atlantic article on Keith Kjoller and Peter Lumsdaine’s attempt to destroy GPS satellites because the system was “military in its origins, military in its goals, military in its development.” My immediate response was that GPS had actually had less to do with military operations – especially targeting – than many other contemporary systems, and that Kjoller and Lumsdaine targeted GPS not because it was military but because it was civilian enough to be well known.

I still can’t help wondering, what percentage of the navigation and targeting in the first Gulf War was actually connected to GPS? What other systems were in use then, and were they anywhere near as intertwined between civilian and military uses as GPS was?

I started my search with the TERCOM terrain-matching technique used by the non-GPS-guided cruise missiles fired during the war, and quickly discovered that this was a far more complicated topic than I had thought. It’s going to take me a while to chew through all the material, but in the meantime it put me on to an interesting story that shows that the fuzzy boundary between military and civilian systems has always been controversial.

In this case the story starts with an interesting article by historian William Rankin, published in Technology and Culture a few years ago. “The Geography of Radionavigation and the Politics of Intangible Artifacts” argues, among other things, that radionavigation was offering a globe-spanning positioning system long before GPS came into service. A profusion of systems, developed by both Axis and Allies during World War II, meant that by the 1960s positioning via radionavigation – though not as precise or as consistent as via GPS – was available over most inhabited parts of the world.

The Rise of Radionavigation

The lattice of fixes from two station pairs in a hyperbolic navigation system. From the American Practical Navigator, fig. 1208a.

The most successful long-range radionavigation systems at the end of the war used the same basic principle that would later be applied in GPS. By measuring the difference in the arrival time of radio signals from two synchronized stations with known positions, you could locate yourself along a single “line of position.” These lines are hyperbolas, which gives such systems the name “hyperbolic navigation.” Taking measurements from two pairs of stations gives two lines, which should intersect in only one or two locations. Taking measurements from three pairs should lead to a single position.
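
To make that concrete, here is a small Python sketch of a hyperbolic fix on an invented flat-earth station layout: it predicts the time difference for each station pair at candidate positions and keeps the point that best matches the measured differences. Real Loran receivers worked from charted lattices and had to handle propagation corrections, which are ignored here.

```python
import numpy as np

# Flat-earth toy model with invented station coordinates; not a Loran receiver.
C = 299_792_458.0  # propagation speed (m/s), ignoring real-world corrections

def time_difference(pos, master, secondary):
    """Predicted arrival-time difference (secondary minus master) at `pos`."""
    return (np.linalg.norm(pos - secondary) - np.linalg.norm(pos - master)) / C

def hyperbolic_fix(pairs, measured_tds, extent=500_000.0, step=2_000.0):
    """Coarse grid search for the point whose predicted time differences
    best match the measured ones (least squares)."""
    xs = np.arange(-extent, extent + step, step)
    gx, gy = np.meshgrid(xs, xs)
    err = np.zeros_like(gx)
    for (m, s), td in zip(pairs, measured_tds):
        d_m = np.hypot(gx - m[0], gy - m[1])
        d_s = np.hypot(gx - s[0], gy - s[1])
        err += ((d_s - d_m) / C - td) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return np.array([gx[i, j], gy[i, j]])

# Invented station layout (metres) and receiver position.
master = np.array([0.0, 0.0])
secondary_a = np.array([400_000.0, 0.0])
secondary_b = np.array([0.0, 400_000.0])
true_position = np.array([120_000.0, 80_000.0])

pairs = [(master, secondary_a), (master, secondary_b)]
measured = [time_difference(true_position, m, s) for m, s in pairs]
print(hyperbolic_fix(pairs, measured))   # lands on the grid point nearest the truth
```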

The pulse pattern for LORAN-C. From the American Practical Navigator, fig. 1203a.

The US system, called Loran (short for Long Range Navigation), had a reliable range of about 1,500 km and an absolute accuracy of 1,800 meters, but an improved system with better range and accuracy, Loran-C, was in military service by 1957. Using a lower frequency and more precise time delay measurements, Loran-C gave its users a reliable range of about 1,850 kilometers and an absolute accuracy of 460 meters or less, which was pretty impressive for the time.

NATO soon adopted Loran as its standard long-range navigational aid, and the US lost no time in spreading chains of Loran stations across the world to offer precision navigation wherever possible. Unsurprisingly, there was a focus on the seas surrounding Europe and the Soviet Union, since Loran would be both a valuable wartime aid and a way for US ballistic missile submarines – tied at the time to waters close to the USSR because of the 1,900 km range of the first Polaris missiles – to update their inertial navigation systems (INS). (Inertial navigation systems have a tendency to “drift,” so an occasional reset based on an external fix is necessary to keep them accurate.) Thus, circa 1962, there was good coverage of the Mediterranean from a chain with stations in Italy, Turkey, Libya, and Spain and of the Norwegian Sea from a chain with stations in mainland Norway, Jan Mayen Island, Iceland, and the Faroes.

That’s where Rankin’s article concludes, with a net of navigational signals – not just Loran-A and -C, but also several competing European systems – that spread across most of the earth. Despite being a military service that wouldn’t be made available to civilian users until 1974, Loran-C didn’t attract much adverse attention as it spread. Not least, that was because the US didn’t necessarily tell partner nations about the specific connection between the system and the SLBM fleet. (That was something that would cause trouble in Norway later on.) Loran’s semi-successor, the truly global Omega radionavigation system, did not avoid the limelight.

Forward to Part Two