Stakes in the Sand: Surveying in the Gulf War

In 1990, US forces arrived in the Persian Gulf with a cornucopia of navigation technologies: not just GPS but also LORAN, TACAN, TERCOM (for cruise missiles), and inertial navigation systems which used laser, electrostatic, or mechanical gyroscopes, as well as old-fashioned manual tools like maps and compasses. So why were US surveyors heading off into the Saudi desert?

The surveyors were from the 30th Engineer Battalion (Topographic), which was deployed to provide map production and distribution, surveying, and terrain analysis services to the theater. The survey platoon’s work was being done on behalf of the Corps and divisional artillery, which had their own particular navigational needs. Unlike fighter or helicopter pilots, field artillery gunners didn’t have the opportunity to see their targets and make last-minute adjustments to their own aim. Unlike bomber crews or cruise missiles, their fire missions were not planned well in advance using specialized materials. To provide precise positioning information to the guns, each artillery battalion in the Gulf was equipped with two Position and Azimuth Determining Systems (PADS), truck-mounted inertial navigation systems that kept an ongoing track of the unit’s position. At the heart of PADS was the standard US Navy inertial navigation system, the AN/ASN-92 Carrier Aircraft Inertial Navigation System (CAINS).

Like all inertial navigation systems, PADS had a tendency to drift over time. That meant that it required regular refreshes using a pre-surveyed location, or control point. The initial specifications for PADS called for a horizontal position accuracy of 20 meters over 6 hours and 220 kilometers. Actual horizontal accuracy seems to have been far better, more like 5 meters. One reason for the high accuracy was that, unlike an airplane, the vehicle carrying PADS could come to a complete stop, during which the system could detect and compensate for some of the errors made by the accelerometers in the horizontal plane.
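
That stop-and-go correction is usually called a zero-velocity update: when the vehicle is known to be stationary, any velocity the INS still reports is pure error and can be removed before it corrupts the position estimate. Here is a minimal one-dimensional sketch of the idea, with an invented accelerometer bias; the real PADS/CAINS filtering was far more sophisticated.

```python
# Minimal 1-D sketch of a zero-velocity update (ZUPT), the kind of correction
# a stop-and-go land system like PADS can apply but an aircraft INS cannot.
# Illustration of the principle only, not the actual PADS/CAINS filter.

def drift(duration_s, accel_bias, halt_interval_s=None):
    """Position error accumulated from a constant accelerometer bias,
    optionally zeroing the velocity error at every simulated halt."""
    dt = 1.0
    vel_err = pos_err = 0.0
    for t in range(int(duration_s)):
        vel_err += accel_bias * dt
        pos_err += vel_err * dt
        if halt_interval_s and (t + 1) % halt_interval_s == 0:
            vel_err = 0.0  # halted: any reported velocity is pure error
    return pos_err

bias = 1e-3  # m/s^2, an assumed uncompensated accelerometer bias
print(f"no stops:            {drift(600, bias):6.1f} m of drift after 10 min")
print(f"halting every 2 min: {drift(600, bias, 120):6.1f} m of drift after 10 min")
```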

Unfortunately, the US had exactly one control point in Saudi Arabia, at Dhahran airbase. (Army Reserve historian John Brinkerhoff says this and several other points were surveyed with “Doppler based methods.” I assume that means using the TRANSIT satellite system, which determined location on the basis of Doppler shift.) Starting from that control point, the 30th’s surveyors extended a network of new control points northwards and westwards towards the Iraqi border. Conventional line-of-sight survey methods would have been too slow, but the surveyors had received four GPS receivers in 1989 and soon got more from the Engineer Topographic Laboratories to equip a follow-up team of surveyors. Eventually, their survey covered 10,000 square kilometers and included 95 control points. Relative GPS positioning took about two hours (according to Brinkerhoff) and offered accuracy to about 10 centimeters (compared to 17 meters for regular GPS use). Absolute positioning – done more rarely – required four hours of data collection and provided accuracy of 1–5 meters.
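
The gap between those two accuracy figures is less mysterious than it looks: two receivers tracking the same satellites share most of their error sources (satellite clocks and orbits, atmospheric delays), so measuring a new point relative to a known control point cancels the shared errors. A toy sketch of that cancellation, with invented error magnitudes and none of the carrier-phase processing the surveyors actually relied on:

```python
import random

# Toy illustration of why relative GPS positioning beats absolute positioning:
# errors common to both receivers cancel when the measurements are differenced.
# Error sizes are invented; real survey work used carrier-phase techniques.

random.seed(1)
true_base = 0.0        # known control point (metres along one axis)
true_rover = 1000.0    # point being surveyed

for _ in range(3):
    common = random.gauss(0, 15)        # shared error: clocks, orbits, atmosphere
    noise_base = random.gauss(0, 0.05)  # receiver-specific noise
    noise_rover = random.gauss(0, 0.05)

    measured_base = true_base + common + noise_base
    measured_rover = true_rover + common + noise_rover

    absolute = measured_rover                                # metres of error
    relative = true_base + (measured_rover - measured_base)  # centimetres of error
    print(f"absolute error {abs(absolute - true_rover):5.1f} m,  "
          f"relative error {abs(relative - true_rover) * 100:4.1f} cm")
```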

When the ground war began on 24 February 1991, the two survey teams tried to stay ahead of the artillery, which meant driving unescorted into the desert and marking new control points with steel pickets fitted with reflectors (for daytime) and blinking lights (for night-time). Providing location data through headquarters was too slow, so the surveyors took to handing it directly to the artillery’s own surveyors or simply tacking it to the pickets. By the ceasefire on 1 March they had surveyed all the way to 30 km west of Basra. Where the artillery outran the control points, they used their own GPS receivers to make a “good enough” control point and reinitialized the battalion PADS there, so all the artillery batteries would at least share a common datum. One thing PADS could do that GPS couldn’t was provide directional information (azimuth), so units that outran their PADS capabilities had to use celestial observations or magnetic compasses to determine direction.

What the 30th Battalion and the artillery’s surveyors did in the Gulf was different enough from traditional survey methods that some in the army already used a different phrase, “point positioning,” to describe it. In the 1968–1978 history of the Engineer Topographic Laboratories, which designed army surveying equipment, PADS was one of three surveying and land navigation instruments singled out as part of this new paradigm (the others were a light gyroscope theodolite with the acronym SIAGL and the Analytical Photogrammetric Positioning System).

Brinkerhoff tells the story of the 30th’s surveyors as the meeting of high tech and low tech, but the work really relied on a whole range of technologies. Most of the GPS surveying was relative positioning anchored to earlier Doppler surveys. Position and azimuth information was carried forward by inertial navigation, and the position of the firing battery was paired with target information that came either from a forward observer equipped with GPS, an inertial navigation system, or a paper map, or from aerial photography interpreted using the airplane’s own navigation system or a photointerpreter’s tool like APPS. GPS surveying and navigation did not stay wrapped up with all these other navigational tools for long. The technology was flexible enough to be used in place of many of them. But in the early 1990s, GPS’s success was contingent on these other systems too.

Source Notes: The story of the 30th and its surveyors appears in John Brinkerhoff’s monograph United States Army Reserve in Operation Desert Storm. Engineer Support at Echelons Above Corps: The 416th Engineer Command (printed in 1992). Further details appear in the Army Corps of Engineers history Supporting the Troops: The U.S. Army Corps of Engineers in the Persian Gulf War (1996) by Janet A. McDonnell and in “The Topographic Challenge of DESERT SHIELD and DESERT STORM” by Edward J. Wright in the March 1992 issue of Military Review. Reflections on how the artillery used PADS and GPS in the Gulf come from the October 1991 issue of Field Artillery, a special issue on “Redlegs in the Gulf.” Technical details for PADS are from the ETL History Update, 1968–1978 by Edward C. Ezell (1979).

Map Overlap: Warsaw Pact vs. NATO Grids

The Charles Close Society hasn’t updated its topical list of articles on military mapping since I wrote about it in 2015, but there is a new article by John L. Cruickshank (“More on the UTM Grid system”) in Sheetlines 102 (April 2015) that is now freely available on the society’s website. The connection to Soviet mapping is that Cruickshank discusses how both NATO and the Warsaw Pact produced guides and maps to help their soldiers convert between their competing grid systems. Unlike latitude and longitude, a grid system assumes a flat surface. That’s good for simplifying calculations of distance and area, but it means you have the problems of distortion that come with any map projection.

Both the Soviets and the Americans based their standard grids on transverse Mercator projections that divided the globe into narrow (6° wide) north-south strips, each with its own projection. These were narrow enough not to be too badly distorted at the edges, but still wide enough that artillery would rarely have to shoot from a grid location in one strip at a target in another (which required extra calculations to compensate for the difference in projections). The American system was called the Universal Transverse Mercator (or UTM; the grid itself was the Military Grid Reference System, or MGRS). The Soviet one was known, in the West at least, as the Gauß-Krüger grid.
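
The zone arithmetic itself is trivial. Here is a short sketch of the standard UTM zone rule (ignoring the Norway and Svalbard exceptions), just to make the 6° bookkeeping concrete:

```python
def utm_zone(lon_deg):
    """UTM longitude zone (1-60) for a longitude in degrees.

    Each zone is 6 degrees wide and zone 1 starts at 180 degrees west.
    The special Norway/Svalbard zone exceptions are ignored here.
    """
    return int((lon_deg + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian of a UTM zone, in degrees east."""
    return zone * 6 - 183

z = utm_zone(47.98)                 # roughly the longitude of Kuwait City
print(z, central_meridian(z))       # zone 38, central meridian 45 degrees east
```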

In his article, Cruickshank reports that by 1961 East German intelligence was printing 1:200,000 military topographic maps that carried both UTM and Soviet Gauß-Krüger grids. By 1985 a full series existed that ran all the way west to the English Channel. Rather than print a full map series with both grids, the US Army produced intelligence guides to the conversion between them. Field Manual 34-85, Conversion of Warsaw Pact Grids to UTM Grids, was issued in September 1981. A supplement, G-K Conversion (Middle East), was released in February 1983. As Cruickshank observes, both manuals have fascinating illustrated covers. Conversion of Warsaw Pact Grids features a rolled-up map labelled “Intelligence” standing on a grid and looking at a globe focused on Europe. G-K Conversion, on the other hand, shows an Eagle literally stealing the map out of the hand of a Bear using calipers to measure distances from Turkey to Iran across the Caspian Sea.

The article ends with the observation that the history of modern geodesy, which underpins calculations like the UTM and Gauß-Krüger grids, remains “overdue for description.” Since it was published, a new book has appeared that goes a long way towards covering some of those developments (at least for non-specialists, if not for experts like Cruickshank). In fact, map grids are one of the main topics of After the Map: Cartography, Navigation and the Transformation of Territory in the Twentieth Century by William Rankin (University of Chicago Press, 2016). The book is chock-full of fascinating discussions of new mapping and navigation systems that developed between the end of the nineteenth century and the appearance of GPS. Its focus is on three overlapping case studies: large-scale global maps like the International Map of the World and World Aeronautical Charts (which have their own connection to Soviet mapping), grid systems like UTM, and radionavigation networks like Gee and Loran. (The third of these was already the topic of an article by Rankin that I wrote about here.)

In the chapters on map grids, After the Map shows just how long even an ostensibly universal design like UTM remained fragmented and regional. The use of grids had begun on the Western Front during the First World War, spread to domestic surveying in the interwar period, and been adopted by all the major powers during the Second World War. But universal adoption of the principles involved did not mean adoption of a common system. Even close allies like the United States and Britain ended up just dividing the world and jointly adopting one or the other nation’s approach in each region: British grids were applied to particular war zones and a more general American system was used for the rest of the world. Neither used a transverse Mercator projection.

Even once America and its NATO allies settled on UTM as a postwar standard – a decision made despite opposition from the US Navy and Air Force, who fought vigorously for a graticule rather than a grid – UTM maps did not use a single consistent projection but adopted whichever reference ellipsoid was already in use for a region. While those differences were eventually resolved, even the 1990 edition of Defense Mapping Agency Technical Manual 8358.1, Datums, Ellipsoids, Grids, and Grid Reference Systems, still included specifications for twenty British grids, including the British and Irish domestic surveys (plus a further nineteen secondary grids), as well as the Russian Gauß-Krüger. East German tank commanders should have been grateful that they could get away with only two from the Intra-German Border to the Channel!

Smart Plane, Dumb Bombs, Bad Maps?: Part Two

Back to Part One

Designed around bleeding-edge 1960s avionics, the F-111 was built to take the guesswork out of high-speed, low-level navigation. Its avionics included an inertial navigation system (INS), terrain-following and attack radars, and a navigation computer that used these inputs to determine the airplane’s current location. Since an INS tends to drift over time, due to small errors in the measurements made by its gyros and accelerometers, the F-111’s navigator provided updates by taking a radar fix on a nearby landmark, usually known as an offset aimpoint, or OAP. Though it might be taken for granted by an observer, the entire process depended on good maps and geodetic information. F-111 pilot Richard Crandall’s description of Operation EL DORADO CANYON, the 1986 air attacks on Libya, explains what could go wrong when the F-111 flew with bad information.

Three groups of F-111Fs were involved in the operation, two equipped with laser-guided bombs (LGBs) and a third – attacking Tripoli airport – with “dumb” bombs that would be slowed by ballutes to allow for low-level delivery. All carried the PAVE TACK laser-designating pod, which also included an infra-red camera.

An F-111F aircraft releases Mark 82 bombs equipped with ballutes over a training range in 1986. Air Force photo via Wikimedia Commons.

Crandall focuses on the attack on Tripoli airport, where five aircraft carrying seventy-two bombs reached the target but only one successfully hit the Libyan aircraft parked on the tarmac. Why? According to Crandall, the attacking planes had been provided with the distance between a very visible radar target at the airport and where their bombs were supposed to land (“the radar offset”), and that information was wrong.

The aircrew that hit the airport, hats off to them! I knew the WSO extremely well, a good friend and fellow instructor for several years. In watching his tape, he nailed the radar offset for the airport. The radar was not good at burning out flat concrete but did much better on targets with more radar reflectivity. He went to narrow sector expand mode on the offset and then switched back to the Pave Tack infrared video, and nothing appeared. He went back and checked the offset again, and still dead on. Then he switched back to the Pave Tack. Every other aircraft let the bombs fly using the radar offset. Their bombs hit the airfield between the taxiway and the runway. The coordinates on the offset were evidently bad. My friend went from narrow sector to wide sector in the Pave Tack and in the right side edge of the field of view he caught sight of the IL-76s. You hear him shout “come right come right” to the Pilot who is seeing nothing except tons of anti-aircraft artillery exploding and his TFR screen. The pilot made a hard turn. As he rolls out, his WSO has fired the lasers and the bombs immediately flew off. You see in the video the Pave Tack’s video rotate to upside down due to the mechanics of the pod rotating to see the target behind the aircraft. The WSOs had to learn how to track upside down when guiding LGBs. You then see a huge explosion rip through the airplanes. That was incredible teamwork in the cockpit. Good on the WSO to switch to wide field of view—it went from a really narrow straw to a slightly fatter straw to look through, but got him onto the target.

All the aircraft attacking the Tripoli area seem to have had trouble with navigational updates, not just those at the airport. The final update by radar OAP before crossing the Libyan coastline was the island of Lampedusa, and the aircrew were given coordinates for their OAP that were off by several hundred feet. James A. Jimenez, who flew one of the F-111s attacking Bab al-Aziziyah, wrote his recollections of the mission for the December 2008 issue of Air and Space Magazine. He remembers the last radar update point as being a tower at the western tip of Lampedusa.

Our navigation system had been running sweet, but when Mike [his WSO] selected the tower, the cursors fell about one mile to the west. An error during the planning process had resulted in incorrect coordinates being issued to all crews. Mike recognized the error and did not use the coordinates to update our navigation system. His decision was probably the single greatest factor enabling us to hit our target: those who updated their nav systems based on the bad coordinates missed.

Among those who ran into trouble was one F-111 targeting the Bab al-Aziziyah barracks, whose error at Lampedusa was compounded upon reaching Tripoli and which ended up a mile and a half off target. Its bombs ended up hitting and damaging the French embassy.

I’m still not entirely clear on where the offset coordinates for EL DORADO CANYON came from. According to an official US Air Force history, during the F-111’s first combat deployments to Vietnam the offset aiming points came from a photo-positioning database called SENTINEL DATE at the Defense Mapping Agency Aerospace Center in St. Louis, or from an equivalent database called SENTINEL LOCK that was deployed to Takhli and Nakhon Phanom air force bases in Thailand. SENTINEL DATE/LOCK “provide[d] a method for precisely determining the latitude, longitude, and elevation of navigational fix-points, offset aim points, and targets.”

However, a student paper for Air Command and Staff College by Major James M. Giesken explains that the update points for EL DORADO CANYON were geolocated by the F-111 fighter wing staff using the Analytical Photogrammetric Positioning System (APPS), an analog system for determining the location of an object on photo imagery. APPS’s output used the WGS 84 standard datum, while the target coordinates were expressed in the European Datum used by American units in Europe, and this was the source of the location error. (There’s no source for that information in the paper, but Giesken was an instructor at the Defense Mapping School from 1986 to 1988, then aide to the director of the Defense Mapping Agency for fourteen months and executive officer for the director for another seventeen. Hopefully he had a good source for the information.)

An early version of the Analytical Photogrammetric Positioning System. From Army Research & Development, May-June 1976, p.24

What happened during Operation EL DORADO CANYON demonstrated the obstacles to accuracy that could not be erased by advanced technology, whether in the bomb or the airplane. Regardless of the precision of the weapon, an attack was only as accurate as the underlying information – and problems with that information could end up embedded in the relationships between the very systems that were supposed to deliver a more precise attack than ever before.

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Three

Back to Part Two (or Part One)

Between 1990, when the first GPS-guided missiles were used in war, and 2001, when the United States began its invasion of Afghanistan, GPS guidance for weapons went from a niche technology used only by a few systems to one of the US military’s favorite techniques. The spread of GPS guidance led to a huge demand for ways of determining target positions in a way that weapons – rather than pilots – would understand. That meant three-dimensional coordinates in World Geodetic System 84 (WGS 84), rather than grid references on maps or even coordinates in other datums. One of the most important tools for establishing these coordinates was the Point Positioning Data Base (PPDB), a database of matching imagery and coordinates that had originated in the 1970s as a tool for army field artillery.

Though PPDB had been made widely available in an analog format in the 1980s and was used during the first Gulf War, its digitization meant that during the war over Kosovo in 1999 its use was mostly restricted to computer workstations (first DEWDROP, then RainDrop) located in the United States.

By the time the invasion of Afghanistan began in late 2001, RainDrop workstations had moved from analysts’ desks in the continental US to the same airbase – Prince Sultan Air Base in Saudi Arabia – as the air operations center that was commanding the air war. That shift was only the first step in the proliferation of tools and services for point mensuration to match the American and coalition demand for mensurated target coordinates. “Cursor on Target” (see Part One) began development in 2002; Northrop Grumman released RainDrop’s successor – RainStorm – in 2004; and another system, Precision Strike Suite for Special Operations Forces (PSS-SOF), was created to provide “near-mensurated” coordinates to troops in the field.

By 2009, when Noah Shachtman wrote a description of how mensuration was used to plan air strikes in Afghanistan, the process had been in regular use for almost a decade. Here’s his description of what was being done in the air operations center for Afghanistan:

An officer, I’ll call him Paul, walks me through the process. It starts with “targeteering,” figuring out where a pilot should attack. Just getting GPS coordinates or an overhead image isn’t good enough. GPS is unreliable when it comes to altitude. And landscape and weather conditions can throw satellite pictures off by as much as 500 feet. “Even with Gucci imagery, there’s always errors,” Paul says. He points to a pair of screens: On the right side is an aerial image of a building. On the left, two satellite pictures of the same place — taken from slightly different angles — flicker in a blur. Paul hands me a pair of gold-rimmed aviator glasses. I put them on, and those flickers turn into a single 3-D image. Paul compares the 2-D and 3-D images, then picks exactly where the building should be hit. Depending on elevation, adding a third dimension can shrink a 500-foot margin of error down to 15 feet.

Tying a point on the ground to a global grid precise enough to be used for air strikes anywhere in the world was now a matter of course. Fifty years after the CIA’s photo interpreters bought their first mainframe to help them analyze and map targets in the Soviet Union, calculating a target’s position in global terms has become simple – even if knowing what is at the target location is not. The technology here is also a long way from the cobbled-together equipment for which the PPDB was first created. The Analytical Photogrammetric Positioning System (APPS) combined digital and analog electrical components with old-fashioned optics and the human eye.

The transformation from APPS to RainStorm over the course of thirty years is pretty remarkable, but it’s also been hard to track. This is technology that doesn’t get a lot of praise or get singled out for attention, but that doesn’t mean it’s not interesting or important.

For one thing, APPS was a military application of commercial off-the-shelf (COTS) technology before COTS was cool. The Hewlett Packard 9810A desk calculator at its heart was not designed for military use or developed from military-sponsored research. It was just an office tool that was re-purposed for a very different office.

More importantly, APPS and PPDB are a good example of an enabling technology that was created long before its eventual requirement even existed. If there had been no PPDB, the development of GPS-guided bombs would have forced its creation. Instead, it was an Army project begun around the same time the first GPS satellites were being designed that provided the necessary service. That’s luck, not good planning.

Lastly, and equally interestingly, PPDB is a layer of complexity in modern warfare that’s easily overlooked because it sits in the middle of things. It provides the map of coordinates on which grander, more significant, moves are sketched, and which disappears into obscurity except when something goes wrong. Between cursor and target, or sensor and shooter, there are a lot of layers like this one.

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Two

Back to Part One

One part of the long pre-history of GPS-guided bombs began in the late 1960s with the US Army Corps of Engineers and a research project to improve the accuracy of American field artillery. The Analytical Photogrammetric Positioning System (APPS) was a tool to calculate the coordinates of a target seen on reconnaissance photography. Introduced into service in the mid-1970s, APPS and the geo-referenced imagery that it used (the Point Positioning Data Base, or PPDB) proved so useful that they were borrowed by US Air Force and Navy airstrike planners too.

The desire to fix targets from aerial photography and strike them with precision was hardly unique to APPS’s users. The Air Force also had a system for calculating target coordinates under development. The Photogrammetric Target System (PTS) was part of a far grander system for detecting, locating, and destroying enemy surface-to-air missile (SAM) sites called the Precision Location and Strike System (PLSS). Unlike APPS, which printed out target coordinates for human use, the proposed PTS was a fully computerized system that would transmit the coordinates to PLSS’s central computer somewhere in West Germany or the United Kingdom, where they would be converted into guidance instructions for the 2,000-lb glide bombs that were going to be the sharp end of the system.

The TR-1, a renamed U-2 reconnaissance plane, was the aerial platform for the PLSS system. (U.S. Air Force Photo by Master Sgt. Rose Reynolds)

You can see how PTS’s fortunes waxed and waned by following the annual briefings on PLSS that the Air Force gave to Congress. What began in 1973 was gradually scaled back as PLSS’s own funding declined. Plans for a manual prototype PTS were cancelled when it became clear that APPS could do the same job, and the system disappeared from the briefing in 1980.

Much of the imagery for point positioning came from mapping cameras on the KH-9 HEXAGON satellite. NRO photograph courtesy Wikimedia.

While the Air Force was experimenting with PTS and APPS to plan aerial attacks, PPDB was expanding in importance to become part of the targeting process for non-nuclear Tomahawk missiles being operated by the US Navy. Simultaneously, crises with Iran and the demands of the Carter Doctrine drove the expansion of PPDB coverage in the Middle East to 930,000 square nautical miles by 1981.

That meant that when Iraq invaded Kuwait in 1990 the US had 100% PPDB coverage of the theater, better than its coverage with either 1:50,000 topographical maps or the 1:250,000 Joint Operations Graphic-Air. Unfortunately, the PPDB imagery was woefully out of date, forcing the Defense Mapping Agency (DMA) to make PPDB updates part of its vast cartographic build-up for Operation Desert Shield. That included 30 new PPDB sets (of 83 requested), 26 video PPDB sets, and 7,972 target coordinates.

Despite those deliveries, the obsolescence of PPDB imagery was noticed during Operation Desert Storm. The annual official history of 37th Fighter Wing – which flew the F-117 stealth fighter during Desert Storm – complained that:

Spot imagery was not of sufficient high resolution to support the technical requirements of a high technology system such as the F-117A Stealth Fighter. And, the available Analytical Photogrammetric Positioning System (APPS) Point Positioning Data Base (PPDB) was grossly outdated. It was not until the last week of the war that more current PPDBs arrived, which was too late to have an effect on combat operations.

After 1991, the need for precise target coordinates grew alongside the spread of precision guided weapons that needed those coordinates, which meant that what had begun as an Army instrument became more and more vital to aviation. A 1994 naval aviation handbook reminded users that “reliable target coordinates come only from a limited number of classified sources,” including the Defense Mapping Agency’s “Points Program” (which accepted requests by phone or secure fax) and APPS systems carried on aircraft carriers.

Unlike laser- or electro-optically guided bombs, which home in on a signature that their target emits or reflects, bombs and missiles guided by GPS simply fly or fall towards the coordinates they are given. Widespread deployment during the bombing of Serbia in 1999 (Operation ALLIED FORCE) therefore meant a vast demand for precise target coordinates.

The Point Positioning Data Base, now provided in digital form rather than as a film chip/magnetic cassette combination, was an important source of those coordinates because it provided not just two-dimensional latitude/longitude coordinates but also elevation. In a desert environment like Iraq, a bomb dropped from above could more or less be assumed to hit its target no matter how large the gap between the assumed and the actual elevation of the ground. Where the terrain was more varied, however, aiming too high or too low could cause the bomb to slam into a hill short of the target or fly right over it and land long. Securing that elevation information from aerial photography was known as “mensuration.”
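
The geometry behind the long-or-short problem is simple: for a weapon arriving at an angle above the horizontal, an error in the assumed target elevation shifts the impact point along the ground by roughly that elevation error divided by the tangent of the impact angle. A quick illustration with invented numbers:

```python
import math

def horizontal_miss(elevation_error_m, impact_angle_deg):
    """Approximate along-track miss caused by an error in assumed target
    elevation, for a weapon arriving at the given angle above the horizontal."""
    return elevation_error_m / math.tan(math.radians(impact_angle_deg))

# An assumed 30 m elevation error, for a range of (invented) impact angles:
for angle in (30, 45, 60, 80):
    print(f"impact angle {angle:2d} deg -> miss of ~{horizontal_miss(30, angle):.0f} m")
```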

Though APPS was a computerized tool, it used film chips rather than digital imagery. To take the entire system digital, the National Imagery and Mapping Agency (which had absorbed the Defense Mapping Agency in 1996) developed a computer workstation called DEWDROP that could provide mensurated coordinates using the Point Positioning Data Base. That was followed a few years later by a similar system called RainDrop. In February 1999, a little over a month before ALLIED FORCE began, the Air Force committed to buy 170 RainDrop systems for $1.8 million from Computek Research, Inc. (Here’s the press release.)

During ALLIED FORCE, mensurated coordinates were needed for Tomahawk, CALCM, and SLAM missiles, as well as the JDAM bombs being carried by the first B-2 stealth bombers. To get them, the air operations center in Vicenza, Italy had to reach back to analysts in the United States, which was where the mensuration workstations were located. Here’s how Vice Admiral Daniel J. Murphy, Jr. describes the process of acquiring them, starting from a rough fix provided by an ELINT satellite:

So I walked into the intelligence center and sitting there was a 22-year-old intelligence specialist who was talking to Beale Air Force Base via secure telephone and Beale Air Force Base was driving a U–2 over the top of this spot. The U–2 snapped the picture, fed it back to Beale Air Force base where that young sergeant to my young petty officer said, we have got it, we have confirmation. I called Admiral Ellis, he called General Clark, and about 15 minutes later we had three Tomahawk missiles en route and we destroyed those three radars.

About a year later the Air Force ordered another 124 RainDrop systems. (Another press release.) Three months later, Northrop Grumman bought Computek for $155 million in stock.

ALLIED FORCE was confirmation for many observers that coordinate-guided weapons were the wave of the future. Tools like PPDB were necessary infrastructure for that transformation.

Forward to Part Three

Between Sensor and Shooter: The Point Positioning Data Base, Part One

In 2002, after the kick-off of the US invasion of Afghanistan but before the invasion of Iraq, US Air Force Chief of Staff General John Jumper gave a speech in which he said that “The sum of all wisdom is a cursor over the target.” Two years later, the Air Force was demonstrating how, using an XML schema, troops could pass target coordinates to a bomb-armed F-15E flying overhead in an entirely machine-to-machine process in which the devices involved communicated seamlessly with one another. The system was called, in honor of General Jumper’s speech, “Cursor on Target.”

Describing the basic process in a “Senior Leader Perspective” for the Air Force’s Air & Space Power Journal, project supporter Brigadier General Raymond A. Shulstad explained how target coordinates input into a laptop in the hands of a forward air controller in the field pass directly to a map workstation in the air operations center (the command center that coordinates air operations in a theater) and then, after the operator clicks on those coordinates on the map, appear automatically on the F-15’s head-up display.
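
Shulstad’s article doesn’t reproduce the message format, but the Cursor on Target schema itself is strikingly small: an event element carrying identity, type, and timing attributes, wrapped around a point element with latitude, longitude, height above the ellipsoid, and error estimates. A rough sketch of what such a message might look like, built with Python’s standard library (the attribute set follows the published CoT base schema as I understand it; the identifiers and values are invented for illustration):

```python
from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET

# Sketch of a Cursor on Target ("CoT") event message. Attribute names follow
# the CoT base schema as I understand it; all values here are invented.

def cot_time(t):
    return t.strftime("%Y-%m-%dT%H:%M:%SZ")

now = datetime.now(timezone.utc)

event = ET.Element("event", {
    "version": "2.0",
    "uid": "EXAMPLE-TGT-001",                    # hypothetical identifier
    "type": "a-h-G",                             # CoT type string: hostile, ground
    "time": cot_time(now),
    "start": cot_time(now),
    "stale": cot_time(now + timedelta(minutes=10)),
    "how": "m-g",                                # how the position was derived
})
ET.SubElement(event, "point", {
    "lat": "34.5262",                            # degrees, WGS 84
    "lon": "69.1776",
    "hae": "1790.0",                             # height above ellipsoid, metres
    "ce": "15.0",                                # circular (horizontal) error, metres
    "le": "20.0",                                # linear (vertical) error, metres
})

print(ET.tostring(event, encoding="unicode"))
```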

Even Shulstad’s article only touches lightly on the element of “Cursor on Target” that’s absent in its name: the map (or the geodetic information) over which both cursor and target are positioned. The workstations in the air operations center that are in the middle of the observer-to-airplane communication are not just there to monitor what’s happening and to ensure that the coordinates really are a target (as opposed to friendly forces or a hospital). They are also there to provide information that’s missing from the initial observer’s message – the three-dimensional positioning needed to ensure that the bomb truly strikes its target.

Michael W. Kometer, who was an Air Force lieutenant colonel when he wrote an excellent book on some of these issues, explains that process, usually called “mensuration,” as follows:

Mensuration is a process that translates coordinates from a flat chart to take into consideration the elevation of the target. If a weapon could fly to a target directly perpendicular to the earth, mensuration would not be necessary. Since it approaches at an angle, however, a weapon could be long or short if the target is at a low or high elevation.

Today, mensuration software is a corporate product. The industry standard for military operations is a Northrop Grumman product called RainStorm that has its own openly accessible support page on the internet. But that wasn’t always the case. The map hiding within Cursor on Target only came into existence after a long process that began in a very different place: among the US Army’s field artillery crews.

Before APPS
The best place to begin the story is in the years immediately after the Second World War, when the US Army Corps of Engineers began trying to create a comprehensive coordinate system (or datum) that would cover not just Europe but the entire world. Until the Second World War, most nations did their own mapping using their own national datums – which meant that trying to plot directions and distances using coordinates from two different national surveys could lead to significant errors unless one did the math to convert coordinates from one datum into the other (sacrificing time and accuracy in the process).

The discrepancies only got worse as the distances involved grew, which was a particular problem if you were trying to plot transatlantic flight paths or the trajectories of intercontinental ballistic missiles. The solution was to establish a new global coordinate system with maps to match – a process that triggered a massive turf war between the US Army’s and Air Force’s surveying and mapping centers, but eventually led to the first World Geodetic System (WGS 60).
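
The math behind a simple datum conversion also shows why skipping it produces errors that are large but roughly constant over a region: the crudest transformations are just a translation of Earth-centred Cartesian coordinates from one ellipsoid to another (more careful ones add rotations and a scale factor). A sketch of the idea, using placeholder shift values rather than any official parameter set:

```python
import math

# Minimal sketch of a three-parameter datum shift: convert geodetic
# coordinates to Earth-centred Cartesian (ECEF) coordinates, apply a
# translation, and measure the size of the offset. The shift values are
# placeholders for illustration, not an official parameter set.

def geodetic_to_ecef(lat_deg, lon_deg, h, a=6378137.0, f=1 / 298.257223563):
    """Geodetic (lat, lon, height) to ECEF X, Y, Z on a given ellipsoid
    (defaults are the WGS 84 semi-major axis and flattening)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    e2 = f * (2 - f)
    n = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - e2) + h) * math.sin(lat)
    return x, y, z

dx, dy, dz = -90.0, -100.0, -120.0           # placeholder translation, metres

x, y, z = geodetic_to_ecef(32.8, 13.2, 0.0)  # a point on the Libyan coast
shifted = (x + dx, y + dy, z + dz)
print(f"ignoring this datum shift moves the point by ~{math.dist((x, y, z), shifted):.0f} m")
```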

In Europe and North America, the US or its allies could do new surveys and create new maps whose coordinates were in WGS 60. For the Soviet Union’s territory, whose maps were closely controlled, the Americans had to turn to satellite photography. Measuring positions and distances using these photographs relied on a process called stereophotogrammetry. By examining a pair of images of the same target through a binocular fitting that showed each image to one eye, the photo interpreter saw a three-dimensional image on which he or she could make accurate measurements of distance. Connecting those measurements to known points, or using simultaneous stellar photography to determine the location of the camera-carrying satellite, let US cartographers create maps of the Soviet Union without ever setting foot on the ground.
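
The measurement at the heart of stereophotogrammetry is parallax: the closer an object is to the cameras, the more it appears to shift between the two photographs, so distance can be recovered as the camera baseline times the focal length divided by the measured shift. A toy version of that calculation, with invented numbers (real photogrammetry also has to model camera attitude, lens distortion, and the curvature of the Earth):

```python
# Toy stereo-parallax calculation. Numbers are invented for illustration.

def distance_from_parallax(baseline_m, focal_length_mm, parallax_mm):
    """Distance to a point from the stereo baseline, the camera focal length,
    and the parallax measured on the film (focal length and parallax in the
    same units)."""
    return baseline_m * focal_length_mm / parallax_mm

# Two exposures taken 20 km apart with a 600 mm lens, and a point whose
# measured parallax on the film is 75 mm:
print(f"{distance_from_parallax(20_000, 600, 75):,.0f} m from the cameras")
```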

Stereophotogrammetry was slow and calculation-intensive, which is why the CIA’s first computer was purchased by the photo interpretation center to speed up the process. But in the late 1960s the US Army’s Engineer Topographic Laboratories (ETL) began to experiment with ways of bringing that same capability to the army in the field. Using off-the-shelf commercial components, they were able to build a mobile system that let any photo interpreter establish the precise position – within a matter of meters – of any object that had been captured on reconnaissance photography.

APPS and PPDB
The process began with a pair of film chips and a matching data cassette that carried the information needed to calculate the geographic coordinates of any point on the film. Aligning the pair of film chips under a stereo comparator (in which the user sees each image with one eye), the photo interpreter saw a three-dimensional image equivalent to the reconnaissance imagery of the target. Picking out the precise point of interest seen on the original photo, the operator zapped that information to an attached computer – a Hewlett Packard 9810A desk calculator – that consulted the magnetic tape and printed out the geographic coordinates. (When I say “zap,” I’m not kidding. An electric current ran from the cursor the operator saw onto a wire grid underneath the photographs. The voltage of the current told the computer where on the photograph the pointer was.) The entire system weighed 109 kg and could be set up on a standard-sized desk. It was called the Analytical Photogrammetric Positioning System (APPS); the tapes and film chips were known together as the Point Positioning Data Base (PPDB).

The Analytical Photogrammetric Positioning System. From Army Research & Development, May-June 1976, p.24

The first prototype APPS consoles were built from off-the-shelf parts and delivered to the Army in 1972–73. They offered a substantial improvement in measurement accuracy over previous techniques: in tests by Raytheon and Human Factors Research, Inc., the average error in position measurement using APPS was only 5–6 meters. ETL seems to have originally expected APPS to be used by artillery and surveying units, but it soon also became a tool for the brand-new MGM-52 Lance missile. That made sense because the Lance was both long-ranged and without terminal guidance – lacking any way of homing on its target, it needed to be launched with precise information on the target’s location.

Before the decade was over the original APPS was being replaced by a more advanced device, the APPS-IV, built by Autometrics, Inc., but the general principles involved remained the same. Look at two film chips through a stereo comparator, locate the point of interest, then use the attached computer to establish its coordinates. A handy tool, for sure, but not the basis for a revolution in military operations.

So what changed? The answer was GPS.

Forward to Part Two

A Home for the Photointerpreters of HTAUTOMAT

When it first flew in 1955, the U-2 reconnaissance airplane was simply unprecedented. Even thirty-four years later, in 1989, the plane was able to set sixteen world altitude and time-to-climb records with the Fédération Aéronautique Internationale. Though its time as the most advanced photographic reconnaissance platform in the world was brief – the first photo reconnaissance satellite flew only four years later – it broke new ground in intelligence collection.

It makes sense that the organization that collected and analyzed the U-2’s photographs would also be unprecedented in the scope of its efforts to understand what it saw. Its analysts would fly over the United States’ most secret facilities to get a comparative perspective and draw on the expertise of every branch of government and the private sector. So it might be a surprise that this crack team, which became the National Photographic Interpretation Center in 1961, began life not in a custom-built secret facility but on the upper floors of an automobile salesroom in downtown Washington.

The CIA’s Photo-Intelligence Division (PID) began with thirteen employees, a $40,000 budget, and 800 square feet in a temporary wooden building on the National Mall. Without access to new photography of the Soviet Union, the division’s photo interpreters spent most of their time working with German aerial photography that was captured at the end of the Second World War. As the U-2 program took shape PID acquired more space, including “the third floor of the dilapidated former Briggs School building.”

Eventually, the portion of the division that would handle the highly-secret photographs taken by the U-2’s cameras moved to a separate and still larger space: 55,000 square feet on the upper floors of the Steuart Motors building at 5th St. and New York Ave., N.W. A new codeword, TALENT, distinguished U-2 photography from other TOP SECRET intelligence. The photo-intelligence operation was codenamed HTAUTOMAT, a little joke by the director, Arthur C. Lundahl, who expected that the new operation would be as busy as an automated restaurant (or “automat”).

According to historian and photointerpreter Dino Brugioni, Lundahl’s recollection of the new office was that “there was no place to eat, no place to park, no air conditioning, our people were getting mugged on the streets before it was fashionable. I guess the best thing you could say is that it had wonderful security cover, because I am sure nobody would ever believe that anything of any importance to the United States could be taking place in the trashy neighborhood.” The location’s security was somewhat compromised by the sign “Rented to CIA” that was put up several days before the photo interpreters arrived, as well as the steady stream of high-ranking visitors who came either to see the imagery or to be involved in its interpretation. (You can see pictures of the building in its later, more dilapidated years, here.)

Among the more advanced and esoteric technology that HTAUTOMAT acquired was an early mainframe computer. The code-breaking National Security Agency already owned an ALWAC-III when the photointerpreters acquired one in late 1957. The delivery itself was a comedy of errors. The computer, delivered directly from a business show in Cleveland, was sent to the wrong address, rerouted to the mail dock at one of the CIA’s other buildings, and only delivered through the intervention of an extra government moving crew and a truck whose lift gate could barely take the load.

HTAUTOMAT used the computer to make the calculations involved in measuring the size of objects in photographs. Eventually, the engineers involved were able to automate much of the process. The overall system was something of a kludge: a comparator measured points on the negative, the measurements were punched onto paper tape, and the tape was then fed through a teleprinter that converted it into electrical impulses the ALWAC could understand and turn into an actual distance measurement.
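
In its simplest vertical-photography form, the calculation being mechanized is just photo scale: the scale of a photograph is the focal length divided by the flying height, so an object’s ground size is its measured image size times height over focal length. A simplified sketch, ignoring the tilt, relief, and distortion corrections that made the real work computation-heavy:

```python
def ground_size(image_size_mm, altitude_m, focal_length_mm):
    """Ground dimension of an object measured on a vertical photograph.

    Uses the simple photo-scale relation (scale = focal length / flying
    height); tilt, relief, and lens distortion are ignored.
    """
    return image_size_mm * (altitude_m / focal_length_mm)

# Invented example: an object 0.5 mm across on film, photographed from
# 20,000 m with a 900 mm lens, works out to roughly 11 m on the ground.
print(f"{ground_size(0.5, 20_000, 900):.1f} m")
```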

Upgraded from a “division” to a “center” in 1958, the CIA Photographic Interpretation Center was merged with the Department of Defense’s strategic photointerpretation resources in 1961 to become the National Photographic Interpretation Center. Two years later it moved into what would be its home for forty-eight years – longer than it would have the NPIC name. Unlike the Steuart Motors building, NPIC’s new facility was not a converted commercial space. Instead it was a converted industrial space at the Washington Navy Yard that had been built in 1944 to store the steel blanks for naval guns. Despite its mundane origins, Building 213 was enough of an improvement on the Steuart building that employees nicknamed it “Lundahl’s Palace.”

The business of photo intelligence had changed a lot by the time Building 213 was vacated in 2011. By then NPIC had gone through two agency amalgamations, becoming part of what is now the National Geospatial-Intelligence Agency (NGA). NGA consolidated its Washington, DC-area staff into a single campus in 2012 and the new building is pretty swanky. Architects RTKL told Metal Architecture magazine that the $1.4 billion campus had a concept based on

the “terrestrial” meeting the “celestial.” “NGA uses their assets to develop an understanding of the earth and man-made improvements on it,” says [vice president Timothy J.] Hutcheson. “Traditionally this information has come from maps, aerial reconnaissance and satellites. The act of NGA looking down on the earth has become a metaphor for the building design, which is the reason why we chose a metal as the primary cladding element.” … “The upper portion of the building is more futuristic, with V columns that are intended to impart a feeling of lightness to the way the building touches the terrestrial base,” Hutcheson adds. “This allows the majority of the building to hover over the earth with precast façade divided into triangles reinforcing a celestial language.”

That’s a far cry from a temporary wood building, a dilapidated former school, and the upper floors of an automobile salesroom.

Source Note: The first two volumes in a CIA history of NPIC have been declassified and posted at www.governmentattic.org, here and here.