Hidden Figures

The release of two widely publicized books on female computers in the early Space Age in the same year (one of them with a forthcoming movie adaptation too) has to be unprecedented. The first was Rise of the Rocket Girls, about the women who worked as human computers (a redundant term before the 1950s) for the Jet Propulsion Laboratory in Pasadena, California. The second, Hidden Figures, is about the African-American women among those who did similar work for the Langley Research Center in Virginia. (There’s even a third book, by Dava Sobel, that covers an earlier generation of computers who worked at the Harvard Observatory.)

Both Rise of the Rocket Girls and Hidden Figures are fascinating accounts of the essential roles that female computers played in aerospace research, capturing the challenging social milieu in which they worked. Hidden Figures also manages to address the impact of segregation and discrimination in the overlapping local, regional, and national contexts surrounding the work of the computers at Langley (itself a segregated workplace). It’s a story well worth reading before or after the movie adaptation – which focuses on Katherine Johnson’s contribution to the calculations for the first orbital Mercury flight – goes into wide release in January. The trailers I’ve seen look good, though Kevin Costner, as a fictional NASA manager, gets to strike a literal blow (with a fire axe!) against racism that goes way beyond anything NASA management actually did for their African-American staff.

In the last chapter of Hidden Figures, Margot Lee Shetterly discusses having to cut the section of the book about how several of its key figures moved into human resources and advocacy to try to overcome the less obvious discrimination against women and minorities in the workforce that was still going on in the 1970s and 80s. You never know from a trailer, but I suspect the movie’s not going to end with the uphill battle for recognition and equal treatment that persisted even after Johnson’s celebrated work.

As Sobel has made clear in some of her pre-publication publicity, the stories of female computers are less undiscovered than regularly and distressingly forgotten. The women who worked at the Harvard Observatory were well known at the time; Katherine Johnson received substantial publicity, at least within the African-American press, for her work on Mercury. Academic writing, including a book from Princeton University Press, has covered the work of female computers in various venues. Perhaps a major Hollywood movie will help the story stick this time.

Tides of War, Part One

The best-known story about environmental science and D-Day has to be that of the last-minute forecast that let the invasion go ahead. That prediction, though, was only one of many contributions by Allied environmental scientists to the success of the invasion. Another was the secretive work on a mundane but vital preparation for the assault: calculating the tides for D-Day.

The theoretical basis for tide prediction was the work of Newton, Daniel Bernoulli, and Pierre Simon Laplace, the third of whom was the first to outline the equations that describe the rise and fall of the tides. Laplace’s equations were too complex to use in practice, but in the mid-nineteenth century the British scientist William Thomson (later ennobled as Lord Kelvin) demonstrated that, given enough tidal measurements, one could use harmonic analysis to divide the tide-generating forces for a particular shoreline into a series of waves of known frequencies and amplitudes (the tidal constituents). That same process, carried out in reverse, would let one predict the tides along that shore. Unfortunately, making those calculations by hand was time-consuming to the point of impracticality. However, Thomson also demonstrated that it was possible to construct an analog machine that would do the necessary work automatically.

Thomson’s machine drew a curve representing the height of the tide with a pen attached to the end of a long wire. The wire ran over a series of pulleys, which were raised and lowered by gears set to the frequency and amplitude of the tidal constituents. As each pulley rose or fell, it changed the length of the wire’s path and thus the position of the pen. Altogether, the pen traced the combined effect of the tidal constituents being simulated.
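
The arithmetic the machine mechanized is easy to sketch in a few lines of Python. The constituent speeds below are the standard astronomical values, but the amplitudes, phases, and mean level are invented for an imaginary port; a real tide table would use dozens of constituents fitted to long tide-gauge records.

```python
import math

# Hypothetical tidal constituents for an imaginary port:
# (name, amplitude in meters, angular speed in degrees per hour, phase lag in degrees)
CONSTITUENTS = [
    ("M2", 1.20, 28.984, 45.0),   # principal lunar semidiurnal
    ("S2", 0.40, 30.000, 80.0),   # principal solar semidiurnal
    ("K1", 0.15, 15.041, 120.0),  # lunisolar diurnal
    ("O1", 0.10, 13.943, 200.0),  # lunar diurnal
]

MEAN_SEA_LEVEL = 5.0  # meters above chart datum (illustrative)

def tide_height(hours: float) -> float:
    """Sum the constituent waves at time t, as Thomson's pulleys did mechanically."""
    height = MEAN_SEA_LEVEL
    for _name, amplitude, speed_deg_per_hr, phase_deg in CONSTITUENTS:
        angle = math.radians(speed_deg_per_hr * hours - phase_deg)
        height += amplitude * math.cos(angle)
    return height

if __name__ == "__main__":
    # Print a crude tide table for the first day.
    for hour in range(0, 25, 3):
        print(f"hour {hour:2d}: {tide_height(hour):5.2f} m")
```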

Thomson’s design sketch for the third tide-predicting machine, 1879. Image courtesy Wikimedia.

The first machine, built in 1872, had gears for only ten constituents, but later machines could represent many more. Machines of his design, many of them built in Great Britain, were also used in other countries to create the necessary tide tables for their ports. In the United States, a different mechanical approach developed by William Ferrel was used to build similar machines. Altogether, though, tide-predicting machines were specialized, expensive, and rare. According to a modern inventory, only thirty-three were ever built – twenty-five of them in London, Glasgow, or Liverpool.

During the Second World War, the Admiralty Hydrographic Office relied on two tide-predicting machines operated by Arthur Thomas Doodson at the Liverpool Tidal Institute to do all their tidal calculations. One was Thomson’s original machine, refitted to handle twenty-six constituents. The other was a machine designed by Edward Roberts in 1906 and equipped for forty constituents.

Both Doodson and the Tidal Institute had their own histories of military collaboration. Doodson, despite being a conscientious objector, had worked on anti-aircraft ballistics for the Ministry of Munitions during the First World War. The Institute, established in 1919 with corporate and philanthropic support, had an important connection with the Admiralty’s own Hydrographic Department. Though the Hydrographic Department did not provide any direct funding until 1923, after that it made the Institute the Admiralty’s exclusive supplier of tide calculations. At the same time, the Hydrographic Department began appointing a representative to the Institute’s governing board.

Though they were the basis for only some of the Institute’s Admiralty work during the war, the tide-predicting machines in Liverpool were busy creating tide tables for Allied ports. According to historian Anna Carlsson-Hyslop’s research, the number of tidal predictions being performed doubled from 77 for 1938, the last pre-war year, to 154 for 1945. (Carlsson-Hyslop’s research is focused on areas of the Institute’s work other than the creation of tide tables, but much of it sheds light on its relationship with the Royal Navy and state patronage.)

In 1943 the Admiralty Hydrographic Office requested calculations to create tide tables for the invasion beaches to be used on D-Day in Normandy. Since the landing zone remained top secret, Commander William Ian Farquharson was responsible for establishing the constituents and providing them (anonymized under the codename “Point Z”) to Doodson in Liverpool. Unfortunately, there were no existing calculations for the area of the beaches. Nor, because tidal constituents were sensitive to local conditions, could he simply extrapolate from the data for the ports to the east and west at Le Havre and Cherbourg. Instead, Farquharson combined fragmentary data from some local measurement points near the beaches, clandestine on-the-spot measurements made by Allied beach reconnaissance teams, and guesswork to come up with eleven tidal constituents. Oceanographer Bruce Parker suspects that he began with the Le Havre constituents and then adjusted them to approximate the data he had. The calculations, despite the roughness of the information on which they were based, proved sufficiently accurate for the invasion planners.

In the Pacific, tide tables for amphibious operations were generated by the US Coast and Geodetic Survey’s Tide Predicting Machine No. 2. In both theaters, as well as the Mediterranean, oceanographers supplemented the tide tables for beaches with wind, wave, and surf forecasts. The story of wave forecasting is, if anything, even more cloak-and-dagger than that of the D-Day tide forecasts, since one of the scientists involved was actively (and incorrectly) suspected of being a Nazi sympathizer.

A US tide predicting machine, probably No.2. The caption from the Library of Congress attributes the machine’s construction to E. Lester Jones, Chief of the Coast and Geodetic Survey. Harris & Ewing, photographer, 1915. Retrieved from the Library of Congress, https://www.loc.gov/item/hec2008004303/

Beyond their civilian and military wartime work, tide-predicting machines had an oblique impact on Second World War cryptanalysis. Those developments would eventually put the machines out of work after the war, but not before they had one final moment of strategic significance.

Forward to Part Two, including Source Notes (soon)

How Not to Network a Nation

I’ve been looking to read How Not to Network a Nation by Benjamin Peters since MIT Press announced it last November, but a mixture of delays, library closings over the summer, and general busyness meant that I didn’t lay hands on a copy until a few weeks ago. I’m really glad I finally did, since it’s a wonderful book that sheds a lot of light on the development of computer networking and the internet.

Peters examines a series of failed attempts to create large-scale civilian computer networks in the Soviet Union in the 1960s, 70s, and 80s, which he explains in the context of the Soviet economy and the development of cybernetics as a discipline. (Those wanting an overview of the argument can listen to his lovely interview with the New Books Network.) By analyzing these Soviet proposals, Peters not only describes Soviet efforts at network-building but also sheds some light on the parallel processes going on in the United States.

Comparing the success of the Internet to the failure of the Soviet network proposals helps highlight the distinctive features of the network that ultimately developed out of the US ARPANET experiment. It also casts what Peters calls the “post-war American military-industrial-academic complex” in the unusual role of altruistic and disinterested benefactor. In contrast to the Soviet Union, where the military and its suppliers jealously guarded their power and priorities, the US government ended up funding a lot of research that – though loosely justified on the basis of military need – was more or less unrelated to specific military requirements and spread far and wide through civilian connections before it ever proved to have military significance.

How Not to Network a Nation is probably most rewarding for those with some knowledge of the Soviet economic and political system, including its perennial bureaucratic battles and black-market deals for influence and resources. (Anyone wanting to know more, for example, about the debates over how to mathematically optimize the planned economy, with or without computers, should read Francis Spufford’s well-footnoted novel Red Plenty.) Its biggest omission is any discussion of the technical features of the Soviet projects. Arguably, one of the reasons that the internet became the Internet is that it was built on an architecture (particularly TCP/IP) flexible enough to span multiple thinly-connected networks with varying capabilities and purposes. That flexibility made it possible for networking to thrive even without the kind of deliberate and wide-ranging support that a large-scale, well-planned project would have required. Peters’s book, illuminating as it is, never addresses those aspects of network development.

Smart Plane, Dumb Bombs, Bad Maps?: Part One

At The Drive‘s “War Zone” (and, before that, Gizmodo‘s “Foxtrot Alpha”) Tyler Rogoway has been regularly posting first-hand reflections on flying military jets. The latest, by Richard Crandall, covers the F-111 Aardvark, probably the leading all-weather strike aircraft of the Cold War. Designed around bleeding-edge 1960s avionics that would take the guesswork out of high-speed, low-level navigation, the F-111 ended up being caught between the limits of its technology and the development of a newer generation of weapons. At the same time, its combat debut pointed to the limits of the entire system of navigational tools that military aircraft have been using ever since.

General Dynamics F-111F at the National Museum of the United States Air Force. (U.S. Air Force photo)

The F-111 was built to penetrate enemy airspace at low level regardless of rain, fog, or darkness. How low? Crandall recalls that “sometimes we would be flying low through the mountains of New Mexico or southwest Texas and the jet’s external rotating beacon would flash off the terrain that we were flying by and it would seem to be right next to the wingtip. Some aircrew would turn it off as it unnerved them.”

The airplane’s avionics were supposed to be interconnected to make the attack process practically automatic. While the F-111’s terrain-following radar (TFR) kept the plane 200 feet above the ground, the autopilot flew the plane from waypoint to waypoint. Upon reaching the target, the ballistics computer automatically released the F-111’s bombs when the plane reached the correct parameters (position, airspeed, delivery angle, etc.). Or, if the crew put it into manual mode, the computer showed the pilot a “continuously computed impact point” (CCIP) that adjusted for wind and drift to indicate where the bombs would land if they were released at that moment. In the planned ultimate version of the F-111, the F-111D, all this equipment would be integrated into a fully digital computer system complete with “glass cockpit” multi-function electronic displays. Even the other versions of the F-111, which flew with fully or partly analog systems, included similar capabilities.
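
As a rough illustration of what a continuously computed impact point involves, here is a toy calculation that ignores air resistance entirely: given height above the target, groundspeed, and a wind estimate, it works out how far ahead of the aircraft a bomb would fall. The real ballistics computer accounted for drag, dive angle, and ejection velocity; the function and the numbers below are just my simplification.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_ccip_range(height_above_target_m: float,
                      groundspeed_ms: float,
                      wind_along_track_ms: float = 0.0) -> float:
    """Horizontal distance ahead of the release point where a drag-free
    bomb would land, released in level flight (toy model only)."""
    time_of_fall = math.sqrt(2.0 * height_above_target_m / G)
    # The bomb keeps the aircraft's forward speed and drifts with the wind.
    return (groundspeed_ms + wind_along_track_ms) * time_of_fall

if __name__ == "__main__":
    # Example: 200 feet (about 60 m) above the target at 480 knots (about 247 m/s).
    rng = vacuum_ccip_range(height_above_target_m=60.0, groundspeed_ms=247.0)
    print(f"Impact roughly {rng:.0f} m ahead of the aircraft")
```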

At the heart of the system was the airplane’s inertial navigation system (INS), a package of gyroscopes and accelerometers that provided an ongoing track of the airplane’s location. Because the INS drifted by about half a nautical mile every hour, the F-111 required regular position updates to keep itself on course. The problem of correcting for INS drift wasn’t unique to the F-111. Ballistic missile submarines updated their INS using a fix from the Loran-C radio or Transit satellite network. Tomahawk missiles used an onboard terrain-matching system called TERCOM. The Strategic Air Command’s variant of the F-111 carried a Litton ASQ-119 Astrotracker that took position fixes based on the locations of up to fifty-seven stars, day or night (as well as a more accurate INS that used an electrostatically suspended gyro).

The tactical F-111 usually took position updates by locking the non-TFR attack radar onto a pre-selected terrain feature with a known position and good radar reflectivity (called an offset aimpoint, or OAP). When everything worked, that gave the F-111 remarkable accuracy. As F-111 WSO Jim Rotramel remarked to writers Peter E. Davies and Anthony M. Thornborough for their book on the F-111, “if the coordinates for various offset aim-points (OAPs), destinations and targets were all derived from accurate sources, the radar crosshairs were more likely to land neatly on top of them.”

Even if some parts of the system failed, the remaining elements were good enough to let the F-111 complete the mission. As Crandall explains:

Our inertial navigation system was nice, but we trained to use dead reckoning and basic radar scope interpretation to get to the target even without the INS. We had backups to the backups to the backup. INS dead? Build a wind model and use the computers without the INS. That doesn’t work? Use a radar timed release based on a fixed angle offset. We practiced them all and got good at them all.

The navigator could even use the raw, unprocessed data from the TFR, a terrifying process that Crandall calls “truly a no-kidding, combat-emergency-only technique.”

Trouble with the F-111D’s avionics, which proved too ambitious for the time, meant that the US Air Force flew three other versions of the plane while trying to debug them. The first was the F-111A, which had an all-analog cockpit; next came the -E (Crandall: “basically an F-111A with bigger air inlets”), which added a few features; and then the -F, which added more digital equipment but not the full suite of features designed for the -D. The last of these was also upgraded to operate the PAVE TACK pod that let the plane drop laser-guided bombs (LGBs). LGBs put the terminal “smarts” for precision bombing into the weapon itself, with the bomb following laser energy reflected off the target from a beam projected by the PAVE TACK pod.

The US Air Force Museum in Dayton, Ohio, has a 360-degree photo of the cockpit of the F-111A in their collection here. It’s all switches and gauges, with no screens apart from those for the radar.

However, even with laser-guided bombs, the F-111F still needed the airplane’s avionics to get to the target, and those systems remained dependent on maps and geodetic information in order to ensure that INS updates and OAPs were accurate. In 1986, that would prove to be the weak link during the F-111’s combat debut.

To Part Two

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Three

Back to Part Two (or Part One)

Between 1990, when the first GPS-guided missiles were used in war, and 2001, when the United States began its invasion of Afghanistan, GPS guidance for weapons went from a niche technology used only by a few systems to one of the US military’s favorite techniques. The spread of GPS guidance led to a huge demand for ways of determining target positions in a way that weapons – rather than pilots – would understand. That meant three-dimensional coordinates in World Geodetic System 84 (WGS 84), rather than grid references on maps or even coordinates in other datums. One of the most important tools for establishing these coordinates was the Point Positioning Data Base (PPDB), a database of matching imagery and coordinates that had originated in the 1970s as a tool for army field artillery.

PPDB was made widely available in an analog format in the 1980s and used during the first Gulf War, but its digitization had restricted its use mostly to computer workstations (first DEWDROP, then RainDrop) located in the United States during the war over Kosovo in 1999.

By the time the invasion of Afghanistan began in late 2001, RainDrop workstations had moved from analysts’ desks in the continental US to the same airbase – Prince Sultan Air Base in Saudi Arabia – as the air operations center that was commanding the air war. That shift was only the first step in the proliferation of tools and services for point mensuration to match the American and coalition demand for mensurated target coordinates. “Cursor on Target” (see Part One) began development in 2002; Northrop Grumman released RainDrop’s successor – RainStorm – in 2004; and another system, Precision Strike Suite for Special Operations Forces (PSS-SOF), was created to provide “near-mensurated” coordinates to troops in the field.

By 2009, when Noah Shachtman described how mensuration was used to plan air strikes in Afghanistan, the process had been in regular use for almost a decade. Here’s his account of what was being done in the air operations center for Afghanistan:

An officer, I’ll call him Paul, walks me through the process. It starts with “targeteering,” figuring out where a pilot should attack. Just getting GPS coordinates or an overhead image isn’t good enough. GPS is unreliable when it comes to altitude. And landscape and weather conditions can throw satellite pictures off by as much as 500 feet. “Even with Gucci imagery, there’s always errors,” Paul says. He points to a pair of screens: On the right side is an aerial image of a building. On the left, two satellite pictures of the same place — taken from slightly different angles — flicker in a blur. Paul hands me a pair of gold-rimmed aviator glasses. I put them on, and those flickers turn into a single 3-D image. Paul compares the 2-D and 3-D images, then picks exactly where the building should be hit. Depending on elevation, adding a third dimension can shrink a 500-foot margin of error down to 15 feet.

Tying a point on the ground to a global grid precise enough to be used for air strikes anywhere in the world was now a matter of course. Fifty years after the CIA’s photo interpreters bought their first mainframe to help them analyze and map targets in the Soviet Union, calculating a target’s position in global terms has become simple – even if knowing what is at the target location is not. The technology here is also a long way from the cobbled-together equipment for which the PPDB was first created. The Analytical Photogrammetric Positioning System (APPS) combined digital and analog electrical components with old-fashioned optics and the human eye.

The transformation from APPS to RainStorm over the course of thirty years is pretty remarkable, but it’s also been hard to track. This is technology that doesn’t get a lot of praise or get singled out for attention, but that doesn’t mean it’s not interesting or important.

For one thing, APPS was a military application of commercial off-the-shelf (COTS) technology before COTS was cool. The Hewlett Packard 9810A desk calculator at its heart was not designed for military use or developed from military-sponsored research. It was just an office tool that was re-purposed for a very different office.

More importantly, APPS and PPDB are a good example of an enabling technology that was created long before its eventual requirement even existed. If there had been no PPDB, the development of GPS-guided bombs would have forced its creation. Instead, it was an Army project begun around the same time the first GPS satellites were being designed that provided the necessary service. That’s luck, not good planning.

Lastly, and equally interestingly, PPDB is a layer of complexity in modern warfare that’s easily overlooked because it sits in the middle of things. It provides the map of coordinates on which grander, more significant moves are sketched, and it disappears into obscurity except when something goes wrong. Between cursor and target, or sensor and shooter, there are a lot of layers like this one.

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Two

Back to Part One

One part of the long pre-history surrounding the deployment of GPS-guided bombs began in the late 1960s with the US Army Corps of Engineers and a research project to improve the accuracy of American field artillery. The Analytical Photogrammetric Positioning System (APPS) was a tool to calculate the coordinates of a target seen on reconnaissance photography. Introduced into service in the mid-1970s, APPS and the geo-referenced imagery that it used (the Point Positioning Data Base, or PPDB) proved so useful that they were borrowed by US Air Force and Navy airstrike planners too.

The desire to fix targets from aerial photography and strike them with precision was hardly unique to APPS’s users. The Air Force also had a system for calculating target coordinates under development. The Photogrammetric Target System (PTS) was part of a far grander system for detecting, locating, and destroying enemy surface-to-air missile (SAM) sites called the Precision Location and Strike System (PLSS). Unlike APPS, which printed out target coordinates for human use, the proposed PTS was a fully computerized system that would transmit the coordinates to PLSS’s central computer somewhere in West Germany or the United Kingdom, where they would be converted into guidance instructions for the 2,000-lb glide bombs that were going to be the sharp end of the system.

The TR-1, a renamed U-2 reconnaissance plane, was the aerial platform for the PLSS system. (U.S. Air Force Photo by Master Sgt. Rose Reynolds)

You can see how PTS’s fortunes waxed and waned by following the annual briefings on PLSS that the Air Force gave to Congress. What began in 1973 was gradually scaled back as PLSS’s own funding declined. Plans for a manual prototype PTS were cancelled when it became clear that APPS could do the same job, and the system disappeared from the briefing in 1980.

Much of the imagery for point positioning came from mapping cameras on the KH-9 HEXAGON satellite. NRO photograph courtesy Wikimedia.

While the Air Force was experimenting with PTS and APPS to plan aerial attacks, PPDB was expanding in importance to become part of the targeting process for non-nuclear Tomahawk missiles being operated by the US Navy. Simultaneously, crises with Iran and the demands of the Carter Doctrine drove the expansion of PPDB coverage in the Middle East to 930,000 square nautical miles by 1981.

That meant that when Iraq invaded Kuwait in 1990 the US had 100% PPDB coverage of the theater, better than its coverage with either 1:50,000 topographical maps or 1:250,000 Joint Operations Graphic-Air charts. Unfortunately, the PPDB imagery was woefully out of date, forcing the Defense Mapping Agency (DMA) to make PPDB updates part of its vast cartographic build-up for Operation Desert Shield. That included 30 new PPDB sets (of 83 requested), 26 video PPDB sets, and 7,972 target coordinates.

Despite those deliveries, the obsolescence of PPDB imagery was noticed during Operation Desert Storm. The annual official history of the 37th Fighter Wing – which flew the F-117 stealth fighter during Desert Storm – complained that:

Spot imagery was not of sufficient high resolution to support the technical requirements of a high technology system such as the F-117A Stealth Fighter. And, the available Analytical Photogrammetric Positioning System (APPS) Point Positioning Data Base (PPDB) was grossly outdated. It was not until the last week of the war that more current PPDBs arrived, which was too late to have an effect on combat operations.

After 1991, the need for precise target coordinates grew alongside the spread of precision guided weapons that needed those coordinates, which meant that what had begun as an Army instrument became more and more vital to aviation. A 1994 naval aviation handbook reminded users that “reliable target coordinates come only from a limited number of classified sources,” including the Defense Mapping Agency’s “Points Program” (which accepted requests by phone or secure fax) and APPS systems carried on aircraft carriers.

Unlike laser- or electro-optically-guided bombs, which home in on a signature that their target emits or reflects, bombs and missiles guided by GPS simply fly or fall towards the coordinates they are given. Widespread deployment during the bombing of Serbia in 1999 (Operation ALLIED FORCE) therefore meant a vast demand for precise target coordinates.

The Point Positioning Data Base, now provided in digital form rather than as a film chip/magnetic cassette combination, was an important source of those coordinates because it provided not just two-dimensional latitude/longitude coordinates but also elevation. In a desert environment like Iraq, a bomb dropped from above could more or less be assumed to hit its target no matter how large the gap between the assumed and actual elevation of the ground. Where the terrain was more varied, however, aiming too high or too low could cause the bomb to slam into a hill short of the target or fly right over it and land long. Securing that elevation information from aerial photography was known as “mensuration.”
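
A quick back-of-the-envelope sketch shows why that elevation mattered: for a weapon arriving at an angle, an error in the target’s assumed height becomes a horizontal miss of roughly the height error divided by the tangent of the impact angle. The numbers below are purely illustrative.

```python
import math

def horizontal_miss(elevation_error_m: float, impact_angle_deg: float) -> float:
    """Approximate long/short miss caused by using the wrong target elevation,
    for a weapon descending at the given angle from the horizontal."""
    return elevation_error_m / math.tan(math.radians(impact_angle_deg))

if __name__ == "__main__":
    # A 30 m elevation error with a weapon coming in at 45 degrees...
    print(f"{horizontal_miss(30.0, 45.0):.0f} m long or short")   # ~30 m
    # ...gets much worse on a shallow 15-degree approach.
    print(f"{horizontal_miss(30.0, 15.0):.0f} m long or short")   # ~112 m
```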

Though APPS was a computerized tool, it used film chips rather than digital imagery. To take the entire system digital, the National Imagery and Mapping Agency (which had absorbed the Defense Mapping Agency in 1996) developed a computer workstation called DEWDROP that could provide mensurated coordinates using the Point Positioning Data Base. That was followed a few years later by a similar system called RainDrop. In February 1999, a little over a month before ALLIED FORCE began, the Air Force committed to buy 170 RainDrop systems for $1.8 million from Computek Research, Inc. (Here’s the press release.)

During ALLIED FORCE, mensurated coordinates were needed for Tomahawk, CALCM, and SLAM missiles, as well as the JDAM bombs carried by the first B-2 stealth bombers. To get them, the air operations center in Vicenza, Italy, had to reach back to analysts in the United States, which was where the mensuration workstations were located. Here’s how Vice Admiral Daniel J. Murphy, Jr. describes the process of acquiring them, starting from a rough fix provided by an ELINT satellite:

So I walked into the intelligence center and sitting there was a 22-year-old intelligence specialist who was talking to Beale Air Force Base via secure telephone and Beale Air Force Base was driving a U–2 over the top of this spot. The U–2 snapped the picture, fed it back to Beale Air Force base where that young sergeant to my young petty officer said, we have got it, we have confirmation. I called Admiral Ellis, he called General Clark, and about 15 minutes later we had three Tomahawk missiles en route and we destroyed those three radars.

About a year later the Air Force ordered another 124 RainDrop systems. (Another press release.) Three months later, Northrop Grumman bought Computek for $155 million in stock.

ALLIED FORCE was confirmation for many observers that coordinate-guided weapons were the wave of the future. Tools like PPDB were necessary infrastructure for that transformation.

Forward to Part Three

Between Sensor and Shooter: The Point Positioning Data Base, Part One

In 2002, after the kick-off of the US invasion of Afghanistan but before the invasion of Iraq, US Air Force Chief of Staff General John Jumper gave a speech in which he said that “The sum of all wisdom is a cursor over the target.” Two years later, the Air Force was demonstrating how, using an XML schema, troops could pass target coordinates to a bomb-armed F-15E flying overhead in an entirely machine-to-machine process in which the devices involved communicated seamlessly with one another. The system was called, in honor of General Jumper’s speech, “Cursor on Target.”
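
Cursor on Target messages are short XML documents. The sketch below builds a simplified one in Python; the element and attribute names follow the publicly documented CoT event schema as I understand it (an event element carrying a point with latitude, longitude, height above the WGS 84 ellipsoid, and error estimates), but the type code, UID, and coordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def make_cot_event(lat: float, lon: float, hae: float,
                   uid: str = "EXAMPLE-0001",
                   event_type: str = "a-h-G") -> bytes:
    """Build a minimal CoT-style event message (simplified; not the full schema)."""
    now = datetime.now(timezone.utc)
    stale = now + timedelta(minutes=5)
    fmt = "%Y-%m-%dT%H:%M:%SZ"

    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,
        "type": event_type,          # hypothetical "hostile ground" type code
        "time": now.strftime(fmt),
        "start": now.strftime(fmt),
        "stale": stale.strftime(fmt),
        "how": "m-g",                # hypothetical source code (machine, GPS-derived)
    })
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}",
        "lon": f"{lon:.6f}",
        "hae": f"{hae:.1f}",   # height above the WGS 84 ellipsoid, meters
        "ce": "10.0",          # circular (horizontal) error estimate, meters
        "le": "5.0",           # linear (vertical) error estimate, meters
    })
    return ET.tostring(event)

if __name__ == "__main__":
    print(make_cot_event(34.123456, 69.123456, 1800.0).decode())
```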

Describing the basic process in a “Senior Leader Perspective” for the Air Force’s Air & Space Power Journal, project supporter Brigadier General Raymond A. Shulstad explained how target coordinates input into a laptop in the hands of a forward air controller in the field pass directly to a map workstation in the air operations center (the command center that coordinates air operations in a theater) and then, after the operator clicks on those coordinates on the map, appear automatically on the F-15’s head-up display.

Even Shulstad’s article only touches lightly on the element of “Cursor on Target” that’s absent from its name: the map (or the geodetic information) over which both cursor and target are positioned. The workstations in the air operations center that sit in the middle of the observer-to-airplane communication are not just there to monitor what’s happening and to ensure that the coordinates really are a target (as opposed to friendly forces or a hospital). They are also there to provide information that’s missing from the initial observer’s message – the three-dimensional positioning needed to ensure that the bomb truly strikes its target.

Michael W. Kometer, who was an Air Force lieutenant colonel when he wrote an excellent book on some of these issues, explains that process, usually called “mensuration,” as follows:

Mensuration is a process that translates coordinates from a flat chart to take into consideration the elevation of the target. If a weapon could fly to a target directly perpendicular to the earth, mensuration would not be necessary. Since it approaches at an angle, however, a weapon could be long or short if the target is at a low or high elevation.

Today, mensuration software is a corporate product. The industry standard for military operations is a Northrop Grumman product called RainStorm that has its own openly accessible support page on the internet. But that wasn’t always the case. The map hiding within Cursor on Target only came into existence after a long process that began in a very different place: among the US Army’s field artillery crews.

Before APPS
The best place to begin the story is in the years immediately after the Second World War, when the US Army Corps of Engineers began trying to create a comprehensive coordinate system (or datum) that would cover not just Europe but the entire world. Until the Second World War, most nations did their own mapping using their own national datums – which meant that trying to plot directions and distances using coordinates from two different national surveys could lead to significant errors unless one did the math to convert coordinates from one datum into the other (sacrificing time and accuracy in the process).

The discrepancies only got worse as the distances involved grew, which was a particular problem if you were trying to plot transatlantic flight paths or the trajectories of intercontinental ballistic missiles. The solution was to establish a new global coordinate system with maps to match – a process that triggered a massive turf war between the US Army’s and Air Force’s surveying and mapping centers, but eventually led to the first World Geodetic System (WGS 60).
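
To get a sense of the size of the problem, the sketch below converts latitude and longitude into Earth-centered (ECEF) coordinates on the WGS 84 ellipsoid and applies a three-parameter datum shift of the kind used to move between the old European Datum 1950 and WGS 84 (a later pair of datums than the ones at issue here, but the scale is similar). The shift values are the commonly quoted approximate ones, recalled from memory for illustration; treating coordinates from one datum as if they belonged to the other misplaces a point by roughly the length of that shift vector, on the order of 180 meters.

```python
import math

# WGS 84 ellipsoid parameters
A = 6378137.0                 # semi-major axis, meters
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h_m: float):
    """Convert geodetic coordinates to Earth-centered, Earth-fixed XYZ (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h_m) * math.sin(lat)
    return x, y, z

# Approximate ED50 -> WGS 84 translation (meters); average published values,
# quoted here from memory purely for illustration.
ED50_TO_WGS84 = (-87.0, -98.0, -121.0)

if __name__ == "__main__":
    # The same numeric lat/lon names two different physical points in the two
    # datums; their separation is roughly the length of the shift vector.
    offset = math.sqrt(sum(d * d for d in ED50_TO_WGS84))
    print(f"Ignoring the ED50/WGS 84 difference misplaces a point by ~{offset:.0f} m")
    # ECEF coordinates of a point near the Normandy coast, read as WGS 84:
    print(geodetic_to_ecef(49.4, -0.6, 0.0))
```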

In Europe and North America, the US or its allies could do new surveys and create new maps whose coordinates were in WGS 60. For the Soviet Union’s territory, whose maps were closely controlled, the Americans had to turn to satellite photography. Measuring positions and distances using these photographs relied on a process called stereophotogrammetry. By examining a pair of images of the same target through a binocular fitting that showed each image to one eye, the photo interpreter saw a three-dimensional image on which he or she could make accurate measurements of distance. Connecting those measurements to known points, or using simultaneous stellar photography to determine the location of the camera-carrying satellite, let US cartographers create maps of the Soviet Union without ever setting foot on the ground.
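
In its simplest idealized form, the measurement at the heart of stereophotogrammetry reduces to the parallax equation for two parallel exposures: the distance to a point is the focal length times the baseline between the two camera positions, divided by the parallax (the apparent shift of the point between the two images). The camera numbers below are invented; real satellite photogrammetry involved far more careful geometry, but the principle is the same.

```python
def range_from_parallax(focal_length_m: float,
                        baseline_m: float,
                        parallax_m: float) -> float:
    """Idealized stereo range equation for two parallel exposures: Z = f * B / p."""
    return focal_length_m * baseline_m / parallax_m

if __name__ == "__main__":
    # Invented numbers: a 0.3 m focal-length camera, two exposures taken
    # 20 km apart along the orbit, and a 40 mm parallax measured on the film.
    z = range_from_parallax(0.3, 20_000.0, 0.040)
    print(f"Distance to the ground point: {z / 1000:.0f} km")  # ~150 km
```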

Stereophotogrammetry was slow and calculation-intensive, which is why the CIA’s first computer was purchased by the photo interpretation center to speed up the process. But in the late 1960s the US Army’s Engineering Topographic Laboratory (ETL) began to experiment with ways of bringing that same capability to the army in the field. Using off-the-shelf commercial components they were able to build a mobile system that let any photo interpreter establish the precise position – within a matter of meters – of any object that had been captured on reconnaissance photography.

APPS and PPDB
The process began with a pair of film chips and a matching data cassette that carried the information needed to calculate the geographic coordinates of any point on the film. Aligning the pair of film chips under a stereo comparator (in which the user sees each image with one eye), the photo interpreter saw a three-dimensional image equivalent to the reconnaissance imagery of the target. Picking out the precise point of interest seen on the original photo, the operator zapped that information to an attached computer – a Hewlett Packard 9810A desk calculator – that consulted the magnetic tape and printed out the geographic coordinates. (When I say “zap,” I’m not kidding. An electric current ran from the cursor the operator saw onto a wire grid underneath the photographs. The voltage of the current told the computer where on the photograph the pointer was.) The entire system weighed 109 kg and could be set up on a standard-sized desk. It was called the Analytical Photogrammetric Positioning System (APPS); the tapes and film chips were known together as the Point Positioning Data Base (PPDB).

The Analytical Photogrammetric Positioning System. From Army Research & Development, May-June 1976, p.24

The first prototype APPS consoles were built from off-the-shelf parts and delivered to the Army in 1972–3. They offered a substantial improvement in measurement accuracy over previous techniques. In tests by Raytheon and Human Factors Research, Inc., the average error in position measurement using APPS was only 5–6 meters. ETL seems to have originally expected APPS to be used by artillery and surveying units, but it soon also became a tool for the brand-new MGM-52 Lance missile. That made sense, because the Lance missile was long-ranged but lacked any terminal guidance – without any way of homing on its target, it needed to be launched with precise information on the target’s location.

Before the decade was over the original APPS was being replaced by a more advanced device, the APPS-IV, built by Autometrics, Inc., but the general principles involved remained the same. Look at two film chips through a stereo comparator, locate the point of interest, then use the attached computer to establish its coordinates. A handy tool, for sure, but not the basis for a revolution in military operations.

So what changed? The answer was GPS.

Forward to Part Two