Excluded Computers: Marie Hicks’s Programmed Inequality

It should be no surprise, fifty to seventy years after the fact, that the introduction of electronic computers in government and industry reflected societal prejudices about women’s employment in the workforce. Books released last year about the female computers at the Jet Propulsion Laboratory and NASA Langley chronicled the discrimination and exclusion of those women, whose jobs reflected the messy transition from human to automated calculation in large-scale engineering (both are jointly reviewed, along with Dava Sobel’s book on an earlier generation of female computers, in the New York Review of Books here).

The number of women involved in each of these endeavors was dwarfed, though, by the female workforce of the British civil service that’s discussed in Marie Hicks’s excellent Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. The Civil Service was large enough to document its decisions in painstaking detail and confident enough not to mince words in its internal papers, which makes Hicks’s book a cringeworthy account of the open, blatant, self-satisfied gender discrimination that accompanied the spread of electro-mechanical and then electronic data processing in the British government.

Hicks describes how, from the late 1940s all the way to the 1970s, the civil service took a pool of machine workers that was mostly female and deliberately and repeatedly hemmed those workers into job categories where their wages could be kept low and their promotion opportunities (which would mean raises) constrained, at the same time as it relied on their technical skills, practical knowledge, and commitment to keep the government running. Separate pay scales for women, eliminated in 1955, were replaced by a series of “excluded grades,” including machine workers, where pay rates would be lowered to the old women’s rate rather than raised to the existing men’s rate. When the growth of automated data processing made the need for more senior professional and managerial positions obvious, the service recruited men for those positions – even when it meant starting them with no computer experience – rather than take the traumatic step of letting female staff from the machine operator grades manage men and be compensated at executive-level pay scales. Perhaps unsurprisingly, the government then found it hard to retain those men, with many taking their new skills into private industry or moving back out of computing to other areas of government.

As Hicks explains it, how the civil service managed its workforce was not only immoral and inefficient but also terrible for the long-term health of the British computer industry. While segregating away the female computing workforce kept costs low, it also hamstrung modernization. By the time the government realized it needed programmers, most of the people with those skills, being women, could not actually be classed as “programmers,” since that job was conceptualized as higher-status and therefore reserved for men. That led the government to prioritize mainframe designs that could be run with a small expert staff, since retaining skilled male programmers was hard and female machine operators with no promotion opportunities were considered inherently unreliable. That decision, made by the leading purchaser of British computers, led the companies that built them down a design blind alley at just the time when microelectronics were putting more computers on more desks and sparking a revolution in the American computer industry.

The blind alley. The International Computers Limited (ICL) 2966 was one of the last mainframe series to be designed in the UK. This machine is at the National Museum of Computing in Bletchley Park, though it’s so large that only about half is on display. Photograph by Steve Parker, CC-BY-2.0, from flickr as of April 4, 2017.


Tides of War, Part Two

The first part of this post appeared on the blog in November 2016. The second part was supposed to come out within a week or two, as soon as I found a little more on the post-war use of analog tide-predicting machines. Unfortunately, the search for “a little more” ended up taking far longer than expected and turning up nothing within easy reach. I’d skip the apology if it weren’t for the fact that the proper references to Anna Carlsson-Hyslop’s research (discussed in part one) are buried in the source note at the end of this post. Sorry.

Tide-predicting machines, the first of which appeared in the late nineteenth century, were an elegant mechanical solution to a complex mathematical problem. Used mostly to produce information useful to commercial shipping, during the two world wars they also played an important role in the planning of amphibious operations like the Normandy landings.

That contribution is interesting enough to give them a spot in the history of war, but the basic design of the British machines – using multiple gears to create an analog approximation of a mathematical function – also has an oblique connection to one of the most important technical achievements of the war: the mechanization of cryptanalysis.

Alan Turing is justifiably famous for his role in the breaking of the German Enigma cipher, and particularly for his contribution to designing the electro-mechanical computing tools that transformed the process. (Even if popular versions of the story do it a terrible disservice by erasing everyone except Turing from the picture. The Imitation Game, I’m looking at you.) Less well known are some of Turing’s pre-war flirtations with the mechanization of mathematical problem-solving. Andrew Hodges’ biography describes two projects which Turing took on, at least briefly. The first, during his time at Princeton in 1937, was to use electromagnetic relays to perform binary multiplication, creating an extremely large number that could be used as the key for a cipher. This was, as Hodges puts it, a “surprisingly feeble” idea for a cipher but a practical success as far as constructing the relays was concerned.

The second project was an attempt to disprove the Riemann hypothesis about the distribution of prime numbers by calculating the Riemann zeta-function (for the hypothesis and the zeta-function, read Hodges or Wikipedia; there’s no chance of me describing it properly) and showing that not all the points where the function reached zero lay on a single line, as the hypothesis stated. An Oxford mathematician had already calculated the first 1,041 zeroes using punched-card machines to implement one approximation of the function. Since the zeta-function could be expressed as a sum of circular functions of different frequencies, just like Thomson’s harmonic analysis of the tides, Turing realized it could be calculated using the same method. Or, more precisely, the machine could rule out enough values that only a few would have to be calculated by hand.
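For the curious, the quantity involved is, very roughly, of the following form – a sum of cosines of different frequencies, which is what makes the analogy with tide prediction work. This is the main term of the standard Riemann–Siegel approximation, quoted from general references rather than from Hodges, and offered only as a sketch:

```latex
% Main term of the Riemann--Siegel approximation to Z(t), the
% real-valued function whose zeros correspond to zeros of the
% zeta-function on the critical line; \theta(t) is a known phase term.
Z(t) \approx 2 \sum_{n=1}^{\lfloor\sqrt{t/2\pi}\rfloor}
      \frac{\cos\!\big(\theta(t) - t \log n\big)}{\sqrt{n}}
```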

With a grant of £40 from the Royal Society, Turing and Donald MacPhail designed a machine that, like the tide calculators, used meshed gear wheels to approximate the thirty frequencies involved. The blueprint was completed by 17 July 1939, and the grinding of the wheels was underway when the war broke out and Turing joined the Government Code and Cypher School at Bletchley Park.

Nothing in the work that Turing did at Bletchley connected directly to the zeta-function machine, but, as Hodges notes, it was unusual for a mathematician like Turing to have any interest in using machines to tackle abstract problems of this sort. Clearly, though, Turing had been mulling the question of how machines could be applied to pure mathematics long before he became involved in the specific cryptanalytic problems that were tackled at Bletchley.

Of course, the secrecy surrounding code-breaking meant that no hint of the connection, or any of Turing’s wartime work, would have leaked out to those operating the tide-predicting machines in Liverpool or elsewhere. For the machines, the end of the war meant a return to routine work, but their strategic importance remained.

Probably the last analog machine to be constructed was a thirty-four constituent machine built in 1952–5 for East Germany (and now in the collection of the German Maritime Museum in Bremen). The Soviet Union had ordered a Kelvin-type machine for forty constituents from Légé and Co. in 1941 that was delivered to the State Oceanographic Institute in Moscow in 1946, on the eve of the Cold War. Bernard Zetler, an oceanographer who worked on tide prediction at the Scripps Institution of Oceanography in San Diego, recalls that he was unable to visit the machine in 1971 because it or its location was classified. The Soviet tide tables certainly were.

The American Tide Predicting Machine No. 2 remained in use until 1966, but played no role in the American amphibious landing at Inchon during the Korean War. The wide tidal range at Inchon meant that the landing needed good tidal information, but rather than making new calculations the existing American and Japanese tide tables were supplemented by first-hand observation by Navy Lieutenant Eugene F. Clark, whose unit reconnoitered the area for two weeks preceding the landings.

When analog machines like Tide Predicting Machine No. 2 were retired, they were replaced by digital computers whose architecture originated in other wartime projects like the ENIAC computer, which had been built to calculate ballistics tables for US artillery. The world’s navies have not relinquished their interest in tools to predict the tides. Their use, though, has never matched the high drama of prediction during the Second World War.

Source Note: The D-Day predictions are discussed in many places on the internet, but almost all the accounts trace back to an article oceanographer Bruce Parker published in Physics Today, adapted from his 2010 book The Power of the Sea. Where Parker disagrees with the inventory of machines commissioned by the National Oceanography Centre, Liverpool (itself a descendant of the Liverpool Tidal Institute), I’ve followed Parker. Details on the work of Arthur Doodson and the Liverpool Tidal Institute come from Anna Carlsson-Hyslop’s work: the articles “Human Computing Practices and Patronage: Antiaircraft Ballistics and Tidal Calculations in First World War Britain,” Information & Culture: A Journal of History 50:1 (2015), and “Patronage and Practice in British Oceanography: The Mixed Patronage of Storm Surge Science at the Liverpool Tidal Institute, 1919–1959,” Historical Studies in the Natural Sciences 46:3 (2016), as well as her dissertation for the University of Manchester (accessible through the NERC Open Repository). The scientist suspected of Nazi sympathies was Harald Sverdrup, a Norwegian national who worked with Walter Munk on wave prediction methods used in several amphibious landings. Turing’s experiments with calculating the Riemann zeta-function appear in Andrew Hodges, Alan Turing: The Enigma (1983; my edition is the 2014 Vintage movie tie-in).

Hidden Figures

The release of two widely publicized books on female computers in the early Space Age in the same year (one of them with a forthcoming movie adaptation too) has to be unprecedented. The first was Rise of the Rocket Girls, about the women who worked as human computers (a redundant term before the 1950s) for the Jet Propulsion Laboratory in Pasadena, California. The second, Hidden Figures, is about the African-American women among those who did similar work for the Langley research center in Virginia. (There’s even a third book, by Dava Sobel, that covers an earlier generation of computers who worked at the Harvard Observatory).

Both Rise of the Rocket Girls and Hidden Figures are fascinating accounts of the essential roles that female computers played in aerospace research, capturing the challenging social milieu in which they worked. Hidden Figures also manages to address the impact of segregation and discrimination in the overlapping local, regional, and national contexts surrounding the work of the computers at Langley (itself a segregated workplace). It’s a story well worth reading, before or after the movie adaptation – focusing on Katherine Johnson’s contribution to the calculations for the first orbital Mercury flight – goes into wide release in January. The trailers I’ve seen look good, though Kevin Costner as a fictional NASA manager gets to strike a literal blow (with a fire axe!) against racism that goes way beyond anything NASA management actually did for their African-American staff.

In the last chapter of Hidden Figures, Margot Lee Shetterly discusses having to cut the section of the book about how several of its key figures moved into human resources and advocacy to try to overcome the less obvious discrimination against women and minorities that was still going on in the workforce of the 1970s and 80s. You never know from a trailer, but I suspect the movie’s not going to end with the uphill battle for recognition and equal treatment that persisted even after Johnson’s work.

As Sobel’s made clear in some of her pre-publication publicity, the stories of female computers are less undiscovered than regularly and distressingly forgotten. The women who worked in the Harvard Observatory were well known at the time; Katherine Johnson received substantial publicity at least within the African-American press for her work on Mercury. Academic writing, including a book with Princeton University Press, has covered the work of female computers in various fora. Perhaps a major Hollywood movie will help the story stick this time.

Tides of War, Part One

The best-known story about environmental science and D-Day has to be that of the last-minute weather forecast that let the invasion go ahead. That prediction, though, was only one of many contributions by Allied environmental scientists to the success of the invasion. Another was the secretive completion of a mundane but vital preparation for the assault: calculating the tides for D-Day.

The theoretical basis for tide prediction was the work of Newton, Daniel Bernoulli, and Pierre-Simon Laplace, the third of whom was the first to outline the equations that describe the rise and fall of the tides. Laplace’s equations were too complex to use in practice, but in the mid-nineteenth century the British scientist William Thomson (later ennobled as Lord Kelvin) demonstrated that, given enough tidal measurements, one could use harmonic analysis to divide the tide-generating forces for a particular shoreline into a series of waves of known frequencies and amplitudes (the tidal constituents). That same process, carried out in reverse, would let one predict the tides along that shore. Unfortunately, making those calculations by hand was time-consuming to the point of impracticality. However, Thomson also demonstrated that it was possible to construct an analog machine that would do the necessary work automatically.
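In modern notation, the harmonic method amounts to writing the height of the tide as a mean level plus a finite sum of cosine waves, one per constituent. A sketch of the standard form (the symbols here are generic, not taken from Thomson’s own papers):

```latex
% Harmonic prediction of tide height h(t) from N constituents:
% H_0 is the mean level; A_i, \omega_i, and \phi_i are the amplitude,
% angular frequency, and phase of the i-th tidal constituent.
h(t) = H_0 + \sum_{i=1}^{N} A_i \cos(\omega_i t + \phi_i)
```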

Thomson’s machine drew a curve representing the height of the tide with a pen attached to the end of a long wire. The wire ran over the top of a series of pulleys, which were raised and lowered by gearing that reflected the frequency and amplitude of the tidal constituents. As each pulley rose or fell, it changed the length of the wire’s path and thus the position of the pen. Altogether, the pulleys reflected the combined effect of the tidal constituents being simulated.
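For readers who prefer code to gears, here is a minimal sketch of the summation the machine performed mechanically. The constituent values below are invented for illustration and not taken from any real port:

```python
import math

# Illustrative tidal constituents: (amplitude in metres,
# period in hours, phase in radians). Values are made up.
CONSTITUENTS = [
    (2.0, 12.42, 0.0),   # stand-in for the principal lunar term
    (0.7, 12.00, 1.2),   # stand-in for the principal solar term
    (0.4, 25.82, 2.1),   # stand-in for a diurnal term
]

def tide_height(t_hours, mean_level=5.0):
    """Sum the constituent waves at time t, as the pulleys did."""
    height = mean_level
    for amplitude, period, phase in CONSTITUENTS:
        omega = 2 * math.pi / period
        height += amplitude * math.cos(omega * t_hours + phase)
    return height

# Tabulate a day of predictions at hourly intervals.
for hour in range(25):
    print(f"{hour:02d}:00  {tide_height(hour):5.2f} m")
```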


Thomson’s design sketch for the third tide-predicting machine, 1879. Image courtesy Wikimedia.

The first machine, built in 1872, had gears for only ten constituents, but later machines could represent many more. Machines of his design, many of them built in Great Britain, were also used in other countries to create the necessary tide tables for their ports. In the United States, a different mechanical approach developed by William Ferrel was used to build similar machines. Altogether, though, tide-predicting machines were specialized, expensive, and rare. According to a modern inventory, only thirty-three were ever built – twenty-five of them in London, Glasgow, or Liverpool.

During the Second World War, the Admiralty Hydrographic Office relied on two tide-predicting machines operated by Arthur Thomas Doodson at the Liverpool Tidal Institute to do all their tidal calculations. One was Thomson’s original machine, refitted to handle twenty-six constituents. The other was a machine designed by Edward Roberts in 1906 and equipped for forty constituents.

Both Doodson and the Tidal Institute had their own unique histories of military collaboration. Doodson, despite being a conscientious objector, had worked on anti-aircraft ballistics for the Ministry of Munitions during the First World War. The Institute, established in 1919 with corporate and philanthropic support, had an important connection with the Admiralty’s own Hydrographic Department. Though the Hydrographic Department did not provide any direct funding until 1923, after that it made the Institute the Admiralty’s exclusive supplier of tide calculations. At the same time, the Hydrographic Department began appointing a representative to the Institute’s governing board.

Though they were the basis for only some of the Institute’s Admiralty work during the war, the tide-predicting machines in Liverpool were busy creating tide tables for Allied ports. According to historian Anna Carlsson-Hyslop’s research, the number of tidal predictions being performed doubled from 77 for 1938, the last pre-war year, to 154 for 1945. (Carlsson-Hyslop’s research is focused on areas of the Institute’s work other than the creation of tide tables, but much of it sheds light on its relationship with the Royal Navy and state patronage.)

In 1943 the Admiralty Hydrographic Office requested the calculations for tide tables covering the invasion beaches to be used on D-Day in Normandy. Since the landing zone remained top secret, Commander William Ian Farquharson was responsible for establishing the constituents and providing them (anonymized under the codename “Point Z”) to Doodson in Liverpool. Unfortunately, there were no existing calculations for the area of the beaches. Nor, because tidal constituents are sensitive to local conditions, could he simply extrapolate from the data for the ports to the east and west at Le Havre and Cherbourg. Instead, Farquharson combined fragmentary data from some local measurement points near the beaches, clandestine on-the-spot measurements made by Allied beach reconnaissance teams, and guesswork to come up with eleven tidal constituents. Oceanographer Bruce Parker suspects that he began with the Le Havre constituents and then adjusted them to approximate the data he had. The calculations, despite the roughness of the information on which they were based, proved sufficiently accurate for the invasion planners.
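Deriving constituents from scattered observations is, at bottom, a curve-fitting exercise. The sketch below shows the general idea; it is not a reconstruction of Farquharson’s actual working method, and the frequencies, times, and heights are all invented for illustration:

```python
import numpy as np

# Angular frequencies (radians per hour) of the constituents we try
# to fit -- illustrative values, not real tidal frequencies.
OMEGAS = [0.506, 0.524, 0.243]

# Fragmentary observations: times in hours, measured heights in metres.
# These numbers are invented for the sake of the example.
times = np.array([0.0, 1.5, 3.0, 4.25, 6.0, 7.5, 9.0, 10.5, 12.0])
heights = np.array([5.1, 6.0, 6.4, 5.9, 4.6, 3.8, 4.0, 5.0, 5.9])

# Build the design matrix: a constant column plus cos/sin columns for
# each constituent, so amplitudes and phases enter linearly.
columns = [np.ones_like(times)]
for omega in OMEGAS:
    columns.append(np.cos(omega * times))
    columns.append(np.sin(omega * times))
A = np.column_stack(columns)

# Least-squares solution: mean level, then a (cos, sin) pair per constituent.
coeffs, *_ = np.linalg.lstsq(A, heights, rcond=None)

mean_level = coeffs[0]
for i, omega in enumerate(OMEGAS):
    c, s = coeffs[1 + 2 * i], coeffs[2 + 2 * i]
    amplitude = np.hypot(c, s)
    phase = np.arctan2(-s, c)  # so the term reads A*cos(omega*t + phase)
    print(f"omega={omega:.3f}: A={amplitude:.2f} m, phase={phase:+.2f} rad")
```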

In the Pacific, tide tables for amphibious operations were generated by the US Coast and Geodetic Survey’s Tide Predicting Machine No. 2. In both theaters, as well as the Mediterranean, oceanographers supplemented the tide tables for beaches with wind, wave, and surf forecasts. The story of wave forecasting is, if anything, even more cloak and dagger than that of the D-Day tide forecasts, since one of the scientists involved was actively suspected (incorrectly) of being a Nazi sympathizer.


A US tide predicting machine, probably No.2. The caption from the Library of Congress attributes the machine’s construction to E. Lester Jones, Chief of the Coast and Geodetic Survey. Harris & Ewing, photographer, 1915. Retrieved from the Library of Congress, https://www.loc.gov/item/hec2008004303/

Beyond their civilian and military wartime work, tide-predicting machines had an oblique connection to Second World War cryptanalysis. The computing developments that grew out of that work would eventually put the machines out of a job after the war, but not before they had a final moment of strategic significance.

Forward to Part Two, including Source Notes

How Not to Network a Nation

I’ve been looking to read How Not to Network a Nation by Benjamin Peters since MIT Press announced it last November, but a mixture of delays, library closings over the summer, and general busyness meant that I didn’t lay hands on a copy until a few weeks ago. I’m really glad I finally did, since it’s a wonderful book that sheds a lot of light on the development of computer networking and the internet.

Peters examines a series of failed attempts to create large-scale civilian computer networks in the Soviet Union in the 1960s, 70s, and 80s, which he explains in the context of the Soviet economy and the development of cybernetics as a discipline. (Those wanting an overview of the argument can listen to his lovely interview with the New Books Network.) By analyzing these Soviet proposals, Peters not only describes Soviet efforts at network-building but also sheds some light on the parallel processes going on in the United States.

Comparing the success of the Internet to the failure of the Soviet network proposals helps highlight the distinctive features of the network that ultimately developed out of the US ARPANET experiment. It also casts what Peters calls the “post-war American military-industrial-academic complex” in the unusual role of altruistic and disinterested benefactor. In contrast to the Soviet Union, where the military and its suppliers jealously guarded their power and priorities, the US government ended up funding a lot of research that – though loosely justified on the basis of military need – was more or less unrelated to specific military requirements and spread far and wide through civilian connections before it ever proved to have military significance.

How Not to Network a Nation is probably most rewarding for those with some knowledge of the Soviet economic and political system, including its perennial bureaucratic battles and black-market deals for influence and resources. (Anyone wanting to know more, for example, about the debates over how to mathematically optimize the planned economy, with or without computers, should read Francis Spufford’s well-footnoted novel Red Plenty.) Its biggest omission is any discussion of the technical features of the Soviet projects. Arguably, one of the reasons that the internet became the Internet is that it was built on an architecture (particularly TCP/IP) flexible enough to span multiple thinly connected networks with varying capabilities and purposes. That flexibility made it possible for networking to thrive even without the kind of deliberate and wide-ranging support that a large-scale, well-planned project would have required. Peters’s book, illuminating as it is, never addresses those aspects of network development.

Smart Plane, Dumb Bombs, Bad Maps?: Part One

At The Drive‘s “War Zone” (and, before that, Gizmodo‘s “Foxtrot Alpha”) Tyler Rogoway has been regularly posting first-hand reflections on flying military jets. The latest, by Richard Crandall, covers the F-111 Aardvark, probably the leading all-weather strike aircraft of the Cold War. Designed around bleeding-edge 1960s avionics that would take the guesswork out of high-speed, low-level navigation, the F-111 ended up caught between the limits of its technology and the development of a newer generation of weapons. At the same time, its combat debut pointed to the limits of the entire system of navigational tools that military aircraft have been using ever since.


General Dynamics F-111F at the National Museum of the United States Air Force. (U.S. Air Force photo)

The F-111 was built to penetrate enemy airspace at low level regardless of rain, fog, or darkness. How low? Crandall recalls that “sometimes we would be flying low through the mountains of New Mexico or southwest Texas and the jet’s external rotating beacon would flash off the terrain that we were flying by and it would seem to be right next to the wingtip. Some aircrew would turn it off as it unnerved them.”

The airplane’s avionics were supposed to be interconnected to make the attack process practically automatic. While the F-111’s terrain-following radar (TFR) kept the plane 200 feet above the ground, the autopilot flew it from waypoint to waypoint. Upon reaching the target, the ballistics computer automatically released the F-111’s bombs when the plane reached the correct parameters (position, airspeed, delivery angle, etc.). Or, if the crew put it into manual mode, the computer showed the pilot a “continuously computed impact point” (CCIP) that adjusted for wind and drift to indicate where the bombs would land if they were released at that moment. In the planned ultimate version of the F-111, the F-111D, all this equipment would be integrated into a fully digital computer system complete with “glass cockpit” multi-function electronic displays. The other versions of the F-111, which flew with fully or partly analog systems, included similar capabilities.
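As a very rough illustration of what a CCIP calculation involves, here is a toy model that ignores drag, wind, and ejection velocity; real ballistics computers account for all of those, so treat this only as a sketch of the principle:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ccip_ground_range(altitude_m, speed_ms, dive_angle_deg=0.0):
    """Toy continuously computed impact point (CCIP).

    Returns how far ahead of the aircraft's present ground position a
    drag-free bomb released right now would strike, given altitude
    above the target plane, speed along the flight path, and dive
    angle (positive means nose-down).
    """
    dive = math.radians(dive_angle_deg)
    v_forward = speed_ms * math.cos(dive)   # horizontal velocity component
    v_down = speed_ms * math.sin(dive)      # downward velocity component
    # Solve 0.5*G*t^2 + v_down*t - altitude = 0 for the fall time t.
    t_fall = (-v_down + math.sqrt(v_down**2 + 2 * G * altitude_m)) / G
    return v_forward * t_fall

# Example: 200 ft (about 61 m) above the ground at 250 m/s, level flight.
print(f"Impact roughly {ccip_ground_range(61.0, 250.0):.0f} m ahead")
```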

At the heart of the system was the airplane’s inertial navigation system (INS), a package of gyroscopes and accelerometers that provided an ongoing track of the airplane’s location. Because the INS drifted by about half a nautical mile every hour, the F-111 required regular position updates to keep itself on course. The problem of correcting for INS drift wasn’t unique to the F-111. Ballistic missile submarines updated their INS using a fix from the Loran-C radio or Transit satellite network. Tomahawk missiles used an onboard terrain-matching system called TERCOM. The Strategic Air Command’s variant of the F-111 carried a Litton ASQ-119 Astrotracker that took position fixes based on the locations of up to fifty-seven stars, day or night (as well as a more accurate INS that used an electrostatically suspended gyro).

The tactical F-111 usually took position updates by locking the non-TFR attack radar onto a pre-selected terrain feature with a known position and good radar reflectivity (called an offset aimpoint, or OAP). When everything worked, that gave the F-111 remarkable accuracy. As F-111 WSO Jim Rotramel remarked to writers Peter E. Davies and Anthony M. Thornborough for their book on the F-111, “if the coordinates for various offset aim-points (OAPs), destinations and targets were all derived from accurate sources, the radar crosshairs were more likely to land neatly on top of them.”
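The geometry of such an update is simple in principle. Here is a minimal flat-earth sketch, with made-up numbers, of how a radar fix on an offset aimpoint of known position can be turned into a correction for the INS estimate; the real F-111 implementation was, of course, far more involved:

```python
import math

def position_from_oap(oap_east, oap_north, slant_range_m, bearing_deg,
                      altitude_diff_m=0.0):
    """Infer own position from a radar fix on an offset aimpoint (OAP)
    with known coordinates, using a flat-earth approximation.

    bearing_deg is the true bearing from the aircraft to the OAP.
    """
    # Convert slant range to ground range.
    ground_range = math.sqrt(max(slant_range_m**2 - altitude_diff_m**2, 0.0))
    bearing = math.radians(bearing_deg)
    # The aircraft sits ground_range metres from the OAP, back along
    # the reciprocal of the bearing.
    own_east = oap_east - ground_range * math.sin(bearing)
    own_north = oap_north - ground_range * math.cos(bearing)
    return own_east, own_north

# Made-up example: OAP at (10,000 m E, 4,000 m N); the radar sees it at
# 8 km slant range on a bearing of 045 degrees, 600 m below the aircraft.
fix = position_from_oap(10_000, 4_000, 8_000, 45.0, altitude_diff_m=600)
ins_estimate = (4_700, -1_500)   # invented INS position, metres
correction = (fix[0] - ins_estimate[0], fix[1] - ins_estimate[1])
print(f"Radar fix: {fix}, INS correction: {correction}")
```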

Even if some parts of the system failed, the remaining elements were good enough to let the F-111 complete the mission. As Crandall explains:

Our inertial navigation system was nice, but we trained to use dead reckoning and basic radar scope interpretation to get to the target even without the INS. We had backups to the backups to the backup. INS dead? Build a wind model and use the computers without the INS. That doesn’t work? Use a radar timed release based on a fixed angle offset. We practiced them all and got good at them all.

The navigator could even use the raw, unprocessed data from the TFR, a terrifying process that Crandall calls “truly a no-kidding, combat-emergency-only technique.”

Trouble with the F-111D’s avionics, which proved too ambitious for the time, meant that the US Air Force flew three other versions of the plane while trying to debug them. The first was the F-111A, which had an all-analog cockpit; the -E (Crandall: “basically an F-111A with bigger air inlets”), which added a few features; and the -F, which added more digital equipment but not the full suite of features designed for the -D. The last of these was also upgraded to carry the PAVE TACK pod that let the plane drop laser-guided bombs (LGBs). LGBs put the terminal “smarts” for precision bombing into the weapon itself, with the bomb following laser energy reflected off the target from a beam projected by the PAVE TACK pod.

The US Air Force Museum in Dayton, Ohio, has a 360-degree photo of the cockpit of the F-111A in their collection here. It’s all switches and gauges, with no screens apart from those for the radar.

However, even with laser-guided bombs, the F-111F still needed the airplane’s avionics to get to the target, and those systems remained dependent on maps and geodetic information in order to ensure that INS updates and OAPs were accurate. In 1986, that would prove to be the weak link during the F-111’s combat debut.

To Part Two

A Hidden Map Between Sensor and Shooter: The Point Positioning Data Base, Part Three

Back to Part Two (or Part One)

Between 1990, when the first GPS-guided missiles were used in war, and 2001, when the United States began its invasion of Afghanistan, GPS guidance for weapons went from a niche technology used only by a few systems to one of the US military’s favorite techniques. The spread of GPS guidance led to a huge demand for ways of determining target positions in a way that weapons – rather than pilots – would understand. That meant three-dimensional coordinates in World Geodetic System 84 (WGS 84), rather than grid references on maps or even coordinates in other datums. One of the most important tools for establishing these coordinates was the Point Positioning Data Base (PPDB), a database of matching imagery and coordinates that had originated in the 1970s as a tool for army field artillery.
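To give a sense of what “three-dimensional coordinates in WGS 84” means in practice, here is a short sketch converting latitude, longitude, and height into Earth-centered, Earth-fixed (ECEF) coordinates using the standard WGS 84 ellipsoid parameters. It illustrates the kind of coordinate frame weapons work in, not any particular targeting system’s code:

```python
import math

# WGS 84 ellipsoid constants
WGS84_A = 6378137.0                  # semi-major axis, metres
WGS84_F = 1 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """Convert WGS 84 geodetic coordinates to Earth-centered,
    Earth-fixed (ECEF) X, Y, Z in metres."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - WGS84_E2) + height_m) * math.sin(lat)
    return x, y, z

# Example with arbitrary coordinates (not a real target).
print(geodetic_to_ecef(34.5, 69.2, 1800.0))
```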

PPDB was made widely available in an analog format in the 1980s and used during the first Gulf War, but digitization initially restricted its use mostly to computer workstations (first DEWDROP, then RainDrop) in the United States during the war over Kosovo in 1999.

By the time the invasion of Afghanistan began in late 2001, RainDrop workstations had moved from analysts’ desks in the continental US to the same airbase – Prince Sultan Air Base in Saudi Arabia – as the air operations center that was commanding the air war. That shift was only the first step in the proliferation of tools and services for point mensuration to match the American and coalition demand for mensurated target coordinates. “Cursor on Target” (see Part One) began development in 2002; Northrop Grumman released RainDrop’s successor – RainStorm – in 2004; and another system, Precision Strike Suite for Special Operations Forces (PSS-SOF), was created to provide “near-mensurated” coordinates to troops in the field.

By 2009, when Noah Shachtman described how mensuration was used to plan air strikes in Afghanistan, the process had been in regular use for almost a decade. Here’s his account of what was being done in the air operations center for Afghanistan:

An officer, I’ll call him Paul, walks me through the process. It starts with “targeteering,” figuring out where a pilot should attack. Just getting GPS coordinates or an overhead image isn’t good enough. GPS is unreliable when it comes to altitude. And landscape and weather conditions can throw satellite pictures off by as much as 500 feet. “Even with Gucci imagery, there’s always errors,” Paul says. He points to a pair of screens: On the right side is an aerial image of a building. On the left, two satellite pictures of the same place — taken from slightly different angles — flicker in a blur. Paul hands me a pair of gold-rimmed aviator glasses. I put them on, and those flickers turn into a single 3-D image. Paul compares the 2-D and 3-D images, then picks exactly where the building should be hit. Depending on elevation, adding a third dimension can shrink a 500-foot margin of error down to 15 feet.
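The elevation refinement Paul describes rests on simple stereo geometry: two views of the same point from different positions yield a parallax from which distance, and hence height, can be recovered. A bare-bones pinhole-camera sketch of that principle, with invented numbers (real photogrammetric mensuration models the sensors far more carefully):

```python
def depth_from_parallax(focal_length_px, baseline_m, disparity_px):
    """Classic stereo relation: distance = f * B / d.

    focal_length_px: focal length expressed in pixels
    baseline_m: separation between the two camera positions
    disparity_px: shift of the same ground point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Invented example: two images taken 1,500 m apart with an effective
# focal length of 80,000 px see the same rooftop shifted by 240 px.
distance = depth_from_parallax(80_000, 1_500, 240)
print(f"Rooftop is roughly {distance:,.0f} m from the cameras")
```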

Tying a point on the ground to a global grid precise enough to be used for air strikes anywhere in the world was now a matter of course. Fifty years after the CIA’s photo interpreters bought their first mainframe to help them analyze and map targets in the Soviet Union, calculating a target’s position in global terms has become simple – even if knowing what is at the target location is not. The technology here is also a long way from the cobbled-together equipment for which the PPDB was first created. The Analytical Photogrammetric Positioning System (APPS) combined digital and analog electrical components with old-fashioned optics and the human eye.

The transformation from APPS to RainStorm over the course of thirty years is pretty remarkable, but it’s also been hard to track. This is technology that doesn’t get a lot of praise or get singled out for attention, but that doesn’t mean it’s not interesting or important.

For one thing, APPS was a military application of commercial off-the-shelf (COTS) technology before COTS was cool. The Hewlett Packard 9810A desk calculator at its heart was not designed for military use or developed from military-sponsored research. It was just an office tool that was re-purposed for a very different office.

More importantly, APPS and PPDB are a good example of an enabling technology that was created long before its eventual requirement even existed. If there had been no PPDB, the development of GPS-guided bombs would have forced its creation. Instead, it was an Army project begun around the same time the first GPS satellites were being designed that provided the necessary service. That’s luck, not good planning.

Lastly, and equally interestingly, PPDB is a layer of complexity in modern warfare that’s easily overlooked because it sits in the middle of things. It provides the map of coordinates on which grander, more significant, moves are sketched, and which disappears into obscurity except when something goes wrong. Between cursor and target, or sensor and shooter, there are a lot of layers like this one.