Excluded Computers: Marie Hicks’s Programmed Inequality

It should be no surprise, fifty to seventy years after the fact, that the introduction of electronic computers in government and industry reflected societal prejudices about women's employment in the workforce. Books released last year about the female computers at the Jet Propulsion Laboratory and NASA Langley recounted the discrimination and exclusion of those women, whose jobs reflected the messy transition from human to automated calculation in large-scale engineering (both are jointly reviewed, along with Dava Sobel's book on an earlier generation of female computers, in the New York Review of Books here).

The number of women involved in each of these endeavors was dwarfed, though, by the female workforce of the British civil service that's discussed in Marie Hicks's excellent Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. The Civil Service was large enough to document its decisions in painstaking detail and confident enough not to mince words in its internal papers, which makes Hicks's book a cringeworthy account of the open, blatant, self-satisfied gender discrimination that accompanied the spread of electro-mechanical and then electronic data processing in the British government.

Hicks describes how, from the late 1940s all the way to the 1970s, the civil service took a pool of machine workers that was mostly female and deliberately and repeatedly hemmed them into job categories where their wages could be kept low and their promotion opportunities (which would mean raises) constrained, at the same time as it relied on their technical skills, practical knowledge, and commitment to keep the government running. Separate pay scales for women, eliminated in 1955, were replaced by a series of "excluded grades," including machine workers, where pay rates would be lowered to the old women's rate rather than raised to the existing men's rate. When the growth of automated data processing made the need for more senior professional and managerial positions obvious, the service recruited men for those positions – even when it meant starting them with no computer experience – rather than take the traumatic step of letting female staff from the machine operator grades manage men and be compensated at executive-level pay scales. Perhaps unsurprisingly, the government then found it hard to retain those men, with many taking their new skills into private industry or moving back out of computing to other areas in government.

As Hicks explains it, how the civil service managed its workforce was not only immoral and inefficient but also terrible for the long-term health of the British computer industry. While segregating away the female computing workforce kept costs low, it also hamstrung modernization. By the time the government realized it needed programmers, most of the people with those skills, being women, could not actually be classed as "programmers," since that job was conceptualized as higher-status and therefore reserved for men. That led the government to prioritize mainframe designs that could be run with a small expert staff, since retaining skilled male programmers was hard and female machine operators with no promotion opportunities were assumed to be unreliable. That decision, made by the leading purchaser of British computers, led the companies that built British computers down a design blind alley at just the time that microelectronics was putting more computers on more desks and sparking a revolution in the American computer industry.

The blind alley. The International Computers Limited (ICL) 2966 was one of the last mainframe series to be designed in the UK. This machine is at the National Museum of Computing in Bletchley Park, though it’s so large that only about half is on display. Photograph by Steve Parker, CC-BY-2.0, from flickr as of April 4, 2017.



Tides of War, Part Two

The first part of this post appeared on the blog in November 2016. The second part was supposed to come out within a week or two, as soon as I found a little more on the post-war use of analog tide-predicting machines. Unfortunately, the search for "a little more" took far more time than expected and turned up nothing within easy reach. I'd skip the apology if it weren't for the fact that the proper references to Anna Carlsson-Hyslop's research (discussed in part one) were buried in the source note at the end of this post. Sorry.

Tide-predicting machines, the first of which appeared in the late nineteenth century, were an elegant mechanical solution to a complex mathematical problem. Used mostly to produce information useful to commercial shipping, during the two world wars they also played an important role in the planning of amphibious operations like the Normandy landings.
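The mathematics the machines embodied, harmonic synthesis, is simple to state: predicted water level is a mean level plus a sum of cosine "constituents," each with its own angular speed, amplitude, and phase, which the machines summed mechanically with geared pulleys and a wire. A minimal sketch in Python, using the real angular speeds of the two main semidiurnal constituents but entirely hypothetical amplitudes and phases for an imaginary port:

```python
import math

# Angular speeds (degrees per hour) of two real tidal constituents.
# The amplitudes and phases used below are hypothetical, for illustration.
CONSTITUENT_SPEEDS = {
    "M2": 28.9841042,  # principal lunar semidiurnal
    "S2": 30.0,        # principal solar semidiurnal
}

def tide_height(t_hours, mean_level, harmonics):
    """Water level at time t as a sum of cosine constituents.

    harmonics: {constituent name: (amplitude_m, phase_deg)}
    """
    height = mean_level
    for name, (amplitude, phase_deg) in harmonics.items():
        speed = math.radians(CONSTITUENT_SPEEDS[name])  # radians per hour
        height += amplitude * math.cos(speed * t_hours - math.radians(phase_deg))
    return height

# Hypothetical port: 5 m mean level, a 2 m M2 tide, a 0.5 m S2 tide.
harmonics = {"M2": (2.0, 0.0), "S2": (0.5, 45.0)}
heights = [tide_height(t, 5.0, harmonics) for t in range(25)]
```

A computer samples the curve at discrete times; the machines drew it continuously, with each extra constituent just one more pulley on the wire.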

That contribution is interesting enough to give them a spot in the history of war, but the basic design of the British machines – using multiple gears to create an analog approximation of a mathematical function – also has an oblique connection to one of the most important technical achievements of the war: the mechanization of cryptanalysis.

Alan Turing is justifiably famous for his role in the breaking of the German Enigma cipher, and particularly for his contribution to designing the electro-mechanical computing tools that transformed the process. (Even if popular versions do the story a terrible disservice by erasing everyone except Turing from the picture. The Imitation Game, I'm looking at you.) Less well known are some of Turing's pre-war flirtations with the mechanization of mathematical problem-solving. Andrew Hodges' biography describes two projects which Turing took on, at least briefly. The first, during his time at Princeton in 1937, was to use electromagnetic relays for binary multiplication to create an extremely large number that could be used as the key for a cipher. This was, as Hodges puts it, a "surprisingly feeble" idea for a cipher but a practical success as far as constructing the relays was concerned.

The second project was an attempt to disprove the Riemann hypothesis about the distribution of prime numbers by calculating the Riemann zeta-function (for the hypothesis and the zeta-function, read Hodges or Wikipedia; there's no chance of me describing them properly) and showing that not all of the points where the function reached zero lay on a single line, as the hypothesis stated. An Oxford mathematician had already calculated the first 104 zeroes using punched-card machines to implement one approximation of the function. Since the zeta-function was the sum of circular functions of different frequencies, just like Thomson's harmonic analysis of the tides, Turing realized it could be calculated using the same method. Or, more precisely, the machine could rule out enough values that only a few would have to be calculated by hand.

With a grant of £40 from the Royal Society, Turing and Donald MacPhail designed a machine that, like the tide calculators, used meshed gear wheels to approximate the thirty frequencies involved. The blueprint was completed by 17 July 1939, and the grinding of the wheels was underway when the war broke out and Turing joined the Government Code and Cypher School at Bletchley Park.
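It's worth pausing on what grinding those wheels implied. On Hodges' account, each of the thirty frequencies had to be realized as a ratio of gear teeth, which means approximating an irrational number by a fraction with a modest denominator, since a physical wheel can only carry so many teeth. A rough sketch of that approximation problem (the frequencies here, logarithms of small integers, and the 120-tooth limit are my illustrative assumptions, not figures from the MacPhail blueprint):

```python
from fractions import Fraction
import math

def gear_ratio(freq, max_teeth=120):
    """Best rational approximation of freq with denominator <= max_teeth,
    i.e. the closest speed a meshed pair of wheels could realize."""
    return Fraction(freq).limit_denominator(max_teeth)

# How closely can wheel pairs reproduce a few irrational frequencies?
for n in (2, 3, 5):
    target = math.log(n)
    ratio = gear_ratio(target)
    error = abs(float(ratio) - target)
    print(f"log {n} ~ {ratio.numerator}/{ratio.denominator} (error {error:.1e})")
```

The continued-fraction machinery behind `limit_denominator` finds ratios accurate to a few parts in a hundred thousand even with this tooth budget, which suggests why a mechanical approximation of the function was plausible at all.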

Nothing in the work that Turing did at Bletchley connected directly to the zeta-function machine, but, as Hodges notes, it was unusual for a mathematician like Turing to have any interest in using machines to tackle abstract problems of this sort. Clearly, though, Turing had been mulling the question of how machines could be applied to pure mathematics long before he became involved in the specific cryptanalytic problems that were tackled at Bletchley.

Of course, the secrecy surrounding code-breaking meant that no hint of the connection, or of any of Turing's wartime work, would have leaked out to those operating the tide-predicting machines in Liverpool or elsewhere. For the machines, the end of the war meant a return to usual practice, but their strategic importance remained.

Probably the last analog machine to be constructed was a thirty-four constituent machine built in 1952–5 for East Germany (and now in the collection of the German Maritime Museum in Bremen). The Soviet Union had ordered a Kelvin-type machine for forty constituents from Légé and Co. in 1941 that was delivered to the State Oceanographic Institute in Moscow in 1946, on the eve of the Cold War. Bernard Zetler, an oceanographer who worked on tide prediction at the Scripps Institution of Oceanography in San Diego, recalls that he was unable to visit the machine in 1971 because it or its location was classified. The Soviet tide tables certainly were.

The American Tide Predicting Machine No. 2 remained in use until 1966, but played no role in the American amphibious landing at Inchon during the Korean War. The wide tidal range at Inchon meant that the landing needed good tidal information, but rather than making new calculations the existing American and Japanese tide tables were supplemented by first-hand observation by Navy Lieutenant Eugene F. Clark, whose unit reconnoitered the area for two weeks preceding the landings.

When analog machines like Tide Predicting Machine No. 2 were retired, they were replaced by digital computers whose architecture originated in other wartime projects like the ENIAC computer, which had been built to calculate ballistics tables for US artillery. The world’s navies have not relinquished their interest in tools to predict the tides. Their use, though, has never matched the high drama of prediction during the Second World War.

Source Note: The D-Day predictions are discussed in many places on the internet, but almost all the accounts trace back to an article oceanographer Bruce Parker published in Physics Today, adapted from his 2010 book The Power of the Sea. Where Parker disagrees with the inventory of machines commissioned by the National Oceanography Centre, Liverpool (itself a descendant of the Liverpool Tidal Institute), I've followed Parker. Details on the work of Arthur Doodson and the Liverpool Tidal Institute come from Anna Carlsson-Hyslop's work: the articles "Human Computing Practices and Patronage: Antiaircraft Ballistics and Tidal Calculations in First World War Britain," Information & Culture: A Journal of History 50:1 (2015) and "Patronage and Practice in British Oceanography: The Mixed Patronage of Storm Surge Science at the Liverpool Tidal Institute, 1919–1959," Historical Studies in the Natural Sciences 46:3 (2016), and her dissertation for the University of Manchester (accessible through the NERC Open Repository). The scientist suspected of Nazi sympathies was Harald Sverdrup, a Norwegian national who worked with Walter Munk on wave prediction methods used in several amphibious landings. Turing's experiments calculating the Riemann zeta-function appear in Andrew Hodges, Alan Turing: The Enigma (1983; my edition is the 2014 Vintage movie tie-in).

Liner Notes: Paying for War in Angola

There’s  a common military aphorism that amateurs talk tactics but professionals talk logistics. Despite that famous statement, histories of logistics can be hard to find and among those histories of finance (beneath the strategic level) even harder. The obscurity extends beyond historians even to the militaries you would expect to know better. According to a short monograph recently published by Air University Press, the US Air Force went into both Gulf Wars without a financial management system capable of operating in a war zone.

One of the more innovative experiments in managing finance in the theater of operations comes from the Cuban intervention in Angola. It’s particularly interesting for me because it hinged on one of the more unusual instruments of postwar power, the cruise liner.

Between 1975 and 1991 more than 430,000 Cuban soldiers and civilians served in Angola. The troops, who were a mix of professionals, reservists, and conscripts, were all ostensibly volunteers. Though conscripts got the perk of reducing their service from three years to two, in general pay was poor. An ordinary soldier received seven Cuban pesos and 150 Angolan kwanzas per month, disbursed at the end of the soldier's tour. The kwanzas could be used to buy discounted luxury goods in special subsidized shops in Luanda. The pesos were for home. To avoid having to funnel all returning troops through Havana or operate pay counters in every port of arrival, the Cubans hit on an unusual solution. For most of the 1980s they hired the Soviet cruise liner Leonid Sobinov to float off the Angolan coast as a "money ship." Troops were shuttled out to the Sobinov to receive their back pay before the long transatlantic voyage home. Under close escort because it carried so much money, the Sobinov usually stayed in Angolan waters for three days at a time. At least once it remained for a month.

The original designers of the Sobinov had probably never considered such a use for the ship. That said, they had probably also never considered that it would be owned by the Soviets. Like many of the Soviet Union's larger passenger ships, it had been constructed outside the Soviet sphere entirely. Built for the Cunard Line in Britain as the RMS Saxonia in the mid-1950s, the Sobinov was sold to the Soviet Union and renamed in 1973. In addition to its unique duties as a "money ship," it operated as an occasional troopship and cruise ship in the south Pacific and Mediterranean. It was laid up in the mid-1990s and scrapped in 1999.

Source: Edward George, The Cuban Intervention in Angola, 1965–1991: From Che Guevara to Cuito Cuanavale (Frank Cass, 2005)

Aleksandr Zhitomirsky

During the Second World War, when it still seemed like the Germans might capture Moscow, propaganda minister Joseph Goebbels wrote a list of Soviet propagandists who were to be killed upon capture. Number one was the writer Ilya Ehrenburg. Number two was chief Radio Moscow announcer Iurii Levitan. Number three was Aleksandr Zhitomirsky, the designer and artist of one of the Red Army’s chief illustrated propaganda magazines.

That, at least, was the story, one which is mentioned – with appropriate skepticism – by Erika Wolf in the catalogue to a major exhibit of artist Aleksandr Zhitomirsky's work at the Art Institute of Chicago. A talented designer and illustrator, Zhitomirsky created his most striking works in the early years of the Cold War: satirical, even grotesque, photomontages that pilloried capitalism and the United States, often with allusions to the Nazi threat against which he had cut his teeth propagandizing. While his main employment from 1953 to 1991 was as chief artist for Soviet Union (Sovietskii Soiuz), a glossy magazine aimed at readers in Eastern Europe and Asia, his illustrations appeared in the Literary Newspaper (Literaturnaia gazeta), official organ of the Union of Soviet Writers; Red Fleet (Krasnyi flot); Rising Generation (Smena); the satirical magazine Krokodil (Crocodile), and even occasionally in more exalted venues such as Truth (Pravda), the official newspaper of the Communist Party, and News (Izvestiia), official paper of the Soviet government. Those works attracted attention not just at home, where he was part of a major photomontage exhibit in East Berlin in 1961/2 and had his own retrospective in Moscow, but even in the US, where some of his photomontages from the Literary Gazette drew comment in the New York Times.

On balance it’s the postwar art, not just the illustrations mentioned above but also the book covers and occasional poster, that is the focus of Wolf’s Aleksandr Zhitomirsky: Photomontage as a Weapon of World War II and the Cold War (Yale University Press, 2016). For me, though, it’s Zhitomirsky’s wartime work on Front Illustrated (Frontovaia illiustratsiia) and its complementary German-language edition aimed at enemy soldiers (Front Illustrated for German Soldiers / Front-Illustrierte für den deutschen Soldaten) that’s more captivating. The postwar designs are hardly subtle. How often can one look at a monkey-like Goebbels ventriloquizing through some American symbol?

Front Illustrated for German Soldiers, which existed to sow unease and dissension in the German ranks, had to be more indirect. For his cover designs and leaflets, Zhitomirsky mixed captured German photographs and new photography (often with himself as the model) with images borrowed from his vast trove of reference photos, often airbrushed together to the point that they became impossible to distinguish. With one leaflet, Choose! Like This or Like That!, Wolf shows how what appears to be a single photograph of dead Germans lying on the ground was actually a composite of seven different photographs, layered together, photographed, then retouched to create a seamless image. With others, she shows how Zhitomirsky mixed background photography with physical objects (like reproduced letters and snapshots) in trompe-l'œil arrangements. Taking advantage of Zhitomirsky's personal archive, Wolf demonstrates just how impressive his work was.

Map Overlap: Warsaw Pact vs. NATO Grids

The Charles Close Society hasn't updated its topical list of articles on military mapping since I wrote about it in 2015, but there is a new article by John L. Cruickshank ("More on the UTM Grid system") in Sheetlines 102 (April 2015) that is now freely available on the society's website. The connection to Soviet mapping is that Cruickshank discusses how both NATO and the Warsaw Pact produced guides and maps to help their soldiers convert between their competing grid systems. Unlike latitude and longitude, a grid system assumes a flat surface. That's good for simplifying calculations of distance and area, but it means you have the problems of distortion that come with any map projection.

Both the Soviets and Americans based their standard grids on transverse Mercator projections that divided the globe up into narrow (6° wide) north-south strips, each with its own projection. These were narrow enough not to be too badly distorted at the edges but still wide enough that artillery would rarely have to shoot from a grid location in one strip at a target in another (which required extra calculations to compensate for the difference in projections). The American system was called the Universal Transverse Mercator (or UTM; the grid itself was the Military Grid Reference System, or MGRS). The Soviet one was known, in the West at least, as the Gauß-Krüger grid.
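The zone bookkeeping itself is simple arithmetic; the genuinely hard part of conversion is transforming coordinates between projections and datums. A sketch of the easy part (the Gauß-Krüger relationship as written holds for the eastern hemisphere, since Soviet zones count eastward from Greenwich rather than from the antimeridian):

```python
def utm_zone(lon_deg):
    """UTM zone number (1-60) for a longitude in degrees, east positive."""
    return int(((lon_deg + 180) % 360) // 6) + 1

def central_meridian(zone):
    """Central meridian of a UTM zone, in degrees east."""
    return zone * 6 - 183

def gk_zone(lon_deg):
    """Soviet Gauss-Krueger zone; in the eastern hemisphere this is
    simply the UTM zone number minus 30."""
    return utm_zone(lon_deg) - 30

# Moscow, at about 37.6 degrees east, sits in UTM zone 37
# (central meridian 39 east), which is Gauss-Krueger zone 7.
```

Both schemes share the 6° strips and central meridians; the conversion manuals existed because the ellipsoids, datums, and false-origin conventions underneath those strips differed.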

In his article, Cruickshank reports that by 1961 East German intelligence was printing 1:200,000 military topographic maps that had both UTM and Soviet Gauß-Krüger grids. By 1985 a full series existed that ran all the way west to the English Channel. Rather than print a full map series with both grids, the US Army produced intelligence guides to the conversion between them. Field Manual 34-85, Conversion of Warsaw Pact Grids to UTM Grids, was issued in September 1981. A supplement, G-K Conversion (Middle East), was released in February 1983. As Cruickshank observes, both manuals have fascinating illustrated covers. Conversion of Warsaw Pact Grids features a rolled-up map labelled "Intelligence" standing on a grid and looking at a globe focused on Europe. G-K Conversion, on the other hand, shows an Eagle literally stealing the map out of the hand of a Bear using calipers to measure distances from Turkey to Iran across the Caspian Sea.

The article ends with the observation that the history of modern geodesy, which underpins calculations like the UTM and Gauß-Krüger grids, remains “overdue for description.” Since it was published a new book has appeared that goes a long way towards covering some of those developments (at least for non-specialists, if not experts like Cruickshank). In fact, map grids are one of the main topics of After the Map: Cartography, Navigation and the Transformation of Territory in the Twentieth Century by William Rankin (University of Chicago Press, 2016). The book is chock-full of fascinating discussions of new mapping and navigation systems that developed between the end of the nineteenth century and the appearance of GPS. Its focus is on three overlapping case studies: large-scale global maps like the International Map of the World and World Aeronautical Charts (which have their own connection to Soviet mapping), grid systems like UTM, and radionavigation networks like Gee and Loran. (The third of these was already the topic of an article by Rankin that I wrote about here.)

In the chapters on map grids, After the Map shows just how long even an ostensibly universal design like UTM remained fragmented and regional. The use of grids had begun on the Western Front during the First World War. It spread to domestic surveying in the interwar period and was adopted by all the major powers during the Second World War. But universal adoption of the principles involved did not mean adoption of a common system. Even close allies like the United States and Britain ended up just dividing the world and jointly adopting one or the other nation's approach in each region: British grids were applied to particular war zones and a more general American system used for the rest of the world. Neither used a transverse Mercator projection.

Even once America and its NATO allies settled on UTM as a postwar standard – a decision made despite opposition from the US Navy and Air Force, who fought vigorously for a graticule rather than a grid – UTM maps did not use a single consistent projection but adopted whichever reference ellipsoid was already in use for a region. While those differences were eventually resolved, even the 1990 edition of Defense Mapping Agency Technical Manual 8358.1, Datums, Ellipsoids, Grids, and Grid Reference Systems, still included specifications for twenty British grids, including the British and Irish domestic surveys (plus nineteen further secondary grids), as well as the Russian Gauß-Krüger. East German tank commanders should have been grateful that they could get away with only two from the Intra-German Border to the Channel!

Revisiting the Third World War

The latest issue of the British Journal for Military History has an interesting article by Jeffrey H. Michaels on Sir John Hackett’s The Third World War (1979), a fictionalized narrative of a potential NATO-Soviet conflict in the 1980s. Though it sparked a lot of attention at the time and sold more than 3 million copies, I don’t think posterity has been very kind to the book. The Third World War was a didactic narrative written as a thinly-veiled plea for more NATO conventional armaments, and a lot of the narrative choices haven’t aged well. Much of the political prognostication was laughably wrong – already discredited by the time its semi-sequel The Third World War: The Untold Story came out in 1982. As fiction, it was quickly overshadowed by Tom Clancy’s Red Storm Rising (1986), whose wargame underpinnings and multi-media afterlife are stories in themselves.

The Third World War is mostly of interest, then, as an artifact of Cold War policy debates played out in popular culture and as the first of what became quite a lot of late Cold War future war fiction (not just Red Storm Rising but also the Team Yankee series, Ralph Peters’s novel Red Army, Shelford Bidwell’s World War 3 [which Michaels says had its prospects mostly ruined by coming out shortly after Hackett’s book], and Kenneth Macksey’s First Clash: Canadians in World War Three, not to mention various games, TV shows, and movies).

Michaels’s article doesn’t change my mind about the qualities of the book itself, but by digging into Hackett’s papers at King’s College London he does reveal some interesting facts about its origins. For one thing, I had not realized how much Hackett played around with the entire scenario of the book as he developed it. His first outline called not for the brief, eighteen-day conflict in the final book, but a multi-year war of attrition in which NATO. That was scuppered by early readers who judged it too dispiriting. The inclusion of limited nuclear strikes on Birmingham and Minsk, which bring the war to an end (and which seem to me one of the more contrived aspects of Hackett’s narrative) were a late addition and a reversal of Hackett’s earlier opinion that nuclear strikes, if any, were likely to happen at sea or in space, not against the cities of a nuclear power. Also interesting: the description of the nuclear attack on Birmingham may have been borrowed from a classified study of just that situation made by Solly Zuckerman in 1961.

Another “Smallest Aircraft Carrier”

At 131 feet in length, the helicopter landing trainer Baylander (IX-514) has been billed as the "smallest aircraft carrier" in the US Navy, if not the world, by the Navy itself, its current owners the Trenk Family Foundation, and, well, me. That claim is based on the more than 10,000 helicopter landings made on the Baylander between 1986 and its retirement in 2014. But what if you want the smallest ship to regularly launch its own aircraft?

The November 1963 issue of Navy magazine All Hands crowned the 206-foot USS Targeteer (YV-3) as the fleet’s “smallest aircraft carrier.” A Drone Aircraft Catapult Ship, the Targeteer was equipped to launch and recover target drones used for gunnery practice by the fleet. The third Landing Ship, Medium (LSM) to be converted into a drone launching ship, the Targeteer was based in San Diego from 1961 to 1968, replacing the USS Launcher (YV-2, 1954–1960) and the USS Catapult (YV-1).

USS Targeteer insignia. NH 64878-KN (NHHC photo).


USS Catapult, the Targeteer‘s sister ship, circa 1955. NH 55065 (NHHC photo).

Even Targeteer‘s claim, though, is contested. The Executive Officer of the fleet tug USS Kalmia (ATA-184), which also launched and recovered drones at San Diego, wrote to All Hands to claim that its length of 143 feet entitled it to the title of “smallest aircraft carrier.” (All Hands deferred to the Navy’s official classifications. The Targeteer was a Drone Aircraft Catapult Ship, the Kalmia just an Auxiliary Ocean Tug.)

USS Kalmia underway on 16 January 1964. NH 102803 (NHHC photo).


All three claims are weak if you are looking for a ship that launches and retrieves multiple aircraft. If, on the other hand, you are looking for the smallest Navy-crewed vessel which could land or launch a single aircraft, Baylander, Targeteer, and Kalmia all lose to the helicopter pad-equipped "Tango boats" of the Mobile Riverine Force in Vietnam. Officially designated Armored Troop Carriers (ATCs), these were Landing Craft, Mechanized (LCM) that were modified to serve as floating armoured personnel carriers in the Mekong Delta. Some were further modified with a steel flight deck on top that ran pretty much the full length of the boat. The first helicopter landing on one of these Armored Troop Carriers (Helicopter), or ATC(H)s, took place on July 4, 1967. Since an ATC(H) was 56 feet in length, more or less the length of a Huey helicopter, I doubt I'll find anything smaller to claim the title.

A U.S. Army UH-1D helicopter lands on the helicopter pad of a modified U.S. Navy Armored Troop Carrier (ATCH R-92-2) operating as part of the Riverine Mobile Force, 8 July 1967. Photograph by Photographer's Mate Second Class Edward Shinton. USN 1132291 (NHHC photograph).
