Imagining Soviet Surveying

Last week I wrote about some of the apparent differences between how the US and the Soviet Union used satellites for mapping and geodesy. The Soviets seem to have been slower to operate dedicated satellites in both areas, with no apparent explanation. Though it’s dubious to use US intelligence estimates as evidence of what the Soviets were actually doing, they do at least shed light on some of the possibilities.

Two CIA reports from the pre-satellite era, in 1954 and 1957, suggested that if the Soviets had made a connection across the Bering Strait between their own domestic surveys and the North American Datum, missiles launched from near the Bering Strait would have a Circular Error Probable (CEP) of 300–500 feet. Without the connection between the datums, the error would be closer to 1,000 feet. Even without any additional surveys of US territory, the CIA predicted, the error in intercontinental position could be reduced to about 500 feet from anywhere in the Soviet Union by making observations of an upcoming solar eclipse and gaining access to the equivalent measurements from US or Western European sites.

These estimates assumed that the target could be located on high-quality American maps, which the analysts presumed were available to Soviet planners. But what if the targets were secret sites not plotted on any maps? A Studies in Intelligence article (“Spy Mission to Montana”) from 1995 revealed that the CIA and Air Force tested those conditions as the silos for Minuteman ICBMs were being built in 1962. A three-person team, two from the CIA and one from the Army Map Service, made covert observations of the sites under construction from their rental car. Dodging both site security and the official survey being done by the Air Force’s 1381st Geodetic Survey Squadron, the covert team proved that observations could be made with a CEP of 600 feet when maps at 1:250,000 scale were available and a CEP of 200 feet with 1:62,500 scale maps.

Did the Soviet Union make a secret measurement of the Bering Strait or send its agents to survey the locations of American missile silos? The answer is probably somewhere in the files of the KGB or GRU.

Soviet Satellites and Mapping

John Davies’ website has announced that his and Alexander Kent’s book The Red Atlas: How the Soviet Union Secretly Mapped the World will be released by the University of Chicago Press. Details for the book on the press website show 272 pages, 282 (!) colour plates, and a publication month of October 2017. Having read what the authors have written elsewhere about Soviet maps, I’m really looking forward to the book. In particular, I’m hoping it will offer not just more information on how the Soviet military prepared their maps but also some insight into why and for whom.

The technical military challenges that drove both American and Soviet cartographic projects during the Cold War were very similar, which leaves the differences in practice between them begging for explanation. Take, for example, the apparent difference in exploiting satellite geodesy. Both countries very swiftly exploited the fact that perturbations in satellite orbits revealed new details on gravity and, by extension, the shape of the earth. They also must have recognized that satellites made better targets for intercontinental triangulation than rockets, stars, or the sun and moon, all conventional targets at the time.

As a result, Sputnik effectively sidelined an American-led terrestrial program of geodetic measurements for the International Geophysical Year that had been under development since 1954. Led by William Markowitz of the US Naval Observatory and using dual-rate cameras of his own design, the program distributed cameras to observatories around the world to make simultaneous observations of the moon during 1957. Using an approach to triangulation similar to that used during eclipses, it promised precision to within about 90 feet at each observatory. Uncertainties in the position of the moon meant the 1957 observations never delivered geodetic results, but, more fundamentally, the entire concept had been rendered obsolete.

Consequently, in addition to measurement projects that were added to other scientific satellites, the US launched its first dedicated geodetic satellite in 1962. ANNA-1B was a joint Department of Defense-NASA project that carried instruments to enable both triangulation and trilateration. Its launch came only two years after the US lofted its first photo-reconnaissance satellite, which makes sense because both satellites were part of the effort to find and target Soviet strategic missiles.

Intriguingly, then, it was six more years before the Soviet Union launched its own dedicated geodetic satellite. The first of the Sfera (Russian for “sphere”) series of satellites (11F621) flew in 1968, launched from the rocket base at Plesetsk. Built by design bureau OKB-10 on the popular KAUR satellite bus, the Sfera satellites were equipped with lights and radio transmitters similar to those on ANNA-1B. Operational flights ran from 1973 to 1980.

A similar difference was apparent in the case of satellites equipped with cameras for mapping, as opposed to high-resolution reconnaissance photography. A dedicated mapping satellite was among the planned elements of the first US reconnaissance satellite system, the Air Force’s SAMOS (or Satellite and Missile Observation System). That camera, the E-4, never flew, but the Army’s very similar project ARGON was grafted onto the CIA Corona program. ARGON was rendered obsolete by the inclusion of small mapping cameras on subsequent satellite systems, but after ARGON’s first launch in 1961 – only one year after the very first US reconnaissance satellite – the US was never without a mapping capacity in orbit.

In the USSR, on the other hand, the first dedicated mapping satellite came quite late. The Zenit-4MT, program name Orion (11F629), was a variant of the main Soviet series of photo-reconnaissance satellites. First launched in 1971 and accepted into operational service in 1976, Orion began flying nine years after the first Soviet photo-reconnaissance satellite was launched. Unlike the Americans, who integrated mapping cameras into other photo-reconnaissance satellites, the Soviets seem to have continued to fly dedicated cartographic systems for the remainder of the Cold War (this is early 2000s information, so it may be obsolete now). Zenit-4MT (Orion) was followed in the early 1980s by the Yantar-1KFT, program name Siluet/Kometa (11F660), a system which combined the propulsion and instrument modules of the latest Soviet photo-reconnaissance satellite with the descent canister from the Zenit-4MT. Flying alongside Kometa was an upgraded Zenit, the Zenit-8, program name Oblik, an interim design introduced because of delays to the Yantar-1KFT.

I hope The Red Atlas or someone else can explain more about what was happening here, because it certainly looks like the Soviet Union was making very different decisions from the Americans when it came to satellite geodesy and cartography.

Source Notes: Information on Soviet satellites comes from a range of sources, much of it in the Journal of the British Interplanetary Society. For the Orion series, Philip S. Clark, “Orion: The First Soviet Cartographic Satellites,” JBIS vol. 54 (2001), pp. 417–23. For Siluet/Kometa, Philip S. Clark, “Classes of Soviet/Russian Photoreconnaissance Satellites,” JBIS vol. 54 (2001), pp. 344–650. On the launch of Sfera from Plesetsk, Bart Hendrickx, “Building a Rocket Base in the Taiga: The Early Years of the Plesetsk Launch Site (1955–1969) (Part 2),” JBIS vol. 66, Supplement 2 (2013), p. 220 (and online). For the Markowitz moon camera, Steven J. Dick, “Geodesy, Time, and the Markowitz Moon Camera Program: An Interwoven International Geophysical Year Story,” in Globalizing Polar Science: Reconsidering the International Polar and Geophysical Years, edited by Roger D. Launius, James Roger Fleming, and David H. DeVorkin (Palgrave Macmillan, 2010).

Excluded Computers: Marie Hicks’s Programmed Inequality

It should be no surprise, fifty to seventy years after the fact, that the introduction of electronic computers in government and industry reflected societal prejudices about women’s employment in the workforce. Books released last year about the female computers at the Jet Propulsion Laboratory and NASA Langley narrated the discrimination and exclusion of those women, whose jobs reflected the messy transition from human to automated calculation in large-scale engineering (both are jointly reviewed, along with Dava Sobel’s book on an earlier generation of female computers, in the New York Review of Books here).

The number of women involved in each of these endeavors was dwarfed, though, by the female workforce of the British civil service that’s discussed in Marie Hicks’s excellent Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. The Civil Service was large enough to document its decisions in painstaking detail and confident enough not to mince words in its internal papers, which makes Hicks’s book a cringeworthy account of the open, blatant, self-satisfied gender discrimination that accompanied the spread of electro-mechanical and then electronic data processing in the British government.

Hicks describes how, from the late 1940s all the way to the 1970s, the civil service took a pool of machine workers that was mostly female and deliberately and repeatedly hemmed them into job categories where their wages could be kept low and their promotion opportunities (which would mean raises) constrained, at the same time as it relied on their technical skills, practical knowledge, and commitment to keep the government running. Separate pay scales for women, eliminated in 1955, were replaced by a series of “excluded grades,” including machine workers, where pay rates would be lowered to the old women’s rate rather than raised to the existing men’s rate. When the growth of automated data processing made the need for more senior professional and managerial positions obvious, the service recruited men for those positions – even when it meant starting them with no computer experience – rather than take the traumatic step of letting female staff from the machine operator grades manage men and be compensated at executive-level pay scales. Perhaps unsurprisingly, the government then found it hard to retain those men, with many taking their new skills into private industry or moving back out of computing to other areas in government.

As Hicks explains it, the way the civil service managed its workforce was not only immoral and inefficient but also terrible for the long-term health of the British computer industry. While segregating away the female computing workforce kept costs low, it also hamstrung modernization. By the time the government realized its need for programmers, most of the people with those skills, being women, could not actually be classed as “programmers,” since that job was conceptualized as higher-status and therefore reserved for men. That led the government to prioritize mainframe designs that could be run with a small expert staff, since retaining skilled male programmers was hard and female machine operators with no promotion opportunities were considered inherently unreliable. That decision, made by the leading purchaser of British computers, led the companies that built them down a design blind alley at just the time that microelectronics were putting more computers on more desks and sparking a revolution in the American computer industry.

The blind alley. The International Computers Limited (ICL) 2966 was one of the last mainframe series to be designed in the UK. This machine is at the National Museum of Computing in Bletchley Park, though it’s so large that only about half is on display. Photograph by Steve Parker, CC-BY-2.0, from flickr as of April 4, 2017.

Tides of War, Part Two

The first part of this post appeared on the blog in November 2016. The second part was supposed to come out within a week or two, as soon as I found a little more on the post-war use of analog tide-predicting machines. Unfortunately, the search for “a little more” ended up taking way more time than expected and turning up nothing within easy reach. I’d skip the apology if it wasn’t for the fact that the proper references to Anna Carlsson-Hyslop‘s research (discussed in part one) were buried in the source note at the end of this post. Sorry.

Tide-predicting machines, the first of which appeared in the late nineteenth century, were an elegant mechanical solution to a complex mathematical problem. Used mostly to produce information useful to commercial shipping, during the two world wars they also played an important role in the planning of amphibious operations like the Normandy landings.

That contribution is interesting enough to give them a spot in the history of war, but the basic design of the British machines – using multiple gears to create an analog approximation of a mathematical function – also has an oblique connection to one of the most important technical achievements of the war: the mechanization of cryptanalysis.

Alan Turing is justifiably famous for his role in the breaking of the German Enigma cipher, and particularly for his contribution to designing electro-mechanical computing tools that transformed the process. (Even if popular versions of the story do it a terrible disservice by erasing everyone except Turing from the picture. The Imitation Game, I’m looking at you.) Less well known are some of Turing’s pre-war flirtations with the mechanization of mathematical problem-solving. Andrew Hodges’ biography describes two projects which Turing took on, at least briefly. The first, during his time at Princeton in 1937, was to use electromagnetic relays for binary multiplication to create an extremely large number that could be used as the key for a cipher. This was, as Hodges puts it, a “surprisingly feeble” idea for a cipher but a practical success as far as constructing the relays was concerned.

The second project was an attempt to disprove the Riemann hypothesis about the distribution of prime numbers (for the hypothesis and the zeta-function, read Hodges or Wikipedia; there’s no chance of me describing them properly) by calculating the Riemann zeta-function and showing that not all the points where the function reached zero lay on a single line, as the hypothesis stated. An Oxford mathematician had already calculated the first 104 zeroes using punched-card machines to implement one approximation of the function. Since the zeta-function was the sum of circular functions of different frequencies, just like Thomson’s harmonic analysis of the tides, Turing realized it could be calculated using the same method. Or, more precisely, the machine could rule out enough values that only a few would have to be calculated by hand.
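To make the parallel concrete, here is a minimal sketch (in Python, with invented numbers) of the harmonic method that Thomson-style machines mechanized with geared wheels: the predicted height at any moment is just a sum of cosine terms, one per constituent, each with its own amplitude, frequency, and phase. The constituent speeds below are approximately the standard M2/S2/K1 values, but the amplitudes and phases are made up and do not describe any real port.

```python
import math

# Illustrative harmonic constituents: (amplitude in metres, speed in
# degrees per hour, phase lag in degrees). Speeds are roughly the
# standard tidal values; amplitudes and phases are invented.
CONSTITUENTS = [
    (1.20, 28.984, 115.0),  # M2: principal lunar semidiurnal
    (0.35, 30.000, 140.0),  # S2: principal solar semidiurnal
    (0.18, 15.041, 200.0),  # K1: lunisolar diurnal
]
MEAN_LEVEL = 2.0  # invented mean water level, metres


def tide_height(t_hours: float) -> float:
    """Predicted height: mean level plus a sum of cosine constituents."""
    height = MEAN_LEVEL
    for amplitude, speed, phase in CONSTITUENTS:
        height += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return height


# A day's predictions at three-hour intervals.
for hour in range(0, 25, 3):
    print(f"t = {hour:2d} h   height = {tide_height(hour):.2f} m")
```

Each gear train on a tide-predicting machine stood in for one of those cosine terms; Turing’s realization was that the same mechanism could sum the circular functions in his approximation of the zeta-function.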

With a grant of £40 from the Royal Society, Turing and Donald MacPhail designed a machine that, like the tide calculators, used meshed gear wheels to approximate the thirty frequencies involved. The blueprint was completed by 17 July 1939, and the grinding of the wheels was underway when the war broke out and Turing joined the Government Code and Cypher School at Bletchley Park.

Nothing in the work that Turing did at Bletchley connected directly to the zeta-function machine, but, as Hodges notes, it was unusual for a mathematician like Turing to have any interest in using machines to tackle abstract problems of this sort. Clearly, though, Turing had been mulling the question of how machines could be applied to pure mathematics long before he became involved in the specific cryptanalytic problems that were tackled at Bletchley.

Of course, the secrecy surrounding code-breaking meant that no hint of the connection, or of any of Turing’s wartime work, would have leaked out to those operating the tide-predicting machines in Liverpool or elsewhere. The end of the war meant a return to usual practice for those machines, but their strategic importance remained.

Probably the last analog machine to be constructed was a thirty-four constituent machine built in 1952–5 for East Germany (and now in the collection of the German Maritime Museum in Bremen). The Soviet Union had ordered a Kelvin-type machine for forty constituents from Légé and Co. in 1941 that was delivered to the State Oceanographic Institute in Moscow in 1946, on the eve of the Cold War. Bernard Zetler, an oceanographer who worked on tide prediction at the Scripps Institution of Oceanography in San Diego, recalls that he was unable to visit the machine in 1971 because it or its location was classified. The Soviet tide tables certainly were.

The American Tide Predicting Machine No. 2 remained in use until 1966, but played no role in the American amphibious landing at Inchon during the Korean War. The wide tidal range at Inchon meant that the landing needed good tidal information, but rather than making new calculations the existing American and Japanese tide tables were supplemented by first-hand observation by Navy Lieutenant Eugene F. Clark, whose unit reconnoitered the area for two weeks preceding the landings.

When analog machines like Tide Predicting Machine No. 2 were retired, they were replaced by digital computers whose architecture originated in other wartime projects like the ENIAC computer, which had been built to calculate ballistics tables for US artillery. The world’s navies have not relinquished their interest in tools to predict the tides. Their use, though, has never matched the high drama of prediction during the Second World War.

Source Note: The D-Day predictions are discussed in many places on the internet, but almost all the accounts trace back to an article oceanographer Bruce Parker published in Physics Today, adapted from his 2010 book The Power of the Sea. Where Parker disagrees with the inventory of machines commissioned by the National Oceanography Centre, Liverpool (itself a descendant of the Liverpool Tidal Institute), I’ve followed Parker. Details on the work of Arthur Doodson and the Liverpool Tidal Institute come from Anna Carlsson-Hyslop’s work: the articles “Human Computing Practices and Patronage: Antiaircraft Ballistics and Tidal Calculations in First World War Britain,” Information & Culture: A Journal of History 50:1 (2015) and “Patronage and Practice in British Oceanography: The Mixed Patronage of Storm Surge Science at the Liverpool Tidal Institute, 1919–1959,” Historical Studies in the Natural Sciences 46:3 (2016), and her dissertation for the University of Manchester (accessible through the NERC Open Repository). The scientist suspected of Nazi sympathies was Harald Sverdrup, a Norwegian national who worked with Walter Munk on wave prediction methods used in several amphibious landings. Turing’s experiments calculating the Riemann zeta-function appear in Andrew Hodges, Alan Turing: The Enigma (1983; my edition is the 2014 Vintage movie tie-in).

Liner Notes: Paying for War in Angola

There’s a common military aphorism that amateurs talk tactics but professionals talk logistics. Despite that famous statement, histories of logistics can be hard to find, and among them histories of finance (beneath the strategic level) are even harder. The obscurity extends beyond historians even to the militaries you would expect to know better. According to a short monograph recently published by Air University Press, the US Air Force went into both Gulf Wars without a financial management system capable of operating in a war zone.

One of the more innovative experiments in managing finance in the theater of operations comes from the Cuban intervention in Angola. It’s particularly interesting for me because it hinged on one of the more unusual instruments of postwar power, the cruise liner.

Between 1975 and 1991 more than 430,000 Cuban soldiers and civilians served in Angola. The troops, who were a mix of professionals, reservists, and conscripts, were all ostensibly volunteers. Though conscripts got the perk of reducing their service from three years to two, in general pay was poor. An ordinary soldier received seven Cuban pesos and 150 Angolan kwanzas per month, disbursed at the end of the soldier’s tour. The kwanzas could be used to buy discounted luxury goods in special subsidized shops in Luanda. The pesos were for home. To avoid having to funnel all returning troops through Havana or operate pay counters in every port of arrival, the Cubans hit on an unusual solution. For most of the 1980s they hired the Soviet cruise liner Leonid Sobinov to float off the Angolan coast as a “money ship.” Troops were shuttled out to the Sobinov to receive their back pay before the long transatlantic voyage home. Under close escort because it carried so much money, the Sobinov usually stayed in Angolan waters for three days at a time. At least once it remained for a month.

The original designers of the Sobinov had probably never considered such a use for the ship. That said, they had probably also never considered that it would be owned by the Soviets. Like many of the Soviet Union’s larger passenger ships, it had been constructed outside the Soviet sphere entirely. Built for the Cunard Line in Britain as the RMS Saxonia in the mid-1950s, the Sobinov was sold to the Soviet Union and renamed in 1973. In addition to its unique duties as a “money ship,” it operated as an occasional troopship and cruise ship in the south Pacific and Mediterranean. It was laid up in the mid-1990s and scrapped in 1999.

Source: Edward George, The Cuban Intervention in Angola, 1965–1991: From Che Guevara to Cuito Cuanavale (Frank Cass, 2005)

Aleksandr Zhitomirsky

During the Second World War, when it still seemed like the Germans might capture Moscow, propaganda minister Joseph Goebbels wrote a list of Soviet propagandists who were to be killed upon capture. Number one was the writer Ilya Ehrenburg. Number two was chief Radio Moscow announcer Iurii Levitan. Number three was Aleksandr Zhitomirsky, the designer and artist of one of the Red Army’s chief illustrated propaganda magazines.

That, at least, was the story, one which is mentioned – with appropriate skepticism – by Erika Wolf in the catalogue to a major exhibit of artist Aleksandr Zhitomirsky’s work at the Art Institute of Chicago. A talented designer and illustrator whose most striking works were the satirical, even grotesque, photomontages he created in the early years of the Cold War, Zhitomirsky pilloried capitalism and the United States, often with allusions to the Nazi threat against which he had cut his teeth propagandizing. While his main employment from 1953 to 1991 was as chief artist for Soviet Union (Sovietskii Soiuz), a glossy magazine aimed at readers in Eastern Europe and Asia, his illustrations appeared in the Literary Newspaper (Literaturnaia gazeta), official organ of the Union of Soviet Writers; Red Fleet (Krasnyi flot); Rising Generation (Smena); the satirical magazine Krokodil (Crocodile); and even occasionally in more exalted venues such as Truth (Pravda), the official newspaper of the Communist Party, and News (Izvestiia), official paper of the Soviet government. Those works attracted attention not just at home, where he was part of a major photomontage exhibit in East Berlin in 1961/2 and had his own retrospective in Moscow, but even in the US, where some of his photomontages from the Literary Gazette drew comment in the New York Times.

On balance it’s the postwar art, not just the illustrations mentioned above but also the book covers and occasional poster, that is the focus of Wolf’s Aleksandr Zhitomirsky: Photomontage as a Weapon of World War II and the Cold War (Yale University Press, 2016). For me, though, it’s Zhitomirsky’s wartime work on Front Illustrated (Frontovaia illiustratsiia) and its complementary German-language edition aimed at enemy soldiers (Front Illustrated for German Soldiers / Front-Illustrierte für den deutschen Soldaten) that’s more captivating. The postwar designs are hardly subtle. How often can one look at a monkey-like Goebbels ventriloquizing through some American symbol?

Front Illustrated for German Soldiers, which existed to sow unease and dissension in the German ranks, had to be more indirect. For his cover designs and leaflets, Zhitomirsky mixed captured German photographs and new photography (often with himself as the model) with images borrowed from his vast trove of reference photos, often airbrushed together to the point that they became impossible to distinguish. With one leaflet, Choose! Like This or Like That!, Wolf shows how what appears to be a single photograph of dead Germans lying on the ground was actually a composite of seven different photographs, layered together, photographed, then retouched to create a seamless image. With others, she shows how Zhitomirsky mixed background photography with physical objects (like reproduced letters and snapshots) in trompe-l’œil arrangements. Taking advantage of Zhitomirsky’s personal archive, Wolf demonstrates just how impressive his work was.

Map Overlap: Warsaw Pact vs. NATO Grids

The Charles Close Society hasn’t updated its topical list of articles on military mapping since I wrote about it in 2015, but there is a new article by John L. Cruickshank (“More on the UTM Grid system”) in Sheetlines 102 (April 2015) that is now freely available on the society’s website. The connection to Soviet mapping is that Cruickshank discusses how both NATO and the Warsaw Pact produced guides and maps to help their soldiers convert between their competing grid systems. Unlike latitude and longitude, a grid system assumes a flat surface. That’s good for simplifying calculations of distance and area, but means you have the problems of distortion that come with any map projection.
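To see that trade-off in miniature, here is a small sketch with invented coordinates (nothing here is taken from Cruickshank’s article): on a grid, distance is plain Pythagoras, while with latitude and longitude you need spherical trigonometry such as the haversine formula, which itself approximates the ellipsoid with a sphere.

```python
import math

# Two invented points, given both as latitude/longitude and as roughly
# corresponding planar grid coordinates (easting/northing in metres).
lat1, lon1 = 52.0, 13.0
lat2, lon2 = 52.1, 13.2
e1, n1 = 500_000.0, 5_760_000.0
e2, n2 = 513_700.0, 5_771_200.0

# On a grid the distance is simple Pythagoras.
grid_distance = math.hypot(e2 - e1, n2 - n1)

# With latitude and longitude you need the haversine formula
# (treating the Earth as a sphere of mean radius R).
R = 6_371_000.0  # metres
phi1, phi2 = math.radians(lat1), math.radians(lat2)
dphi = math.radians(lat2 - lat1)
dlam = math.radians(lon2 - lon1)
a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
sphere_distance = 2 * R * math.asin(math.sqrt(a))

print(f"grid distance:   {grid_distance:8.0f} m")
print(f"sphere distance: {sphere_distance:8.0f} m")
```

The grid arithmetic is trivial, which is exactly the attraction; the price, as noted above, is the distortion that comes with any projection.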

Both the Soviets and Americans based their standard grids on transverse Mercator projections that divided the globe up into narrow (6° wide) north-south strips, each with its own projection. These were narrow enough not to be too badly distorted at the edges but still wide enough that artillery would rarely have to shoot from a grid location in one strip at a target in another (which required extra calculations to compensate for the difference in projections). The American system was called the Universal Transverse Mercator (or UTM; the grid itself was the Military Grid Reference System, or MGRS). The Soviet one was known, in the West at least, as the Gauß-Krüger grid.
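As a rough sketch of the zone arithmetic the two systems share – written under the usual conventions (UTM zones numbered eastward from 180°W, Gauß-Krüger zones numbered eastward from Greenwich) and not drawn from any of the manuals discussed below – the zone and central meridian for a given longitude work out like this:

```python
def utm_zone(lon_deg: float) -> tuple[int, int]:
    """UTM zone number and central meridian (degrees) for a longitude.
    Zones are 6 degrees wide, numbered 1-60 eastward from 180 W."""
    zone = int((lon_deg + 180) // 6) + 1
    return zone, zone * 6 - 183


def gauss_krueger_zone(lon_deg: float) -> tuple[int, int]:
    """Gauss-Krueger zone and central meridian: zones are also 6 degrees
    wide but numbered eastward from Greenwich (zone 1 spans 0-6 E)."""
    zone = int((lon_deg % 360) // 6) + 1
    return zone, zone * 6 - 3


# Roughly Berlin, 13.4 E: the same strip of Earth is UTM zone 33 but
# Gauss-Krueger zone 3. Both zones use a central meridian of 15 E,
# though the projection parameters and underlying datums differ.
print(utm_zone(13.4))            # (33, 15)
print(gauss_krueger_zone(13.4))  # (3, 15)
```

Because the strips line up, overprinting both grids on one sheet is straightforward; the real work of conversion lay in the differing datums and projection parameters, which is why dedicated guides were needed rather than a simple renumbering of zones.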

In his article, Cruickshank reports that by 1961 East German intelligence was printing 1:200,000 military topographic maps that carried both UTM and Soviet Gauß-Krüger grids. By 1985 a full series existed that ran all the way west to the English Channel. Rather than print a full map series with both grids, the US Army produced intelligence guides to the conversion between them. Field Manual 34-85, Conversion of Warsaw Pact Grids to UTM Grids, was issued in September 1981. A supplement, G-K Conversion (Middle East), was released in February 1983. As Cruickshank observes, both manuals have fascinating illustrated covers. Conversion of Warsaw Pact Grids features a rolled-up map labelled “Intelligence” standing on a grid and looking at a globe focused on Europe. G-K Conversion, on the other hand, shows an Eagle literally stealing the map out of the hand of a Bear that is using calipers to measure distances from Turkey to Iran across the Caspian Sea.

The article ends with the observation that the history of modern geodesy, which underpins calculations like the UTM and Gauß-Krüger grids, remains “overdue for description.” Since it was published, a new book has appeared that goes a long way towards covering some of those developments (at least for non-specialists, if not experts like Cruickshank). In fact, map grids are one of the main topics of After the Map: Cartography, Navigation and the Transformation of Territory in the Twentieth Century by William Rankin (University of Chicago Press, 2016). The book is chock-full of fascinating discussions of new mapping and navigation systems that developed between the end of the nineteenth century and the appearance of GPS. Its focus is on three overlapping case studies: large-scale global maps like the International Map of the World and World Aeronautical Charts (which have their own connection to Soviet mapping), grid systems like UTM, and radionavigation networks like Gee and Loran. (The third of these was already the topic of an article by Rankin that I wrote about here.)

In the chapters on map grids, After the Map shows just how long even an ostensibly universal design like UTM remained fragmented and regional. The use of grids had begun on the Western Front during the First World War. It spread to domestic surveying in the interwar period and was adopted by all the major powers during the Second World War. But universal adoption of the principles involved did not mean adoption of a common system. Even close allies like the United States and Britain ended up just dividing the world and jointly adopting one or the other nation’s approach in each region: British grids were applied to particular war zones and a more general American system was used for the rest of the world. Neither used a transverse Mercator projection.

Even once America and its NATO allies settled on UTM as a postwar standard – a decision made despite opposition from the US Navy and Air Force, who fought vigorously for a graticule rather than a grid – UTM maps did not use a single consistent projection but adopted whichever reference ellipsoid was already in use for a region. While those differences were eventually resolved, even the 1990 edition of Defense Mapping Agency Technical Manual 8358.1, Datums, Ellipsoids, Grids, and Grid Reference Systems, still included specifications for twenty British grids, including the British and Irish domestic surveys (plus a further nineteen secondary grids), as well as the Russian Gauß-Krüger. East German tank commanders should have been grateful that they could get away with only two from the Intra-German Border to the Channel!